id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
238765046 | pes2o/s2orc | v3-fos-license | Online learning in the post-Covid-19 pandemic era: Is our higher education ready for it?
This study aimed to describe the challenges of higher education in implementing online learning during the Covid-19 pandemic. This study was qualitative research with a phenomenological approach. Data were collected from 408 students and 20 lecturers from 6 public universities and 6 private universities in Java, Sumatera, Kalimantan, Sulawesi, Nusa Tenggara, and Maluku, through online questionnaires and in-depth interviews via social media. The process of data analysis consisted of data reduction, identifying themes, mapping interrelationships between themes, and concluding findings. The results of the data analysis showed three main challenges for both lecturers and students. First, limited resources, such as electronic devices (laptops/smartphones/others), learning resources, electricity, and internet connections. Second, lack of knowledge/skills on how to use online learning media, find and/or provide learning resources, manage online learning, provide online measuring tools, and carry out online assessments. Third, the difficulty of time management during the online learning period. These conditions have the greatest impact on students from low-income families and those who live in areas with limited access to learning facilities, such as electricity and internet connections. Most of them lose learning opportunities because of these limitations.
INTRODUCTION
The Covid-19 outbreak has shocked the whole world. Its very rapid spread forced many countries to quarantine or lock down their territories, in part or in whole. Such restrictions led to the paralysis of various public activities, such as the economy, public transport, public services, tourism, and education (Arora & Srinivasan, 2020). Statistics from UNESCO (25/03/2020) mentioned that 1,524,648,768 students, or 87.1% of total enrolled students, were affected by Covid-19. India and China have the largest numbers of students affected by Covid-19, more than 270 million, and in Indonesia, as of Wednesday (25/3/2021), 68,265,787 students were affected by Covid-19 (UNESCO, 2021). In Indonesia, the Covid-19 pandemic was declared a non-natural national disaster on March 16, 2020. The spread of Covid-19 in Indonesia had reached 1 million confirmed cases by March 2021, and the infection has spread to all provinces in Indonesia.
The Indonesian government was forced to cancel important agendas to anticipate the increasingly widespread and uncontrollable spread of the virus. The government also issued prohibition and restriction policies against activities that cause crowds, ranging from social activities and religious activities to other activities that increase the risk of transmission. Social and physical distancing, as well as awareness of self-quarantine at home, continued to be campaigned for by the government to break the chain of the spread of this virus.
The most significant effect of the Covid-19 outbreak on education was the loss of classroom learning activities as usual (Liguori & Winkler, 2020; Lynch, 2020; Zhang, Wang, & Yang, 2020). Schools, training and career centers, and higher education institutions were forced to close; students had to study entirely at home, with no scientific meetings or discussions at schools and no face-to-face meetings between teachers/lecturers and students (Adedoyin & Soykan, 2020; Arora & Srinivasan, 2020; Crawford et al., 2020; Halil, 2020; Hodges, Moore, Lockee, Trust, & Bond, 2020; Laloo & Kharkongor, 2020; Lynch, 2020; Simamora, 2020; Smalley, 2020; Toquero, 2020). Suddenly, everything had to change quickly because of the Covid-19 outbreak (Crawford et al., 2020; Hodges et al., 2020). Most recently, the National Education Standards Agency of the Republic of Indonesia, as the authorized party in the implementation of the National Examination (NE), canceled the 2020 NE for secondary schools as an effort against the spread of the Covid-19 outbreak. This NE is planned to be the last, since the policy replaced the NE with the Asesmen Kompetensi Minimum (AKM) and character surveys (Retnawati et al., 2019).
In order to maintain the sustainability of education, the Ministry of Education and Culture (Kemendikbud) of the Republic of Indonesia urged all educational institutions to switch from face-to-face learning in class to online learning. In fact, the readiness of educational institutions to implement online learning had never been evaluated. As a result, the challenges and problems that educational institutions may face in implementing online learning had not been revealed (Chung, Noor, & Vloreen Nity Mathew, 2020; Tereseviciene, Trepule, Dauksiene, Tamoliune, & Costa, 2020).
Changes in the implementation of education systems around the world today are indeed different from normal conditions (Supariani, Rinda, Herlianti, & Djidu, 2021). Everyone is forced to use technology suddenly, without any preparation. In Indonesia itself, the transition to using technology in the implementation of education, especially assessments (the National Examination), has been underway since 2015 (Retnawati, Hadi, et al., 2017). The challenges faced today are slightly different from those of the earlier transition from the pencil-and-paper-based national exam to the computer-based national exam. That transition was carried out in stages, starting with a study and adjusting to the conditions of school readiness. Even so, the implementation of the NE still faced many obstacles, as reported in a previous study. Today, the national emergency situation forces all educational institutions to take online learning alternatives, despite various limitations.
Several higher education institutions have indeed built a Learning Management System (LMS) and implemented the Blended Learning (BL) model over the past few years (Joubert, Callaghan, & Engelbrecht, 2020; Owston, 2013). BL is implemented by combining face-to-face learning activities in class with online learning (Kaur, 2013; Nazarenko, 2015). However, what would happen if learning now had to be carried out 100% online, without any face-to-face component? How practicum activities can be carried out with online learning, whether higher education institutions are ready to implement it, and whether students are also ready to take part in this online learning are still big questions for us. Therefore, this study describes the challenges of higher education in conducting online learning during the Covid-19 pandemic.
This study aims to answer two questions, namely (1) how do universities implement online learning during the Covid-19 pandemic? and (2) what are the challenges faced by higher education in implementing online learning during the Covid-19 pandemic? We hope that the results of this study will become one of the considerations for higher education institutions and the government in making policies to improve the quality and equity of education today and in the post-pandemic period.
Research Design
The research objectives are aimed at exploring people's views and social behaviour. The philosophy of interpretivism focuses on people's behaviour and endeavours to explain it (Bryman, 2012). Taking this into account, it seems logical to adopt the interpretivist perspective, which is connected to the qualitative research strategy. This study was qualitative research with a phenomenological approach. The focus of this research is to describe the challenges of higher education in implementing online learning during the Covid-19 outbreak in Indonesia.
Participants and Data Collection
Participants in this study were lecturers and students from 12 universities spanning western to eastern Indonesia (6 public universities and 6 private universities), with accreditation ratings of Unggul (Excellent), Baik Sekali (Very Good), and Baik (Good). 20 lecturers and 408 students responded to an online questionnaire (http://bit.ly/kendalapembelajarandaring) related to the challenges in implementing online learning during the Covid-19 pandemic (see Figure 1). During this process, the researchers explained that the data would be used only for research purposes and that the identities of the respondents would not be displayed. The questionnaire procedures affected neither the teachers' careers nor anything else; the data were used only for research purposes.
Data Analysis
The data were analyzed using two approaches, namely descriptive quantitative and qualitative. Data related to the number and types of learning methods were analyzed using a quantitative descriptive approach, and the results were presented in the form of a diagram. Meanwhile, qualitative data derived from the responses of students and lecturers regarding the challenges faced were analyzed using a qualitative approach. Their responses to open questionnaires, as well as interviews, were recorded and reduced to obtain themes related to the challenges faced. The process of data analysis was data reduction, identifying themes, mapping interrelationships between themes, and concluding findings. This approach was adapted from Bogdan & Biklen (2007).
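For illustration only, a minimal sketch of the quantitative descriptive step is shown below: tallying the learning-method responses and rendering them as a bar chart like the one summarized in Figure 2. This is not the authors' analysis code; the file name responses.csv and the column name learning_method are hypothetical placeholders.

```python
# Minimal sketch of the descriptive tally behind a chart like Figure 2.
# "responses.csv" and the "learning_method" column are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.read_csv("responses.csv")
counts = responses["learning_method"].value_counts()
print(counts)  # e.g., social media: 101, LMS: 86, video conference: 77

counts.plot(kind="bar", ylabel="Number of responses")
plt.tight_layout()
plt.savefig("learning_methods.png")
```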
Ethical Considerations
In this study, the only relationship between the researchers and the participants was to obtain data related to the challenges of implementing online learning in the Covid-19 pandemic era. All data collected in this study were sourced only from the responses of the 15 lecturers and 408 students who participated. Furthermore, the identities of all participants were anonymized and written as codes: L1, L2, L3, … (for lecturers) and S1, S2, S3, … (for students). Each lecturer's response was recorded and entered in a table to be analyzed and described. Some of the lecturers' responses that can provide a general description of a phenomenon were selected and presented as examples in this article, so L1 - L15 and S1 - S408 are not mentioned in their entirety in this article.
RESULTS AND DISCUSSION
The data collected using the online questionnaire were analyzed and grouped into two sub-sections to answer the questions of this study. First, to answer the question of how universities implement online learning during the Covid-19 pandemic, we collected data related to the learning methods most frequently used during the pandemic. Second, to answer the question of what challenges are faced by higher education in implementing online learning during the Covid-19 pandemic, we collected data related to the challenges faced by lecturers and students during online learning. The results of the data analysis lead us to the answers to these two questions, as described below.
Learning Method during Covid-19 Pandemic
This subsection aims to answer question 1: how do universities implement online learning during the Covid-19 pandemic? We asked students which media/learning methods they followed most often during the pandemic. This section reveals how the learning process has been implemented in a number of higher education institutions in Indonesia (in this context, the 12 higher education institutions that participated in this study).
The results of the data analysis (see Figure 2) show that most of the learning activities were carried out using the asynchronous method via social media. With a total of 101 responses, this learning method was more widely used than the Learning Management System (LMS), which reached only 86 responses, and learning using video conferencing media (77 responses). This condition is relevant to a number of challenges faced by both lecturers and students in implementing learning using an LMS. Learning through social media is carried out by creating groups or channels on social media. These groups and channels are used by lecturers to send learning resources to students, such as textbooks, text messages, audio messages, and videos. The rules for implementing learning and submitting assignments, including exam questions, are conveyed by the lecturer through social media. Furthermore, learning in asynchronous mode is also carried out using an LMS and e-mail.
Compared to social media, unfortunately, the LMS is still not widely used. There are two reasons for this. First, there are still many higher education institutions that have not developed an LMS to support the implementation of learning. In addition, online learning was still unfamiliar among lecturers and students before the Covid-19 pandemic. As a result, the suspension of face-to-face classes caused confusion for lecturers and students in carrying out learning. The following are examples of statements made by lecturers and students.
"We've never done online learning before ... if going to class is prohibited, how are we supposed to teach?" [L-2] "This is the first time we have participated in online learning activities ... we've never studied using this method before" [S-17] Second, there are still many lecturers who have not been able to use LMS properly. As a result, the LMS that has been provided by the institution is not used in carrying out learning. This means that some LMS that have been developed by higher education institutions have not been fully utilized, instead of using them, lecturers still choose to implement conventionally (face-to-face learning in class). This condition is related to the challenges faced by these lecturers in utilizing the LMS itself (discussed in the next section).
Meanwhile, synchronous learning, using platforms such as Zoom and Google Meet, was carried out by only 77 out of 408 students. These two platforms provide facilities for conducting online meetings and were a new category of tool for most lecturers and students. This condition is also caused by a number of challenges in its implementation, both technical and non-technical.
The challenges of higher education in implementing online learning
In this second section, the challenges faced by lecturers and students in the implementation of online learning are described. These challenges correlate with the choice of online learning implementation methods, as explained in the previous subsection. Some of the problems revealed are not new; they have existed for a long time but have only recently come to the surface.
Figure 3. Challenges during online learning
From the lecturer and student response data, several themes related to challenges during online learning were obtained. After data reduction, we found the 10 most common challenges they faced (see Figure 3). From these challenges, three main themes of online learning challenges were obtained.
First, limited facility resources, such as electronic devices (laptops/smartphones/others), learning resources, electricity, and internet connections. Second, lack of knowledge/skills on how to use online learning media, find and/or provide learning resources, manage online learning, provide online measuring tools, and carry out online assessments. Third, challenges related to time management.
Limited facility resources
Online learning requires the support of facilities to be effective. Unfortunately, not all regions in Indonesia have facilities for online learning. There are still many areas in our country which, in fact, are isolated from the outside world, without access to electricity. Many students have to climb to the top of a hill or mountain, stand at the roadside, or climb a tree to get an internet connection. Internet connection problems can indeed be solved this way, but that is no guarantee that students can attend the learning process well. Then, what about those who do not even have access to electricity?
"In my village, there is no electricity... Then, how do we find internet connection? … during this pandemic, we find it difficult to learn" [S-33]
"I have lost contact with some of my students since face-to-face learning in class was discontinued" [L-1]
In addition, the implementation of online learning is still dominated by the use of social media rather than an LMS. This is due to the unavailability of online learning platforms at these higher education institutions.
"We don't have our own server support to develop LMS in our institution…The only way is to use social media, because it is well known and used by students almost every day" [L-3]
Efforts to get an internet connection actually only solve problems related to student participation in the learning process. But what about the quality of the learning they receive? In fact, most students say they have difficulty understanding the concepts that are the topic of learning.
"It's so hard to study online… lots of subject matter is hard to understand" [S-6] "Our lecturer's voice is sometimes not heard by us… a lot of information that we can't hear well" [S-21] Another problem faced by students with low family economic income. Usually, under normal conditions they can access the internet easily and for free through the facilities provided by the university. Now, when access to universities is restricted, students have to spend money to pay for the internet connection fees needed during the learning process. Those with low economic income are unable to purchase credit to be able to access the internet. They don't even have the facilities to participate in online learning (e.g. smartphones, laptops, and other supporting devices). As a result, some of them are not able to follow the lesson at all. Not only students, higher education institutions also face problems that hinder online learning, one of which is the absence of an integrated Learning Management System (LMS). Lecturers are forced to seek independent media to carry out online learning. As discussed above, most of the lectures are carried out using only social media due to the absence of an LMS. As a result, the quality control of the implementation of online learning is not optimal.
"We don't have our own server support to develop LMS in our institution" [L-3] The most mentioned cause related to the absence of LMS from higher education institutions is server capacity that is not supported/sufficient. The need for this server is very urgent to be owned by a large institution such as a university because they manage thousands, even tens of thousands of student data. Likewise, to support the needs of online learning during this pandemic.
Lack of knowledge/skills on how to use online learning media
The second challenge is the lack of knowledge (of lecturers and students) about how to use and operate various online learning media. Those who are accustomed to conventional (face-to-face) learning in the classroom must strive to use and operate various online learning media. Their resistance to online learning innovations has to contend with pandemic conditions that force everyone to be able to use online learning media.
"So far, I have been lazy to update my knowledge regarding web-based learning media innovations or the like… now I have to learn to use them" [L-10] "I have never tried to teach using video" [L-7] Another challenge is the availability of online-based learning resources. Usually, textbooks are the main choice, compared to online-based reference sources, current research results, or other references available online. Students and lecturers still face difficulties in finding these online learning resources. The inability to use online learning media has resulted in learning management that cannot run optimally. The assignment method, in the end, became the main choice by most lecturers. Social media is a medium for delivering assignments, sometimes references, while students must study independently from home. On the other hand, low English proficiency makes it difficult for students to study electronic text books, which are mostly in English. Meanwhile, the Indonesian language text books which they usually read in the library are not yet available electronically.
In addition, the lack of knowledge of some lecturers regarding online-based assessment techniques exacerbates the learning crisis. 7 out of 15 lecturers stated that they still did not have the knowledge and experience to carry out online assessments. The most common obstacle was the difficulty of monitoring students taking tests. Many students still plagiarize when doing assignments, so it is difficult to get valid information about their performance. Beyond knowledge assessment, attitude and character assessment is also very difficult to carry out during online learning.
"It's difficult to assess student performance… I find many students 'nyontek' (cheating) on their friends' work when doing assignments" [L-5] "I find it difficult to assess student attitudes/characters through online learning" [L-9] Challenges related to time management.
The use of the assignment method by almost all lecturers has an impact on students. Many of them complain that they find it difficult to manage their time because of too many assignments in one period. In addition, the implementation of learning does not always follow the predetermined schedule. They often receive assignments from lecturers outside the schedule.
| 147 "Almost all lecturers give a lot of assignments (not as usual) … I am confused, which one should I finish first." [S-10] Online learning time management is also faced with poor internet connection, and the absence of supporting tools. These challenges have made online learning ineffective.
"Some lecture sessions had to be rescheduled because at the time the internet connection was not good" [L-7] Switching face-to-face learning in class to online learning is not easy. At least, there are four main aspects that must be prepared to effectively organize full online learning, namely, learning resources, software support, infrastructure, and regulations (Chung et al., 2020;Joosten & Cusatis, 2020;Zhang et al., 2020;Zulkifli, Hamzah, & Bashah, 2020). After these four aspects are met, skills/knowledge/readiness are still needed to manage online learning (Basilaia, Dgebuadze, & Kantaria, 2020;Chung et al., 2020;Joosten & Cusatis, 2020;Lynch, 2020;Zulkifli et al., 2020).
Learning resources that were previously presented in print, whether accessed through the library or in the classroom, are no longer accessible. Students who do not have their own printed learning resources can no longer access the library, because library services have been restricted during the pandemic. On the other hand, most of the available electronic books (e-books) are in English. Students' lack of English proficiency has become another new problem that they must face.
The characteristics of the courses have become crucial in this online learning period. Learning materials that are theoretical in nature and aim to develop mastery of concepts and understanding can still be delivered online. However, learning material aimed at developing skills becomes very difficult to deliver. The practice that students are supposed to do (e.g., in the laboratory, studio, or classroom) cannot be replaced online. In addition, the delivery of lecture materials that require direct demonstrations is still not running well. To overcome difficulties in delivering material that requires demonstration, lecturers must provide learning resources in the form of videos, or conduct synchronous online lectures using various applications, such as Zoom, Google Meet, Facebook Messenger, BigBlueButton, or other services. Unfortunately, synchronous online learning cannot run well due to the limitations of supporting infrastructure, from the network and the availability of internet connections to areas still constrained by the unavailability of electricity.
Although there are many challenges, it turns out that online learning also has a positive side. One of them is that students are more active in asking and answering questions during the learning process. They tend not to hesitate to express their opinions or questions during online learning, compared to direct (face-to-face in class) learning.
"My students are more active in asking/giving arguments in online learning than face-toface learning in class" [L-5] This pandemic has served as an effective catalyst to expand educational opportunities particularly with respect to knowledge sharing through various technology. However, efforts to develop student skills are still not optimal through online learning so far (Stambough et al., 2020).
The COVID-19 crisis sheds light on the need for a new education model. Given the conditions described above, it seems that our higher education is still not ready for online learning in the post-pandemic era. Many improvements are still needed, and these must of course be supported by government policies. After dealing with this pandemic for more than a year, lecturers and students will be more familiar with various learning technologies in the future.
CONCLUSION
Online learning carried out in higher education during the Covid-19 pandemic still faces many challenges, which hampers student learning activities. Three things are sources of problems in implementing online learning.
First, limited resources, such as electronic devices (laptops/smartphones/others), learning resources, electricity, and internet connections. Second, lack of experience, knowledge, and skills on how to use online learning media, difficulties in finding and/or providing learning resources, difficulties in managing online learning, and difficulties in providing measuring tools for carrying out online assessments. Third, the difficulty of time management during the online learning period.
RECOMMENDATION
The results of this study also show that various problems are faced by both lecturers and students, regardless of the accreditation grade of the institution. The availability of supporting resources, which is indirectly related to the geographical location of the area of residence and the economic conditions of students, as well as experience in managing and participating in online learning, are some of the most common factors found in this study. However, this study does not quantitatively measure the influence of these factors on the implementation of online learning. Further studies are still needed to determine the contribution of these factors to the implementation of online learning in higher education.
The COVID-19 outbreak has made everyone aware of the need to prepare supporting infrastructure for online learning, in order to face various challenges that may come in the future. All parties are also forced to try to learn and use various online learning media that they had never tried before or whose development they had been indifferent to. Finally, we all hope that this pandemic will end soon so that the learning process can return to normal. | 2021-09-09T20:45:45.154Z | 2021-07-30T00:00:00.000 | {
"year": 2021,
"sha1": "471e0b469ce186272db83f18e2b6bb21257599dc",
"oa_license": "CCBYSA",
"oa_url": "https://journal-center.litpam.com/index.php/e-Saintika/article/download/479/277",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bc89db6653b9712b724636659a34178b49cf28d9",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
9123017 | pes2o/s2orc | v3-fos-license | Nitroacetylene as dipolarophile in [2 + 3] cycloaddition reactions with allenyl-type three-atom components: DFT computational study
Abstract [2 + 3] Cycloaddition reactions of nitroacetylene with allenyl-type three-atom components take place according to a polar, but one-step, mechanism. Alternatively to cycloadducts, zwitterionic structures with "extended" conformation may formally be created during the reaction between the aforementioned reagents. However, this route is supported by neither kinetic nor thermodynamic factors. Graphical Abstract
Introduction
Nitroacetylenes belong, because of the presence of the strongly electron-withdrawing NO2 group, to a class of strongly electrophilic acetylenes. They are relatively unstable, and their physicochemistry is very poorly known [1].
Reactions of these TACs with nitroacetylene are a potentially highly effective and selective method for the synthesis of nitro-substituted, five-membered, unsaturated heterocycles that are valuable from the point of view of organic preparatory work [10-12]. For this reason, identification of the factors determining their course is important. With this in mind, the following work was performed: (1) an analysis of the nature of the reagent interactions based on reactivity indices theory, and (2) simulations of the theoretically possible reaction paths in the presence of a weakly polar medium (toluene) and a strongly polar medium (nitromethane). It must be stressed here that the [2 + 3] cycloaddition reactions of the aforementioned reagents are also very interesting from a mechanistic point of view. These reactions may take place according to a one-step mechanism, as well as according to a two-step mechanism with a zwitterionic intermediate [13-15]. The two-step mechanism is favoured in this case by (1) the strongly electrophilic character of the dipolarophile and the nucleophilic nature of the TACs, and (2) the unequal screening of the reaction centres of both reagents.
Results and discussion
Nitroacetylene (1) is characterized by high global electrophilicity [16,17], exceeding 3 eV. On the scale proposed by Domingo [17], it should be classified in the group of strong electrophiles. On the other hand, TACs 2a-2c have a significantly weaker electrophilic nature (ω < 1.6 eV); on the aforementioned scale they are classified as moderate electrophiles. Thus, they should behave as nucleophiles in reactions with nitroacetylene. The nucleophilic character of these compounds is indicated by the values of the N indices. As can be concluded from the data summarized in Table 1, phenyldiazomethane (2c) is the strongest nucleophile in the analysed series, while benzonitrile N-oxide (2a) is the weakest. It should be noted that for all reagent pairs, the difference in global electrophilicities exceeds 1.5 eV. The title reactions are thus polar processes [18].
Next, it was decided to determine which of the theoretically possible directions of substrate transformation would be favoured by the electrophile-nucleophile interactions. For every one of the cycloaddition reactions studied, two regioisomeric reaction paths can be considered (Scheme 2A, B).
Analysis of the local electronic properties shows that the most electrophilically activated centre of the nitroacetylene molecule is the Cβ carbon atom. The local electrophilicity at the Cα carbon atom is more than three times weaker. On the other hand, the terminal X atom of the TACs is the most nucleophilically activated centre in these species. If it is assumed that the reaction route is determined by an attack of the more strongly nucleophilic reaction centre of the TAC on the more strongly electrophilic reaction centre of nitroacetylene, then the favoured direction of reagent transformation should always be determined by path A (see Scheme 2).
Reaction profiles
Taking into account the nature of the electrophile-nucleophile interactions, the reactions of nitroacetylene with TACs 2a-2c must be recognized as polar processes. This, however, does not determine their mechanism. There are two possible variants of the conversion of the reagents into adducts: (a) a one-step polar mechanism and (b) a two-step zwitterionic mechanism. In the first case, one should expect a single transition state (TS) in the energy profile; in the second, two TSs connected by a valley corresponding to the zwitterionic intermediate (5 or 6, respectively; see Scheme 2).
As suggested by the quantum-chemical calculations, in weakly polar toluene (ε = 2.38) the reaction of nitroacetylene with benzonitrile N-oxide takes place, regardless of the regioisomeric path, as a one-step process. All attempts at finding paths leading to zwitterionic structures were unsuccessful.
Formation of a pre-reaction complex (local minimum, LM) always comprises the first reaction step. This is related to a certain drop in the enthalpy of the reacting system. It should be noted that the LMs are exclusively enthalpic in character, because the Gibbs free energy gap between a given intermediate and the reactants is always greater than zero (ΔG > 0) due to the entropic factor (TΔS). Therefore, pre-reaction complexes may not exist as stable products.
Only then does the system start heading towards the activation barrier. Analysing the barrier heights on paths A and B, it must be said that both regioisomeric substrate transformation routes are possible from the kinetic point of view. However, the favoured path is the one leading finally to 4-nitroisoxazole (3a). This conclusion is in agreement with forecasts based on the reactivity indices analysis. A similar picture of the 1 + 2a reaction is provided by calculations at the more advanced B3LYP/6-31+G(d), B3LYP/6-311G(d), and B3LYP/6-311+G(d) levels of theory (Fig. 1).
Table 1. Essential electronic properties of nitroacetylene (1) and TACs 2a-2c (global and local properties).
The DFT calculations also indicate a one-step mechanism for the cycloaddition of nitroacetylene to TACs 2b and 2c. The performed simulations suggest, however, a different regioselectivity than would follow from the reactivity index analysis. In particular, in the case of the cycloaddition with phenyl azide (2b), both regioisomeric reaction channels should be permitted, but path B will be favoured. In the case of the cycloaddition with phenyldiazomethane, however, path A must be treated as kinetically forbidden; the DFT calculations indicate that the only permitted cycloaddition channel is the path leading to the 4c adduct.
Analysing the conversion routes for the reagent pair 1 + 2b, paths leading to zwitterionic structures were found (Fig. 2). However, these are not the expected zwitterions 5 and 6 with "cyclic" conformation (see Scheme 3) but zwitterions 7 and 8 with "extended" conformation (see Scheme 4). Their conversion to adducts can only be executed via a step of dissociation into the individual reagents followed by a stage of cycloaddition according to the one-step mechanism. These paths should be treated as formally forbidden from the kinetic point of view. Thermodynamic factors also do not favour the formation of zwitterions 7 and 8.
Introduction of the more polar medium nitromethane (ε = 38.20) into the reaction mixture does not result in qualitative changes in the energy profiles of the reactions. However, their quantitative description does change. In particular, the activation barriers for the relatively less polar cycloadditions 1 + 2a → 3a/4a and 1 + 2b → 3b/4b increase. In the case of the most polar of the considered cycloadditions (1 + 2c → 3c/4c), the activation barriers become slightly lower in the more polar medium. For all reactions leading to zwitterions 7 and 8, an increase in the polarity of the reaction medium lowers the activation barrier; however, this lowering is not significant enough to treat these processes as kinetically permitted.
Key structures
Within the located pre-reaction complexes (LMs), the distances between the reaction centres still remain far outside the range typical for σ bonds in transition states. Some LMs are orientation complexes. None of them, however, regardless of the reaction medium polarity, has the nature of a charge-transfer complex (see Table 2). New σ bonds are formed after the system reaches the TS. Analysing the geometrical parameters of the TSs leading to cycloadducts 3 and 4, it can be noted that the stage of advancement of the new bonds depends on the nature of the reagents and, to a smaller extent, on the medium polarity.
Thus, in the case of the reaction with benzonitrile N-oxide in toluene, the bond at the β carbon atom of nitroacetylene is always more advanced at the TS (Fig. 3). Formation of the bonds in the area of the energetically less favoured TS B takes place in a more asynchronous manner (Δl > 0.2). Both TSs have a polar nature, which is reflected in the values of the GEDT index (see Table 2).
In the case of the analogous reaction with phenyl azide in the same reaction medium, both analysed TSs show a similar level of asynchronicity. The energetically more favourable TS B is the more polar of the two.
On the other hand, the nature of the TSs of the reaction with phenyldiazomethane is completely different. In particular, the energetically less favourable TS A is characterized by an almost ideal synchronicity of new σ bond formation and a very weakly polar character. In the area of TS B, by contrast, the new σ bonds are formed in a strongly asynchronous manner, accompanied by a strong charge-transfer effect towards the dipolarophile substructure. The asynchronicity of this TS is the greatest among all the considered critical structures.
With the introduction of the more polar medium nitromethane, the key parameters of the analysed structures do not change significantly. In the area of most TSs, the asynchronicity of the formation of the new σ bonds increases. However, it does not increase enough to enforce a change in the reaction mechanism.
Within the TSs on paths C and D, only one bond is ever formed. It is always the bond to the more nucleophilic X atom of the 1,3-dipole (Fig. 3). The progress of this bond is always greater than 50% (see the l values in Table 2). It should be noted that the TSs on the paths leading to zwitterions 7 and 8 are always more polar than the competitive structures on the paths leading to cycloadducts 3 and 4. For these structures, too, the effect of increased reaction medium polarity is reflected in the data collected in Table 2.
Conclusion
DFT calculations at various levels of theory show that the [2 + 3] cycloaddition reactions of nitroacetylene with allenyl-type TACs take place according to a polar mechanism. This is not, however, the expected two-step zwitterionic mechanism, but a one-step mechanism. Zwitterionic structures with the "extended" conformation may theoretically form along competitive paths. However, this route is supported by neither kinetic nor thermodynamic factors. Despite the clearly polar nature of the reactions discussed, the influence of the polarity of the reaction medium on their kinetics and on the nature of the critical structures is relatively small.
Computational details
All calculations reported in this paper were performed on the "Zeus" supercomputer in the "Cyfronet" computational centre in Cracow. Global and local electronic properties of the reactants were estimated according to the equations described earlier. In particular, the electronic chemical potential ($\mu$) and chemical hardness ($\eta$) were evaluated in terms of the one-electron energies of the frontier molecular orbitals ($E_{\mathrm{HOMO}}$ and $E_{\mathrm{LUMO}}$) using the equations

$\mu \approx (E_{\mathrm{HOMO}} + E_{\mathrm{LUMO}})/2$ and $\eta \approx E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}$.

Next, the values of $\mu$ and $\eta$ were used for the calculation of the global electrophilicity ($\omega$) [16,17] according to the formula

$\omega = \mu^{2}/(2\eta)$,

while the global nucleophilicity ($N$) [18] of the TACs can be expressed, relative to tetracyanoethylene (TCE) as the reference, as

$N = E_{\mathrm{HOMO}}(\mathrm{nucleophile}) - E_{\mathrm{HOMO}}(\mathrm{TCE})$.

The local electrophilicity ($\omega_k$) [19] condensed to atom $k$ was calculated by projecting the index $\omega$ onto any reaction centre $k$ in the molecule using the electrophilic Parr function $P_k^{+}$ [20]:

$\omega_k = \omega \cdot P_k^{+}$.

The local nucleophilicity ($N_k$) [21] condensed to atom $k$ was calculated using the global nucleophilicity $N$ and the nucleophilic Parr function $P_k^{-}$ [20] according to the formula

$N_k = N \cdot P_k^{-}$.

Reactivity indices calculated in this way are collected in Table 1.
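As an illustration, the snippet below computes these global and local indices from frontier orbital energies. It is a minimal sketch, not the authors' code: the orbital energies and Parr functions are placeholder values, and the TCE reference energy is the one commonly quoted for this scale at B3LYP/6-31G(d) (about -9.12 eV); treat all numbers as assumptions.

```python
# Minimal sketch of the conceptual-DFT reactivity indices used above.
# Orbital energies are in eV; all numeric inputs here are placeholders.

E_HOMO_TCE = -9.12  # commonly quoted B3LYP/6-31G(d) reference (tetracyanoethylene)

def global_indices(e_homo: float, e_lumo: float) -> dict:
    mu = (e_homo + e_lumo) / 2.0        # electronic chemical potential
    eta = e_lumo - e_homo               # chemical hardness
    omega = mu**2 / (2.0 * eta)         # global electrophilicity
    n = e_homo - E_HOMO_TCE             # global nucleophilicity
    return {"mu": mu, "eta": eta, "omega": omega, "N": n}

def local_electrophilicity(omega: float, parr_plus: dict) -> dict:
    # omega_k = omega * P_k(+), condensed to each reaction centre k
    return {k: omega * p for k, p in parr_plus.items()}

def local_nucleophilicity(n: float, parr_minus: dict) -> dict:
    # N_k = N * P_k(-)
    return {k: n * p for k, p in parr_minus.items()}

# Placeholder orbital energies for a hypothetical strong electrophile:
g = global_indices(e_homo=-8.5, e_lumo=-3.1)
print(g)  # omega ~ 3.1 eV, i.e., a "strong electrophile" on this scale
print(local_electrophilicity(g["omega"], {"C_alpha": 0.15, "C_beta": 0.55}))
```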
For the simulation of the reaction paths, the hybrid functional B3LYP with the 6-31G(d) basis set, as included in the GAUSSIAN 09 package [22], was used. It was found previously that B3LYP/6-31G(d) calculations illustrate well the structures of TSs in polar [2 + 3] cycloadditions involving conjugated nitroalkenes [14,23,24]. The critical points on the reaction paths were localized in a manner analogous to the previously analysed [2 + 3] cycloadditions of (Z)-C,N-diphenylnitrone with gem-dinitroethene [14]. In particular, the Berny algorithm was applied for the structure optimization of the reactants and the reaction products. First-order saddle points were localized using the QST2 procedure. The TSs were verified by diagonalization of the Hessian matrix and by analysis of the intrinsic reaction coordinates (IRC). In addition, similar simulations were performed at the more advanced B3LYP/6-31+G(d), B3LYP/6-311G(d), and B3LYP/6-311+G(d) theoretical levels.
All calculations were carried out with the simulated presence of toluene or nitromethane as the reaction medium. For this purpose the PCM model [25] was used. For the optimized structures, the thermochemical data for the temperature T = 298 K and pressure p = 1 atm were computed using vibrational analysis data. The global electron density transfer (GEDT) [26] was calculated according to the formula

$\mathrm{GEDT} = \sum_{A} q_{A}$,

where $q_A$ is the net Mulliken charge on atom $A$ and the sum is taken over all the atoms of the dipolarophile.
The indices of σ-bond development ($l$) were calculated according to the formula [27]

$l_{\mathrm{A-B}} = 1 - \dfrac{r_{\mathrm{A-B}}^{\mathrm{TS}} - r_{\mathrm{A-B}}^{\mathrm{P}}}{r_{\mathrm{A-B}}^{\mathrm{P}}}$,

where $r_{\mathrm{A-B}}^{\mathrm{TS}}$ is the distance between the reaction centres A and B at the TS and $r_{\mathrm{A-B}}^{\mathrm{P}}$ is the same distance in the corresponding product.
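For concreteness, a short sketch of both bookkeeping formulas follows; the charges and distances in it are placeholder numbers, not values from the paper.

```python
# Minimal sketch of the GEDT and sigma-bond development (l) bookkeeping.
# All numeric inputs are placeholders, not values from the paper.

def gedt(mulliken_charges_dipolarophile):
    # GEDT = sum of net Mulliken charges over the dipolarophile atoms;
    # a clearly nonzero value signals charge transfer at the TS.
    return sum(mulliken_charges_dipolarophile)

def bond_development(r_ts: float, r_product: float) -> float:
    # l = 1 - (r_TS - r_P) / r_P, so l -> 1 as the forming bond
    # approaches its final length in the product.
    return 1.0 - (r_ts - r_product) / r_product

print(gedt([-0.12, -0.08, 0.03, -0.05]))            # e.g., -0.22 e
print(bond_development(r_ts=2.10, r_product=1.45))  # ~0.55, i.e., >50% developed
```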
The kinetic parameters as well as the essential properties of the critical structures are displayed in Tables 2, 3, and 4. | 2016-05-04T20:20:58.661Z | 2015-01-27T00:00:00.000 | {
"year": 2015,
"sha1": "82f180ea0dcbcea1836537b3562b0dc39e310e25",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00706-014-1389-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "82f180ea0dcbcea1836537b3562b0dc39e310e25",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
51712851 | pes2o/s2orc | v3-fos-license | Effectiveness of Prevailing Flush Guidelines to Prevent Exposure to Lead in Tap Water
Flushing tap water is promoted as a low-cost approach to reducing water lead exposures. This study evaluated lead reduction when prevailing flush guidelines (30 s–2 min) are implemented in a city compliant with lead-associated water regulations (New Orleans, LA, USA). Water samples (n = 1497) collected from a convenience sample of 376 residential sites (2015–2017) were analyzed for lead. Samples were collected at (1) first draw (n = 375) and after incremental flushes of (2) 30–45 s (n = 375), (3) 2.5–3 min (n = 373), and (4) 5.5–6 min (n = 218). There was a small but significant increase in water lead after the 30 s flush (vs. first draw lead). There was no significant lead reduction until the 6 min flush (p < 0.05); but of these samples, 52% still had detectable lead (≥1 ppb). Older homes (pre-1950) and low-occupancy sites had significantly higher water lead (p < 0.05). Each sample type had health-based standard exceedances in over 50% of sites sampled (max: 58 ppb). While flushing may be an effective short-term approach to remediate high lead, prevailing flush recommendations are an inconsistently effective exposure prevention measure that may inadvertently increase exposures. Public health messages should be modified to ensure appropriate application of flushing, while acknowledging its shortcomings and practical limitations.
Information about New Orleans Sewerage and Water Board Water Treatment System:
The New Orleans Sewerage and Water Board (S&WB) operates two water treatment plants-one on the East Bank and the other on the West Bank of New Orleans (NOLA). This study focused on homes served only by the S&WB's East Bank or Carrolton plant. The Carrolton plant provides an average of 135 million gallons of water per day to an estimated population of 286,603 (S&WB 2016). The plant uses a conventional treatment system to purify water from the Mississippi River. Ferric sulfate and polyelectrolyte is used for coagulation followed by flocculation and sedimentation. Chlorine, in the form of sodium hypochlorite, is used as the primary disinfectant and chloramines are used as the secondary disinfectant. Lime is used for corrosion control pH adjustment and sodium hexametaphosphate is added as a sequestrant. The final step are fluoridations followed by filtration through rapid gravity filters (sand and anthracite) (Black and Veatch 2016). The city's water quality parameters are presented in Table S1, Supplementary Materials. Source: NOLA S&WB. Samples collected 1-1-2015 to 12-31-2015 from 11 points of entry to distribution system.
The New Orleans Sewerage and Water Board's Consumer Confidence Report Prior to Study Commencement (2015):
After EPA regulations on flush time recommendations were relaxed, the city's water utility, the Sewerage and Water Board, continued to promote the original flush recommendations from 2009 to 2015 [41,46]. At the commencement of this study, the utility encouraged residents to flush their taps "for 30 seconds to 2 minutes before using water for drinking or cooking" daily under normal use conditions [41] (Figure S1, Supplementary Materials).
New Orleans, LA Study Participant Premise Plumbing and Service Line Lengths:
To determine the most probable location in the water distribution system or premise plumbing where each sample type may have been sitting during the stagnation period, estimates of the volume of water and the flush times required to purge the lines were derived based on estimated flow rates at low flow (3.0 liters per minute) and high flow (8.3 liters per minute), typical premise and service line pipe diameters, and survey respondents' measurements of service lines and premise plumbing (Figure S2, Supplementary Materials).
A 250-mL sample is estimated to represent water in approximately 2.4 meters (8 feet) of piping.
Figure S2. Percent of survey respondents by reported length of premise plumbing + service line measurements (meters) (n=80)
The lengths of water service lines and premise plumbing pipes were estimated based on resident measurements reported on returned surveys (Figure S2, Supplementary Materials). Residents were asked to measure the distance from the middle of the street to the water line as it enters the home (service line length) and the distance from where the line enters the home to the kitchen tap, as measured along the wall (premise plumbing). Researchers also derived Google Maps measurements of potential service line lengths for all sites, based on measures taken from the center of the street to the front of the home in the satellite view of Google Maps using the distance and area tool.
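The purge-time arithmetic described above reduces to pipe volume divided by flow rate. The sketch below works one hypothetical example; the pipe length and diameter are illustrative assumptions, not measurements from the study.

```python
# Minimal sketch of the purge-time estimate: time = pipe volume / flow rate.
# Pipe length and diameter below are illustrative assumptions only.
import math

def purge_time_seconds(length_m: float, inner_diameter_mm: float,
                       flow_l_per_min: float) -> float:
    radius_m = (inner_diameter_mm / 1000.0) / 2.0
    volume_l = math.pi * radius_m**2 * length_m * 1000.0  # m^3 -> liters
    return volume_l / flow_l_per_min * 60.0

# Hypothetical 15 m of 12.7 mm (1/2 inch) pipe at the study's low and high flows:
for flow in (3.0, 8.3):
    t = purge_time_seconds(length_m=15.0, inner_diameter_mm=12.7, flow_l_per_min=flow)
    print(f"{flow} L/min -> {t:.0f} s to purge")  # ~38 s and ~14 s
```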
Difference in Water Lead Levels in Flushed and First Draw Hot Samples Compared to First Draw Cold Water Samples:
The results of the WLL differences from FD to flushed samples (Table 2, Figure S3) show a small but significant increase (median = 0 ppb and mean = 0.6 ppb, p = 0.04) in WLLs from the FD to the F30S sample. No significant change in WLLs was observed from FD to F3M samples (p = 0.219). Small but significant declines in WLLs were observed in F6M (median = 0 ppb and mean = -0.2 ppb) and FDH (median = -0.1 ppb and mean = -0.4 ppb) samples, compared to FD samples (Table 2; Figure S3, Supplementary Materials).
Figure S3. Distributions of the difference in water lead levels (WLLs) in cold water samples collected at first draw (FD) compared to WLLs in samples collected after various flush times. Key: FDH: first draw hot water; F30S: 30 second flush; F3M: 2.5-3 minute flush; F6M: 5.5-6 minute flush. Diamonds inside box: mean value; line inside box: median value; bottom and top edges of box: 25th and 75th percentiles; lower fence: 1.5 IQR (interquartile range) below 25th percentile; upper fence: 1.5 IQR above 75th percentile. Notes: Measures outside the range of -3 to 3 ppb are not shown. Refer to Table 2 for the minimum and maximum values.
Water Lead Levels And Flushing Efficacy Associated With Atypical Use Conditions
While not a planned part of the study, conditions arose which allowed us to evaluate the impact of a one-time 15-minute utility flush on WLLs after lead service line replacements at five residential sites. Lead service line replacements and construction are known to increase lead in water, due to construction disturbances and galvanic corrosion, for periods of weeks to years [19,30,38]. Five of our study participants contacted the city's water utility after our testing to request removal of their lead service lines. All but one of these residents received a partial lead service line replacement (i.e., only the utility or customer side was replaced), while one had the full lead service line replaced (from the water main in the street to the home). All of the sites were sampled prior to, and after, the line replacements and the utility's or contractor's 15-minute post-replacement flush. Only one of these homes was unoccupied, due to ongoing home renovation work. Table S2 shows sampling procedures and WLL results for each site; unfortunately, the collection procedure for the samples varied from home to home. No definitive conclusions can be drawn from the post-line replacement samples due to the small sample size and the variance in the sampling procedures. However, persistent elevation in WLLs (exceeding the EPA AL) can be seen within the week after the line replacement in the occupied homes, in both the full line replacement site (6 days later) and partial line replacement Site 3 (1-2 days later). Post-line replacement WLLs reached as high as 226 ppb one day after a partial line replacement (after a post-stagnation 30-second flush). These results suggest that rigorous extended flushing protocols may need to be repeated on a daily basis for an as yet indeterminate time period following line replacements. It is widely acknowledged that sites with partial replacements may have higher WLLs, and may require more rigorous and regular flushing than normal-use residential sites under typical conditions [38,70]. Post-replacement flush guidelines are not always consistent, and some guidelines (i.e., a one-time 15-minute high-velocity flush) may not be effective for maintaining low WLLs over a long period of time. In some circumstances, utilities are not required to promote flushing, such as after voluntary lead service line replacements in Pb-compliant cities.
When lead service line replacements are conducted in cities compliant with the Lead and Copper Rule, educating consumers about flushing is only required once a year, in the utility's annual Consumer Confidence Report for utility customers [3].
New Orleans has been undergoing extensive road work, including thousands of partial lead service line replacements [48]. For homes undergoing partial lead service line replacements, New Orleans officials recommend on their Roadwork website that residents "Run cold water at a high flow at all of your faucets for at least 5 minutes each, one at a time, starting with the faucet closest to your water meter"; clean faucet aerators; and continue to flush for at least a month before using the water [71]. At the start of this study, this information was not consistently communicated nor readily available to New Orleans residents undergoing roadwork [72]. However, the persistent elevation in WLLs we observed days after the line replacements indicates that care should be taken to flush systems rigorously and regularly after line replacements (Table S2, Supplementary Materials).
The EPA's Science Advisory Board (SAB) stated that "the lack of mandatory water lead testing and homeowner education associated with voluntarily partial lead service line replacements suggests that in practice, voluntary replacement might be associated with greater exposure of the public to lead" [38]. The SAB recommends that utilities test the water and tell consumers to flush the lines "over a period of months" after a partial lead service line replacement; but concluded that while "line flushing appears to provide some benefit, the … time to realize the benefit (up to several weeks of flushing in the reviewed studies) likely precludes any practical implementation of this technique" [38]. Despite the general knowledge about the ineffectiveness and potential danger that partial lead service line replacements pose, they are still required by the Lead and Copper Rule when certain compliance conditions have not been met [3].
More research is needed to evaluate how frequently flushing would need to be conducted to maintain low WLLs after a partial lead service line replacement. One study simulated partial lead service line replacements in New Orleans and observed that intermittent flushing over a two-week period was not long enough to stabilize WLLs [19]. In keeping with this, previous studies suggest that several weeks, months, or perhaps years may be required to remediate increased WLL exposure after partial lead service line replacements [38,74]. These facts do not discount the benefits of more rigorous flushing protocols as an effective Pb remediation method for some systems when high WLLs are present. Improved remediation has been observed with higher-velocity flushing (fully open tap); continuous flushing (as opposed to intermittent flushing); increased flushing frequency and duration; and flushing at multiple taps [9, 11-12, 19, 24].
However, residents should be alerted that when conditions are severe enough to warrant more rigorous flushing protocols, as observed here after partial lead service line replacements, exposures to high WLLs are always a possibility. Flushing can mobilize particulate-bound Pb throughout the plumbing system, which can then serve as a long-term source of acute Pb exposure. Even after flushing water for 10-25 minutes, some Flint homes still had high WLLs [14]; at least one Flint tap still contained WLLs exceeding 15 ppb (217-13,200 ppb) after a 26-minute flush [75]. This was likely due to the presence of highly unstable lead scales and the continuous sloughing of particulate lead during the time in which corrosion control was not used by Flint officials. Factors associated with maintaining low WLLs under such conditions, such as flushing frequency, must be determined on a case-by-case basis.
Post-Study S&WB Risk Messages:
Homogenized exposure reduction and lead remediation guidelines are always susceptible to error, given the wide variability that can exist between buildings, e.g., in pipe age, lengths, materials, and diameters; scale buildup; and home occupancy and water use. Promotion of these practices needs to be reconsidered, as other more effective, evidence-based, low-cost technologies, such as NSF-certified faucet-mount filtration devices, are now widely available [82]. In acknowledgement of this issue, the US EPA's Lead and Copper Rule (LCR) Working Group recommended to US EPA officials in 2015 that the Consumer Confidence Reporting Rule be revised to exclude the currently required messaging: "When your water has been sitting for several hours, you can minimize the potential for lead exposure by flushing your tap for 30 seconds to 2 minutes before using water for drinking or cooking" [71]. Rather than promoting one-size-fits-all flush guidelines, greater effort should be expended on motivating and enabling proactive evidence-based solutions. Yet it was only after the preliminary release of our results in 2016 that the S&WB revised its risk messaging and increased its flush guidelines to "30 seconds to 5 minutes"; however, elsewhere in the same material, the messaging remained "30 seconds to 2 minutes", conflicting messages which can increase confusion [84] (Figure S4, Supplementary Materials).
Figure S4. The New Orleans, Louisiana Sewerage and Water Board's informational brochure: "Tips for reducing lead exposure from drinking water" (Source: NOLA S&WB's 2016 Consumer Confidence Report)
The following pages present the participant survey. | 2018-08-06T13:23:14.235Z | 2018-05-28T00:00:00.000 | {
"year": 2018,
"sha1": "b0f3c019543bcf66677a1ee587dda4b823fc3671",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/15/7/1537/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "807f19c4a7d84f7e7a8c5c722c3b72d640e4719f",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Medicine"
]
} |
Frank's constant in the hexatic phase
Using video-microscopy data of a two-dimensional colloidal system, the bond-order correlation function G_6 is calculated and used to determine the temperature dependence of both the orientational correlation length ξ_6 in the isotropic liquid phase and Frank's constant F_A in the hexatic phase. F_A takes the value 72/π at the hexatic to isotropic liquid phase transition and diverges at the hexatic to crystal transition, as predicted by the KTHNY theory. This is a quantitative test of the mechanism of breaking the orientational symmetry by disclination unbinding.
The theory of melting in two dimensions (2d) developed by Kosterlitz, Thouless, Halperin, Nelson and Young (KTHNY theory) suggests a two-stage melting from the crystalline phase to the isotropic liquid. The first transition, at temperature T_m, is driven by the dissociation of thermally activated dislocation pairs into isolated dislocations, breaking the translational symmetry [1,2]. The fluid phase directly above T_m still exhibits orientational symmetry and is called the hexatic phase. It may be viewed as an anisotropic fluid with a six-fold director [3,4], characterized by a finite value of Frank's constant F_A, the elastic modulus quantifying the orientational stiffness. At the second transition, at T_i > T_m, the dissociation of some of the dislocations into free disclinations destroys the orientational symmetry. Now the fluid shows ordinary short-range rotational and positional order, as is characteristic of an isotropic liquid.
Following an argument given in [1,4], T_m and T_i can be estimated using the defect interaction Hamiltonian H_d between a pair of disclinations (d = disc) and a pair of dislocations (d = disl), which for both defect pairs and at large distances goes like H_d ∼ c_d log r, with the dimensionless strength parameter c_d depending on the defect type. Defect dissociation is completed at a temperature where the thermally averaged pair distance ⟨r_d²⟩ diverges. Evaluating this expression for H_d one generally finds divergence if c_d = 4. The unbinding condition c_d = 4 translates into lim_{T→T_m⁻} βK(T)a_0² = 16π for dislocation pairs (β = 1/k_B T, a_0 is the lattice spacing) and into lim_{T→T_i⁻} βF_A(T) = 72/π for disclination pairs, where K is the Young's modulus of the crystal. Connecting thus the defect-pair unbinding condition to the two transition temperatures T_i and T_m, two expressions are obtained that summarize the microscopic explanation of the KTHNY theory for two-stage melting.
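As a worked check of the unbinding criterion (a minimal sketch, assuming βH_d(r) = c_d ln(r/a) and a two-dimensional phase-space weight r dr for the pair separation; prefactors are schematic):

```latex
\left\langle r_d^{2}\right\rangle
  = \frac{\int_{a}^{\infty} r^{2}\, e^{-\beta H_d(r)}\, r\,\mathrm{d}r}
         {\int_{a}^{\infty} e^{-\beta H_d(r)}\, r\,\mathrm{d}r}
  = \frac{\int_{a}^{\infty} r^{\,3-c_d}\,\mathrm{d}r}
         {\int_{a}^{\infty} r^{\,1-c_d}\,\mathrm{d}r}
```

The numerator converges only for 3 − c_d < −1, i.e. c_d > 4, so the thermally averaged squared pair distance diverges exactly once c_d reaches the threshold value 4, reproducing the unbinding condition quoted above.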
In this Letter we study the temperature dependence of Frank's constant of a 2d system in the hexatic phase. We first determine the hexatic → isotropic fluid transition temperature T_i and then check whether Frank's constant takes the value 72/π at T_i, thus testing the KTHNY theory and its prediction that disclination unbinding occurs at T_i. In addition, we analyze the divergence behavior of the orientational correlation length at T_i and of Frank's constant at T_m.
Different theoretical approaches invoking grain-boundary-induced melting [5,6] or condensation of geometrical defects [7,8] suggest a single first-order transition. However, some simulations for Lennard-Jones systems indicate that the hexatic phase is metastable [9,10]. The transition in hard-core systems seems to be first order [12], probably due to finite-size effects [11]. Simulations with long-range dipole-dipole interaction clearly show second-order behavior [13]. Experimental evidence for the hexatic phase has been demonstrated for colloidal systems [14,15,16,17,18,19], in block copolymer films [20,21], as well as for magnetic bubble arrays and macroscopic granular or atomic systems [22,23,24,25,26]. Still, the reported order of the transitions is inconsistent. The observation of a phase equilibrium isotropic/hexatic [17,21] and hexatic/crystalline [17] indicates two first-order transitions. In our system we find two continuous transitions.
The experimental setup is essentially the same as in [27]. Spherical and super-paramagnetic colloids (diameter d = 4.5 μm) are confined by gravity to a water/air interface formed by a water drop suspended by surface tension in a top-sealed cylindrical hole of a glass plate. The field of view has a size of 835 × 620 μm², containing typically up to 3·10³ particles (out of 3·10⁵ in the whole sample). A magnetic field H is applied perpendicular to the air/water interface, inducing in each particle a magnetic moment M = χH. This leads to a repulsive dipole-dipole pair interaction with the dimensionless interaction strength given by Γ = β(μ_0/4π)(χH)²(πρ)^{3/2}. Here χ is the susceptibility per colloid, ρ is the 2d particle density, and the average particle distance is a = 1/√ρ. The interaction strength can be externally controlled by means of the magnetic field H; it can be interpreted as an inverse temperature and is the only parameter controlling the phase behavior of the system. For each Γ the coordinates of the colloids are recorded via video microscopy (resolution of the particle position dr = 100 nm) and digital image processing over a period of 1-2 h, with one frame every 250 ms.
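To make the control parameter concrete, here is a minimal sketch of how Γ follows from the experimental quantities (the numerical values of H, χ, and T below are invented for illustration; only the mean particle spacing a = 11.8 μm is taken from the text):

```python
import numpy as np
from scipy.constants import mu_0, k  # vacuum permeability, Boltzmann constant

def interaction_strength(H, chi, rho, T=295.0):
    """Gamma = beta * (mu_0 / 4 pi) * (chi * H)**2 * (pi * rho)**(3/2).

    H   : magnetic field (A/m)
    chi : magnetic susceptibility per colloid (SI, so that M = chi * H is a moment)
    rho : 2d particle density (1/m^2)
    T   : temperature (K)
    """
    beta = 1.0 / (k * T)
    return beta * (mu_0 / (4 * np.pi)) * (chi * H) ** 2 * (np.pi * rho) ** 1.5

a = 11.8e-6          # mean particle distance, a = 1/sqrt(rho)
rho = 1.0 / a ** 2
print(interaction_strength(H=300.0, chi=7.5e-12, rho=rho))  # hypothetical H and chi
```

Since Γ ∝ H², ramping the field up or down sweeps the effective inverse temperature of the system.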
To set the stage we first visualize in Fig. (1) the three phases and their symmetries by plotting the structure factor S(q⃗) = (1/N)⟨Σ_{α,α′} e^{iq⃗·(r⃗_α − r⃗_{α′})}⟩, as calculated from the positional data of the colloids for three different temperatures. Here α, α′ run over all N particles in the field of view, and ⟨·⟩ denotes the time average over 700 configurations. In the liquid phase, concentric rings appear, having radii that can be connected to typical inter-particle distances. The hexatic phase, on the other hand, is characterized by six segments of a ring, which arise due to the quasi-long-range orientational order of the six-fold director [28]. In the crystalline phase the Bragg peaks of a hexagonal crystal show up with a finite width that is due to the quasi-long-range character of the translational order.
To quantify the six-fold orientational symmetry, the bond-order correlation function G_6(r) = ⟨ψ(r⃗_i)ψ*(r⃗_j)⟩, with r = |r⃗_i − r⃗_j|, is calculated from the local bond-order parameter ψ(r⃗) = (1/N_j) Σ_j e^{6iθ_ij(r⃗)}. Here the sum runs over the N_j nearest neighbors of the particle i at position r⃗, and θ_ij(r⃗) is the angle between a fixed reference axis and the bond connecting particle i and its neighbor j.
The average ⟨···⟩ here denotes not only the ensemble average, which is taken over all N(N − 1)/2 particle-pair distances for each configuration (resolution dr = 100 nm), but also the time average over 70 statistically independent configurations. KTHNY theory predicts that

G_6(r) → const ≠ 0 (crystal: long-range order),
G_6(r) ∼ r^{−η_6} (hexatic: quasi-long-range order),
G_6(r) ∼ e^{−r/ξ_6} (isotropic: short-range order),

as r → ∞, where η_6 < 1/4 in the hexatic phase and η_6 takes the value 1/4 right at T = T_i. All three regimes can easily be distinguished in Fig. (2), which shows G_6(r) for a few representative temperatures. Note that G_6(0) is not normalized to 1.
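The computation of G_6(r) from recorded particle coordinates can be sketched as follows (illustrative only: Delaunay triangulation is used here to define nearest neighbours, and the time average over configurations is omitted):

```python
import numpy as np
from scipy.spatial import Delaunay

def psi6(points):
    """Local bond-orientational order parameter psi_6 for each particle."""
    tri = Delaunay(points)
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:        # collect Delaunay neighbours
        for i in simplex:
            for j in simplex:
                if i != j:
                    neighbors[i].add(j)
    psi = np.zeros(len(points), dtype=complex)
    for i, nbrs in enumerate(neighbors):
        bonds = points[list(nbrs)] - points[i]
        theta = np.arctan2(bonds[:, 1], bonds[:, 0])  # bond angles vs. reference axis
        psi[i] = np.mean(np.exp(6j * theta))
    return psi

def g6(points, nbins=50, rmax=None):
    """Bond-order correlation G_6(r) over all particle pairs, binned in distance."""
    psi = psi6(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    corr = np.real(psi[:, None] * np.conj(psi[None, :]))
    iu = np.triu_indices(len(points), k=1)            # the N(N-1)/2 pairs
    r, c = d[iu], corr[iu]
    rmax = rmax if rmax is not None else r.max()
    bins = np.linspace(0.0, rmax, nbins + 1)
    idx = np.digitize(r, bins) - 1
    g = np.array([c[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(nbins)])
    return 0.5 * (bins[:-1] + bins[1:]), g

# usage with synthetic coordinates:
pts = np.random.rand(1000, 2) * 100.0
r, g = g6(pts)
```

For random points G_6(r) decays essentially immediately; the three KTHNY regimes only emerge for configurations with genuine orientational order.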
We next fit G_6(r) to r^{−η_6} and e^{−r/ξ_6} to extract η_6 and ξ_6. The fits are performed for radii r/a ∈ {0..20} [29]. To check the characteristics of the orientational correlation function, the ratio of the reduced chi-square goodness-of-fit statistics of the algebraic (χ²_alg) and exponential (χ²_exp) fits is shown in Fig. (3) as a function of Γ for three different measurements. For melting, a crystal free of dislocations was grown at high Γ and then Γ was reduced in small steps. For each temperature step the system was equilibrated for 1/2 h before data acquisition started. This was done at different densities: melt 1 with an average particle distance of a = 11.8 μm and melt 2 with a = 14.8 μm, containing 3200 and 2000 particles in the field of view, respectively. The measurement denoted freeze in Fig. (3) (a = 11.8 μm) started in the isotropic liquid phase, and Γ was increased with an equilibration time of 1 h between the steps. For χ²_alg/χ²_exp > 1 an exponential decay fits better than the algebraic one, and vice versa for χ²_alg/χ²_exp < 1. We observe in Fig. (3) that the change in the characteristic appears at Γ_i = 57.5 ± 0.5. This value is the temperature of the hexatic ↔ isotropic liquid transition.
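The χ²_alg/χ²_exp discrimination can be sketched like this (an assumed helper, building on the g6 output above; the fit range, initial guesses, and uniform weighting are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def reduced_chi2(model, popt, r, g, sigma):
    resid = (g - model(r, *popt)) / sigma
    return np.sum(resid ** 2) / (len(r) - len(popt))

def chi2_ratio(r, g, sigma=None):
    """Return chi2_alg / chi2_exp; > 1 means the exponential fits better."""
    mask = np.isfinite(g) & (r > 0)
    r, g = r[mask], g[mask]
    sigma = np.ones_like(g) if sigma is None else sigma[mask]
    algebraic = lambda r, A, eta: A * r ** (-eta)
    exponential = lambda r, A, xi: A * np.exp(-r / xi)
    p_alg, _ = curve_fit(algebraic, r, g, p0=(1.0, 0.25), maxfev=10000)
    p_exp, _ = curve_fit(exponential, r, g, p0=(1.0, 5.0), maxfev=10000)
    return (reduced_chi2(algebraic, p_alg, r, g, sigma)
            / reduced_chi2(exponential, p_exp, r, g, sigma))
```

Scanning this ratio as a function of Γ and locating where it crosses 1 is the numerical analogue of reading Γ_i off Fig. (3).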
In the vicinity of the phase transition, approaching Γ_i from the isotropic liquid, the orientational correlation length ξ_6 should diverge as ξ_6 ∝ exp(b|Γ − Γ_i|^{−ν}), (3) with b a constant and ν = 1/2 [4]. This behavior is observed in Fig. (4a): ξ_6 indeed increases dramatically near Γ_i = 57.5 ± 0.5, irrespective of whether the system is heated or cooled. Before discussing this feature we first address the finite-size effect. To this end, we have computed G_6(r) and ξ_6 for subsystems of different size, ranging from 720 × 515 μm², 615 × 405 μm², 505 × 300 μm², and 400 × 190 μm² to 390 × 80 μm². The resulting data points are plotted as triangles in Fig. (4) and belong to the black filled squares to which they converge. No finite-size effect is found for Γ < 56, but a considerable one at Γ = 56.9, close to Γ_i, where we obviously need the full field of view to capture the characteristic of the divergence. At Γ = 58.0 there is a huge finite-size effect, indicating that ξ_6 is much larger than the field of view. However, inside the hexatic phase ξ_6 is no longer well defined, as the decay is algebraic. We fit our data to eq. (3) in the range 49 < Γ < 57.5 and find the critical exponent ν = 0.5 ± 0.03 and Γ_i = 58.9 ± 1.1, a value which due to the finite-size effect is slightly larger than the Γ_i obtained from Fig. (3). The exponent η_6 is related to Frank's constant F_A via η_6 = 18/(πβF_A) [4], (4) so the critical exponent η_6(Γ_i) = 1/4 corresponds to βF_A(Γ_i) = 72/π at the hexatic ↔ liquid transition. This quantity is plotted in Fig. (4b). Indeed, F_A crosses the value 72/π at Γ_i = 57.5 ± 0.5, exactly at the temperature which in Fig. (3) has been independently determined as the transition point. Approaching the hexatic ↔ crystal transition at Γ_m, F_A is expected to diverge as F_A ∝ exp(c|Γ − Γ_m|^{−ν̃}), (5) where c is again a constant and ν̃ = 0.36963. Fitting the values of F_A to the expression in eq. (5) in the range 57.5 < Γ < 61, we obtain ν̃ = 0.35 ± 0.02 and Γ_m = 61.3 ± 0.4 as an upper threshold. Again, triangles represent the evaluation of our data in sub-windows of variable size (same sizes as above). The finite-size effect for Γ = 57.0 is negligible. Close to Γ_m it increases, but the values saturate for Γ = 59.1 and Γ = 60.8 and remain within the error bars for the biggest sub-windows.
In conclusion, we have checked quantitatively the change from quasi-long-range to short-range orientational order and extracted the correlation length ξ_6 in the isotropic fluid and Frank's constant F_A in the hexatic phase from trajectories of a 2d colloidal system. We find a hexatic ↔ isotropic liquid transition at Γ_i = 57.5 ± 0.5. Three observations support this result: (i) the change of the distance dependence of G_6(r) (Fig. (3)), (ii) the condition F_A(Γ_i) = 72/π for Frank's constant, and (iii) the divergence of ξ_6. For the hexatic ↔ crystal transition, F_A diverges at Γ_m. Both divergences (extracted from just one correlation function) lead to critical exponents that are in good agreement with the KTHNY theory. The measurements for melting and freezing support each other, so we may conclude that there is no hysteresis effect at the phase transitions. At the two transitions, the order parameters are observed to change continuously (within the resolution of Γ ∝ 1/T); no indication of a phase separation (such as strong fluctuations of the order parameters), as reported in [17,21], has been found [30]. So we believe that in our system (having a well-defined, purely repulsive pair potential and a confinement to 2D that is free of any surface roughness) the transitions are second order.
In [31,32] we verified that the Young's modulus becomes 16π at T_m. We have now checked that F_A takes the value 72/π at T_i. These two findings together confirm the two-stage KTHNY melting scenario with its underlying microscopic picture of breaking the translational symmetry by dislocation-pair unbinding and the orientational symmetry by disclination-pair unbinding. P. Keim gratefully acknowledges the financial support of the Deutsche Forschungsgemeinschaft.
"year": 2007,
"sha1": "d82347fdb8cf15bbaeb125a66cae2b910354fbae",
"oa_license": null,
"oa_url": "https://kops.uni-konstanz.de/bitstream/123456789/16894/2/Keim_Frank.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3dfd81636739c6cd45690db41f114caadbc0b5b3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
Weight-reduction through a low-fat diet causes differential expression of circulating microRNAs in obese C57BL/6 mice
Background To examine the circulating microRNA (miRNA) expression profile in a mouse model of diet-induced obesity (DIO) with subsequent weight reduction achieved via low-fat diet (LFD) feeding. Results Eighteen C57BL/6NCrl male mice were divided into three subgroups: (1) control, mice were fed a standard AIN-76A (fat: 11.5 kcal %) diet for 12 weeks; (2) DIO, mice were fed a 58 kcal % high-fat diet (HFD) for 12 weeks; and (3) DIO + LFD, mice were fed a HFD for 8 weeks to induce obesity and then switched to a 10.5 kcal % LFD for 4 weeks. A switch to LFD feeding led to decreases in body weight, adiposity, and blood glucose levels in DIO mice. Microarray analysis of miRNA using The Mouse & Rat miRNA OneArray® v4 system revealed significant alterations in the expression of miRNAs in DIO and DIO + LFD mice. Notably, 23 circulating miRNAs (mmu-miR-16, mmu-let-7i, mmu-miR-26a, mmu-miR-17, mmu-miR-107, mmu-miR-195, mmu-miR-20a, mmu-miR-25, mmu-miR-15b, mmu-miR-15a, mmu-let-7b, mmu-let-7a, mmu-let-7c, mmu-miR-103, mmu-let-7f, mmu-miR-106a, mmu-miR-106b, mmu-miR-93, mmu-miR-23b, mmu-miR-21, mmu-miR-30b, mmu-miR-221, and mmu-miR-19b) were significantly downregulated in DIO mice but upregulated in DIO + LFD mice. Target prediction and function annotation of associated genes revealed that these genes were predominantly involved in metabolic, insulin signaling, and adipocytokine signaling pathways that directly link the pathophysiological changes associated with obesity and weight reduction. Conclusions These results imply that obesity-related reductions in the expression of circulating miRNAs could be reversed through changes in metabolism associated with weight reduction achieved through LFD feeding.
Background
Obesity is associated with insulin resistance and an abnormal inflammatory response [1], and the strong associations suggest that adipose tissue plays a prominent role in the onset and progression of these comorbidities [2]. White adipose tissue (WAT) has been characterized as an endocrine organ [3], as it produces endocrine-acting peptides such as leptin, and it is metabolically important, with excess levels being associated with metabolic syndrome [4,5]. High fat intake leads to metabolic alterations in adipose tissue that increase the levels of circulating free fatty acids in the blood [6]. This leads to macrophage activation and the production of proinflammatory cytokines via Toll-like receptors, resulting in inflammation in adipose tissue [6]. When allowed ad libitum access to a high-fat diet (HFD), C57BL/6J mice develop insulin resistance and obesity in a manner that resembles disease progression in humans [7]. Increased energy expenditure and decreased energy intake are the two most commonly recommended lifestyle changes to reduce adiposity and restore insulin sensitivity in the treatment of diet-induced obesity (DIO) and associated comorbidities [8]. Calorie restriction is effective in improving insulin sensitivity and decreasing both body weight and percent body fat [9]. In addition, reductions in body weight and improvements in insulin sensitivity can also be achieved by reducing the percentage of fat in the diet, i.e., by switching from a HFD to a low-fat diet (LFD) [10].
MicroRNAs (miRNAs) are endogenous small RNAs that post-transcriptionally regulate gene expression, and they have been demonstrated to have important roles in numerous disease processes. There is growing evidence that miRNAs play an important role in regulating adipose tissue pathways that control a range of processes, including adipogenesis, insulin resistance, and inflammation [11][12][13]. Many miRNAs are dysregulated in the metabolic tissues of obese animals and humans, potentially contributing to the pathogenesis of obesity-associated complications [11][12][13]. In addition, recent studies identified several miRNAs expressed in metabolic organs that could be used as feasible therapeutic targets for obesity and its consequent pathologies [11,13]. Recently, circulating serum miRNAs were found to display specific expression patterns, suggesting that miRNA profiles may represent fingerprints for various diseases [14,15]. In addition, despite the ubiquitous presence of ribonucleases, serum miRNA levels are remarkably stable and reproducible [16,17], and they function in cell-to-cell communication [18]. Currently, how changes in miRNA profiles might affect adipose tissue at the functional and molecular level, and to what extent they differ in response to weight-reduction strategies, are not well understood. This information is important in the development of dietary anti-obesity interventions [19]. As circulating miRNAs potentially play an important role in regulating the pathophysiology of obesity and are potential therapeutic targets, we hypothesized that weight reduction may change circulating miRNA expression. Our study aim was to profile the expression of circulating miRNAs in a mouse model of DIO with subsequent weight reduction achieved through LFD feeding.
Ethics statement
This study was conducted in strict accordance with guidelines on the use of laboratory animals, and every effort was made to minimize the suffering of affected animals. Animal protocols were approved by the IACUC of Chang Gung Memorial Hospital, Taiwan (permission No. 2012091002).
Animal experiments
C57BL/6NCrl mice were purchased from BioLasco (Taipei, Taiwan). Animals were housed, and surgical procedures, including analgesia, were performed, in an Association for Assessment and Accreditation of Laboratory Animal Care International-accredited SPF facility according to national and institutional guidelines. In this experiment, 18 male, wild-type C57BL/6NCrl mice were randomly assigned to three subgroups (n = 6 in each group) as follows: (1) control, mice were fed a standard AIN-76A (fat: 11.5 kcal %) diet ad libitum for 12 weeks; (2) DIO, mice were fed a 58 kcal % HFD (D12331; Research Diets Inc., New Brunswick, NJ) ad libitum for 12 weeks to induce obesity; and (3) DIO + LFD, mice were fed a 58 kcal % HFD (D12331) ad libitum for 8 weeks to induce obesity and then fed a 10.5 kcal % LFD (D12329; Research Diets Inc.) for 4 weeks. Weight measurements were performed weekly, and a glucose tolerance test (GTT) was performed at the beginning and end of the experiment to confirm that HFD-fed mice developed an obese and glucose-intolerant phenotype. Briefly, mice were fasted for 5 h, and baseline blood glucose levels were measured with an Accu-Check Advantage blood glucose meter (Roche, New Jersey, USA) using blood collected from the tail vein. Mice (n = 6 in each group) were injected intraperitoneally with 2 g of glucose per kilogram body weight in sterile PBS. The glucose level was measured via tail vein blood (~10 μL) at t = -30 and 0 (pre) and t = 15, 30, 60, 90, and 120 min after the glucose infusion. Data were averaged and graphed as blood glucose level as a function of time. To reflect the circulating levels of glucose during the GTT, we calculated the total area under the curve (AUC) of the glucose concentration versus time by the linear trapezoidal rule for the period of 0-120 min after glucose infusion. To avoid the effects of blood loss during the GTT, or any uncertain effects of repeated tail punctures, on subsequent miRNA expression and cytokine assays, additional groups of mice under the same model were used for further experiments. At the end of the experiment, all mice were euthanized, and the abdominal mesenteric WAT of each mouse was removed and weighed. The adipose tissue block embedded in paraffin was sectioned at 5 μm to measure the adipocyte area. Three 5-μm-thick sections of the same fat specimen, taken 50 μm apart, were mounted on glass plates and stained with hematoxylin and eosin. Two different microscopic fields (magnification ×100) per plate were photographed, and 100 adipose cells were arbitrarily selected in the center of each field; their cell diameters were assessed by tracing the outline of each adipocyte. The mean adipocyte area was measured from the WAT of control and experimental mice (n = 4 in each group) using Image-Pro Plus image analysis software (Carl Zeiss, Oberkochen, Germany) and expressed in square micrometers. The cells were randomly chosen, and the person analyzing the images was blinded to the group assignments. At the indicated time of the experiment, 1 mL of whole blood was collected via cardiac puncture into a plain tube and allowed to clot for 1 h. Samples were centrifuged at 3000 × g for 10 min, and sera were aliquoted and stored at −80°C until further analysis.
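The AUC computation described above is just the trapezoidal rule applied to the sampled glucose trace; a minimal sketch (the glucose values are invented for illustration):

```python
import numpy as np

t = np.array([0, 15, 30, 60, 90, 120])               # min after glucose infusion
glucose = np.array([110, 350, 320, 250, 180, 130])   # mg/dL, hypothetical trace

auc = np.trapz(glucose, t)   # linear trapezoidal rule over 0-120 min
print(f"glucose AUC(0-120 min) = {auc:.0f} mg/dL*min")
```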
RNA isolation and preparation
Total RNA was extracted from serum using the mirVana™ miRNA Isolation Kit (Life Technologies, NY). Purified RNA was quantitatively evaluated by measuring its absorbance at 260 nm using an SSP-3000 Nanodrop spectrophotometer (Infinigen Biotechnology, Inc., City of Industry, CA), and RNA quality was assessed using a Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA). Total RNA (10 ng) was reverse-transcribed into cDNA using a TaqMan miRNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA). Target miRNAs were reverse-transcribed using sequence-specific stem-loop primers, and the cDNA was used for quantitative real-time polymerase chain reaction (qPCR).
miRNA microarray analysis
The Mouse & Rat miRNA OneArray® v4 (Phalanx Biotech Group, Hsinchu, Taiwan) array used in this experiment contains 144 experimental control probes, 1157 unique mouse miRNA probes, and 680 rat miRNA probes, based on the miRBase 18 database. Three biological replicates of each group of mice were used in miRNA microarray experiments. Mouse genome-wide miRNA microarray experimental and statistical analyses were performed by Phalanx Biotech Group. Briefly, fluorescent targets were prepared from 2.5 μg of total RNA using the miRNA ULS™ Labeling Kit (Kreatech Diagnostics, Amsterdam, Netherlands). Labeled miRNA targets enriched using NanoSep 100K (Pall Corporation, Port Washington, NY) were hybridized to The Mouse & Rat miRNA OneArray® v4 in Phalanx hybridization buffer in the OneArray® Hybridization Chamber. After overnight hybridization at 37°C, non-specifically bound targets were removed by three washing steps (Wash I, 37°C, 5 min; Wash II, 37°C, 5 min and 25°C, 5 min; and Wash III, rinse 20 times). Slides were dried by centrifugation and scanned using an Axon 4000B scanner (Molecular Devices, Sunnyvale, CA). The signal intensities of Cy5 fluorescence in each spot were analyzed using GenePix 4.1 software (Molecular Devices, Sunnyvale, CA) and processed using the R language (http://www.r-project.org/) with two packages: limma (http://www.bioconductor.org/packages/release/bioc/html/limma.html) and genefilter (http://www.bioconductor.org/packages/release/bioc/html/genefilter.html). Spots with flag < 0 were filtered out, and the remaining spots were log2-transformed and normalized using the 75% median scaling normalization method. Normalized spot intensities were converted into gene expression ratios between the control and treatment groups. Spots with expression ratios ≤ 0.5 or ≥ 2, as well as with p < 0.05, were selected for further analysis. Differentially expressed miRNAs were subjected to hierarchical cluster analysis using average linkage and Pearson's correlation as the measure of similarity. The miRNA array data have been deposited in the NCBI Gene Expression Omnibus with the accession number GSE61005. Five miRNAs detected by array analysis were selected and quantified by qPCR using the Applied Biosystems 7500 Real-Time PCR System (Life Technologies) to confirm the upregulation of miRNA expression in the DIO + LFD group. Twenty-five femtomoles of single-stranded cel-miR-39 synthesized by Invitrogen (Carlsbad, CA) was spiked into 400 μL of serum as an internal control for the expression of each miRNA.
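The spot-selection criteria map onto a few array operations; a minimal sketch assuming DataFrames of normalized log2 intensities (all variable names are hypothetical):

```python
import numpy as np
import pandas as pd

def scale_75(log2_df):
    """Per-array 75th-percentile scaling: in log2 space, subtracting the
    column quantile is equivalent to dividing raw intensities by it."""
    q = log2_df.quantile(0.75)
    return log2_df - q + q.mean()

def select_mirnas(ctrl, treat, pvals):
    """ctrl/treat: miRNA x replicate log2 intensities; pvals: Series per miRNA.
    Keeps spots with expression ratio <= 0.5 or >= 2 and p < 0.05."""
    log2_ratio = treat.mean(axis=1) - ctrl.mean(axis=1)   # log2(treat/ctrl)
    up = (log2_ratio >= 1) & (pvals < 0.05)               # ratio >= 2
    down = (log2_ratio <= -1) & (pvals < 0.05)            # ratio <= 0.5
    return pd.DataFrame({
        "log2_ratio": log2_ratio,
        "p": pvals,
        "call": np.where(up, "up", np.where(down, "down", "ns")),
    })
```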
Target prediction, GO enrichment, and KEGG pathway analyses
Target prediction was performed to identify the target genes of the identified dysregulated miRNAs by integrating three public databases (TargetScan, PicTar, and miRanda). This method first mapped all target gene candidates to GO terms in the database (http://www.geneontology.org/), calculated gene numbers for each term, and then used a hypergeometric test to find GO terms significantly enriched in the target gene candidates compared with the reference gene background. Bonferroni correction of the p-values was used to obtain corrected p-values. GO terms with corrected p-values ≤ 0.05 were defined as significantly enriched in target gene candidates. To reveal the main pathways in which the target gene candidates are involved, pathway analysis using a major public pathway-related database, KEGG, was performed to identify significantly enriched metabolic or signal transduction pathways in the target gene candidates compared with the whole reference gene background. Pathways with FDR ≤ 0.05 were considered significantly enriched among the target gene candidates.
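The enrichment test itself is a hypergeometric tail probability with a Bonferroni correction; a minimal sketch (all counts below are illustrative placeholders, not values from the study):

```python
from scipy.stats import hypergeom

def go_enrichment(bg_total, bg_with_term, targets, targets_with_term, n_terms):
    """P(X >= k) for drawing `targets` genes from a background of `bg_total`,
    of which `bg_with_term` carry the GO term; Bonferroni over n_terms tests."""
    p = hypergeom.sf(targets_with_term - 1, bg_total, bg_with_term, targets)
    return p, min(1.0, p * n_terms)

raw_p, corrected_p = go_enrichment(bg_total=20000, bg_with_term=300,
                                   targets=1082, targets_with_term=45,
                                   n_terms=5000)
print(raw_p, corrected_p)
```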
Statistical analysis
The body weights of mice are expressed as means with confidence intervals. All other experimental data are expressed as the mean ± standard error of the mean. Analysis of variance combined with Bonferroni post hoc correction was performed to identify significant differences in body weight, fat weight, adipocyte area, glucose levels, and serum cytokine levels. A p value below 0.05 was regarded as statistically significant.
LFD decreased body weight and adiposity
In comparison with the C57BL/6NCrl mice fed the standard diet, feeding with the HFD significantly increased body weight (Fig. 1). By contrast, after the change from HFD feeding to LFD feeding, the body weight of DIO mice decreased quickly, with the mean body weight stabilizing after 4 weeks on the LFD. Abdominal mesenteric WAT was also significantly larger in the DIO mice than in control mice, and the mice that were switched to the LFD had significantly less abdominal mesenteric WAT than those that remained on the HFD (Fig. 2a). However, the lower amount of abdominal mesenteric WAT was not accompanied by a significantly smaller adipocyte area (Figs. 2b and 3). Notably, because the 50-μm distance between the sections of the fat specimen was less than the mean diameter of the adipocytes, there may exist a selection bias whereby some adipocytes were counted more than once. However, we expect this selection bias to be insignificant, because the tissue sections were taken at a fixed distance and counting a high number of adipocytes should decrease the deviation of the calculated mean adipocyte area. After injection of glucose into the control mice, blood glucose levels increased to a peak of 350 mg/dL after 15 min and then gradually returned to baseline after 120 min. In the DIO mice, blood glucose concentrations at 30 to 120 min during the GTT were significantly higher than those in the control mice (Fig. 4a). HFD-fed animals displayed significant impairment in glucose tolerance, as evidenced by a 90% higher incremental glucose AUC (Fig. 4b). In addition, a significantly lower glucose level was observed at 30 min after glucose injection in the LFD-fed mice relative to the HFD-fed mice (Fig. 4a), resulting in an approximately 15% lower glucose AUC (Fig. 4b).
Target prediction and function annotation
To further understand the physiological functions and biological processes associated with the 23 miRNAs, target prediction was performed by integrating three public databases (TargetScan, PicTar, and miRanda), and 1082 target genes were identified. GO annotation and KEGG pathway analysis were also performed to identify functional modules regulated by these 23 miRNAs. In GO annotation analysis, cellular processes, biological regulation, metabolic processes, primary metabolic processes, and cellular metabolic processes were the most significantly enriched GO terms (Fig. 8). KEGG pathway analysis revealed 142 pathways associated with these miRNA targets. Among these, metabolic pathways were the most enriched, with 1024 associated genes, followed by MAPK signaling, actin cytoskeleton regulation, secondary metabolite biosynthesis, focal adhesion, insulin signaling, calcium signaling, cytokine-cytokine receptor interaction, tight junctions, phagosomes, and adipocytokine signaling pathways (Table 3). These results suggest that these targets have a high probability of being regulated by miRNAs during obesity and weight reduction through LFD feeding; however, the possibility of false-positive results from the prediction algorithms always exists.
Discussion
Switching to a LFD is an effective intervention to promote weight loss and improve metabolic health parameters in obesity [19]. Although morbid obesity is considered a systemic inflammatory state, the serum inflammatory profile of C57BL/6 mice, as measured by an antibody array, revealed that DIO mice had higher leptin, IL-6, and LPS-induced chemokine concentrations and lower concentrations of all other chemokines/cytokines than control mice [20]. In this study, we demonstrated that LFD feeding reduced the body weight and adiposity of DIO mice; however, there was no significant difference in the expression of the 11 measured cytokines between DIO and DIO + LFD mice, suggesting that DIO mice may be in an early state of obesity. By contrast, significantly dysregulated miRNAs were identified in both groups. Notably, most (23 of 28) of these circulating miRNAs were upregulated in DIO + LFD and downregulated in DIO mice, implying that the downregulation of these miRNAs by obesity could be reversed by LFD treatment. In addition, target prediction and function annotation revealed that the target genes associated with these 23 differentially expressed miRNAs are involved in metabolic, insulin, and adipocytokine signaling pathways that directly link the pathophysiological changes that occur during obesity and weight reduction. Therefore, whether miRNA supplementation represents a potential therapeutic strategy to treat obesity is an interesting question that requires further robust investigation.
In this study, some of the identified dysregulated miRNAs have been linked to obesity or adipogenesis in the literature. Among them, five members of the Let-7 family (mmu-let-7a, mmu-let-7b, mmu-let-7c, mmu-let-7f, and mmu-let-7i) were dysregulated in response to obesity and weight reduction following LFD feeding. Mice with global overexpression of Let-7 are viable, but they have reduced body size and weight [25]. In mice, 12 genes encode members of the Let-7 family, which includes nine slightly different miRNAs (Let-7a, Let-7c, and Let-7f [all encoded by two genes], and Let-7b, Let-7d, Let-7e, Let-7g, Let-7i, and miR-98 [all encoded by one gene]). All Let-7 family members are believed to have similar functions because they share a common seed region (nucleotides 2-8), which mediates interactions between miRNA and target mRNAs [25]. Furthermore, Let-7 transgenic mice exhibit impaired glucose tolerance because of diminished glucose-induced insulin secretion, and anti-miR-induced silencing of Let-7 has been proven to improve blood glucose levels and insulin resistance in obese mice [25].
In vivo, miR-103 is downregulated in the mature adipocytes of obese mice [26] and upregulated during the differentiation of human and murine pre-adipocytes [27,28].
A 9-fold upregulation of miR-103 was noted during early adipogenesis in 3T3-L1 pre-adipocyte cells, and lipid droplet formation was accelerated when it was ectopically expressed [26]. miR-103 is also upregulated during porcine adipogenesis, and its inhibition suppresses adipogenesis [26].
miR-15a overexpression leads to a decrease in the number, but an increase in the size, of murine adipocytes by inhibiting Delta-like 1 homolog expression [29]. In a study of miRNA libraries reconstructed from pre- and post-differentiated 3T3-L1 cells, it was noted that miR-15a may not be related to the actual differentiation process, but it may induce growth arrest and/or hormonal stimulation [30]. In addition, the miR-17-92 cluster, which comprises seven miRNAs (miR-17-5p, miR-17-3p, miR-18, miR-19a, miR-20, miR-19b, and miR-92-1) and promotes cell proliferation in various cancers, has been demonstrated to be significantly upregulated at the clonal expansion stage of adipocyte differentiation. MiR-17-92 has been revealed to target Rb2/p130, an important early regulator of pre-adipocyte clonal expansion [31], and the stable transfection of 3T3-L1 cells with miR-17-92 resulted in accelerated differentiation and increased triglyceride accumulation after hormonal stimulation [32]. The adipogenic miR-21 has also been demonstrated to be upregulated in human obesity [33] and to enhance adipogenesis in human adipose tissue-derived mesenchymal stem cells (hASCs) by mediating TGF-β signaling [34].
The miR-30 family has been found to be important for adipogenesis [12]. In this study, miR-30a, miR-30b, and miR-30c were significantly downregulated in obese mice, and miR-30b was significantly upregulated after LFD feeding. MiR-30 family members are strongly upregulated during adipogenesis in human cells, and inhibition of miR-30 inhibits adipogenesis [12]. miR-30 family members have also been demonstrated to act as positive regulators of adipocyte differentiation in a human adipose tissue-derived stem cell model [35]. Overexpression of miR-30a and miR-30d stimulates adipogenesis, and it has been demonstrated that miR-30a and miR-30d target RUNX2, a major regulator of osteogenesis and a potent inhibitor of PPARγ, the master gene in adipogenesis [36]. MiR-30c has been found to be upregulated in adipogenesis and to enhance adipogenesis in hASCs, and it appears to target two genes (PAI-1 and ALK2) in distinct pathways [37]. Moreover, miR-30d has been identified as a positive regulator of insulin transcription [38].
Furthermore, in this study, eight miRNAs (mmu-miR-711, mmu-miR-712, mmu-miR-713, mmu-miR-714, mmu-miR-715, mmu-miR-716, mmu-miR-717, and mmu-miR-574) were upregulated in DIO mice. Of these, miR-712 is a mechanosensitive miRNA that is upregulated in endothelial cells by disturbed flow, which regulates endothelial dysfunction and atherosclerosis [39,40]. MiR-717, which was first reported in mice, is encoded by intron 3 of the body mass-associated glypican-3 (Gpc3) gene, and it plays an important regulatory role in renal osmoregulation. Meanwhile, Gpc3 knockout mice display increased body mass, renal dysplasia, and perinatal mortality [41]. Bioinformatics analysis enables functional annotation of MiR-717 orthologs to determine the effect of its target genes on fat-related traits [42]. However, the effects and mechanisms of these eight upregulated miRNAs in the obese mice in this study are poorly understood.
Conclusion
This study identified the expression profile of circulating miRNAs in a mouse model of DIO and of DIO with subsequent weight reduction through LFD feeding. The results demonstrated that the majority of the miRNA downregulation associated with obesity could be reversed by LFD feeding. Target prediction and function annotation revealed that the target genes associated with these 23 differentially expressed miRNAs are involved in metabolic, insulin, and adipocytokine signaling pathways that directly link the pathophysiological changes that occur during obesity and weight reduction.

(Figure 8. Partial GO enrichment for the predicted target genes in cellular component, molecular function, and biological process; all P < 0.001.)
"year": 2015,
"sha1": "c9964b49eb89ef5160a7dd650c51280b9d9275a0",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-015-1896-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c9964b49eb89ef5160a7dd650c51280b9d9275a0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
On the spherical derivative of a rational function
For a rational function f we consider the norm of the derivative with respect to the spherical metric and denote by K(f) the supremum of this norm. We give estimates of this quantity K(f) both for an individual function and for sequences of iterates.
A rational function is a holomorphic map from the Riemann sphere into itself. We equip the Riemann sphere with the usual spherical metric, whose length and area elements are ds = |dz|/(1 + |z|²) and dA = dx dy/(1 + |z|²)².
In this paper we study the quantity K(f) = max_{C̄} ‖f′‖, where ‖f′(z)‖ = |f′(z)|(1 + |z|²)/(1 + |f(z)|²) (2) is the norm of the derivative with respect to the spherical metric. d'Ambra and Gromov [1] proposed to study the rate of growth of sup ‖(f^n)′‖ as n → ∞ for the iterates f^n of smooth maps of Riemannian manifolds, especially those maps in a given class for which this growth rate is the smallest possible.
Such maps are called "slow". Slow maps of an interval and slow Hamiltonian diffeomorphisms of a 2-torus have been investigated in [3,16] and [2].
Let f be a rational function of degree d. As the map f : C̄ → C̄ is d-to-1, we conclude that ∫_{C̄} ‖f′‖² dA = πd, since the image sphere, of spherical area π, is covered d times. This implies that K(f) ≥ √d. (1) We ask how small K(f) can be for a function of given degree.
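A worked version of the area argument (a sketch; ‖f′‖ is the spherical norm of the derivative and the total spherical area is π):

```latex
\pi d \;=\; \int_{\overline{\mathbb{C}}} \|f'\|^{2}\,\mathrm{d}A
\;\le\; K(f)^{2}\int_{\overline{\mathbb{C}}} \mathrm{d}A
\;=\; \pi\,K(f)^{2}
\qquad\Longrightarrow\qquad
K(f) \;\ge\; \sqrt{d}.
```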
1. It is known that K(f ) ≥ 2 for all rational functions of degree at least 2.
In fact this holds for all smooth maps of the sphere into itself which satisfy deg(f) ∉ {0, 1, −1} [10]. It is not known whether K(f) = 2 can hold for rational functions of degrees 3 or 4.
2. An interesting question is whether (1) is best possible in certain sense. We have Theorem 1. There exists an absolute constant C with the following property. For every d ≥ 2 there exists a rational function of degree exactly d such that An analogous result was obtained by Gromov [10, Ch. 2D] for smooth maps of spheres of arbitrary dimension.
Littlewood [14] and Hayman [12] studied the quantity φ(d) = sup_f ∫_{C̄} ‖f′‖ dA, where the sup is taken over all rational functions f of degree d. For polynomials φ(d) was also studied in [9,6,13]. It is easy to see that φ(d) ≤ π√d, and Hayman obtained φ(d) ≥ c₁√d using a rational approximation of elliptic functions. Our Theorem 1 implies this with a more elementary proof. Indeed, let f be the function from Theorem 1; rotating the sphere of the independent variable if necessary, we obtain ∫_{C̄} ‖f′‖ dA ≥ πd/K(f) ≥ (π/C)√d, because the spherical area of the image sphere is π and it is covered d times.

We first show that |f_n′|/(1 + |f_n|²) ≤ C, (3) where C is independent of n. Now fix z ∈ C and let m be an integer such that |2m − ℜz| ≤ 1. We write f_n = gh and f_n′ = g′h + gh′, and estimate h using (4). Since h is holomorphic in |w − z| ≤ 1/2, Cauchy's theorem gives a bound on h′; next we estimate h(z) from below, using (4) again. Combining all these estimates proves (3). When |ℜz| ≥ n + 1 we obtain better estimates, in particular for ℜz > 2n with ξ = ℜz − 2n. As f_n has period π, there exists a rational function R_n such that f_n(nz) = R_n(e^{2z}). This rational function has degree 2n², and its derivative satisfies ‖R_n′‖ ≤ Cn. This completes the proof of Theorem 1.

Theorem 2. There exists an absolute constant c > 1 with the property that K(f) ≥ c√(d(f)) for every rational function f of sufficiently large degree.

This can be considered as an analog of a result of Tsukamoto [18]. He studied spherical derivatives (2) of meromorphic functions F : ∆ → C̄, where ∆ is the unit disc with the Euclidean metric (so that F^# = |F′|/(1 + |F|²)), and proved that there exists an absolute constant c₁ < 1 with the property that ω(F(∆)) ≤ c₁π for all meromorphic functions F satisfying F^# ≤ 1, where ω denotes the spherical area. We derive Theorem 2 from this result. In fact we show that Theorem 2 holds with c = 1/√c₁.
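For reference, the two bounds on φ(d) discussed above both follow from the same area identity (a sketch: the upper bound is the Cauchy-Schwarz inequality, the lower bound uses the function f of Theorem 1 with K(f) ≤ C√d):

```latex
\int_{\overline{\mathbb{C}}} \|f'\|\,\mathrm{d}A
\;\le\;
\Bigl(\int_{\overline{\mathbb{C}}} \|f'\|^{2}\,\mathrm{d}A\Bigr)^{1/2}
\Bigl(\int_{\overline{\mathbb{C}}} \mathrm{d}A\Bigr)^{1/2}
= \pi\sqrt{d},
\qquad
\int_{\overline{\mathbb{C}}} \|f'\|\,\mathrm{d}A
\;\ge\;
\frac{1}{K(f)}\int_{\overline{\mathbb{C}}} \|f'\|^{2}\,\mathrm{d}A
\;\ge\; \frac{\pi}{C}\sqrt{d}.
```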
Proof of Theorem 2. Proving by contradiction, we suppose that there exists a sequence f_m of rational functions of degrees m such that K(f_m) ≤ b√m for some constant b < c. (6) Let ω be the spherical area measure, so that ω(C̄) = π, and let ω_m = f_m*ω be the pull-back of ω by f_m. Then ∫_{C̄} dω_m = πm.
It is easy to see that we can find discs D_m = D(a_m, r_m) ⊂ C̄ (with respect to the spherical metric) of radii r_m such that ∫_{D_m} dω = π/(mb²) and ∫_{D_m} dω_m ≥ π/b². (8) Let a_m′ be the point diametrically opposite to a_m, and let φ_m : C → C̄\{a_m′} be the conformal map (inverse to a stereographic projection) such that φ_m(0) = a_m and φ_m(∆) = D_m; then the norm of φ_m′ is (1 + o(1))r_m (9), uniformly with respect to z. Then F_m = f_m ∘ φ_m is a normal family. Let F = lim F_m. From (6) and (9) it follows that F^# ≤ 1, but the area of F(∆) is at least π/b² in view of (8), contradicting the result of Tsukamoto.
Theorems 1 and 2 have analogs for maps P¹ → Pⁿ, which are stated and proved in the same way as for n = 1, using the Fubini-Study metric for the norm of the derivative. The constants C and c will of course depend on n.
Now we consider dynamical questions. By f^n we denote the n-th iterate, and our standing assumption is that d(f) ≥ 2. We define k_∞(f) = lim_{n→∞} (1/n) log K(f^n). The limit always exists because the sequence a_n = log K(f^n) is subadditive, a_{m+n} ≤ a_m + a_n, and for every such positive sequence the limit lim_{n→∞} a_n/n exists and is equal to inf_n a_n/n (see, for example, Lemma 1.16 in [4]). It follows that k_∞(f^m) = m k_∞(f). Notice that k_∞ is independent of the choice of a smooth Riemannian metric on the sphere, and is invariant under conjugation by conformal automorphisms. Obviously, k_∞(f) ≤ log K(f).
3. What is the smallest value of k ∞ (f ) for rational functions of given degree?
The trivial lower estimate of K(f) gives k_∞(f) ≥ (1/2) log d(f). (10) We will see that equality never happens, and that the Lattès functions are not extremal for minimizing k_∞. For the functions f_d of degree d from Theorem 1 we have k_∞(f_d) ≤ log K(f_d) ≤ (1/2) log d + log C, so the (1/2) in (10) cannot be replaced with a larger constant. In [7] these quantities were studied for polynomials; in particular, the inequality k_∞(f) ≥ log d(f) was established for polynomials, with equality only if f is conjugate to z^d.
Let us consider a slightly different quantity, the maximum characteristic exponent χ_m(f) = sup_{z∈C̄} lim sup_{n→∞} (1/n) log ‖(f^n)′(z)‖. (11) The difference between the definitions of k_∞ and χ_m is in the order of max_z and lim_{n→∞}. Przytycki proved in 1998 (reproduced in [17]) that the same quantity χ_m(f) can be obtained by taking the sup over periodic points z of f, in which case the lim sup in (11) can of course be replaced by the ordinary limit. Moreover, he proved the following: Theorem P. For every ε > 0 there exists a periodic point z such that (1/m) log ‖(f^m)′(z)‖ ≥ χ_m(f) − ε, where m is a period of z.
In particular, k_∞ = χ_m, and one can replace sup_{z∈C̄} in (11) by the sup over all periodic points. We also have χ_a(f) ≥ (1/2) log d(f), where χ_a(f) denotes the characteristic exponent of the maximal measure, with equality only for Lattès functions. Proof. The estimate follows from the formula dim μ = h(f)/χ_a(f), where h(f) = log d is the topological entropy and dim μ is the Hausdorff dimension of the maximal measure μ; see, for example, [8] and references therein. Obviously dim μ ≤ 2, so we obtain our inequality. On the other hand, a theorem of A. Zdunik [20] says that dim μ = 2 can happen only for the Lattès examples. This completes the proof of the theorem. Now we consider the Lattès functions [15].
For a Lattès function L we have χ_m(L) ≥ log d(L). Proof. A Lattès function can be defined by the functional equation L(F(z)) = F(nz), with |n|² = d(L),
where F is an elliptic function with a critical point at 0. Assuming without loss of generality that F(0) = 0, we conclude that 0 is a fixed point of L. The derivative at this fixed point is L′(0) = n², from which it follows that χ_m(L) ≥ log d(L).
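A sketch of the multiplier computation at the fixed point, assuming F(z) = cz² + O(z³) near the critical point 0 and the functional equation above:

```latex
L(F(z)) = F(nz):\qquad
L'(0)\,c\,z^{2} + O(z^{3}) \;=\; c\,n^{2}z^{2} + O(z^{3})
\;\Longrightarrow\; L'(0) = n^{2},
```

so |L′(0)| = |n|² = d(L), and the characteristic exponent at this fixed point equals log d(L), giving χ_m(L) ≥ log d(L).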
To summarize, we have four quantities satisfying the inequalities (1/2) log d(f) ≤ χ_a(f) ≤ χ_m(f) = k_∞(f) ≤ log K(f). For Lattès functions equality holds in the first of these inequalities; in fact, in the first inequality equality holds only for Lattès functions. Proof. If z is a point whose trajectory is in M, ... Suppose now that k_∞ = log K(f). We claim that ...
"year": 2012,
"sha1": "c3dd863509700cf22a8c41fcb52c8be18cd48a6e",
"oa_license": null,
"oa_url": "http://www.math.purdue.edu/~eremenko/dvi/slow3.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c3dd863509700cf22a8c41fcb52c8be18cd48a6e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
English Mental Health Reform: Lessons from Ontario?
Reforms in areas related to mental disability are under debate in England to an extent unprecedented for almost half a century. The Law Commission’s proposals on incapacity, following further consultation from the Lord Chancellor’s Department, have now largely been accepted in principle by the government for legislative enactment at some time in the undetermined future. A joint green paper from the Home Office and the Department of Health has established a policy agenda concerning the governance of people with serious personality disorders. Proposals by an expert committee chaired by Professor Genevra Richardson on mental health reform have likewise been followed up by a government green paper, and the two green papers have in turn resulted in a joint white paper on reform of the Mental Health Act 1983. All this takes place as the Human Rights Act 1998 takes effect, with its guarantees relating to liberty and security of the person, standards for hearings, respect for private and family life, and protection from inhuman or degrading treatment. Throughout the development of the reforms, a number of similar themes have recurred, involving civil rights, the provision of appropriate legal processes, anti-discrimination, the respect for people with capacity, the extension of controls into the community, and the safety both of people with mental disabilities and of the public as a whole.
Historical Overview
The contemporary history of mental health law in Ontario conveniently begins in 1967, with the passage of a new Mental Health Act. 6 In its general themes, it is comparable to the English Mental Health Act 1959. Both can be understood as broadly deferential to doctors' views, with admission criteria acknowledging a considerable degree of medical discretion, subject to review by an administrative tribunal. Both were silent on treatment issues. Unlike the English legislation, issues of incapacity were dealt with primarily under a separate statute, the Mental Incompetency Act. Nonetheless, where previously Ontario psychiatric inpatients had routinely lost the control of their estates, the 1967 act provided a system of routine assessments by the admitting physician of inpatient's capacity to manage their financial affairs, with the Public Trustee taking over management of the estates of those lacking such capacity, a system not reflected in the English legislation.
Significant revisions to the Ontario Mental Health Act were made in 1978. Where the 1967 act can be seen as reflecting developments in England, the 1978 act can be seen as anticipating them. Treatment provisions were introduced for the first time, on much the same model that would appear five years later in England: treatment of voluntary patients would be governed by common law, treatment of involuntary patients would be either by consent of the patient or else with a second opinion provided by a psychiatrist. Unlike the English system introduced in 1983, however, there was in Ontario no three-month grace period where treatment could be given without consent or second opinion, and the imposition of treatment without consent became subject to review by the administrative tribunal. Where the involuntary patient lacked capacity, consent could be provided by the patient's nearest relative as defined in the act, although no right of review was available to a doctor's decision regarding incapacity. Rights to view the clinical record were introduced at this time, although later strengthened considerably. A right to a tribunal review of the admitting physician's decision that the patient lacked financial capacity was introduced. More important for the body of this paper, amendments were made to the criteria for involuntary admission. Where the 1983 English act continued with vague criteria referring to the health or safety of the patient and the protection of others, the Ontario statute defined dangerousness in considerable detail.
To this point, the Ontario law had developed according to the evolution of political and professional thinking. The next set of amendments was forced by broader constitutional considerations. The Canadian Charter of Rights and Freedoms was introduced to the Canadian constitution in 1982. Along with enshrining rights, for example, to liberty and security of the person and to due process upon arrest or detention, section 15 of the Charter protected against discrimination on the basis, inter alia, of mental handicap. The implementation of section 15 was delayed until 1986, to allow the amendment of legislation to comply with the section. Amendment of the Mental Health Act was thus effectively forced upon the Ontario legislature. There was no consensus in the governing Liberal Party as to how to proceed: the Minister of Health, reflecting the perceived view of the medical establishment, did not favour major legislative amendment notwithstanding the introduction of the Charter provisions; the Attorney General, who would have had to defend the legislation in court, was much more open to changes. In the end, the matter was forced by amendments proposed and spearheaded by the opposition New Democratic Party. 7 The 1986 amendments were significant for a number of reasons. Procedural protections were clarified and strengthened. Patients who had capacity to do so were given the right to appoint the person who would serve as their substitute decision-maker in the event that they later lost capacity. Children admitted on the consent of their guardians (called 'informal' patients following the amendments) 8 were given rights to tribunal review of their admissions. Most important for this paper, however, was the affirmation that a patient with capacity had the right to refuse treatment, whether that patient was voluntarily or involuntarily admitted to the hospital, and this refusal could not be overridden. The act further stipulated that patients lacking capacity could be treated on the consent of their substitute decision-maker, and detailed instructions were provided as to how this individual was to exercise that authority. The decision of the substitute would be based on the wishes of the patient when competent; or, if none were known, best interests as defined by the statute. 9 For the first time, the decision of a treating physician that a patient lacked capacity could be appealed to the review board. A provision allowing the refusal of the substitute to be overridden in the best interests of the patient was struck out by litigation as contrary to the equality provisions of the Charter. 10 The result was that rights to consent to psychiatric treatment became entirely separate from admission status, although at this time both were still contained in the same legislation, the Mental Health Act.
This approach was taken a step further in 1992. Legislation regarding personal and financial guardianship had long been acknowledged to be in need of reform. The relevant legislation, the Mental Incompetency Act, 11 involved unwieldy court processes, and did not allow for partial guardianship arrangements beyond the distinction between financial and personal matters: an individual could manage all or none of their property and estate, and/or all or none of their personal affairs, but nothing in between. No more specific orders were possible. Some legislative tinkering had been done, such as the introduction of enduring powers of attorney for financial (but not personal) matters in 1983, 12 but no one was particularly satisfied with the state of the law. Various committees and inquiries had been struck, 13 but reform had languished in an absence of consensus and political will. A change of government in 1990 brought the political will, with the election of the New Democratic Party.

9 R.S.O. 1990, c. M.7, s. 2(6) and 49(5) respectively.
10 Fleming v. Reid (1991), 4 O.R. (3d) 74 (O.C.A.).
11 This was based in the 1909 act, itself really a codification of Victorian law. Amendments in 1911 slightly expanded the definition of incapacity, and new terminology was introduced in 1937. Otherwise, the act remained largely unchanged until its repeal in 1992: see R.S.O. 1970, c. 271; R.S.O. 1980, c. 264; R.S.O. 1990, c. M-9. Like the corresponding portion of the English legislation (Mental Health Act 1959, 7/8 Eliz II, c. 72, pt

For present purposes, the 1992 reforms extended the Mental Health Act approach to the remainder of health care decision-making. The Consent to Treatment Act 1992 14 provided a statutory right of competent patients to make treatment decisions, and the list of substitutes to make decisions in the case of incapable patients, without distinction between physical and mental disorders. The movement of these provisions from the Mental Health Act to the Consent to Treatment Act further articulated the division between treatment decision-making and institutional confinement, emphasising a similar approach to mental and physical treatment. At the same time, new guardianship legislation covering financial and personal decisions other than health care and mental health confinement was passed as the Substitute Decisions Act 1992. 15 The government was acutely aware of the need for effective enforcement and administration of these statutes. As a result, these statutes, in combination with yet another piece of legislation, the Advocacy Act 1992, 16 placed rights advice and advocacy on a statutory footing and created a bureaucracy run by a board to administer rights advice and advocacy services.
Advocacy Ontario was short-lived. Its establishment and initial operation had been controversial and problematic for a variety of reasons, and it was abolished following a change of government in 1995, although rights advice remains a part of the system, in a somewhat reduced form. The new government also replaced the Consent to Treatment Act 1992 with the Health Care Consent Act 1996. 17 That statute continued the broad structure of the previous statute, respecting the treatment decisions of capable patients regarding both psychiatric and physical treatment.
In Ontario, homicides by those with psychiatric difficulties have in recent years been as high profile as they have been in England, and the government responded with Brian's Law (Mental Health Legislative Reform), 2000. 18 This law makes minor amendments to the existing confinement criteria, as well as adding a new ground of confinement concerning people who lack capacity to consent to treatment and whose mental illness is both of a recurring nature and has been shown to be amenable to treatment. As such, like the Richardson proposals, it would introduce a different standard of confinement for those incapable of consenting to treatment. It also introduces a new form of regulation of treatment outside the psychiatric facility, described as a 'community treatment order'. As will become clear below, this is more similar to a contract than a coercive order, as it requires the patient if competent (and otherwise the substitute decision-maker) to consent to the order. Consent can further be withdrawn on 72 hours' notice. While the possibility of informal coercion is of course not to be underestimated, 19 this model appears to be particularly strong on patient autonomy and, once again, does not undercut the basic position in Ontario law that persons with capacity have a right to refuse treatment.
Lessons for England?
The Ontario law orders the regulation of mental health in a very different way to its English counterpart. On its face, it appears to take into account many of the concerns raised regarding English reform proposals. The Ontario Mental Health Act is acknowledged to have a policing function: it is about public safety, reflecting similar concerns of the UK government expressed in its green and white papers. There is no restriction on the range of mental disorders covered by the act. People with serious personality disorders are dealt with in the same way as persons with any other mental disorder: if they are dangerous within the meaning of the Act, they are locked up. This matches the concerns of the government contained in the proposals on people with serious personality disorder. While dangerous people with mental disorders are dealt with differently from dangerous people without mental disorders, a point suggesting some possible discrimination in approach, the Ontario legislation seems otherwise to be as close to nondiscriminatory as is reasonably possible. Specifically, treatment decisions under the Health Care Consent Act and other decisions covered by the Substitute Decisions Act are made on the basis of ability to make the decision in question: people with psychiatric problems are dealt with in exactly the same way as people with non-psychiatric incapacity, and psychiatric treatments in essentially the same way as physical treatments. 20 Capacity and the desire to regulate mental disorders in the same way as physical disorders are thus given a central role, as envisaged by the Richardson report, with no sacrifice to the safety of the community. Procedural safeguards, in the form both of rights advice and review tribunals, are provided efficiently and in abundance, and human rights are acknowledged. This seems to represent the range of concerns in the current English debate. Closer examination of the Ontario proposals further provides guidance on how English legislation might appropriately balance the above concerns.
Criteria and Process for Involuntary Admission
If the government is to increase the role of public safety as a guiding principle of the English Mental Health Act, as the white paper claims, 21 it ought to do so responsibly. The risk with dangerousness criteria is that large numbers of non-dangerous people are falsely identified as dangerous and thus inappropriately confined. 22 The current English criteria, referring only to it being 'necessary for the health and safety of the patient or the protection of other persons' that the individual be admitted for treatment, 23 provide no guidance as to how the appropriate threshold of risk is to be determined and thus provide no check on the over-prediction of dangerousness. The Richardson Report, somewhat surprisingly, does not propose any alteration of this wording. The white paper refers to 'risk of serious harm, including deterioration of health' or 'significant risk of serious harm to other people' as initial criteria for the imposition of a compulsory assessment, although the former criterion lapses into an ill-defined best interest test coupled with a treatability requirement when ongoing compulsion is at issue in the subsequent compulsory assessment. 24 Compare these to the 1978 Ontario criteria, contained in section 15(1) of the Mental Health Act:

15(1) Where a physician examines a person and has reasonable cause to believe that the person,
(a) has threatened or attempted or is threatening or attempting to cause bodily harm to himself or herself;
(b) has behaved or is behaving violently towards another person or has caused or is causing another person to fear bodily harm from him or her; or
(c) has shown or is showing a lack of competence to care for himself or herself
and if in addition the physician is of the opinion that the person is apparently suffering from mental disorder of a nature or quality that likely will result in,
(d) serious bodily harm to the person;
(e) serious bodily harm to another person; or
(f) imminent and serious physical impairment of the person
the physician may make application in the prescribed form for a psychiatric assessment of the person.
Substantively similar provisions apply to allow police officers and Justices of the Peace to remove an individual to a psychiatric facility, where the section 15 examination takes place.
The provision makes a serious attempt to clarify what sort of behaviour will warrant confinement. Subsections (a) through (c) make it clear that the prediction cannot be based on pure speculation: a threat or attempt of bodily harm, violent behaviour or causing someone else to fear violent behaviour, or a demonstrated lack of competence to care for the self is required. 25 A standard of predicted behaviour is also required: serious bodily harm or physical impairment must be likely (not 'possibly') to occur. The word 'imminent' in subsection (f) adds a temporal restriction to that ground. The new subsection 15(1.1), introduced by Brian's Law, provides a parallel route of application where the physician has reasonable cause to believe that the person,
(a) has previously received treatment for mental disorder of an ongoing or recurring nature that, when not treated, is of a nature or quality that will result in serious bodily harm to the person or to another person or substantial mental or physical deterioration of the person or serious physical impairment of the person; and
(b) has shown clinical improvement as a result of the treatment,
and if in addition the physician is of the opinion that the person,
(c) is apparently suffering from the same mental disorder as the one for which he or she previously received treatment or from a mental disorder that is similar to the previous one;
(d) given the person's history of mental disorder and current mental or physical condition, is likely to cause serious bodily harm to himself or herself or to another person or is likely to suffer substantial mental or physical deterioration or serious physical impairment; and
(e) is apparently incapable, within the meaning of the Health Care Consent Act, 1996, of consenting to his or her treatment in a psychiatric facility and the consent of his or her substitute decision-maker has been obtained,
the physician may make application in the prescribed form for a psychiatric assessment of the person.
While a marked departure from the 1978 clauses, it shows some parallel structure. For the behavioural criteria in paragraphs 15(1)(a) to (c), this subsection substitutes specific experience of successful treatment for the mental disorder now afflicting the individual. The dangerousness criteria of paragraphs 15(1)(d) to (f) are reflected in paragraph (1.1)(d) of the new section, albeit with the additional ground of substantial mental or physical deterioration.
Significantly for the current discussion, the section applies only to persons incapable of consenting to the proposed treatment and where the consent of the substitute decision-maker has been obtained.
Where section 15(1)(c) may have implicitly created a standard of confinement in which capacity was a relevant factor, the new subsection 15(1.1) explicitly creates a standard of confinement based on the treatment capacity of the potential patient. This is a direct precedent for the Richardson proposals, which would create different criteria of compulsion based on capacity to consent to treatment. Effectively, the provision allows slightly earlier intervention to ensure treatment of those lacking capacity to consent, where the substitute decision-maker consents and when there is a track record of successful treatment for the disorder. Here again, the right of competent patients to control their treatment is not affected: the provision applies only to those patients lacking capacity and does not in any way restrict the allegedly incapable person from applying for a review of his or her capacity in the usual way.
The initial admission provision allows confinement of an individual in a psychiatric facility for up to 72 hours. There is no review provided by the Act in this period, although judicial review by way of habeas corpus and civil actions for wrongful confinement are available, if not necessarily very practical. In the 72 hour period, a more extensive examination is to occur pursuant to section 20 of the act, after which a further confinement may be permitted if the attending physician takes the view that the patient is indeed suffering from a mental disorder of a nature or quality which will likely result in one of the conditions in subsection 15(1)(d) to (f) or 15(1.1) above if the person does not remain in the facility, and the person is not suitable for voluntary admission. Section 20 confinements can be renewed as they approach their expiry.
The first of these section 20 confinements lasts for two weeks, the second for a month, the third for two months, and the fourth and subsequent for three months. These time periods are considerably shorter than the current English equivalents of twenty-eight days under a section 2 confinement, six months for the first two section 3 confinements, and one year thereafter. 27 These periods are significant both because they require the doctor to re-assess the case for confinement, a process which may result in the doctor taking the view that confinement is no longer justified, and also because in Ontario, as in England, the patient has a right to a review of detention by the tribunal once per certificate. There is much to be said for the Ontario approach here, which better reflects the time that psychiatric interventions require to take effect. A patient who opts for a hearing at the beginning of his or her confinement would thus have a right to a second one a couple of weeks later, as prescribed drugs are taking effect and when there may therefore be a real change in the applicability of the confinement criteria to the patient. In England, if hearings were held promptly (which of course they are not - more on that below), the condition of a patient opting for a hearing at the beginning of the confinement period could have changed markedly, to the point where the confinement criteria cease to be met, months before the patient would have the opportunity to apply for another hearing. The fact that this system works effectively in Ontario raises the question of whether the right to periodic review of detention established by X v. United Kingdom 28 ought to be interpreted considerably more strictly.
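To make the renewal arithmetic concrete, here is a minimal sketch; the function names and the 30-day month approximation are my own, and the English figures cover only the section 3 periods (leaving aside the initial 28-day section 2 admission). Since each certificate carries one opportunity to apply to the tribunal, the cumulative totals show how much more frequently detention can be reviewed in Ontario.

```python
# Illustrative sketch (not drawn from either statute): certificate durations
# under Ontario Mental Health Act s. 20 renewals vs. English s. 3 renewals.
# Months are approximated as 30 days for simplicity.

def ontario_certificate_days(n: int) -> int:
    """Duration of the nth s. 20 certificate: 2 weeks, 1 month, 2 months,
    then 3 months for the fourth and subsequent certificates."""
    schedule = [14, 30, 60]
    return schedule[n - 1] if n <= len(schedule) else 90

def english_certificate_days(n: int) -> int:
    """Duration of the nth s. 3 period: six months for the first two,
    one year thereafter (the initial 28-day s. 2 period is omitted)."""
    return 180 if n <= 2 else 365

if __name__ == "__main__":
    ontario_total = english_total = 0
    for n in range(1, 5):
        ontario_total += ontario_certificate_days(n)
        english_total += english_certificate_days(n)
        print(f"after certificate {n}: Ontario {ontario_total:>4} days, "
              f"England {english_total:>4} days")
```

The printed totals make the contrast plain: four Ontario certificates span roughly six months, while four English periods span roughly three years.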
Informal/Bournewood Patients
The 1986 amendments to the Ontario Mental Health Act introduced the concept of an 'informal' patient. This is someone admitted on the authority of another, and thus bears some resemblance to Bournewood patients. 29 The Mental Health Act provision applied only to persons between the ages of twelve and sixteen years, 30 but in 1992, similar provisions were introduced regarding adults in the Consent to Treatment Act and continued in the Health Care Consent Act 1996. 31 Even now, the parallel with Bournewood patients is not exact, as the Ontario legislation clearly has in mind individuals who are not acquiescing to their admission. The acts grant objecting patients who apparently lack the capacity to decide their own hospital admission the right to have their admission to the psychiatric facility reviewed by tribunal. Absent such application, review of the admission of minors under the Mental Health Act occurs automatically at the end of six months, but there is no such routine scrutiny for adults.
The Richardson Report argues for the importance of statutory regulation covering the voluntary admission of incompetent acquiescing patients, who cannot be expected actively to challenge their admissions. The government's response in the white paper suggests an approach similar to that of Ontario: applications by the patient or their representative will be possible to challenge de facto detentions. 32 The Ontario legislation may provide a model for the criteria which might be used to determine the appropriateness of such admissions:

34(5) In reviewing the decision to admit the person to the hospital, psychiatric facility or health facility for the purpose of treatment, the Board shall consider,
(a) whether the hospital, psychiatric facility or health facility can provide the treatment;
(b) whether the hospital, psychiatric facility or health facility is the least restrictive setting available in which the treatment can be administered;
(c) whether the person's needs could more appropriately be met if the treatment were administered in another place and whether space is available for the person in the other place;
(d) the person's views and wishes, if they can be reasonably ascertained; and
(e) any other matter that the Board considers relevant. 33

It is clear that the admission of those who lack capacity to decide where they will live should not be limited in the same way as civilly confined patients. If the Law Commission proposals on incapacity are implemented in their present form, acquiescing Bournewood patients would be admittable on the basis of their best interests, although not confineable absent judicial intervention. 34 While the factors contained in the statutory test of best interests overlap with the Ontario criteria somewhat and would be appropriate additions to the above factors, it is at least arguable that the specific issues contained in the Ontario criteria ought to be specifically considered before the admission of a Bournewood patient.
Treatment Provisions
As noted above, the Health Care Consent Act concerns all medical treatment, not merely psychiatric treatment. The key provision for current purposes is contained in section 10, which provides that treatment may not be given unless the practitioner offering the treatment has ensured that the patient consents and is capable of doing so. Capacity is in turn defined by section 4(1) of that act:
4(1) A person is capable with respect to a treatment, admission to a care facility or a personal assistance service if the person is able to understand the information that is relevant to making a decision about the treatment, admission or personal assistance service, as the case may be, and able to appreciate the reasonably foreseeable consequences of a decision or lack of decision.
In this provision there is no express requirement of a mental illness or diagnosis. Unlike the English test in Re C, 35 there is no express requirement that the individual believe the information provided. This difference is largely illusory, however, given the requirement that the individual appreciate the reasonably foreseeable consequences of his or her choice. It would be an unusual, though not theoretically impossible, case where the individual appreciated the foreseeable consequences of the choice to be made without believing the information provided.
Where the patient lacks capacity to consent, the prescribed substitute decision-maker has authority to give or withhold consent. The substitute will be, in order of preference, a court-appointed guardian, the holder of a power of attorney for personal care authorising the holder to make such decisions, an individual appointed by the review board to fulfil this role, or a family member according to a prescribed list of proximity or relationship. 36 The way in which the decision is to be made regarding treatment of the incapable patient is also closely defined by the legislation. Consistent with the respect accorded to patient capacity, wishes expressed by the patient while competent and over the age of sixteen years must be honoured, and only in the absence of such wishes may resort be had to the patient's best interests. 37 'Best interests' is in turn defined by section 21(2):

21(2) In deciding what an incapable person's best interests are, the person who gives or refuses consent on his or her behalf shall take into consideration,
(a) the values and beliefs that the person knows the incapable person held when capable and believes he or she would still act on if capable;
(b) any wishes expressed by the incapable person with respect to the treatment that are not required to be followed under paragraph 1 of subsection (1) [i.e., the paragraph requiring competent wishes to be followed]; and
(c) the following factors:
1. Whether the treatment is likely to,
i. improve the incapable person's condition or well-being,
ii. prevent the incapable person's condition or well-being from deteriorating, or
iii. reduce the extent to which, or the rate at which, the incapable person's condition or well-being is likely to deteriorate.
2. Whether the incapable person's condition or well-being is likely to improve, remain the same or deteriorate without the treatment.
3. Whether the benefit the incapable person is expected to obtain from the treatment outweighs the risk of harm to him or her.
4. Whether a less restrictive or less intrusive treatment would be as beneficial as the treatment that is proposed.
These criteria are binding on substitute decision-makers. While the Ontario legislation remains deferential to the wishes of the individual expressed while competent, some flexibility is accorded to the review tribunal within that framework:

36(3) The Board may give the substitute decision-maker permission to consent to the treatment despite the wish [i.e., the previously expressed refusal of the patient while competent] if it is satisfied that the incapable person, if capable, would probably give consent because the likely result of the treatment is significantly better than would have been anticipated in comparable circumstances at the time the wish was expressed.
Under Ontario law, unlike the English situation following F v. West Berkshire Health Authority, 38 the doctor never makes the final decision as to whether treatment will be given, and the person making that decision on behalf of a person lacking capacity must decide according to a specific set of criteria. Once again, Ontario adopts an approach requiring specificity.
The intent of the Ontario system was to ensure that there would always be a second view of the doctor's proposal for treatment, a reality check serving a function analogous to informed consent by a competent patient, ensuring that the proposal was appropriate for the patient's particular circumstances. A second view of this kind has been required in Ontario mental health law since 1978. 39 In the early years, the approach did not entirely fulfil this objective. The perception among patient rights advocates was that it was treated more as an obligation to inform family members of treatment rather than as scrutiny prior to consent, and in any event, it was thought that families tended to be too deferential to the medical views even when they conflicted with the patient's earlier, competent choices. For this reason, the closer guidance as to how consent should be given was included in the 1986 amendments. This, along with some administrative back-up to the provisions to inform substitutes of the criteria, has probably improved matters in this regard. It is difficult to see that it is sufficient to provide any real check on the appropriateness of proposed treatments, however, as the person providing consent will in practice rely upon the advice provided by the doctor, advice which will normally point to the desirability of treatment. Appropriate audit structures may thus be a more effective mechanism of professional scrutiny, although one which is again likely to reflect medical values. That said, the Ontario provisions did introduce clearer guidance to doctors and substitute decision-makers as to how decisions regarding treatment are to be made.
One object of the 1986 reforms had been to force a second, non-medical opinion for the patient who was incapable, but was acquiescing to treatment. Treatment on this basis had been illegal without the consent of the substitute since 1978, but the experience of the patient rights bar was that such consent was nonetheless often not obtained. While publicity surrounding the law may have altered this to some degree, particularly in extreme cases, it is not clear that it has solved the problem. There remains anecdotal evidence that psychiatrists are negotiating treatment regimes with patients of at best marginal capacity, to avoid the perceived administrative hassle of approaching the nearest relatives. While a partial solution should not necessarily be criticised because it is not a total solution, the Ontario situation may here promise more than it delivers.
The difficulties of involving family members and carers formally in decision-making structures have received some discussion in England. Particularly when the list of substitutes is fixed, inappropriate results may occur. As an extreme case, a patient might quite reasonably not want a parent informed of the particulars of their treatment, if the parent has been abusing the patient. The Richardson proposals, reflected in the government white paper, propose a system which would reduce the formal role of nearest relatives, and instead create a more informal role for nominated persons, appointed by the patient if competent and a review tribunal if not. 40 While the role in Ontario is more formal, the appointment system is much as the government and the Richardson Committee envisage. The green paper raises questions about the mechanics of appointment, 41 unanswered in the white paper; the government might do well to consult with the Ontario review tribunal regarding practicalities.
Community Treatment Orders
There has been no tradition of community treatment orders as such in Ontario. The approach of the Ontario legislation, which separates capacity and treatment from confinement, creates a markedly different environment for the consideration of such orders. At least theoretically, the provision of physical or mental treatment of an incapacitated person in the community has not posed problems, as it may be performed on the consent of a substitute. Further, when treatment cannot be enforced on a non-consenting competent patient in a psychiatric facility, it is unsurprising that it similarly cannot be enforced in the community.
In 2000, Brian's Law introduced what it describes as a community treatment order. In Ontario, as in England, political pressure had been towards further control of persons with mental health problems in the community, and in particular those ceasing prescribed treatment. The act itself was named in memory of Brian Smith, an individual killed by such a person.
Certainly, realities must be acknowledged: the act brings these patients into a new legal regime, subjecting them to particular professional scrutiny, and creating practical pressures to conform to treatment proposals. At least on paper, however, the Ontario model is not so much about enforcing a treatment programme on an unwilling patient, as it is about the provision of a coherent programme of after-care to those in particular need. There is no Ontario equivalent to the English right to after-care under section 117; if such care is to be required, the CTO is the only mechanism to do so. The intention in the drafting of the provisions seems to be to require doctors and the patient (or the patient's substitute decision-maker, if the patient lacks capacity) to reach an agreed solution embodied in a community treatment plan as to what treatment is appropriate in the community. It is available only if the patient has been an in-patient in a psychiatric facility on two or more separate occasions, or for a cumulative period of 30 days or more in the previous three years, or has previously been subject to a CTO in the previous three years. If the subject is not at the time of the order an in-patient, the physician must determine that the patient meets the criteria for compulsory admission under subsection 15(1) or 15(1.1), discussed above. In addition, it must be determined that the person is able to comply with the community treatment plan; that the care and treatment proposed is available in the community; and, in section 33.1(2)(c), that 'if the person does not receive continuing treatment or care and continuing supervision while living in the community, he or she is likely, because of mental disorder, to cause serious bodily harm to himself or herself or to another person or to suffer substantial mental or physical deterioration of the person or serious physical impairment of the person'. 42 If these conditions are met, so long as the subject agrees (or the subject's substitute, if the subject is incapable), the CTO takes effect. It runs for six months, and is subject to renewal if the above conditions are still applicable.
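Purely as a reading aid, the gateway conditions just listed can be restated as a boolean check. The sketch below is an informal paraphrase with invented parameter names; it is not a codification of the statutory language.

```python
# Informal paraphrase of the CTO gateway described above; parameter names
# are invented for illustration, and this is not the statutory text.

def cto_available(admissions_last_3yr: int,
                  inpatient_days_last_3yr: int,
                  prior_cto_last_3yr: bool,
                  currently_inpatient: bool,
                  meets_s15_admission_criteria: bool,
                  able_to_comply_with_plan: bool,
                  care_available_in_community: bool,
                  harm_or_deterioration_likely: bool,
                  consent_given: bool) -> bool:
    # History requirement: two or more admissions, 30+ cumulative
    # in-patient days, or a previous CTO, in the previous three years.
    history = (admissions_last_3yr >= 2
               or inpatient_days_last_3yr >= 30
               or prior_cto_last_3yr)
    # If the subject is not currently an in-patient, the s. 15(1) or
    # s. 15(1.1) admission criteria must be met instead.
    admission_test = currently_inpatient or meets_s15_admission_criteria
    # Consent of the subject (or of the substitute, if the subject is
    # incapable) is required before the order takes effect.
    return (history and admission_test and able_to_comply_with_plan
            and care_available_in_community
            and harm_or_deterioration_likely
            and consent_given)
```

Note how the structure mirrors the consensual character of the order: however strong the clinical case, the final conjunct is the agreement of the subject or substitute.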
The statute is curiously silent about the scope of what may be included in a community treatment plan. Clearly, a regimen of medicine would be possible; but it is unclear how far the plan may extend outside the medical sphere and into the realms of social care, contact with services and accommodation.
The subject of the order may request a re-assessment of the situation at any time. Alternatively, consent of the subject or the substitute may be withdrawn on 72 hours' notice. In either case, the attending physician may terminate the treatment order following a review of the individual's condition, if appropriate. If the physician believes that the subject is failing to comply with the order, an assessment may be ordered under section 15, the usual entry route to civil confinement, but only if the risks of bodily harm, physical or mental deterioration or physical impairment identified above are thought to exist, and if reasonable efforts have been made to assist the subject in complying with the order and to warn of the possibility of admission if the order is not complied with.
The CTO also places responsibilities on the treatment providers named in the order. While the new section 33.6 of the Mental Health Act exempts treatment providers from liability for default of others in the provision of the treatment, it makes no such exception for treatment which the named treatment provider is charged with providing himself or herself under the order. This suggests quite a different approach from that of the English court in Clunis v. Camden and Islington HA, 43 where the court specifically denied any duty of care either in breach of statutory duty or in negligence for the supervision of a patient under section 117 aftercare. Such a duty of care would presumably be found in Ontario. As such, the Ontario CTO can be seen as enforcing standards of care from treatment providers as much as enforcing compliance in the patient population. This, again, is a step beyond what is proposed for England. The Richardson Report does propose that rights to assessment and to aftercare would exist, but there is no indication how these would be enforced. Certainly, there is no suggestion that the failure to assess or provide aftercare would lead to civil liability. After the decision in Clunis, it is difficult to see that such an amendment can be intended in the absence of express language. In the government white paper, even the formal right to an assessment has been removed.
The CTO is a sufficiently new mechanism in Ontario that it is not yet possible to suggest how successful it will be. There does seem to be considerable evidence that patient concordance with treatment is affected by the standard and availability of that treatment. If that is indeed the case, the Ontario approach may well be worth taking seriously.
The Consent and Capacity Review Board and Due Process Protections
The Consent and Capacity Review Board hears applications relating to capacity to consent to treatment, financial capacity and challenging civil confinement. It also hears applications for review lodged by informal patients as discussed above, and similar applications from allegedly incapacitated adults objecting to being admitted by substitute decision-makers to nursing homes and similar institutions. It can appoint substitute decision-makers for treatment and care purposes when the patient lacks capacity and has not done so, and can provide directions as to the effect of wishes expressed by the patient regarding care and treatment. As in England, the board generally sits in panels of three: one psychiatrist, one lay person, and a lawyer as chair. Unlike the position for the English tribunals, standards as to expeditiousness are contained in the legislation. Hearings must commence within seven days of the application unless all parties agree to a postponement. A decision must be communicated to the parties within one day of the completion of the hearing. The parties must be informed of their right to request reasons, and if requested, reasons must be handed down within two days. 44 Once again, the decisions of the European Court of Human Rights on speedy determination of rights begin to look extraordinarily feeble, particularly when the Ontario legislation is much more generous in the frequency of hearings to challenge confinement.
The review board system is supported by a fairly extensive system of rights advice. Major psychiatric facilities contain full-time rights advisors, and a network of part-time advisors exists in the broader community. These individuals make routine visits when decisions of significant legal import are made relating to the patient, such as a finding of incapacity, original civil confinement, or the renewal of civil confinement. They do not in their rights advisor role represent patients before the review board, although some of the part-time advisors in the community are lawyers who may take on briefs in that capacity. Instead, rights advisors generally put patients wishing to challenge decisions in contact with lawyers, who are funded through legal aid. This provision is in addition to the services in large psychiatric facilities of professional patient advocates, who assist patients with administrative matters outside the competence of review boards. While some rights advisors are part time, this is not an ad hoc programme. It shares with the patient advocate programme a small secretariat in Toronto. It is through this central office that the advisors are trained and employed; they may work in the psychiatric facilities, but they are not employed by them. This system has been in place for almost twenty years.
There was, briefly, a much more extensive and high-profile system of advocacy, Advocacy Ontario, created by legislation in 1992. This was a government office intended to provide rights advice and advocacy services to people with physical or mental disabilities, to act in the best interests of those incapable of instructing advocates when the health or safety of those individuals was at stake, to engage in public education, to press for systemic change to improve the situation of people with disabilities, and generally to promote respect for the rights, freedoms, autonomy and dignity of people with physical or mental disabilities.
Advocates employed by the agency had considerable power. They were, for example, to have access at all reasonable times to any place where a vulnerable person was thought on reasonable grounds to be, although entry to private dwelling houses would be only by warrant of a Justice of the Peace. 45 They had access to the health and other administrative records relating to an individual lacking capacity upon whose behalf they were acting, and otherwise by consent of the individual, 46 as well as a facility's administrative procedural manuals and records for the purposes of systemic advocacy. 47 The office was to be overseen by a board of commissioners. Eight of the twelve members of this board along with the chair were required by statute to be drawn from a list of individuals nominated by groups representing people with physical or mental disabilities, to ensure accountability to the users of advocacy services. To protect against potential co-option, Advocacy Ontario was placed under the Ministry of Citizenship, removed from the Attorney-General and Health Ministries which were responsible for the other legislation relating to mental health and incapacity.
One can readily understand the logic behind Advocacy Ontario. Rights advice supported by legal representation works in individual cases, with clients who have capacity to instruct. It is not efficient at creating systemic change, however, and it is not effective for clients lacking capacity to press for their own rights. When the rights in question are those relating to personal guardianship, invoked because of a perception that an individual lacks capacity, it is obvious that an ability to press for one's rights cannot be assumed. Further, it is simply not true that all carers are good carers. Canadian estimates are that seven to ten per cent of elderly people suffer some form of physical, mental, or financial abuse, generally at the hands of their families. One cannot assume that other vulnerable people fare better. If the principles behind the Ontario reforms of the early 1990s were to be meaningful, the logic goes, appropriate support services had to be put in place.
Sadly, Advocacy Ontario was not a success. The reasons are manifold. It became a political issue, associated in the public mind with a government which had become deeply unpopular by the time Advocacy Ontario was up and running. The unpopularity was articulated in a variety of ways. It was perceived as over-funded and profligate. It was perceived as overly interfering in the private lives of Ontario's families, caring for their loved ones. While it is true that the powers accorded were significant, it is not in fact obvious that they were excessive. If the people at risk in the community were to be protected from abuse, for example, a process to get a warrant to enter a private dwelling seems to be a necessity, but in Ontario, as in England, the risks to which vulnerable people are subjected in the family and in other 'safe' environments are not something that many politicians are prepared to tackle. The first chair of Advocacy Ontario, a former shadow health minister and former user of psychiatric services, was hailed with broad enthusiasm upon his appointment. As the stock of the government in general and Advocacy Ontario in particular fell, he became perceived as a purely political appointment. The problems were not all perceptual, however. Appointments to the advisory board and to the Commission were apparently chosen to reflect the diversity of views relating to advocacy and patient rights issues. While this might have been effective in other circumstances, the board sadly seemed incapable of working together. Under these stresses, Advocacy Ontario had largely imploded before a new government finally abolished it, shortly after an election in 1995. 48 The result is problematic. There is now in Ontario no systemic mechanism in place to ensure that the law is being followed. As rights advisors act only on competent instructions, they have little effect for persons unable to provide such instructions. For those persons, advocacy services are largely absent, and the honour system seems to be relied upon for the application of the law. The English government has in the green paper agreed to consider the provision of advocacy in a mental health context. The existing Ontario model, and Advocacy Ontario, provide a mixture of success and failure. We might well learn from more detailed study of this experience.
Problems
From the foregoing, it will be clear that there is much for English analysts to consider. The overarching structure of the Ontario legislation is designed to take into account both patient rights and safety of the public and the patient. These are central to the concerns of the government in its green paper and of the Richardson Committee. While there may still be some problems with enforcement mechanisms, the presence and efficacy of the Ontario rights advice and review board structure does provide the English onlooker with cause for pause.
There are, of course, problems, real and apparent. The major theoretical difficulty with applying the Ontario system to England is that Ontario's Mental Health Act expressly acknowledges a policing role of psychiatric confinement, based on dangerousness rather than the need for or availability of treatment. Theoretically, it would be possible for patients to be detained in psychiatric facilities ad infinitum, untreated because there is no effective treatment, or because they are competent and refuse consent, or because they lack capacity and refused the required treatment prospectively. The concern is that the ethos of the facility would change from hospital to patient warehouse.
Certainly, the express acknowledgement of dangerousness rather than treatability as the criterion for confinement does have a symbolic importance, but it is easy to overstate the difference with the current English system. After all, English statute law allows confinement not just on health grounds, but also for the 'safety of the patient or for the protection of other persons', a dangerousness criterion, albeit coupled with the alternative best interest criterion of 'health'. There is further no express treatability requirement for either severe mental impairment or mental illness, but only for the small minority of cases which are categorised as psychopathic disorder or (non-serious) mental impairment. The requirement of treatability rather than dangerousness as a prerequisite for involuntary admission in England is thus already largely a myth. The Ontario legislation is more specific in its articulation of how dangerousness is to be determined, but it is not obviously theoretically different for that.
In practice, the concern seems ill-founded, since virtually no competent patients in Ontario psychiatric facilities refuse all treatment. Ontario facilities have simply not become warehouses of patients 'rotting with their rights on', any more than their English counterparts. Certainly, some patients refuse some treatments, requiring negotiation between doctor and patient towards an agreed treatment regime. While this may result in some compromise on what are perceived by the doctors as medical best interests, the increased communication between doctor and patient which is implied has its own advantages. It ensures that the patient is more involved in the development of the treatment plan, at least in theory meaning that the patient has a greater emotional stake in the resulting deal. This should in turn mean better rates of treatment continuation - a desirable medical result.
The cost of the review board structure is an obvious area of curiosity, but it does not seem exorbitant. The Ontario Consent and Capacity Review Board received 3091 applications in 1998-9, resulting in 1785 hearings. The cost of this to the taxpayer was just over $CDN 2 million, or about £900,000. 49 In this period, roughly 15,000 people (excluding criminal confinements) were involuntarily admitted to psychiatric facilities in the province. The higher number of confinements in England would militate towards an increase in this figure, 50 but the higher population density would counteract this to some degree, as transportation costs to get board members to hearings would be reduced. The cost hardly seems excessive, for provision of an efficient tribunal structure.
The more severe criticisms relate to the key terms of the legislation. It is all very well to focus on dangerousness as the criterion of confinement, but even under the closer criteria of the Ontario legislation, dangerousness is notoriously unpredictable. Studies generally find that between half and three quarters of those predicted to be dangerous by psychiatric professionals do not in the end turn out to be violent. 51 Capacity is similarly an extremely slippery concept. And while the standards in the legislation appear to provide considerable power to patients, the effect of informal coercion is not to be underestimated. In what sense, for example, is consent to treatment 'voluntary' as required by the Health Care Consent Act, 52 if it is provided after the doctor explains (perhaps quite accurately) that the treatment is the patient's only hope of recovering far enough to be released from the psychiatric facility, or if carers in the community will only accept the patient if he or she agrees to medication? These problems exist equally in the current and proposed English systems, however, and the closer wording and clearer structuring of the Ontario acts at least provides an improvement on the vague English legislation in these regards. The fact that it is only a partial solution does not necessarily justify extreme criticism, given what else is on offer.
Conclusion
Admittedly, the Ontario acts have their problems. At the same time, they do seem to provide a coherent system, which takes into account the variety of interests and concerns under discussion in the current reform debate. The risk is not merely that the government may re-invent the wheel in the to-ing and fro-ing leading up to mental health reform, but, perhaps more importantly, that they may not re-invent it very well. The Ontario example provides a wealth of experience which should be tapped. English commentators and legislators would do well to give it further heed.
"year": 2014,
"sha1": "68ca0ea51def9674d93c754b88c11101a87d5363",
"oa_license": null,
"oa_url": "https://doi.org/10.19164/ijmhcl.v1i5.359",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "00d7fb3345ebde5245294c8a17fd9a6c9f0ce94f",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Sociology",
"Political Science"
]
} |
Potential Stunting in Riverside Peoples (Study on Pahandut Urban Village, Palangka Raya City)
Efforts to raise health status in Indonesia have so far not been regarded as achieving sustained health development; compared with neighboring countries, Indonesia's health status is still considered low. One indicator of successful national development is the establishment and organization of a good health system, covering physical and psychological aspects together with spiritual condition, personality, and empowerment, and interrelated with other aspects of development. Current global challenges are also reshaping Indonesia's health system. Supporting health development therefore requires a good system and human resources equipped with technological support, so that, ideally, the desired health development outcomes are achieved and sustained. WHO's formulation defines the health system as all activities whose aim is to improve, restore, or maintain health. This concept then developed into that of a health care system, because it involves the various policies made by each country. Lassey (1976), cited in (Adisasmito, 2007), states that the health care system is a combination of health institutions, supporting human resources, financial mechanisms, information systems, organizational networks, and management structures, including administration, that together support the provision of health care services for patients.
One of the urgent problems Indonesia must address in aligning health development and achieving a high degree of health is stunting. In this connection, health development is integral to the development of human resources in realizing an advanced, independent nation that is physically and spiritually prosperous.
According to the Health Data and Information Window Bulletin (Sakti, Eka Satriani, 2018), stunting is a major nutritional problem facing Indonesia. Stunting is a condition in which a child's length or height is low relative to age. Trihono (2015) states that stunting can be measured by a height-for-age Z-score of ≥ -3 SD to < -2 SD (short) or < -3 SD (very short) according to WHO (World Health Organization) standards. Stunting can also be interpreted as failure to thrive in children under five due to chronic malnutrition, especially in the First 1000 Days of Life (HPK). As for the stunting rate in Indonesia, basic health research data (Balitbang Kemenkes RI, 2018) show that 37.2% of children under five were stunted in 2013, falling to 30.8% in 2018, a decrease of 6.4 percentage points. This figure places Indonesia 108th out of 132 countries with high stunting rates according to the 2016 Global Nutrition Report. Central Kalimantan itself has the fifth-highest stunting incidence according to Riskesdas data (Balitbang Kemenkes RI, 2018), and East Barito is among the 100 priority cities/regencies for stunting intervention in 2017, based on the 2013 RISKESDAS results showing that 54.84% of children under five in East Barito District were stunted.
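For concreteness, the WHO cutoffs cited above can be expressed as a small classification function. This is an illustrative sketch only; the function name and category labels are my own, not an official WHO tool.

```python
# Minimal sketch: classify stunting status from a height-for-age Z-score
# (HAZ), using the WHO cutoffs cited above. Illustrative only.

def classify_stunting(haz: float) -> str:
    """Return a stunting category for a height-for-age Z-score."""
    if haz < -3:
        return "severely stunted (very short)"   # HAZ < -3 SD
    if haz < -2:
        return "stunted (short)"                 # -3 SD <= HAZ < -2 SD
    return "not stunted"                         # HAZ >= -2 SD

if __name__ == "__main__":
    for haz in (-3.4, -2.5, -1.1):
        print(f"HAZ {haz:+.1f}: {classify_stunting(haz)}")
```

In practice the Z-score itself would be computed from WHO growth-standard tables for the child's age and sex; the sketch assumes that value is already available.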
Meanwhile, online reporting by Republika (Republika, 2019) indicates that stunting in Central Kalimantan is not confined to East Barito District: other regencies, namely Kapuas Regency and East Kotawaringin Regency, also have high stunting levels. This condition can be regarded as a threat to nation-building. In the past, stunting in Indonesia received little attention, but over time the cases that came to light have shown it to be a major threat to the Indonesian nation.
At the national level, the environmental health vulnerability of riverside communities can be seen in its relationship to water quality, sanitation, and hygiene in particular geographical settings, for example in Central Kalimantan Province. Central Kalimantan (Response, 2019) has a unique geographical character: it has 11 (eleven) large rivers and no fewer than 33 small rivers/creeks, and these rivers and creeks are typically inhabited by people who live and work around them, so it is no surprise that both Dayak people and migrants remain dependent on the river. This can be seen in the fact that part of the riverside population of Central Kalimantan lives in lanting (floating houses), a form of local wisdom passed down through generations.
Communities build their settlements on the rivers; at the same time, some rivers in Central Kalimantan also serve as disposal sites for waste from human activities, including industrial waste, household waste, and even human waste or feces discharged directly into the water, because the floating latrines of riverside households have no septic tanks.
Across the regencies/cities of Central Kalimantan, 860,426 residents live along rivers, 37,816 of them in the City of Palangka Raya (BPS, 2018). The buildings along the Kahayan River in Palangka Raya comprise 452 lanting houses, 623 houses on stilts, and 245 floating bathing, washing, and toilet facilities (jamban lanting). The Kahayan riverside population of Palangka Raya totals 7,860 people: 6,224 in Pahandut Village and 1,636 in the Pahandut village across the river (observations by Unpar Civil Engineering students, 2015, unpublished). In the city of Palangka Raya specifically, stunting among children under five can be said to be invisible, but this does not mean that children are free from the threat of stunting. Many factors shape the condition of children who experience stunting, one of which is the environment. People in Palangka Raya who still live on the banks of the river are considered at risk of stunting, a risk compounded by their limited understanding of environmental cleanliness.
This situation calls for a study of the potential for stunting among people living on the banks of the Kahayan river, so that knowledge of the factors that can cause stunting in children can inform a policy strategy for handling stunting in these communities.
The problems addressed in this study are: 1) What is the health condition of the people living on the banks of the Kahayan river? 2) What is the potential for stunting among people living on the banks of the Kahayan river?
Prior Research
To position this research and provide a perspective distinct from previous studies, the following prior work can be noted: a. (Besral, 2014)
Stunting
In the handbook of the National Team for the Acceleration of Poverty Reduction (TNP2K) under the Secretariat of the Vice President of the Republic of Indonesia (Satriawan, 2018), stunting is described as a condition of failure to thrive in children due to chronic malnutrition during the period of growth and development, which becomes apparent after children are 2 years old. This condition is represented by a height-for-age z-score (TB/U) of less than -2 standard deviations (SD) against growth standards. Referring to (Sakti, Eka Satriani, 2018), stunting reduces cognitive function and can lead to low education and productivity, which in turn lowers the quality of development in the Republic of Indonesia. (Vonaesch, et al., 2017) mention that, globally, one in four children (25%) under five years of age experiences stunting/growth delay, and around 90% of these children live in Sub-Saharan Africa and Asia (Levels and Trends in Child Malnutrition, WHO, UNICEF, World Bank, 2012). Within the WHO South-East Asia Region (SEAR), the highest prevalence of stunted toddlers is found in Timor Leste, with Indonesia in third place; the average prevalence of stunted toddlers in Indonesia over 2005-2017 was 36.4%. In 2010, around 26.7% of children in Asia and 26.7% of children in Southeast Asia experienced stunting.
Pathogenesis
In terms of pathogenesis, the mechanisms linking environmental contamination and stunting show that ingested microbes cause changes in intestinal structure and function (Mbuya & Humphrey, 2016). Intestinal villous atrophy reduces the intestine's absorptive capacity, leading to digestive disorders and nutrient malabsorption. Chronic exposure (i) causes malnutrition, (ii) suppresses growth hormones, so that bone growth and remodeling are inhibited and growth disturbance occurs, and (iii) causes further damage to the intestinal mucosa. As a result, nutrient absorption in the intestine is suboptimal, nutrients essential for children's growth and development become inadequate, and the child experiences linear growth disorder, or stunting.
Environmental Health
As part of public health science, environmental health studies the dynamic, interactive relationships between population groups and changes in environmental components that have the potential to disrupt public health; in other words, environmental health covers all aspects of nature and the environment that affect human health. According to P. Halton Purdon (1971), cited in (Article Biofarma, 2014), environmental health is part of the foundation of health for modern society: an aspect of public health that includes all aspects of human health in relation to the environment. Its aim is to maintain and raise the degree of public health to the highest level by modifying social factors, physical environmental factors, environmental characteristics, and behaviors that can affect health.
WHO frames environmental health in terms of the ecological balance that must exist between man and his environment in order to ensure his health. Environmental health is the realization of this ecological balance between humans and the environment, so that people become healthy and prosperous. In WHO's words, environmental health comprises 'those aspects of human health and disease that are determined by factors in the environment', and 'refers to the theory and practice of assessing and controlling factors in the environment that can potentially affect health'; in short, 'an ecological balance that must exist between humans and the environment in order to guarantee the healthy state of humans'.
Riverside Communities
History shows a close relationship between settlements and rivers, especially in Central Kalimantan, where rivers have long shaped the formation and development of cities and their communities. Over time, riverside communities have made the river a source of daily necessities and income in transportation, economic, social, and cultural terms. From this interaction between community and river, a river culture has formed. Culturally, the people of Palangka Raya are very close to the river and difficult to separate from it. Historically, the embryo of the City of Palangka Raya was a village on the riverbank, originating from settlements along the river, so it is fair to say that riverbank settlement is one of the city's defining characteristics. The banks of the Kahayan are used for settlement and various other activities, with two characteristic building types: floating (lanting) houses and stilt houses. With the passage of time, however, some river-based cultural activities have declined, followed by a decline in the function of the Kahayan River itself.
III. Research Methods
This study uses a qualitative approach, which serves to understand meaning and elaborate the results of interactions with informants. The research location is Pahandut Urban Village, Pahandut District, Palangka Raya City, on the banks of the Kahayan river; this location was chosen because of its potential for stunting and because it represents the problem formulation set out above. Data were collected through observation and interviews; observations were carried out from October to November 2019. Informants were selected by simple random sampling from among pregnant women, mothers with children under the age of 2 years, and adolescent girls.
The data used in this study are primary and secondary. Primary data were obtained through in-depth interviews with mothers of toddlers, pregnant women, and adolescents. Secondary data were obtained from the Pahandut Community Health Center.
Data were analyzed using interactive analysis techniques, namely data collection, data reduction, data display, and verification/conclusion drawing based on the predetermined problem formulation, with meaning assigned by the researcher.
IV. Discussion
Rivers have long served as a main source of life for humans, used for bathing, washing, and earning a livelihood. Because rivers flow into and connect regions, they also serve as routes for moving from one place to another. This movement shapes a structure of life in which previously uninhabited areas grow into fairly large settlements.
Philosophers of history say that history is a dialectic between continuity and discontinuity, a succession of order and change, or, in Soekarno's famous phrase, a kind of revolutionary symphony of tearing down and building up; connected with culture, this yields a two-dimensional dialectic of culture. According to Kleden, culture is a dialectic between calm and anxiety, between discovery and search, between integrity and disintegration, between tradition and reform (Kleden, 1986).
The river is part of the identity of the people who live in Kalimantan, serving as a means of transportation and shaping the process of cultural formation in Central Kalimantan. Central Kalimantan consists of 13 regencies and 1 city, and the majority of the population lives along the rivers; only around 20% of the total population lives inland without access to a river, mostly in newer residential areas formed through mobilization and transmigration (Nyahu, 2017).
Palangka Raya City began as a village on the edge of the Kahayan River called Pahandut Village. In this village, Dayak people depend for their lives on the Kahayan river, which divides the city of Palangka Raya. It was also in Pahandut Village that Sukarno first set foot in Central Kalimantan, in July 1957, when he came to inaugurate the construction of the City of Palangka Raya, the capital of Central Kalimantan Province.
Settlements in the Pahandut area are shaped by natural conditions. The settlement pattern along the Kahayan River in Pahandut Urban Village tends to be clustered; housing growth is unplanned, uncontrolled, and dense, and tends to turn into slum areas (Observation Results, 2019). The people living on the banks of the Kahayan river fall into two groups, those in floating houses (lanting) and those in stilt houses on land, and the two use different water sources: lanting households still draw water from the river, while stilt houses use wells (Interview Results, 2019).
As for toilet ownership, households in stilt houses have a toilet inside the house, although the sewage is still discharged directly into the river, the same as in the lanting community (Interview Results, 2019). The exception is the land-based (darat) community, located more than 20 m from the river's edge, all of whom have septic tanks (Interview Results, 2019). An overview of community health conditions is presented in Table 1.
Table 1. Overview of community health conditions (stilt/panggung houses):
- House type: stilt house (panggung).
- Water source: drilled well; the water still smells of rust, so it must be left to settle in a plastic barrel before use.
- Toilet: without a septic tank for houses close to the river; with a septic tank for houses far from the river's edge.

Community awareness of cleanliness and health, based on the results of observations and interviews, is quite low. Both people who live in stilt houses and those in lanting houses still have the habit of throwing garbage directly into the river or under their houses, which can pollute water sources and even cause illness in the dry season.
The issue of stunting is urgent and must be taken seriously because it concerns the quality of Indonesia's human resources in the future and greatly influences the country's existence. At the policy level, the government has issued policies to accelerate stunting prevention through both specific and sensitive nutrition interventions (Izwardy, 2019). Diet is a risk factor that can increase the potential for stunting in the community, and people who live on the banks of the Kahayan River tend to have low access to adequate food nutrition, an orientation toward the quantity rather than the quality of food, and exposure to disease (Interview Results, 2019). In addition, secondary data show that there are still 10 infants and toddlers with potential for stunting in the District of Pahandut (Pahandut Health Center Data, 2019).
V. Conclusion
Based on the discussion of the results above, it can be concluded that the environmental health conditions of the people living on the banks of the Kahayan River are very poor by health standards: there are no adequate sanitation facilities for water management or for the disposal of solid and liquid waste, compounded by the habit of throwing garbage directly into the river. The solution is to improve public health through collaboration among various parties, especially for the management of organic and plastic waste and for changing the behavior of disposing of waste directly into the river. In addition, collaboration with agencies that deal with water issues and water management must be improved, so that the riverside area of the Kahayan River can become a clean area with proper water.
The potential for stunting in the riverside community is quite large, because people who live on the banks of the Kahayan River tend to have low access to adequate food nutrition, an orientation toward the quantity rather than the quality of food, and exposure to disease. There should be further research and community empowerment by the Health Office on the nutritional patterns of children, adolescents, and pregnant women, promoting eating patterns based on quality rather than quantity. | 2020-09-16T21:49:47.216Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "f38e9f18d81f2d6320261dedbe08e15244758872",
"oa_license": "CCBYSA",
"oa_url": "http://www.bircu-journal.com/index.php/birci/article/download/1092/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f38e9f18d81f2d6320261dedbe08e15244758872",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Geography"
]
} |
221090514 | pes2o/s2orc | v3-fos-license | Coupling machine learning and crop modeling improves crop yield prediction in the US Corn Belt
This study investigates whether coupling crop modeling and machine learning (ML) improves corn yield predictions in the US Corn Belt. The main objectives are to explore whether a hybrid approach (crop modeling + ML) results in better predictions, to investigate which combinations of hybrid models provide the most accurate predictions, and to determine which features from the crop modeling are most effective to integrate with ML for corn yield prediction. Five ML models (linear regression, LASSO, LightGBM, random forest, and XGBoost) and six ensemble models were designed to address the research question. The results suggest that adding simulation crop model (APSIM) variables as input features to ML models can decrease the yield prediction root mean squared error (RMSE) by 7 to 20%. Furthermore, we investigated partial inclusion of APSIM features in the ML prediction models and found that soil-moisture-related APSIM variables are the most influential on the ML predictions, followed by crop-related and phenology-related variables. Finally, based on the feature importance measure, simulated APSIM average drought stress and average water table depth during the growing season are the most important APSIM inputs to ML. This result indicates that weather information alone is not sufficient and that ML models need more hydrological inputs to make improved yield predictions.
1. Explore whether a hybrid approach (simulation crop modeling + ML) results in better corn yield predictions in three major US Corn Belt states (Illinois, Indiana, and Iowa); 2. Investigate which combinations of hybrid models (various ML x crop model) provide the most accurate predictions; 3. Determine the features from the crop modeling that are most relevant for use by ML for corn yield prediction. Figure 1 depicts the conceptual framework of this paper. The remainder of this paper is organized as follows: the "Materials and methods" section describes the methodology and materials used in this study, the "Results" section presents and discusses the results and possible improvements, the "Discussion" section discusses the analysis and findings, and finally the "Conclusion" section concludes the paper.
Materials and methods
Since the main objective is to evaluate the performance of a hybrid simulation-machine learning framework in predicting corn yield, this section is split into two parts. The first describes the Agricultural Production Systems sIMulator (APSIM), and the second the machine learning (ML) algorithms. Each explains the details of the prediction/forecasting framework, including the inputs to the models, the data processing tasks, the selected predictive models, and the evaluation metrics used to compare the results.
Agricultural Production Systems sIMulator (APSIM). APSIM run details. The Agricultural Production Systems sIMulator (APSIM) [37] is an open-source advanced simulator of cropping systems. It includes many crop models along with soil water, C, N, and crop residue modules, which all interact on a daily time step. In this project, we used the APSIM maize model version 7.9, and in particular the version calibrated for US Corn Belt environments as outlined by Archontoulis et al. [1], which includes simulation of shallow water tables, inhibition of root growth due to excess water stress [38], and waterlogging functions [39]. Within APSIM we used the following modules: maize [40], SWIM soil water [41], soil N and carbon [42], surface residue [42, 43], soil temperature [44], and various management rules to account for tillage and other management operations. The crop models simulate potential biomass production based on a combined radiation and water use efficiency concept. This potential is reduced to attainable yields by incorporating water and nitrogen limitations to crop growth (for additional information, see www.apsim.info).
To run APSIM across the three states Illinois, Indiana, and Iowa, we used the parallel system for integrating impact models and sectors (pSIMS) software [45]. pSIMS is a platform for generating simulations and running point-based agricultural models across large geographical regions. The simulations used in this study were created on a 5-arcminute grid across Iowa, Illinois, and Indiana, considering only cropland area when creating soil profiles. Soil profiles were created from the Soil Survey Geographic database (SSURGO) [46], a soil database based on soil survey information collected by the National Cooperative Soil Survey. Climate information used by the simulations came from a synthetic weather data set called "IEM Reanalysis", engineered at the Iowa Environmental Mesonet (mesonet.agron.iastate.edu). This database is developed from a combination of several weather sources. The temperature data come from National Weather Service Cooperative Observer Program (NWS COOP) observers (www.weather.gov/coop). The precipitation data are derived from radar-based estimates of the National Oceanic and Atmospheric Administration Multi-Radar/Multi-Sensor System (NOAA MRMS) (www.nssl.noaa.gov/projects/mrms), Oregon State's PRISM data set (https://prism.oregonstate.edu/), and NWS COOP reports. Finally, the radiation data come from NASA POWER (power.larc.nasa.gov). The synthetic product was tested against point weather stations and proved accurate (see https://crops.extension.iastate.edu/facts/weather-tool). Current management APSIM model input databases include changes in plant density, planting dates, cultivar characteristics, and N fertilization rate to corn from 1984 to 2019. Planting date and plant density data were derived from USDA-NASS [47]. Cultivar trait data were derived through regional-scale model calibration. N fertilizer data were derived from a combined analysis of USDA-NASS [47] and Cao et al. [48], including N rates to corn by county and by year. Over the historical period 1984-2019, APSIM captured 78% of the variability in the NASS yields, with an RMSE of 1 Mg/ha and an RRMSE of 10% (see Fig. 2). This version of the model is used to provide outputs to the machine learning.
APSIM output variables used as inputs to ML models. The first step in combining the developed data set with APSIM variables was to extract all APSIM simulations from its outputs and prepare the obtained data to be added to that data set. The APSIM outputs include 22 variables (details are presented in Table 1). The granularity of the APSIM variables differed from the USDA data, as the APSIM simulations were made on a 5-arcminute grid (approximately 40 fields within a county). Therefore, to calculate a county-level value for each variable, the median of all corresponding values was used; the median was preferred to a simple average because it is robust to outlying grid points. The preparation steps (sketched in code below) were:
• Imputing zero values with the average of the other values of the same feature
• Removing rows with missing values
• Normalizing the data to be between 0 and 1
• Cross-referencing the new data with the developed data set
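A minimal pandas sketch of this aggregation and preparation, assuming an illustrative schema (the file names and the `county`/`year` column names are our assumptions, not the authors' actual schema):

```python
import pandas as pd

# Hypothetical frame of gridded APSIM outputs: one row per 5-arcminute
# grid point, with 'county', 'year', and the 22 simulated output columns.
apsim = pd.read_csv("apsim_outputs.csv")  # illustrative file name
value_cols = [c for c in apsim.columns if c not in ("county", "year")]

# Median over the ~40 grid points per county (robust to outlying points).
county_apsim = apsim.groupby(["county", "year"])[value_cols].median().reset_index()

# Impute zeros with the mean of the non-zero values of the same feature.
for c in value_cols:
    nonzero_mean = county_apsim.loc[county_apsim[c] != 0, c].mean()
    county_apsim[c] = county_apsim[c].replace(0, nonzero_mean)

# Drop rows with missing values, then min-max scale to [0, 1].
county_apsim = county_apsim.dropna()
county_apsim[value_cols] = (
    county_apsim[value_cols] - county_apsim[value_cols].min()
) / (county_apsim[value_cols].max() - county_apsim[value_cols].min())

# Cross-reference (merge) with the USDA-based data set on county and year.
dataset = pd.read_csv("usda_dataset.csv")  # illustrative file name
merged = dataset.merge(county_apsim, on=["county", "year"], how="inner")
```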
Then, all feature selection procedures explained in the "Data pre-processing" section were executed on the newly created data set to keep only the variables that carry the most relevant information for the prediction task.
The developed data set considers data from 1984 to 2018. The data from three years, namely 2012, 2017, and 2018, are in turn considered as test data; for each scenario, the training data comprise the data from the other years. In essence, we considered average-to-wet years (2017 and 2018) and an extremely dry year (2012) as the test years to assess model performance in all situations.
Machine learning (ML). The machine learning models were developed using a data set spanning 1984 to 2018 to predict corn yield in three US Corn Belt states (Illinois, Indiana, and Iowa). The data set comprises environment (soil and weather) and management variables as inputs, and actual corn yields for the period under study as the target variable. The input data include weather, management, and soil data [1]; the environment data include several soil parameters at a 5 km resolution [46] as well as weather data.
Data set. The county-level historical corn yields were downloaded from the USDA National Agricultural Statistics Service [47] for the years 1984-2018. A data set including observed information on the environment, management, and yields was developed, consisting of 10,016 observations of yearly average corn yields for 293 counties. The factors that mainly affect crop yields are held to be environment, genotype, and management. To this end, weather and soil were included as environmental features, and plant population and planting progress as management features. It should be noted that data preprocessing was designed to address the increasing trends in yields due to technological and genotypic advances over the years [49, 50], mainly because no genotype data set is publicly available. The data set, with 598 variables (including the target variable), is described below. Data pre-processing. Several pre-processing tasks were conducted to ensure the data were prepared for fitting machine learning models. Since it is favorable for some machine learning models, especially weighted ensemble models, for the input data to have similar ranges, the first pre-processing task was to scale the input data between 0 and 1 using min-max scaling. The most common scaling methods are min-max scaling and normalization; min-max scaling was selected as it preserves the distributions of the input variables. The next preprocessing tasks include adding yearly trends, cumulative weather feature construction, and feature selection.
Add yearly trends feature. Figure 3 suggests an increasing trend in the yields over time. No input feature in the developed data set can explain this observed increasing trend in corn yields. The trend is commonly attributed to technological gains over time, such as improvements in genetics (cultivars), management [51], equipment, and other technological advances [52, 53].
Therefore, to account for the trend as mentioned above, the following actions were taken.
A new feature (yield_trend) was constructed to explain the observed trend in corn yields. To build this feature, a linear regression model was fitted for each location, as the trend at each site tends to be different. The year (YEAR) and yield (Y) formed the independent and dependent variables of this linear regression model, respectively. The predicted value for each data point (Ŷ) was then added as a new input variable that explains the increasing annual trend in the target variable. Only training data were used for fitting this linear regression model, and the corresponding values of the new feature for the test set were computed from the fitted model. The trend value (Ŷ_i) calculated for each location (i) and added to the data set as a new feature is:

Ŷ_i = β̂_{0,i} + β̂_{1,i} × YEAR

where β̂_{0,i} and β̂_{1,i} are the intercept and slope estimated from the training data of location i.
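A sketch of this per-location trend fit follows; the column names `county`, `YEAR`, and `Y` echo the paper's notation, but the data frames themselves are assumed inputs:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def add_yield_trend(train: pd.DataFrame, test: pd.DataFrame) -> None:
    """Fit YEAR -> Y per county on the training split only, then add the
    fitted trend as a 'yield_trend' column to both splits in place."""
    for county, grp in train.groupby("county"):
        lr = LinearRegression().fit(grp[["YEAR"]], grp["Y"])
        train.loc[grp.index, "yield_trend"] = lr.predict(grp[["YEAR"]])
        mask = test["county"] == county
        if mask.any():
            # Test-set values come from the model fitted on training data.
            test.loc[mask, "yield_trend"] = lr.predict(test.loc[mask, ["YEAR"]])
```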
Aggregated and cumulative weather feature construction. To provide more climate information to the machine learning models, additional weather features were constructed from cumulated values of the existing weather features. The aggregated precipitation, growing degree days, and shortwave radiation features were computed by summation, while the aggregated minimum and maximum temperature features were computed by averaging. Two sets of new features were created: quarterly weather features (20 features) and cumulative quarterly weather features (15 features).
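A rough pandas sketch of this kind of quarterly aggregation, under an assumed daily weather frame with illustrative column names:

```python
import pandas as pd

# Assumed daily weather frame with illustrative column names.
daily = pd.read_csv("daily_weather.csv")

quarterly = daily.groupby(["county", "year", "quarter"]).agg(
    precip=("precip", "sum"),   # cumulative precipitation
    gdd=("gdd", "sum"),         # cumulative growing degree days
    srad=("srad", "sum"),       # cumulative shortwave radiation
    tmin=("tmin", "mean"),      # average minimum temperature
    tmax=("tmax", "mean"),      # average maximum temperature
)

# Running totals across quarters for the summed variables yield the
# cumulative quarterly features.
cumulative = quarterly[["precip", "gdd", "srad"]].groupby(
    level=["county", "year"]
).cumsum()
```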
Feature selection. Since the developed data set has a large number of input variables and is prone to overfitting, feature selection is necessary to build generalizable machine learning models. A two-stage feature selection procedure was performed to select the most essential features and prevent the models from overfitting on the high-dimensional training data. The two steps were feature selection based on expert knowledge, and permutation feature selection using random forest.
i. Feature selection based on expert knowledge. Using expert knowledge, weather features were reduced by removing features for the period between the end of harvesting and the beginning of the next year's planting. Additionally, the number of planting progress features was lowered by eliminating the cumulative planting progress for the weeks before planting, as they did not include useful information. Feature selection based on expert knowledge reduced the number of features from 550 to 387. ii. Permutation feature selection with random forest. Strobl [54] pointed out that the default random forest variable importance (impurity-based) is not reliable when independent variables have different scales of measurement or different numbers of categories. This is especially important for biological and genomic studies, where independent variables are often a combination of categorical and numeric features with varying scales. Therefore, to overcome this bias and find a decisive importance ranking of input features, permutation feature importance was used [55].
Permutation feature importance measures the importance of an input feature by calculating the decrease in the model's prediction error when that feature is made unavailable [56]. To make a feature unavailable, it is permuted in the validation or test set, that is, its values are shuffled, and the effect of this permutation on the quality of the predictions is measured. If permutation increases the model error, the permuted feature is considered important, as the model relies on it for prediction. Conversely, if permutation does not change the prediction error significantly, the feature is considered unimportant, as the model ignores it when making predictions [57].
The second stage of feature selection, and likely the most effective one, consisted of fitting a random forest model with 100 trees as the base model and calculating the permutation importance of the input features with 10 repetitions under a random tenfold cross-validation scheme. The number-of-trees hyperparameter of this random forest was tuned using tenfold cross-validation. Afterward, the top 80 input features were selected in the second stage of feature selection.
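A compact sketch of this second stage with scikit-learn's `permutation_importance`; the splits (`X_train`, `y_train`, `X_valid`, `y_valid`) are assumed to be prepared already, and the paper's tenfold repetition is reduced here to a single fold for brevity:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# X_train/y_train and X_valid/y_valid are assumed to hold the 387
# expert-screened features and corn yields from one CV fold.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the increase in error.
result = permutation_importance(
    rf, X_valid, y_valid,
    scoring="neg_mean_squared_error", n_repeats=10, random_state=0,
)

# Keep the 80 features whose permutation hurts the model the most.
top80 = np.argsort(result.importances_mean)[::-1][:80]
X_train_selected = X_train.iloc[:, top80]
```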
Model selection. Tuning the hyperparameters of machine learning models and selecting the best models with optimal hyperparameter values is necessary to achieve high prediction accuracies. Cross-validation is commonly used to evaluate the predictive performance of fitted models by dividing the training set into train and validation subsets. Here, we use a random tenfold cross-validation method to tune the hyperparameters of the ML models.
Grid search is an exhaustive search method that tries all the possible combinations of hyperparameter settings to find the optimal selection. It is both computationally expensive and generally dependent on the initial values specified by the user. However, Bayesian search addresses both issues and is capable of tuning hyperparameters faster and using a continuous range of values.
Bayesian search assumes an unknown underlying distribution and approximates the unknown function with surrogate models such as a Gaussian process. Bayesian optimization incorporates prior belief about the underlying function and updates it with new observations. This makes tuning hyperparameters faster and ensures finding an acceptable solution, given that enough observations are gathered. In each iteration, Bayesian optimization gathers the observations with the highest amount of information, balancing exploration (exploring uncertain hyperparameters) and exploitation (gathering observations from hyperparameters close to the optimum) [58]. Accordingly, Bayesian search with 20 iterations was selected as the hyperparameter tuning method under a tenfold cross-validation procedure.
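One way to realize this with scikit-optimize's `BayesSearchCV`; the search-space bounds and the XGBoost base estimator are illustrative assumptions, not the authors' exact settings:

```python
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from xgboost import XGBRegressor

search = BayesSearchCV(
    estimator=XGBRegressor(objective="reg:squarederror"),
    search_spaces={
        "n_estimators": Integer(100, 1000),
        "max_depth": Integer(2, 10),
        "learning_rate": Real(1e-3, 0.3, prior="log-uniform"),
        "subsample": Real(0.5, 1.0),
    },
    n_iter=20,   # 20 Bayesian iterations, as in the paper
    cv=10,       # tenfold cross-validation
    scoring="neg_root_mean_squared_error",
    random_state=0,
)
search.fit(X_train, y_train)   # assumed prepared training split
best_model = search.best_estimator_
```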
Predictive models. In this study, we combine diverse models in different ways to create ensemble models that are robust and precise. One prerequisite for well-performing ensemble models is that the base learners show a degree of diversity in their predictions while preserving excellent individual performance [59]. Thus, several base learners built with different procedures were selected and trained: linear regression, LASSO regression, Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and random forest. Moreover, an average weighted ensemble that assigns equal weights to all base learners is the simplest ensemble model created. Additionally, the optimized weighted ensemble method proposed in Shahhosseini et al. [60] was applied to test its predictive performance. Several two-level stacking ensembles, namely stacked regression, stacked LASSO, stacked random forest, and stacked LightGBM, were built and are expected to demonstrate excellent performance. The details of each model can be found in Shahhosseini et al. [61].
Linear regression. Linear regression predicts a measurable response using multiple predictors. It assumes a linear relationship between the predictors and the response variable, normality, no multicollinearity, and homoscedasticity [62].

LASSO regression. LASSO is a regularization method with built-in feature selection. It can exclude some variables by setting their coefficients to zero [62]. Specifically, it adds a penalty term to the linear regression loss function that shrinks coefficients towards zero (L1 regularization) [63].

XGBoost and LightGBM. XGBoost and LightGBM are two implementations of gradient-boosting tree-based ensemble methods. These ensemble methods make predictions sequentially, combining weak predictive tree models that learn from the mistakes of their predecessors. XGBoost was proposed in 2016 with new features such as handling sparse data and using an approximation algorithm for better speed [64], while LightGBM was published in 2017 by Microsoft, with improvements in performance and computational time [65].

Random forest. Random forest is built on the concept of bagging, another tree-based ensemble approach. Bagging reduces prediction variance by averaging predictions made on samples drawn with replacement [66]. Random forest adds a new element to bagging: randomly choosing a subset of features, constructing a tree with them, repeating this procedure many times, and finally averaging the predictions of all trees [59]. Random forest therefore addresses both the bias and variance components of the error and has proved to be powerful [67].
Optimized weighted ensemble. An optimization model was proposed in Shahhosseini et al. [60] that accounts for the tradeoff between bias and variance of the predictions, as it uses the mean squared error (MSE) to form the objective function of the optimization problem [68]. In addition, out-of-bag predictions generated by k-fold cross-validation are used as emulators of unseen test observations to create the input matrices of the optimization problem, which are the out-of-bag predictions made by each base learner. The optimization problem, which is a nonlinear convex problem, is as follows:

min_w (1/n) Σ_{i=1}^{n} ( y_i − Σ_{j=1}^{k} w_j ŷ_{ij} )²
s.t. Σ_{j=1}^{k} w_j = 1,  w_j ≥ 0,  j = 1, …, k  (2)
where w_j is the weight corresponding to base model j (j = 1, …, k), n is the total number of instances, y_i is the actual value of observation i, and ŷ_{ij} is the prediction of observation i by base model j.
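A sketch of solving this problem with SciPy's SLSQP solver, assuming the out-of-bag prediction matrix has already been assembled; the non-negativity bounds follow the formulation reconstructed above:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(oob_preds: np.ndarray, y: np.ndarray) -> np.ndarray:
    """oob_preds: (n, k) matrix of out-of-bag predictions from k base
    learners; y: (n,) actual yields. Returns MSE-minimizing weights."""
    n, k = oob_preds.shape

    def mse(w: np.ndarray) -> float:
        return float(np.mean((y - oob_preds @ w) ** 2))

    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * k        # convex (non-negative) weights
    w0 = np.full(k, 1.0 / k)         # start from the average ensemble
    res = minimize(mse, w0, method="SLSQP", bounds=bounds, constraints=constraints)
    return res.x
```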
Average weighted ensemble. The average weighted ensemble, which we call the "average ensemble", is a simple average of the out-of-bag predictions made by each base learner. The average ensemble can perform well when the base learners are diverse enough [59].
Stacked generalization. Stacked generalization combines multiple base learners by performing at least one more level of learning, which uses the out-of-bag predictions of each base learner as inputs and the actual target values of the training data as outputs [69]. The out-of-bag predictions are generated through k-fold cross-validation and have the same size as the original training set [70]. The steps to design a stacked generalization ensemble are as follows.
(a) Learn first-level machine learning models and generate out-of-bag predictions for each of them using k-fold cross-validation.
(b) Create a new data set with the out-of-bag predictions as input variables and the actual response values of the training set as the response variable.
(c) Learn a second-level machine learning model on the created data set and make predictions for unseen test observations.
Considering four predictive models as the second-level learners, four stacking ensemble models were created, namely stacked regression, stacked LASSO, stacked random forest, and stacked LightGBM.
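These steps correspond closely to scikit-learn's `StackingRegressor`, which the following sketch uses with the paper's five base learners; `X_train`/`y_train` are assumed prepared, and hyperparameter values are omitted here (they would be tuned as described earlier):

```python
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LassoCV, LinearRegression
from xgboost import XGBRegressor

# The paper's five base learners (hyperparameters tuned separately).
base_learners = [
    ("ols", LinearRegression()),
    ("lasso", LassoCV()),
    ("xgb", XGBRegressor(objective="reg:squarederror")),
    ("lgbm", LGBMRegressor()),
    ("rf", RandomForestRegressor(n_estimators=100)),
]

# Steps (a)-(c): out-of-bag predictions are produced internally via
# k-fold cross-validation and fed to the second-level learner.
stacked_lasso = StackingRegressor(
    estimators=base_learners,
    final_estimator=LassoCV(),   # swap in any of the four second-level models
    cv=10,
)
stacked_lasso.fit(X_train, y_train)
y_pred = stacked_lasso.predict(X_test)
```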
Performance metrics.
To evaluate the performance of the developed machine learning models, four statistical performance metrics were used. Together, these metrics provide estimates of the error (RMSE, RRMSE, MBE) and of the variance explained by the models (R²).
Results
Numerical results of hybrid simulation-ML framework. Table 2 shows the test set prediction errors of the 11 developed ML models for the benchmark case (no APSIM variables added to the data set) and the hybrid simulation-ML case (all 22 APSIM outputs added). The relative RMSE (RRMSE) is calculated using the average corn yield of the test set (see Table S1 in the supplementary material). Adding APSIM variables as input features improved the performance of all 11 developed ML models; in terms of RMSE, the hybrid approach boosted ML performance by up to 27%. In addition, comparing the lowest prediction errors (RMSE) of the benchmark and the hybrid scenario, we found that the hybrid models achieved 8-9% better corn yield predictions.
Looking at the average test results (Fig. 4), adding APSIM features improves all designed ML models. Moreover, even in the worst-case scenario, the smallest decrease in prediction error (RRMSE), obtained by the LASSO model, the hybrid model still proved better than the benchmark. Another observation is the superiority of the weighted ensemble models compared to the other ML models. It should be noted that the negative R² values of some models (XGBoost, stacked random forest, and stacked LightGBM) when no APSIM variables are included show that these models' predictions are worse than taking the mean value as the prediction.
On average, stacked ensemble models benefit the most from the inclusion of APSIM outputs in predicting corn yields. Furthermore, considering the Mean Bias Estimate (MBE) values, all ML models presented less biased predictions after having APSIM information in their inputs; the inclusion of APSIM variables appears to reduce prediction bias significantly. Figure 5 illustrates the goodness of fit of some of the designed ML models for the benchmark and hybrid cases for the test year 2018. As mentioned above, an advantage of including APSIM variables in the machine learning algorithms is the better distribution of the residuals (deviation from the 1:1 line), which decreased overall prediction bias. See Figure S1 in the supplementary material for additional summary statistics.
Model performance in an extreme weather year (2012).
To assess the performance of the trained models in an extreme weather year, the data from 2012, an exceptionally dry year, were treated as unseen test observations, and the quality of the predictions made by the benchmark and hybrid models was compared. Table 3 demonstrates lower prediction accuracy in 2012 (extremely dry) than in the average-to-wet test years (2017 and 2018, see Table 2). This result was consistent for both ML and hybrid models. However, the hybrid model still provided improvements over the benchmark in 2012, ranging from a 5 to 43% decrease in prediction RMSE. Comparing the best benchmark model (LightGBM) with the best hybrid model (stacked regression ensemble), the hybrid model provided 22% better predictions.
Partial inclusion of APSIM variables. This section investigates the effect of partial inclusion of APSIM variables, considering three different scenarios for the test year 2018 (see Table 4). The scenarios are: (1) include only phenology-related APSIM variables (silking date and physiological maturity date); (2) include only crop-related APSIM variables (crop yield, biomass, maximum rooting depth, maximum leaf area index, cumulative transpiration, crop N uptake, grain N uptake, season-average water stress (both drought and excess water), and season-average nitrogen stress); and (3) include soil- and weather-related APSIM variables (annual evapotranspiration, growing-season average depth to the water table, annual runoff, annual drainage, annual gross N mineralization, total N loss accounting for leaching and denitrification, annual average water table depth, and the ratio of soil water to field capacity during the growing season at 30, 60, and 90 cm profile depth). When including only phenology-related APSIM variables, the stacked regression ensemble makes the best predictions, while the least biased predictions come from the stacked random forest ensemble.
When crop-related APSIM variables are used as ML inputs, the stacked regression and stacked random forest ensembles make the best and the least biased predictions, respectively. When the soil- and weather-related APSIM variables are used as ML inputs, the stacked regression ensemble makes good predictions, with the least prediction error as well as the least bias.
Table 4 presents the test set prediction errors of the designed ML models for all three scenarios of partial inclusion of APSIM variables. Overall, the results indicate that soil- and weather-related APSIM variables, as well as crop-related variables, have a more significant influence on the predictions made by ML. This is partially explained by the fact that ML already accounts to some degree for phenology-related parameters, which are largely weather-driven, while the soil-related parameters are more complicated quantities that ML alone cannot see. This is more evident in Fig. 6. Furthermore, some of the soil- and weather-related ensemble models improve over the models developed earlier that include all APSIM variables, suggesting that not all of the included APSIM variables carry useful information for ML yield prediction.
Variable importance. The permutation importance (see also the "Data pre-processing" section) of the five individual base models (linear regression, LASSO regression, XGBoost, LightGBM, and random forest) was calculated using the test data of the year 2018. Figure 7 depicts the top 15 normalized average permutation importances of these ML models. Due to the black-box nature of the ensemble models, only the individual learners were used to calculate permutation importance. Figure 7 indicates that the most important input feature for the ML models is "yield_trend", the feature we constructed to explain the increasing trend in corn yields and to incorporate technological advances over time. To find out which APSIM features were most influential in predicting yields, the average permutation importance of the five individual models was also calculated for each test year. Figure 8 shows the ranking of the top 10 APSIM features. Results indicate that AvgDroughtStress, AvgWTInseason, and CropYield were the most important features for the machine learning models. Most of these are water-related features, suggesting the importance of soil hydrology for crop yield prediction in the US Corn Belt. This result was consistent across the three test years, including the drought year 2012, in which model prediction accuracy was lower than in the other years.
Discussion
We proposed a hybrid simulation-machine learning approach that provided improved county-scale crop yield prediction. To the best of our knowledge, this is the first study that designs ensemble models to increase corn yield predictability. This study demonstrated that introducing APSIM variables into machine learning models as inputs to the prediction task can, on average, decrease the prediction error measured by RMSE by between 7 and 20%. In addition, the predictions made by the hybrid model show less bias against the actual yields.
Other studies in this area are mainly limited to coupling the simplest statistical models, i.e., linear regression variants, with simulation crop models; apart from two recent studies [35, 36], there has been no study combining machine learning and simulation crop models. Among the hybrid models, some of the developed models provided predictions with RRMSE values as small as 6-7%, indicating that the developed models outperform the corn yield prediction models in the literature [24, 72-75].
In addition to the prediction advantages achieved by coupling ML and simulation crop modeling, we investigated the value of different types of APSIM variables in the ML prediction and found that soil-water-related APSIM variables contributed the most to improving yield prediction. The inclusion of APSIM consistently improved ML yield prediction in all years (2012, 2017, 2018). We also noticed that neither ML nor the hybrid model could sufficiently predict yields in the 2012 dry year, although the hybrid model predicted the dry year better. This suggests that more work is needed to adequately predict yields in extreme weather years, which are expected to become more frequent with climate change [76-79]. Developing models that are more robust to extreme values, including additional climate information that can help the model detect drought, and including remote sensing data are possible future research directions.
Designing a method that enables the ML models to capture the yearly increasing trends in corn yields was the main challenge of this work. To address it, an innovative feature was constructed that explains the trend to a great extent; as the variable importance results showed, it is by far the most important input feature for predicting corn yields.
The significant merits of coupling ML and simulation crop models shown in this study raise the question of whether the ML models can further benefit from the addition of more input features from other sources. Hence, a possible extension of this study is the inclusion of remote sensing data in the ML prediction task and an investigation of the level of importance each data source exhibits.
It should also be acknowledged that the APSIM simulations used as inputs to the ML model leveraged the full weather record of each test year. In real-world applications, the weather will be unknown and the APSIM model would need to run in a forecasting mode [1, 12, 80], introducing additional uncertainty. This is something to be explored further in the future.
Conclusion
We demonstrated improvements in yield prediction accuracy across all designed ML models when additional inputs from a simulation cropping systems model (APSIM) are included. Among the several crop model (APSIM, in this study) variables that can be used as inputs to ML, the analysis suggested that the most important ones are those related to soil water, in particular growing-season average drought stress and average depth to the water table.
We concluded that the inclusion of additional soil-water-related variables (whether from a simulation model, remote sensing, or other sources) could further improve ML yield prediction in the central US Corn Belt. | 2020-08-11T01:00:33.554Z | 2020-07-28T00:00:00.000 | {
"year": 2021,
"sha1": "63169ec74fa19d5fee55c9b3dc35307e400458ae",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-80820-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "98ebc7db6f1dde9466a81e5a0d88e14ea20d2e4b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology",
"Computer Science",
"Mathematics"
]
} |
254129667 | pes2o/s2orc | v3-fos-license | Nowcasting unemployment rate during the COVID-19 pandemic using Twitter data: The case of South Africa
The global economy has been hit hard by the COVID-19 pandemic. Many countries are experiencing a severe and destructive recession, a significant number of firms and businesses have gone bankrupt or been scaled down, and many individuals have lost their jobs. The main goal of this study is to support policy- and decision-makers with additional, real-time information about labor market flows using Twitter data. We leverage the data to trace and nowcast the unemployment rate of South Africa during the COVID-19 pandemic. First, we create a dataset of unemployment-related tweets using certain keywords. Principal Component Regression (PCR) is then applied to nowcast the unemployment rate using the gathered tweets and their sentiment scores. Numerical results indicate that the volume of the tweets is positively correlated, and the sentiment of the tweets negatively correlated, with the unemployment rate during and before the COVID-19 pandemic. Moreover, the nowcast unemployment rate using PCR has outstanding evaluation results, with a low Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Symmetric MAPE (SMAPE) of 0.921, 0.018, and 0.018, respectively, and a high R²-score of 0.929.
Introduction
The novel coronavirus known as "severe acute respiratory syndrome-related coronavirus type 2" (SARS-CoV-2), responsible for the "Coronavirus Disease 2019" (COVID-19) pandemic, was first detected in the metropolitan city of Wuhan, Hubei Province, mainland China, in late December 2019, and quickly spread around the world.
Since then, countries have enforced non-pharmaceutical interventions (NPIs) to curb the diffusion of the virus and prevent its spread, including lockdowns and different levels of restrictions. Even though effective from both a clinical and an epidemiological perspective, consecutive rounds of NPIs have had devastating effects on the economy and caused the bankruptcy of many companies and businesses (4). As a result, many individuals have lost their jobs and countries are experiencing economic recession (5). To better manage the economic impacts of the pandemic on the economy and people, it is highly important to have complete, reliable, and real-time information about the effects of the pandemic on the unemployment rate, one of the key macroeconomic indicators (6, 7).
Traditional census methods that most countries use to generate unemployment data are often conducted on a seasonal or annual basis (8, 9). While this provides sufficient information for public policy in normal situations, these methods lack the detail and urgency required for decision-making during a disaster such as a pandemic. Censuses often use questionnaires on a sample of households to collect employment data. Despite new technologies in data collection (such as online surveys) and analysis, censuses are still expensive, time- and resource-consuming, and difficult to handle. The census method faces many other challenges and limitations, such as privacy concerns, low public cooperation, errors caused by response burden, cybersecurity attacks (e.g., denial of service), and missing hard-to-reach populations. Migration, homelessness, and nomadism may result in under- or over-registration, making the collected data unrepresentative of the entire population. Low levels of literacy and language issues may cause some people to struggle with the census forms and fail to provide correct information.
Due to such difficulties, the unemployment rate in South Africa is also estimated only quarterly. In contrast, social media data are readily available: statistics and demographic information can be easily extracted and processed in real-time, and many of the problems and limitations of the classical census approach do not exist when data are extracted and estimated using social media (10, 11). Twitter data have the potential to provide sociodemographics, statistics, and textual content that can be exploited to estimate and model macroeconomic indicators such as the unemployment rate (12). Moreover, approximately 82% of the Twitter users in South Africa are of working age (16-54 years), and about half of them are women (56%) and half men (44%) (13). Finally, retrieving data from Twitter is neither expensive nor time-consuming, and it does not require manpower or administrative personnel. With several lines of code, data can be quickly accessed: with the streaming and full-archive search endpoints, data are available in real-time and within several days at most, respectively.
As unemployment increases, it becomes a common concern and people generally talk about it more. Conversely, as unemployment decreases, people are less bothered by it and talk about it less on social media. As a result, aggregated data derived from social media reflect the unemployment situation and can potentially be used to estimate the statistics (14-19). Moreover, applying sentiment analysis, a way of classifying text to extract qualitative insights, provides additional information that can be used for machine learning-based prediction (20, 21).
Access to socio-economic data such as unemployment rates is very critical for rapid and effective decision-making and public health policies, during devastating disasters such as the still ongoing COVID-19 pandemic. In the present study, we propose a method for understanding and estimating unemployment rates during COVID-19 using social media, particularly Twitter data (22). As previously mentioned, accessing data extracted from social media is fast, easy, and low-cost. It can be done in real-time and does not have the difficulties and limitations of census-based methods.
Social media provide a large amount of data about users and their interactions about a given subject, thereby, offering researchers new opportunities for research (23)(24)(25)(26). Twitter as a pervasive social media is widely used for understanding economic behavior and measuring its metrics (27,28). It is also one of the most popular social media in South Africa (29,30). With the implementation of NPIs, such as lockdowns and the closure of workplaces and public areas, people spend even more of their time on social media (31).
In this paper, we examine how Twitter data can be used to collect qualitative and quantitative information about the unemployment rate and about how unemployment is lived and experienced, taking South Africa as a case study. This could benefit policymakers, especially during disasters such as the COVID-19 pandemic, as it can capture and report rapid changes in unemployment in real-time rather than seasonally or annually, enabling policymakers to understand the current labor market situation and react with appropriate policies. Accordingly, the main contributions of this study are:
• Using the quantity of the tweets to understand how people experience unemployment.
• Using the quality of the tweets (their sentiments) to understand how people feel about unemployment.
• Nowcasting and filling in the missing unemployment-rate data using the quantity and quality of the tweets.
Background and literature review
Social media, especially Twitter, has long been used to investigate economic issues. The authors of (32) searched for tweets with hashtags for different job-related keywords and gathered tweets sent by popular users in the United States; sentiment analysis showed that most of the tweets had negative sentiments. In (33), a sentiment-based model was designed with an accuracy of 0.6787 for tweets, news articles, and movie reviews, and the sentiment scores were found to correlate with economic indexes such as the exchange rate. Although social media has long been used for studying economic issues and related concerns, very few studies have used it to understand the unemployment rate. One of the first works to estimate the unemployment rate from Twitter is presented in (14): 19.3 billion tweets on unemployment in the United States were gathered from July 2011 to November 2013, Principal Component Analysis (PCA) was used to reduce the dimension of the dataset, and the unemployment rate of the United States was then estimated using the principal components. A similar approach was proposed in (15) to study the correlation between the number of unemployment-related tweets and the unemployment rate in Greece. Sentiment analysis was not considered in these two studies to further improve the results. Ryo in (17) analyzed the sentiments of Korean tweets, blogs, and news articles, and used the sentiments to predict the unemployment rate with autoregression analysis (e.g., ARIMAX and ARX); the Twitter dataset was found to have the lowest error. The authors of (18) used Twitter data to study the unemployment and employment rates in the United States; using sentiment analysis, they found that negative and positive sentiments peak when people lose or gain jobs, and they used the sentiments to predict the unemployment rate of the United States. The authors of (19) built a linear model to predict employment and unemployment rates using tweets from the United States. Although the papers mentioned above present novel methods for studying the unemployment rate using social media, they did not investigate unemployment rate changes during a disaster such as the COVID-19 pandemic.
The authors of (16) hydrated a Twitter dataset and used it to study the correlation between the number of unemployment-related tweets and the unemployment rate, and to track the unemployment rate of the USA during the COVID-19 pandemic. However, because of limitations in their dataset, they were not able to properly understand how the unemployment rate changed over time.
Some social media-related studies have focused on the labor market flow during the COVID-19 Pandemic. Authors in (34) used Twitter to study the effect of different factors on reopening sentiments. They found that people with low income, low education level, high housing rent, and in the labor force are more positive about reopening. In (35) Twitter was used to study the economy of the United States during the COVID-19 pandemic. In this work, the Area Deprivation Index (ADI) of different geographical locations was used to assess the economic situation of people. It concluded that in low resource areas, people were more concerned with economic hardship while in high resource areas people were more focused on public health. In (36) data from Twitter and newspaper articles were used to study economic uncertainty in the United Kingdom and the United States during the COVID-19 pandemic. Numerical results show that with the COVID-19 pandemic, a huge uncertainty jump was found in economic-related indicators such as business growth, GDP growth, and stock market volatility.
These papers investigated the effect of the COVID-19 pandemic on the economy, but they did not consider studying and estimating the unemployment rate using social media. The main contribution of this study is to fill the existing gaps in using social media data to understand, analyze, and estimate the unemployment rate during the pandemic using a combination of methods. This combination significantly improves on the classical method for estimating the unemployment rate.
Materials and methods
Our complete code can be found at (37). The unemployment rate for South Africa is estimated in four steps. In the first step, relative keywords are selected to collect Twitter data. In the second step, missing unemployment data is estimated using Google Mobility Index (GMI). In the third step, sentiment analysis is performed to achieve further information about the labor market conditions. Finally, in the fourth step Principal Component Regression (PCR) is used to estimate the unemployment rate from the number of unemployment-related tweets. The overall architecture of the project is presented in Figure 1.
We evaluate our method using four different metrics: Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), Symmetric Mean Absolute Percentage Error (SMAPE), and the coefficient of determination (R²-score), which are presented in Equations 1-4.
Figure 1. The overall architecture of the research.
RMSE = sqrt( (1/n) Σ_{i=1}^{n} (A_i − P_i)² )  (1)
MAPE = (1/n) Σ_{i=1}^{n} |A_i − P_i| / |A_i|  (2)
SMAPE = (1/n) Σ_{i=1}^{n} |P_i − A_i| / ((|A_i| + |P_i|) / 2)  (3)
R² = 1 − Σ_{i=1}^{n} (A_i − P_i)² / Σ_{i=1}^{n} (A_i − Ā)²  (4)

where n is the number of tested values, A is the actual unemployment rate, Ā is the mean unemployment rate, and P is the predicted value.
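A direct NumPy transcription of Equations 1-4, with `A` and `P` as arrays of actual and predicted rates:

```python
import numpy as np

def rmse(A: np.ndarray, P: np.ndarray) -> float:
    return float(np.sqrt(np.mean((A - P) ** 2)))          # Equation 1

def mape(A: np.ndarray, P: np.ndarray) -> float:
    return float(np.mean(np.abs(A - P) / np.abs(A)))      # Equation 2

def smape(A: np.ndarray, P: np.ndarray) -> float:
    # Ranges from 0 (perfect) to 2 (worst), as noted later in the text.
    return float(np.mean(np.abs(P - A) / ((np.abs(A) + np.abs(P)) / 2)))

def r2_score(A: np.ndarray, P: np.ndarray) -> float:
    return float(1 - np.sum((A - P) ** 2) / np.sum((A - A.mean()) ** 2))
```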
Data collection
All geotagged tweets posted from South Africa, except retweets, up to Nov 30th, 2021, containing certain keywords were retrieved using the full-archive search of a Twitter Academic Research account. Tweets were cleaned: mentions (@username), URLs, and punctuation were removed. Records that include only URLs become null/nan after cleaning and were deleted. Our method requires a dataset of tweets from real and genuine accounts (38); therefore, we aim to remove as many tweets posted by bots and fake accounts as possible. Since most tweets created by bots include URLs, many of them were deleted when the null/nan records were removed (39). To further weed out tweets created by bots and fake accounts, we examined the numbers of followers and followings of the authors. Generally, users with a very large or very small followers-to-followings ratio are broadcasters or spammers, respectively, while genuine users have a ratio close to one (40). Therefore, by removing tweets whose authors have a followers-to-followings ratio greater than t1 = 10 or smaller than t2 = 0.1, more tweets from fake accounts were excluded. It is worth mentioning that tightening these thresholds (decreasing t1 or increasing t2) degraded the performance of the regression model.
It is worth mentioning that minors are not excluded from the dataset. Since minors can also post about how they or their friends and family (e.g., parents or guardians) are experiencing unemployment, their comments and sentiments can add useful information to the model and increase the accuracy of the PCR. Next, the Term Frequency (TF) of each keyword was computed over time using Equation 5.
TF_k = tweet_k / tweet_total  (5)

where tweet_k is the number of tweets that include keyword k, and tweet_total is the total number of tweets. Using the TF of the keywords, the Pearson correlation of each keyword over time with the unemployment rate was calculated. In economics, correlations higher than 0.4 and 0.7 are considered moderate and strong, respectively (41). To avoid overfitting our estimation model, we chose the keywords with a correlation higher than 0.4 with the unemployment rate, both before and during the COVID-19 pandemic, for training the nowcasting model.
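A small pandas sketch of this keyword screening; the monthly-counts frame and its column names are assumptions for illustration:

```python
import pandas as pd

# Assumed monthly frame: one row per month with a 'total' tweet count,
# one count column per keyword, and the (interpolated) unemployment rate.
monthly = pd.read_csv("monthly_counts.csv")
keywords = [c for c in monthly.columns if c not in ("month", "total", "unemployment")]

selected = []
for k in keywords:
    tf = monthly[k] / monthly["total"]        # Equation 5
    r = tf.corr(monthly["unemployment"])      # Pearson by default
    if r > 0.4:                               # moderate correlation or stronger
        selected.append(k)
```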
We built our final dataset using these selected keywords. The cleaned tweets are suitable for Natural Language Processing (NLP) tasks such as sentiment analysis. The dataset is divided into two parts: the first contains tweets up to March 31st, 2020, and the second contains tweets from April 1st, 2020 to Nov 30th, 2021. The first part is used to analyze the tweets and their sentiments before the COVID-19 pandemic, and the second part covers the COVID-19 pandemic period.
To make sure the volume of tweets truly correlates with the unemployment rate in the long run, we went further back in history and gathered tweets from as early as possible. Geotagged tweets with our keywords are available from June 2009; however, due to the low volume (fewer than ten tweets per month) between June 2009 and June 2010, we left out the tweets from this period. The number of tweets has a moderate to high correlation with the quarterly and interpolated unemployment rates of South Africa from July 2010 to Nov 2021, respectively. Moreover, we compared the number of tweets for each province with the unemployment rate of that province since July 2010 and found a moderate to high correlation for all of them. The results are presented in Appendix A, supplementary files. Previous works have used the change in geolocation of geotagged tweets over a certain time period to identify mobility and travel (42, 43). We found all the geolocations of the tweets sent by each of the 144,809 users within one year to recognize travelers/non-residents. Similar to (43), we used the place field of the JSON file returned by the Twitter API to discover the province of a user at the time of posting. Users who posted from multiple provinces were identified as travelers. The most frequent geo-province associated with a user is considered that user's primary location, where the user resides or at least works; thus, the most frequent geo-province associated with each user is assigned to all the tweets sent by that user in that specific year. If more than one province has the greatest number of occurrences, the self-reported location of the user is taken as the primary location (43). The few users (2 on average) whose primary province was not identifiable in a given year were removed from the dataset for that particular year when nowcasting the provincial unemployment rates. With this method, the correlation between the volume and sentiment of the tweets and the unemployment rate, as well as the PCR estimation accuracy for the different provinces, increased (see Appendix A, supplementary files).
Data preprocessing
The real unemployment data for South Africa are provided on a seasonal basis (44) and are calculated in two different ways. In the first method, an individual is considered unemployed at the time of an interview if (1) the individual was not employed in the seven days before the interview, (2) the individual is ready to work within a week of the interview, and (3) the individual has actively taken steps to look for a job or start a self-employed business in the four weeks before the interview. In the second method, the third condition is relaxed (45). Since people do not normally look for new jobs or start a business during a lockdown (even if they are jobless), we use the second definition in this work. This expanded definition of unemployment aligns with the definition used in many other countries (45). However, due to on-and-off rounds of lockdowns, the unemployment rate changed rapidly during COVID-19, and the quarterly unemployment rate cannot capture these rapid fluctuations. We thus use the Google Mobility Index (GMI) to interpolate the census unemployment rate during the COVID-19 pandemic (46-48). According to the International Labor Organization (ILO), the unemployment rate can be approximated using GMI (47). GMI shows movement trends over time and space in six categories of places: retail and recreation, groceries and pharmacies, parks, transit stations, workplaces, and residential. It was released to the public on Feb 15th, 2020 and will be withdrawn after the pandemic (46). Since GMI is only temporarily available, it cannot be used to estimate the unemployment rate after the pandemic; Twitter data, however, are always available and can be used for understanding, nowcasting, and even interpolating the unemployment rate. Figure 2A shows the indexes of the GMI categories over time for South Africa.
Because residential activity is not work-related and its index correlates negatively with the rest of the indexes, we exclude it from our analysis. We average the indexes of all the other categories and use linear regression to interpolate the unemployment rate of South Africa from the GMI data. Equation 6 contains the result obtained from fitting a linear regression model to the GMI data:

unemp = −0.1354 × GMI + 41.1134  (6)

where GMI is the mobility index averaged over all categories except residential places and unemp is the interpolated unemployment rate. Figure 2B shows the quarterly unemployment rate (26) interpolated using the GMI for South Africa. In this figure, the error bars show that the highest error occurs in the estimate of the unemployment rate for August 2021.
To evaluate the goodness of fit, we use the SMAPE metric. This metric, shown in Equation 3, takes values between 0 and 2, with 0 indicating a perfect fit and 2 the worst possible fit. Generally, a SMAPE value lower than 0.1 indicates a very good regression fit (49). We found a SMAPE of 0.05196, which indicates that our simple linear regression model captures the GMI data quite well.
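For reference, the bounded SMAPE variant used here (the mean of |F − A| over the pairwise mean of |A| and |F|, without a ×100 factor, so the range is [0, 2]) can be computed as follows; this is a sketch of the metric as described, not the authors' code.

```python
import numpy as np

def smape(actual, predicted) -> float:
    """Symmetric mean absolute percentage error, bounded in [0, 2]."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    denom = (np.abs(actual) + np.abs(predicted)) / 2.0
    return float(np.mean(np.abs(predicted - actual) / denom))

print(smape([30.1, 32.5], [30.4, 32.0]))  # small value -> good fit
```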
Data labeling and sentiment analysis
Sentiment analysis is an NLP procedure that classifies text based on its affective state. We perform sentiment analysis using a pretrained Bidirectional Encoder Representations from Transformers (BERT) model (50,51). The model is trained on a large Twitter dataset (52,53). We randomly choose 200 tweets from our dataset, manually label them as negative, neutral, or positive, and find that the model has 0.69 accuracy on our dataset. Based on how negative, neutral, or positive a tweet is, the model assigns it a score between −1 and 1. Negative, neutral, and positive tweets have scores close to −1, 0, and 1, respectively (54).
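A plausible sketch of this scoring step is shown below, using the Hugging Face transformers pipeline with a Twitter-trained RoBERTa checkpoint (cardiffnlp/twitter-roberta-base-sentiment) as a stand-in; the paper cites its model only via refs. (50-53) without naming a checkpoint, so both the model choice and the score mapping here are assumptions.

```python
from transformers import pipeline

# Hypothetical checkpoint choice; the paper's exact model is not named.
clf = pipeline("sentiment-analysis",
               model="cardiffnlp/twitter-roberta-base-sentiment",
               top_k=None)  # return probabilities for all three classes

def sentiment_score(text: str) -> float:
    """Map class probabilities to a score in [-1, 1]: P(pos) - P(neg)."""
    scores = {d["label"]: d["score"] for d in clf([text])[0]}
    # For this checkpoint, LABEL_0/1/2 = negative/neutral/positive.
    return scores["LABEL_2"] - scores["LABEL_0"]

print(sentiment_score("lost my job today"))   # close to -1
```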
Unemployment increased during the COVID-19 pandemic, and everybody, even young people who are not in the workforce, has suffered from it (55,56). Even in wealthier families, children and adolescents below working age have experienced anxiety and depression due to the financial and economic crisis (57). Their experience may not be as negative and acute as that of adults, but the economic recession caused by lockdowns and unemployment is reflected in their comments and sentiments as well (55). In addition, in poorer families, economic crises may force adolescents to quit school and enter the labor market to supplement the household income (58). These negative impacts of the pandemic and lockdowns are reflected in the tweets, and we capture them by performing sentiment analysis. The result effectively increases the performance of the model for nowcasting the unemployment rate.
The normalized sum of the sentiment scores over time is calculated for the two parts of the dataset and compared with the unemployment rate, before and during the COVID-19 pandemic. Moreover, the sentiment classes and scores for the different provinces are calculated and compared. The two datasets are concatenated to train the PCR model and estimate the unemployment rate. Of the 1,182,632 distinct tweets, 289,738 belong to the second part of the dataset (COVID-19 pandemic period), and the rest belong to the first part (pre-COVID-19 pandemic period). Figure A.1 in Appendix A in the supplementary files shows the word cloud generated for our dataset.
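As an illustration, the normalized sum of sentiment scores over time could be computed as below; interpreting "normalized" as dividing the monthly sum by the monthly tweet count is our assumption (consistent with the per-tweet normalization used later for Figure 13).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Synthetic stand-in tweets: a date and a sentiment score in [-1, 1].
tweets = pd.DataFrame({
    "date": pd.to_datetime("2019-01-01")
            + pd.to_timedelta(rng.integers(0, 365, 500), unit="D"),
    "score": rng.uniform(-1, 1, 500),
})

# Sum of sentiment scores per month, normalized by the monthly tweet count.
monthly = tweets.sort_values("date").set_index("date").resample("MS")["score"]
normalized_sentiment = monthly.sum() / monthly.count()
print(normalized_sentiment.head())
```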
Model development and validation
After concatenating our datasets, the number of tweets over time for the whole dataset and for the different keywords is computed and stored in a vector. Next, since the normalized sum of the sentiment scores over time has a negative correlation with the unemployment rate, it is inverted and stored in a separate vector. These vectors, which make up the training set, are standardized to improve the performance of the regression model. The unemployment rate is stored in a further vector and used as labels for the PCR. The PCR method is essentially a linear regression model on the principal components of the training dataset (59). Therefore, PCA is applied to all vectors of the training dataset, and twenty-two principal components are found. According to Figure 3A, the first component accounts for more than 80% of the variance. However, according to Figure 3B, the cross-validation Root Mean Square Error (RMSE) indicates that the smallest error is obtained when five principal components are used for linear regression. Therefore, we use linear regression with the first five principal components in our model.
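The PCR pipeline and the component-count selection by cross-validated RMSE (cf. Figures 3A,B) can be sketched with scikit-learn as follows; the feature matrix below is synthetic and merely stands in for the 22 standardized keyword-count and inverted-sentiment vectors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 22))          # stand-in for the 22 feature vectors
y = 2 * X[:, 0] + rng.normal(size=60)  # stand-in unemployment-rate labels

# Pick the number of principal components by cross-validated RMSE.
for k in range(1, 11):
    pcr = make_pipeline(StandardScaler(), PCA(n_components=k),
                        LinearRegression())
    rmse = -cross_val_score(pcr, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(k, round(rmse, 3))   # choose the k with the smallest RMSE
```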
Results
The method is implemented in Python 3 in Google Colaboratory (60). Using the vectorization features of Python, we are able to process our large dataset very quickly. However, the sentiment analysis part, which requires a Graphics Processing Unit (GPU), takes more than 3 hours to execute (37).
[Table caption fragment: the selected keywords (14), their correlations with the unemployment rate, their p-values, and whether they are selected for tracing the unemployment rate and training the PCR model.]
Quantity of the tweets
The total dataset is 53% and 90% correlated with the unemployment rate, with p-values of 4 × 10⁻⁴ and 3.76 × 10⁻⁸, before and during the COVID-19 pandemic, respectively. Figures 4A,B show the correlation between the number of tweets in the total dataset and the unemployment rate before and during the COVID-19 pandemic, respectively.
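Correlations and p-values of this kind can be computed with a standard Pearson test; a minimal sketch with synthetic stand-in series is shown below.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
# Synthetic stand-ins for the unemployment rate and monthly tweet counts.
unemp = np.linspace(25, 35, 40) + rng.normal(0, 0.5, size=40)
tweet_counts = 120 * unemp + rng.normal(0, 400, size=40)

r, p = pearsonr(tweet_counts, unemp)
print(f"correlation {r:.2f}, p-value {p:.2e}")
```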
According to these results, the employment-related tweets gathered using our selected keywords are significantly correlated with the unemployment rate of South Africa, both during and before COVID-19. Next, the two datasets for before and during the COVID-19 pandemic are concatenated. Figure A.2A in Appendix A, supplementary files, shows that the number of tweets in the concatenated dataset is also highly correlated with the unemployment rate. Thus, Twitter data may be used to estimate the unemployment rate in real time.

Sentiment classification

Figure 5A shows the confusion matrix of the pretrained model tested on our labeled dataset. The diagonal of the confusion matrix indicates that the accuracy of the model is 69%. Table 1 shows the precision, recall, and f1-score of the model. The average of these metrics over the different polarities also suggests that the accuracy of the model on our dataset is approximately 69%. Moreover, Figure 5A and Table 1 show that tweets with negative and positive polarity are better recognized than tweets with neutral polarity. The reason could be that tweets with neutral sentiment may carry a mixture of positive and negative polarity and are therefore more difficult to distinguish. Figure 5B shows the number of negative, neutral, and positive employment-related tweets before and during the COVID-19 pandemic. As shown in Figure 5B, there are more tweets with negative sentiment than with positive sentiment. This is expected, since the dataset consists of unemployment-related tweets. Moreover, Figure 5B shows that the sentiment classes are more negative and less positive during the COVID-19 pandemic than before it. This is in line with previous research on social media sentiments during COVID-19 (61). As the COVID-19 pandemic started, people found more free time during the lockdowns to spend on social media. In addition, they were able to communicate with friends and family while social distancing (62). However, the microblogging sentiments were not always more positive than before the COVID-19 pandemic (61,63). The authors of (62,63) found that the most dominant emotions of tweets on COVID-19-related topics were fear, anticipation, and trust: people were scared of the pandemic circumstances, yet hopeful that new solutions would be found for prevention and recovery. Moreover, emotions regarding the economy were very negative during the COVID-19 pandemic (64,65). The studies in (64,65) show that at the beginning of the pandemic, investors became very fearful and uncertain about stock market trends and trading. The sentiments regarding the oil price were also dominated by fear at the beginning of the pandemic (65). Finally, it is stated in (65) that Twitter sentiments around jobs and employment remained optimistic in March 2020; people were hopeful that everything would go back to normal after the lockdowns. However, since April 2020, emotions have become increasingly less optimistic and more anxious and annoyed regarding the labor market. We compare the normalized sum of sentiment scores with the unemployment rate, during and before COVID-19. Figure 6 shows the distribution of the normalized sum of the sentiment scores (A) before and (B) during the COVID-19 pandemic, over time. Table 2 shows the correlation and p-value of the sentiment scores with the unemployment rate for the first and second parts of the dataset, i.e., before and during the COVID-19 pandemic, as well as the correlation between the concatenated dataset and the unemployment rate.

According to Figure 6 and Table 2, the sentiment scores have a high negative correlation with the unemployment rate, both during and before the COVID-19 pandemic. This means that the higher the unemployment rate, the more negative the sentiments of the employment-related tweets. The sentiments can thus be used to qualitatively analyze employment-related tweets and to understand how dissatisfied people are with unemployment.
Nowcasting the unemployment rate
The sentiment scores are next inverted to have a positive correlation with the unemployment rate. Two-thirds of the dataset is used for training the PCR model and the remaining third is used for testing. According to Figure 7, the predicted values of the unemployment rate are very well correlated with the actual values. The SMAPE, RMSE, MAPE, and coefficient-of-determination (R²) metrics in Figure 7 are calculated using Eqs. 1-4 (31). As shown in Figure 7, the trained model has an R²-score of 0.93 and a SMAPE of 0.01, which is outstanding. Figure 8A shows that the estimated unemployment rate matches the actual unemployment rate. Figure 8B shows that the estimated unemployment rate is well correlated with the actual unemployment rate during the COVID-19 pandemic. For this period, the model has an R²-score of 0.51 and a SMAPE of 0.03, which shows that it has a good effect size and performs very well (49).
We have also used PCR to nowcast the unemployment rate of the individual provinces. Figure 9 shows the correlation between the actual unemployment rate of Gauteng and the estimated values when about two-thirds of the data is used for training the PCR model and one-third is used for prediction. According to Figure 9, the model for Gauteng has a SMAPE of 0.03 and an R²-score of 0.89, which indicates a good prediction.
Moreover, we use one-third of the data from before the COVID-19 pandemic to train the PCR model and then use the trained model to nowcast the unemployment rate during the COVID-19 pandemic for the different provinces of South Africa. Figure 10A shows that the predicted values closely follow the actual unemployment rate for Gauteng. Figure 10B shows the correlation between the actual and predicted values of the unemployment rate for Gauteng. We obtain a SMAPE of 0.03 and an R²-score of 0.68, which indicates a good prediction.
The results for the rest of the provinces can be found in Appendix A in supplementary files.
Discussion
In this paper, we use social media to nowcast the unemployment rate of South Africa. We find that the number of tweets on certain keywords has a high correlation with the unemployment rate in South Africa. Moreover, the social sentiments of the tweets are negatively correlated with the unemployment rate. Social media provide a large amount of data about users and their interactions on a given subject, thereby offering an unconventional data source for data-driven policy decisions. Social media are turning into the primary place where people share their thoughts and daily activities. In addition to what people express on social media, an investigation of their underlying attitudes can help inform policies. Some of these conversations on social media are employment-related.
In this study, we show that certain keywords extracted from employment-related tweets can be used to nowcast the unemployment rate. The selected keywords correlate with the unemployment rate for all the years considered. Therefore, it is very likely that the number of tweets gathered with these keywords will continue to correlate with the unemployment rate in the future. Moreover, the fact that the normalized sum of the sentiment scores of the tweets gathered with these keywords has a strong negative correlation with the unemployment rate confirms that these keywords reflect the unemployment rate. As the unemployment rate increases, people begin to talk about it on social media in a negative way, and the selected keywords pick up this reflection.
Our PCR method for estimating the unemployment rate from the number of tweets on the selected keywords and the normalized sum of their sentiments achieves a SMAPE of 0.01 and an R²-score of 0.93.
In conclusion, our PCR method can estimate the unemployment rate of a country very well. This is valuable because it removes the barriers and difficulties of census methods and estimates the unemployment rate in real time. Furthermore, to make sure that the gathered dataset truly captures the unemployment rate and can be used to nowcast it in the long run, we find the number of tweets belonging to each province of South Africa and stratify the provinces by age and industry. Figure 11A shows the number of tweets of each province. Most of the tweets come from the urban provinces, namely, Gauteng, KwaZulu-Natal, and Western Cape (66). These provinces contribute more than 85% of the tweets, while the remaining provinces, which are considered rural, account for less than 15%.
However, when we study the distribution of the age and industry populations across the provinces, we find that (1) most people of working age (20-60 years old) live in the urban provinces (Gauteng, KwaZulu-Natal, and Western Cape), and (2) the most populated industries are in the urban provinces. Figure 11B shows the distribution of the age population across the provinces for 2021. We produced this diagram for 2020, 2019, and 2018 as well and found similar distributions (67); the diagrams can be found in our complete code (37). As can be seen in this figure, the working-age population (20-60 years old) of KwaZulu-Natal and Gauteng is almost 2 and 3 times, respectively, that of the rural provinces. After these two provinces, the working ages are most populous in Western Cape compared with the rural provinces. In South Africa specifically, about 82% of Twitter users are of working age (16-54 years) (8,13,68). Based on the above, we conclude that the tweets we have gathered come predominantly from people of working age, talking about their economic condition, and that the volume of the tweets therefore represents the unemployment situation of the country.
Next, we find the distribution of the industry population across the provinces. Figure 12A shows the distribution of the industry population across the provinces, and Figure 12B shows the population of a given industry divided by the total population of that industry, for each province in 2021. The diagrams for 2020, 2019, and 2018 are very similar to those for 2021 and can be found in our complete code (37).
As can be seen in Figure 12B, in all industries except utilities, mining, and agriculture, most of the population lives in Gauteng, KwaZulu-Natal, and Western Cape. For these industries, the population living in Gauteng is almost 2 times, and in some cases 3 times, that of the other provinces. Among the rural provinces, most of the population in these industries lives in Eastern Cape, which also has the highest number of tweets among the rural provinces (Figure 11A). For utilities, mining, and agriculture, the population in the rural provinces is considerable; however, according to Figure 12A, the population working in these industries is very small. Therefore, we conclude that the number of tweets is strongly tied to the population working in the different industries and reflects the economic situation of the different sectors.
In conclusion, what we capture through tweet volume is associated with the unemployment rate of the country, and considering that it has correlated with the unemployment rate ever since the beginning of Twitter, it will most probably represent the unemployment rate of the country in the long run. The results in Appendix A in the supplementary files show that the number of tweets in each province has a moderate to strong correlation with the unemployment rate of that province, which also indicates that we have gathered the tweets using the right keywords.
We also calculate the sum of the sentiment scores divided by the number of tweets over time in the urban and rural areas of South Africa. Figure 13 shows this quantity for urban and rural areas since 2017.
According to Figure 13, the sentiments for urban areas are noticeably lower during the COVID-19 pandemic than before it. One probable reason is that during the COVID-19 pandemic the economy was devastated, and most industries and working-age people are located in urban areas. Therefore, the sentiments of the urban areas are evidently lower than those of the rural provinces. This is another finding showing that we have gathered the right data from Twitter and that our method can most probably be used to nowcast the unemployment rate in the long run.
Limitations
There are several limitations of Twitter data that prevent us from training a perfect model. As previously mentioned, generally only 15% of online adults, mostly 18-29 years old and some minorities, regularly use Twitter. Certain populations, such as urban/suburban residents, affluent householders, and mobile users, are more likely to use Twitter (69). As a result, a great portion of the public is left out of consideration. Moreover, 95% of Twitter users never geotag their tweets. Among those who consider geotagging, only about 1% allow most of their tweets to be geotagged. Very passive and very active users, who post fewer than 50 and more than 1,000 tweets per year, respectively, do not allow most of their tweets to be geotagged; only moderate users, with 50 to 1,000 tweets per year, frequently allow geotagging. Therefore, a vast number of tweets cannot be used (70,71). Furthermore, among the geotagged tweets, only those in English can be used. This is especially limiting when studying multilingual countries such as South Africa, where 11 official languages are spoken (72). We are only able to gather and analyze English tweets for tracing and nowcasting the unemployment rate.
Conclusion
In this paper, social media, in particular Twitter, is traced to estimate the unemployment rate of South Africa in real time. Since the unemployment rate of South Africa is measured quarterly, this method can also be used to fill in the missing information between releases. Moreover, this method can provide unemployment statistics in real time, without the difficulties of the traditional census approach. Finally, this information can be highly valuable for analyzing labor market flows when facing disasters such as a pandemic.
The normalized sum of the sentiment scores over time, both before and during the COVID-19 pandemic, has a strong negative correlation with the unemployment rate. We combine the number of tweets on the different keywords with the sentiment scores and use PCR to nowcast the unemployment rate. The results show that the estimated unemployment rate is well correlated with the actual unemployment rate. One direction for future work is to use social media to estimate other economic metrics such as the inflation rate, job vacancy rate, labor force participation rate, and part-time working rate. Another is to use social media to forecast economic metrics such as the unemployment rate; different time series prediction, data mining, and machine learning methods can be applied here, which could be extremely useful for disaster management response and recovery. Finally, since other media, especially images and videos, make up a large portion of social media content, new methods are needed to process such content further.
Data availability statement
The original contributions presented in the study are publicly available. This data can be found here: https:// github.com/Jdkong/Nowcasting_Unemployment and the code is available at: https://colab.research.google.com/drive/ 1O4NidnStzSGmc-RdJLcUB1NTl5viEcRy?usp=sharing.
Author contributions
JK and ZN designed research and collected data. All authors conducted literature search, analyzed data, and wrote the paper. All authors contributed to the article and approved the submitted version. | 2022-12-02T15:36:27.530Z | 2022-12-02T00:00:00.000 | {
"year": 2022,
"sha1": "7a40efe43dae0fad4828de0e3d21e3dc48039ecc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "7a40efe43dae0fad4828de0e3d21e3dc48039ecc",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118937730 | pes2o/s2orc | v3-fos-license | Magnetic relaxation and dipole-coupling-induced magnetization in nanostructured thin films during growth: A cluster Monte Carlo study
For growing inhomogeneous thin films with an island nanostructure similar to that observed in experiment, we determine the nonequilibrium and equilibrium remanent magnetization. The single-island magnetic anisotropy, the dipole coupling, and the exchange interaction between magnetic islands are taken into account within a micromagnetic model. A cluster Monte Carlo method is developed which includes coherent magnetization changes of connected islands. This causes a fast relaxation towards equilibrium for irregularly connected systems. We analyse the transition from dipole-coupled islands at low coverages to a strongly connected ferromagnetic film at high coverages during film growth. For coverages below the percolation threshold, the dipole interaction induces a collective magnetic order with ordering temperatures of 1 - 10 K for the assumed model parameters. Anisotropy causes blocking temperatures of 10 - 100 K and thus pronounced nonequilibrium effects. The dipole coupling leads to a somewhat slower magnetic relaxation.
I. INTRODUCTION
The investigation of low-dimensional magnetic nanostructures has become a very active field of current research. 1 The controlled preparation of different nanostructures allows for the investigation of a variety of interesting magnetic properties. 2,3,4 The dependence of these properties on the nanostructure and magnetic interactions in such systems is still not well understood. So far, no consistent theoretical analysis has been performed of the magnetic behavior of an ultrathin film during growth, ranging from an island-type structure to a smooth film, in particular of the influence of structural disorder on the magnetic properties. In this study, we present Monte Carlo (MC) calculations of the nonequilibrium and equilibrium magnetization of growing inhomogeneous films. Different coverages below and above the percolation threshold and different magnetic interactions are taken into account. This analysis requires a newly developed cluster MC method which includes coherent rotations of neighboring magnetic islands.
To illustrate the problem, we discuss for instance an ultrathin Co film grown on a Cu(001) substrate. 5,6 This system shows perfect layer-by-layer growth for coverages Θ > 2 monolayers (ML). Here, the exchange coupling results in a large Curie temperature T C ≈ 350 K. Below the measured percolation coverage of Θ P = 1.7 ML, randomly positioned Co islands with a considerable admixture of Cu atoms are observed, exhibiting a comparably strong remanent magnetization up to temperatures as large as ∼ 150 K. At Θ P , a jump of T C of about 100 K occurs. 5 The important question arises which mechanism causes the strong remanence of an island-type film for coverages Θ < Θ P . On the one hand, it could be induced by nonequilibrium blocking effects due to single-island anisotropies which impede magnetic relaxation. On the other hand, an equilibrium magnetization can originate from long-range magnetic interactions. We discuss these two limiting cases below.
An ensemble of magnetically isolated single-domain islands behaves like a superparamagnet. The single-island anisotropy causes a finite time-dependent magnetization below the nonequilibrium blocking temperature T b . For thin Co/Cu(001) films this temperature is estimated to be T b ≈ 5 K, 31 following from the Arrhenius-Néel ansatz k B T b = N K / ln(τ m Γ o ). 7,8 Note that an island size dispersion will influence the relaxational behavior and T b sensitively. Second, a finite magnetization may also originate from a collectively ordered state in thermal equilibrium. Such a magnetic state, not necessarily a collinear one, of an ensemble of isolated islands results from long-range magnetic interactions. 9 We consider here the magnetic dipole coupling between islands. 32 The corresponding ordering temperature T C should be comparable to the average dipole energy per island, which is, however, difficult to determine for an irregular system. Assuming two neighboring disk-shaped Co islands containing N atoms each, the dipole energy E dip of this island pair is proportional to √N. For N = 1000, we estimate E dip ≈ 6 K. 33 Note that both simple estimates for the temperatures T b and T C disagree with experimental observations. Hence, improved calculations are needed which take better account of the inhomogeneous film structure characterized by varying island sizes, shapes, and positions. In particular for coverages close to Θ P , island coagulation leads to a larger effective island size N eff and thus to a larger average dipole energy E dip ∝ √N eff . Correspondingly, the blocking temperature T b ∝ N eff K due to anisotropy will also be larger than the single-island estimate T b ≈ 5 K. Thus, for a disordered film structure one may expect a remanent magnetization at much higher temperatures than obtained from these simple estimates.
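The blocking-temperature estimate can be reproduced numerically. The sketch below evaluates the Arrhenius-Néel ansatz k B T b = N K / ln(τ m Γ o ); the measurement time τ m = 100 s is our assumption, chosen so that the quoted T b ≈ 5 K is recovered with the anisotropy K = 0.01 meV/atom used later in the text.

```python
import numpy as np

k_B = 8.617e-5    # Boltzmann constant in eV/K
N = 1000          # atoms per island
K = 0.01e-3       # anisotropy in eV/atom (one of the values used below)
Gamma_0 = 1e9     # attempt frequency in 1/s
tau_m = 100.0     # measurement time in s (assumed value)

T_b = N * K / (k_B * np.log(tau_m * Gamma_0))
print(f"T_b = {T_b:.1f} K")   # ~4.6 K, close to the quoted T_b = 5 K
```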
For the investigation of the magnetic relaxation, the magnetic anisotropy and the dipole interaction are taken into account. In the case of coagulated islands, the exchange coupling between islands has to be considered as well. For three-dimensional systems of interacting magnetic particles, both longer 10 and shorter 11 relaxation times as compared to noninteracting ensembles have been calculated. Experimentally, longer relaxation times for increasing interparticle interactions have been measured. 12,13 The existence of a collective, spin-glass-like magnetic ordering was discussed. 14 For two-dimensional systems only a few investigations have been reported, also indicating longer relaxation times for increasing interaction strengths. 13,15 In this paper, we report on MC calculations of the nonequilibrium and equilibrium remanent magnetization for nanostructured thin films during growth as functions of coverage, temperature, and MC time. 16,17,18 By applying a modified Ising model, which allows one to take magnetization dynamics into account, the blocking as well as the ordering temperatures are determined. Of particular concern is the treatment of structurally inhomogeneous systems ranging from isolated islands to smooth ferromagnetic films. Numerical simulations are unavoidable since the low symmetry of these systems and the complicated nature of the involved magnetic interactions preclude analytical approaches. However, the application of the common MC technique runs into a severe problem. When a "super" spin is assigned to every island magnetic moment (Stoner-Wohlfarth model 19 ), conventional single-spin-flip algorithms yield an extremely slow and unrealistic relaxation towards equilibrium for coverages where the islands are partly coagulated. 34 Hence, we apply a cluster-spin-flip algorithm which includes simultaneous rotations of magnetic moments of connected islands. 20,21 We emphasize that with this method the relaxation behavior and the equilibrium magnetization are calculated efficiently. For all film coverages, the cluster-spin-flip method enables an appropriate analysis of the influence of the anisotropy and dipole coupling besides the dominating exchange interaction.
The film growth, the micromagnetic model for the calculation of the magnetic properties, and the cluster MC method are described in Sec. II. In Sec. III, we test the cluster algorithm for the obtained island nanostructure and present results for the remanent magnetization, as well as for the blocking and ordering temperatures. A conclusion is given in Sec. IV.
A. Growth mode and micromagnetic model
For the simulation of the island-type growing film, we use the simple solid-on-solid Eden model. 17,22 Within this model, each additional atom is deposited on an island perimeter site i with a probability p(q i , z i ), where q i is the local coordination number and z i the layer index. A bilayer island growth mode is assumed, yielding an island structure similar to that observed for epitaxial Co/Cu(001). 6 A (2 × 500 × 500) fcc-(001) unit cell with lateral periodic boundary conditions is applied. The island density is ρ = 0.0025 islands per site, resulting in Z = 625 randomly distributed islands in the unit cell. For the ratio of binding parameters, 17 we use A(1)/A(2) = 0.989. For the obtained atomic structures, a micromagnetic model for the total (free) energy of a system of interacting magnetic islands, Eq. (1), is applied, 16,17 with Θ being the film coverage and T the temperature. Each magnetic island with N i (Θ) atoms is treated as a Stoner-Wohlfarth particle 19 with a single giant magnetic moment µ i = µ at m i N i , whose direction is confined to the film plane, where µ at is the atomic magnetic moment. The unit vector S i = µ i /µ i characterizes the magnetic moment direction of the ith island. The first term in Eq. (1) represents the magnetic domain wall energy between connected islands, with L ij the number of bonds between islands i and j, and γ ij the domain wall energy per atomic bond. The second term in Eq. (1) is the long-range magnetic dipole interaction between the island magnetic moments µ i , where r ij = |r i − r j | is the distance between the centers of islands i and j. The point-dipole energy is calculated by applying the Ewald summation technique over all periodically arranged unit cells of the thin film. 18,23 The last term denotes the uniaxial in-plane anisotropy energy, with K i the anisotropy per atomic spin. Due to this anisotropy we allow for only two stable directions of each island moment (S x i = ±1). Thus, our system refers to a modified Ising model for which a possible anisotropy energy barrier is taken into account during magnetization reversal, hence allowing the consideration of magnetization dynamics. Finally, due to the finite exchange coupling J between neighboring atomic spins, the internal island magnetization m i (Θ, T) is taken into account within a mean-field approximation. This leads to temperature-dependent effective anisotropy coefficients K i (Θ, T) and domain wall energy densities γ ij (Θ, T), as described in greater detail in Ref. 17.
Equation (1) describes a system of dipole-coupled single islands at low film coverages Θ ≪ Θ P as well as a connected ferromagnetic film at high coverages Θ ≫ Θ P . We point out that the transition between these extremal cases during the film growth is described within the same model. The assumption of individual magnetic islands with varying interactions is a good approximation as long as the system is laterally nanostructured, whereas for smooth films (here Θ ≈ 2.0 ML) it represents an unphysical discretization of the system.
B. Cluster MC method
The magnetic equilibrium and nonequilibrium properties are calculated by performing kinetic cluster MC simulations. Especially close to the percolation coverage Θ P , most of the magnetic islands are connected to neighboring islands and form large but still finite clusters. A single-spin-flip (SSF) algorithm 22 for such an irregular atomic structure yields a very slow relaxation towards thermodynamic equilibrium, since subsequent flips of island magnetic moments within a cluster of connected islands, as considered by SSF updates, are strongly hindered by the exchange energy; see Ref. 34. Thus, a rotation of an entire island cluster is very unlikely, and its dependence on the dipole interaction and anisotropy is strongly underestimated. For an improved simulation of the magnetic relaxation, a coherent or simultaneous rotation of the spins in these clusters has to be taken into account. For this purpose, we propose a cluster-spin-flip (CSF) algorithm 20,21 in the present study.
In a first step of each MC update, a cluster C ν consisting of ν connected islands is constructed by the following scheme: (a) Choose randomly a single island i, representing the first (smallest) island cluster C 1 = {i}.
(b) Add a random second island j which is connected to island i (L ij ≠ 0), forming the second island cluster C 2 = {i, j}. (c) Subsequently construct larger island clusters C ν by adding a randomly chosen island to the preceding cluster C ν−1 , provided that this island is connected to at least one of the ν − 1 islands of C ν−1 .
(d) Continue this construction procedure until either no additional adjacent islands are present or a maximum allowed number λ max of islands in the cluster is reached. From this procedure, we obtain a set of λ ≤ λ max nested island clusters {C 1 , . . . , C λ }; a minimal sketch of this construction is given below.
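The following minimal Python sketch illustrates steps (a)-(d) and the 1/ν cluster-choice weighting introduced in the next paragraph; the adjacency mapping (adjacency[i] as the set of islands j with L ij > 0) and all names are a hypothetical representation, not the authors' implementation.

```python
import random

def grow_clusters(adjacency, islands, lam_max):
    """Steps (a)-(d): grow nested island clusters C_1, ..., C_lambda.

    adjacency[i] is the set of islands j connected to i (L_ij > 0).
    Returns the list [C_1, ..., C_lambda] with lambda <= lam_max.
    """
    seed = random.choice(islands)                 # (a) random first island
    cluster = [seed]
    frontier = set(adjacency[seed])               # islands adjacent to C_1
    while frontier and len(cluster) < lam_max:    # (b)-(d)
        nxt = random.choice(tuple(frontier))      # random connected island
        cluster.append(nxt)
        frontier |= set(adjacency[nxt])
        frontier -= set(cluster)
    return [cluster[:nu] for nu in range(1, len(cluster) + 1)]

def pick_cluster(clusters):
    """Choose C_nu with weight omega_nu = 1/nu for the flip attempt."""
    weights = [1.0 / len(c) for c in clusters]
    return random.choices(clusters, weights=weights, k=1)[0]

# Toy example: four islands; 0-1-2 are connected, 3 is isolated.
adjacency = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}
clusters = grow_clusters(adjacency, islands=[0, 1, 2, 3], lam_max=50)
print(pick_cluster(clusters))
```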
A Monte Carlo step (MCS) is defined by the usual condition that Z islands in the system are probed. Employing a cluster C ν containing ν islands considers the portion ν/Z of the system in a single update. To ensure that probing large clusters does not dominate the relaxation process, we assign the weight ω ν = 1/ν for choosing C ν out of the set {C 1 , . . . , C λ }. This definition implies that within a single MCS no additional relaxation channels are opened by the consideration of island cluster flips. 35 We emphasize that not only the largest possible island cluster C λ is probed for flipping, but all island clusters out of the corresponding set are considered. The island moments within an island cluster need not be parallel.
In the second step of each update, all ν island spins of the chosen cluster C ν are probed for a coherent flip. The corresponding flip rate Γ ν is calculated in the usual way as if these ν connected islands formed a single large island. 17 From Eq. (1), the magnetic energy of this island cluster as a function of the in-plane angle φ is determined by the reduced magnetic field and by the total anisotropy energy of the cluster, K ν = Σ k=1..ν N k K k ; here, the k sum runs over all spins inside, and the l sum (entering the reduced field) over all spins outside, the island cluster C ν . We have neglected the dipole sums Σ kl (x kl y kl )/r 5 kl , which are usually smaller than the sums Σ kl (x kl ) 2 /r 5 kl and Σ kl (y kl ) 2 /r 5 kl . The Ising-like states S x i = ±1 of C ν represent either two energy minima separated by an anisotropy energy barrier, or an energy maximum and a minimum. The respective energy barriers ∆E (1) ν and ∆E (2) ν for the forward and backward transitions follow from this energy function [Eqs. (4) and (5)]. The flip rate Γ (1) ν of the island spin cluster C ν to overcome ∆E (1) ν is calculated from the common Arrhenius-Néel ansatz. 7,8 We use a constant prefactor Γ o = 10⁹ s⁻¹, which sets the time unit of the magnetic relaxation in kinetic MC simulations. 36 The latter case (maximum and minimum) is treated with the usual Metropolis-type rate, using the same prefactor. 22 The growing thin film is characterized by a large number of nonequivalent lattice sites, corresponding to a large number of different interaction parameters. Since little is known about their values, we use averaged magnetic parameters in our simulation, which are fixed as follows, using the Co/Cu(001) thin-film system as an example. The atomic magnetic moments are set to µ at = 2.0 µ B . 24 The domain wall energy γ is adjusted to reproduce the observed Curie temperature T C = 355 K of the ferromagnetic long-range order of a 2-ML Co/Cu(001) film, 5 yielding γ = 5.6 meV/bond. The exchange interaction for the calculation of the internal island magnetic ordering is set to J = 7.0 meV/bond. 17 For the uniaxial anisotropy, two different values K = 0.1 and 0.01 meV/atom are investigated.
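The Arrhenius-Néel rate itself is simple to evaluate; the sketch below computes Γ = Γ o exp(−∆E/k B T) with the paper's prefactor Γ o = 10⁹ s⁻¹ (the example barrier and temperature are arbitrary illustrations, not values from the paper).

```python
import numpy as np

k_B = 8.617e-5      # Boltzmann constant in eV/K
Gamma_0 = 1e9       # attempt frequency used in the paper, in 1/s

def flip_rate(delta_E, T):
    """Arrhenius-Neel rate for crossing an energy barrier delta_E (in eV)."""
    return Gamma_0 * np.exp(-delta_E / (k_B * T))

# Illustration: a barrier of 10 meV at T = 10 K.
print(flip_rate(10e-3, 10.0))   # ~1e4 flip attempts succeed per second
```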
In this study, we determine the remanent magnetiza- the growing thin film, where t is the MC time in units of MCS. The simulation starts from a completely aligned island spin state. The choice of this initial state refers to experiments which saturate the magnetic system by an external magnetic field and determine the remanent magnetization after removal of the field. 3,5 We have no evidence that magnetic arrangements end up in metastable states during relaxation when starting from a saturated state. In addition, we calculate the equilibrium magnetization M eq (Θ, T ), which is obtained by averaging M rem (Θ, T, t) over a range of 500 MCS after the system has become equilibrated. M rem (Θ, T, t) and M eq (Θ, T ) are averaged over at least 20 different structural runs. The magnetizations are given in units of a saturated monolayer (i.e., Θ = 1 ML) at T = 0. Since the finitesized unit cell undergoes eventually total magnetic reversals during MC probing, accidental cancellation of a finite M rem and M eq during structural and temporal averaging may occur. To avoid this, we use in this study merely the absolute values |M rem | and |M eq |.
A. Island growth and CSF algorithm
Snapshots of the atomic structure during thin film growth, resulting from our growth model, are shown in Ref. 17. The resulting static atomic structure is similar to the one observed for the Co/Cu(001) system. 6 In the initial stages of growth, randomly located islands with almost rectangular shapes are obtained. With increasing film coverage, the single islands start to coagulate and form island clusters of still finite size. By analysing the percolation probability using the Hoshen-Kopelman algorithm, we obtain a percolation coverage of about Θ P ≈ 0.9 ML. 25,26 Continued film growth leads to a connected thin film. In this coverage range, the system still exhibits a distinctly irregular nanostructure. Isolated island clusters vanish rapidly upon further adatom deposition. The coverage Θ = 2.0 ML corresponds to a smooth magnetic film with two closed layers.
At first, we investigate the effect of the cluster-spin-flip MC method on the simulation of the remanent magnetization |M rem (Θ, T, t)|. Here we consider only the exchange interaction. For a strongly connected film (Θ P ≪ Θ = 1.8 ML), we test whether by use of CSF |M rem (Θ, T, t)| relaxes to the correct equilibrium value |M eq (Θ, T)|, starting from a fully aligned state. As can be seen from Fig. 1(a), this condition is fulfilled, since different maximum allowed cluster sizes λ max lead to the same equilibrium value as the single-spin-flip MC method. The larger λ max is, the slower the relaxation. This behavior is caused by the fact that in this coverage and temperature range the magnetic relaxation is mainly provided by flips of single islands or small island clusters. As mentioned in Sec. II B, no additional relaxation channels are opened by use of CSF, hence the number of single-spin-flip attempts is reduced in favor of improbable cluster-spin-flip ones. Examples of the error bars are also given. The statistical error, which is similar for all forthcoming figures, results mainly from averaging over different structural realizations of the unit cell and could be reduced by using larger unit cells.
We point out that the main improvement of the CSF with respect to the SSF method is obtained for coverages Θ ≲ Θ P , characterized by a considerable amount of island cluster formation, which is very difficult to study analytically. In Fig. 1(b), the magnetic relaxation for Θ = 0.8 ML is depicted for different λ max . The equilibrium magnetization M 0 eq (Θ, T) should vanish for Θ < Θ P , since long-range magnetic interactions are neglected here. Due to the use of the absolute value, a finite but small |M 0 eq (Θ, T)| is obtained in our calculations. The SSF algorithm exhibits an extremely slow magnetic relaxation toward |M 0 eq |. Even after 10⁶ MCS the remanent magnetization has relaxed only to |M 0 rem | = 0.60. The reason is that this method considers very unfavorable intermediate states. Already allowing a few coherently flipping island spins results in a much faster relaxation. The relaxational behavior converges rapidly with increasing λ max . Using the CSF with λ max = 50 or larger, equilibrium is reached already after ∼ 100 MCS. To obtain fast equilibration, the closer the coverage is to Θ P , the larger the value chosen for λ max . The CSF algorithm leads to a much faster equilibration also for coverages Θ ≳ Θ P . In this coverage range, island clusters are still present which have only weak links to other clusters. In the following investigations, we set λ max equal to the number of single islands Z = 625, except for coverages Θ ≫ Θ P , where we obtain better performance with λ max = 100.
Hence, with the SSF algorithm the exchange interaction grossly dominates the MC simulations for strongly inhomogeneous systems. This is avoided by applying CSF, thus allowing for the investigation of the effect of the much weaker anisotropy and dipole interaction.
B. Effect of interactions
First, we study the combined effect of the dipole and the exchange interaction on the film magnetization for coverages Θ < Θ P . We determine equilibrium properties, which within our model are not influenced by the anisotropy. In Fig. 2(a), we present |M rem (Θ, T, t)| for different temperatures T as a function of MC time t. The coverage is Θ = 0.8 ML. Starting from the fully aligned state |M rem (Θ, T, t = 0)| = 0.8, the remanent film magnetization relaxes quickly to its equilibrium value. For the assumed temperatures, the dipole coupling leads to a net magnetization |M rem | > |M 0 rem |, where for |M 0 rem | the dipole interaction is neglected. After several hundred MCS, equilibration is obtained for the dipole-coupling-induced |M rem |, which then stays stable within the simulation time. We emphasize that it is impossible to obtain these and the following results with conventional SSF algorithms.
In Fig. 2(b), the equilibrium magnetization |M eq (Θ, T)| is shown as a function of temperature T for different coverages Θ. For low temperatures, a magnetic ordering due to the dipole interaction is clearly seen. The larger the coverage, the larger the ordering effect, since with increasing Θ the average island cluster size and thus the average dipole coupling energy increase. Above the ordering temperatures T C (Θ), the magnetizations |M eq (Θ, T)| reach the corresponding values |M 0 eq (Θ, T)| calculated without the dipole interaction. Due to the use of absolute values, |M 0 eq (Θ, T)| always stays finite. T C (Θ) is estimated by extrapolating the linear part of |M eq | to |M 0 eq |. The ordering temperature even for the largest investigated coverage is quite small, yielding T C ≈ 6 K for Θ = 0.8 ML. The rounding of |M eq | near the ordering temperature is caused by (i) the finite unit cell size, (ii) the use of the absolute value, (iii) the presence of island-size and island-position dispersions, and (iv) the average over 20 different realizations of the unit cell.
The existence of long-range magnetic ordering due to the dipole interaction has been calculated for periodic lattices. 27 In our study, we find that also within an irregular island system below the percolation threshold the dipole interaction leads to a magnetic ordering, indicated by a net magnetization |M eq | > |M 0 eq |. Such a collective state is expected to be spin-glass-like, as discussed for three-dimensional magnetic particle systems, 14 and needs further investigation. We remark that the obtained ordering temperatures will increase if a noncollinear island magnetization beyond S x i = ±1 is considered, 28 if the finite island extension is taken into account for the dipole interaction beyond the point-dipole approximation, 29 or if densely packed three-dimensional particles are considered.
In Fig. 3, we investigate the influence of the magnetic anisotropy and the exchange interaction on the magnetic relaxation for coverages Θ < Θ P . Here, the dipole coupling is neglected. In Fig. 3(a), the remanent magnetization |M rem (Θ, T, t)| is given as a function of MC time t for different temperatures T and for K = 0.01 meV/atom. The coverage is Θ = 0.8 ML. Starting from the fully aligned state, |M rem (Θ, T, t)| first drops rapidly due to the relaxation of single islands and small island clusters. The further relaxation proceeds much more slowly, since here larger island clusters have to be reversed. For T ≳ 25 K, the magnetization |M rem (Θ, T, t)| reaches within the depicted time range the curve |M 0 rem | calculated for K = 0.
In Figs. 3(b) and 3(c), we show |M rem (Θ, T, t)| after t = 1000 MCS for different coverages as a function of temperature, using the anisotropy parameters K = 0.01 and 0.1 meV/atom. With increasing temperature, the magnetization |M rem (Θ, T, t)| approaches the equilibrium value |M 0 eq |. The corresponding blocking temperatures T b (Θ, K) are obtained by extrapolating the linear part of |M rem | to |M 0 eq |. A rounding of |M rem | near T b is observed for the same reasons as discussed in connection with Fig. 2. We emphasize that for a connected island structure an increase of the anisotropy K by a factor of 10 does not necessarily lead to an increase of T b (Θ, K) by the same factor, as would be obtained from the Stoner-Wohlfarth model. 19 This is caused by internal cluster excitations, i.e., the creation or motion of domain walls inside island clusters. To examine this, we have performed additional calculations for an infinite domain wall energy γ, indicated by the full lines in Figs. 3(b) and 3(c), hence allowing only for coherent island cluster rotations. Above a certain temperature, the curves for finite and infinite γ deviate, since then internal cluster excitations become effective. For K = 0.01 meV/atom and coverage Θ = 0.6 ML, the difference between these curves is small, so the magnetic relaxation happens mainly via coherent rotation. In contrast, for Θ = 0.8 ML or for the larger anisotropy K = 0.1 meV/atom, both relaxation processes are evidently present.
Which relaxation process is effective at a given temperature is determined by its energy barrier ∆E. For a coherent rotation of an isolated island cluster, ∆E is given by its total anisotropy energy K ν . In contrast, ∆E for an internal cluster excitation consists of both the anisotropy of the actually reversed islands and the domain wall energy, see Eqs. (4) and (5). By closer investigation, we found that a particular relaxation process becomes effective above a temperature amounting to 5 - 10 % of ∆E. In the temperature ranges T < 40 K for K = 0.01 meV/atom and T < 100 K for K = 0.1 meV/atom, each internal cluster excitation consists mainly of reversing only one or two islands. For markedly larger temperatures, the internal cluster excitations become more complex, depending in a complicated way on the nanostructure.
The influence of the dipole coupling on the relaxation behavior is discussed in Fig. 4. For Θ ≲ Θ P and K = 0.01 meV/atom, the dipole interaction results in a small increase of |M rem | and T b . Thus, the dipole interaction leads to a slower magnetic relaxation. Interestingly, this effect is visible in the whole temperature range up to T b and is not limited to the small temperatures where the dipole coupling induces a magnetic ordering, see Fig. 2. The increase of |M rem | will become larger if a stronger dipole coupling is assumed, for example for larger island magnetic moments. Experimentally, a similar effect was observed for two-dimensional arrays of interacting magnetic nanoparticles with random anisotropy axes. 13 A more detailed investigation of this property is needed.
Next, we investigate the magnetization for coverages above the percolation threshold, Θ > Θ P . In Fig. 5, the equilibrium magnetization |M eq (Θ, T)| is shown as a function of temperature for different coverages. Here, the exchange coupling causes a fast magnetic relaxation and a strong ferromagnetic long-range order. For large coverages and at low temperatures, the behavior of |M eq | is governed by the decrease of the internal island magnetization m i (Θ, T), whereas at elevated temperatures a strong decay of |M eq | is caused by the disturbance of the island spin alignment. The resulting ordering temperatures T C (Θ) are deduced from the inflection points of |M eq (Θ, T)|. In addition, for Θ = 1.0 ML and 1.2 ML we show the nonequilibrium remanent magnetization |M rem | after t = 1000 MCS for K = 0.01 meV/atom. The corresponding blocking temperatures T b (Θ) are markedly larger than T C (Θ) in the coverage range Θ ≳ Θ P , where the nanostructure of the percolated thin film is still very irregular and nonequilibrium effects due to anisotropy barriers are pronounced. For films with larger coverages, having a higher connectivity between islands, the exchange coupling results in a fast magnetic relaxation, and thus the temperature difference between T b and T C is small. A very weak ordering effect due to the dipole interaction is visible only for very low temperatures and coverages Θ ≳ Θ P . Here, a few isolated islands or island clusters still exist which are coupled to the percolating cluster by the dipole interaction.
In Fig. 6, we summarize the most important results of this study. The (nonequilibrium) blocking temperature T b (Θ, K) and the (equilibrium) ordering temperature T C (Θ) are presented as functions of coverage Θ over the whole investigated growth range. For better visualization, a logarithmic temperature scale is applied. T b is determined for two different anisotropies K = 0.01 and 0.1 meV/atom and t = 1000 MCS. Below the percolation coverage Θ P , the dipole interaction induces small ordering temperatures T C of the order of 1 - 10 K for the assumed model parameters. Due to the coagulation of islands with increasing coverage, the exchange interaction becomes more important, since it couples single islands into magnetically aligned large clusters. This results in a strong increase of T C , in particular close to Θ P . This behavior has been observed in experiments on Co/Cu(001) ultrathin films ("T C -jump"). 5,6 For percolated thin films, the ordering temperature is of the order of 100 - 300 K and is, within the accuracy of our calculations, exclusively determined by the exchange coupling. The slope of T C (Θ) for Θ > Θ P is not as steep as for Θ < Θ P . In addition, we show the ordering temperature T C due to the dipole interaction alone, neglecting the exchange coupling between islands. Evidently, a distinct variation of T C near Θ P is not obtained in this case.
The nonequilibrium behavior as caused by the anisotropy K differs strongly for coverages below and above Θ P . Due to the slow relaxation of the irregular atomic structure for Θ < Θ P , a blocking temperature T b (Θ) is obtained which is an order of magnitude larger than T C (Θ) resulting from the dipole interaction. Evidently, T b depends on the anisotropy K and the MC time t. On the other hand, for Θ > Θ P , the relaxation is accelerated by the exchange interaction. With increasing Θ the remanent magnetization reaches the equilibrium value within t = 1000 MCS, hence T b (Θ) merges into T C (Θ).
Recently, a mean-field theory (MFT) calculation of the dipole-coupling-induced magnetic ordering temperature has been performed using a simplified growth model. 6 A qualitatively similar behavior of T C (Θ) compared with the present MC calculations was obtained, in particular the strong variation of T C near Θ P due to the exchange interaction between coagulated islands. Evidently, MFT yields much larger values of T C (Θ) for such low-dimensional systems, especially for Θ < Θ P , since thermal fluctuations are neglected.
In the following, we discuss our results in connection with measurements on Co/Cu(001) ultrathin films. Although several model parameters are chosen in accordance with this system, a full quantitative comparison cannot yet be drawn. The main reason is that the observed intermixing of Co adatoms with Cu substrate atoms is not taken into account within our growth model, due to incomplete knowledge of the resulting atomic morphology. The measured percolation threshold Θ P ≈ 1.7 ML is much larger than the one obtained with the growth parameters used here.
To investigate solely the effect of an enlarged Θ P , we have performed additional simulations in which magnetic islands with up to three atomic layers are taken into account, yielding the observed Θ P . Then, for a coverage Θ = 1.6 ML, the dipole coupling induces an ordering temperature T C ≈ 50 K. The corresponding blocking temperature for K = 0.01 meV/atom and t = 1000 MCS is T b ≈ 150 K. Hence, we find that for the assumed growth modes and magnetic parameters the blocking temperatures are always markedly larger than the ordering temperatures. These temperatures are comparable with the measured T C ≈ 150 K for coverages slightly below Θ P . 5 In addition, to draw a quantitative comparison with the Co/Cu(001) system, a fourfold symmetry of the in-plane anisotropy has to be taken into account. We expect that the general behavior of the magnetic relaxation and ordering stated above, obtained with a uniaxial anisotropy, will not change.
IV. CONCLUSION
In this study, we have calculated the nonequilibrium and equilibrium remanent magnetization of growing ultrathin films, using a cluster Monte Carlo method. An island-type nanostructure with a nonuniform distribution of island sizes, shapes, and locations was investigated. Within a micromagnetic model (modified Ising model) the single-island magnetic anisotropy, the dipole coupling, and the exchange interaction between magnetic islands were taken into account. We have analysed the transition from dipole-coupled islands for film coverages below the percolation threshold Θ P towards a connected ferromagnetic film above Θ P with increasing film coverage.
For coverages Θ < Θ P , the dipole interaction leads to an equilibrium net magnetization referring to a collectively ordered state. A small ordering temperature T C of about 1 - 10 K results for the assumed model parameters.
T C increases strongly near Θ P due to the exchange interaction, which aligns coagulated islands. On the other hand, the anisotropy induces a pronounced nonequilibrium remanent magnetization which may be visible in experiment even after long waiting times. The corresponding blocking temperature T b is of the order of 10 - 100 K and is always markedly larger than T C . Approaching Θ P , the proportionality between blocking temperature and anisotropy is no longer valid due to relaxation via internal island cluster excitations. A nonequilibrium remanent magnetization due to anisotropy is visible also for coverages Θ ≳ Θ P , where the film is still very irregular. For smoother films at larger coverages, the exchange interaction induces a fast magnetic relaxation towards equilibrium.
We have obtained these results with a cluster-spin-flip algorithm which takes into account coherent magnetic rotations of island clusters. This method leads to a very fast and more realistic magnetic relaxation towards equilibrium in the coverage range with an irregular nanostructure. Our results cannot be achieved by conventional single-spin-flip algorithms. The suggested CSF algorithm can be applied also to other inhomogeneous spin systems such as diluted magnets and spin glasses.
Several possible improvements of our micromagnetic model are pointed out. In this study, we have used Ising-like states S x i = ±1. By applying continuously varying spins S i , noncollinear magnetic arrangements can be analysed, 28 allowing one to also determine the effects of an external magnetic field on these strongly inhomogeneous films. In particular, the movement of magnetic domain walls can be investigated. Furthermore, various magnetic nanostructures like chains and stripes 3 can easily be studied by a proper variation of the parameters of the Eden-type growth model. Anisotropies with, e.g., a fourfold in-plane symmetry will be considered. Finally, the relaxation laws and times of the remanent magnetization can be investigated for such thin film systems. 3

32 In addition to the magnetic dipole coupling, other long-range interactions can be present, such as the indirect exchange (RKKY) interaction in the case of metallic substrates and the superexchange for insulating substrates.

33 Consider two disk-shaped Co islands with N atoms each. Assume that the distance R between their centers is comparable to their diameters, R ≈ D ∝ a o √N. Then the point-dipole energy of this island pair is E dip ≈ (N µ at ) 2 /R 3 ≈ 0.623 (µ at 2 /a o 3 ) √N ≈ 6 K, with µ at = 2.0 µ B the atomic magnetic moment measured for a 2-ML Co film, 24 µ B the Bohr magneton, a o = 2.5 Å the Co interatomic distance, and N = 1000.

34 Consider for instance the magnetic rotation ↑↑ ⇀ ↓↓ of an isolated island cluster consisting of two connected islands. Assuming Ising-like magnetic moments, the single-spin-flip method inevitably applies a subsequent rotation of the single island spins through the intermediate state ↑↓. However, such a process is very unlikely due to the large increase of exchange energy in this intermediate state. Obviously, a proper treatment of magnetic relaxation requires the inclusion of a coherent rotation of the island pair. This is performed within a cluster-spin-flip algorithm which takes into account such simultaneous spin rotations of connected islands.

35 The described CSF algorithm satisfies the condition of detailed balance. This is guaranteed by the fact that the probabilities for construction and choice of the cluster C ν are the same for both flip directions, and by the used flip rates, which already obey detailed balance. Ergodicity is maintained since any spin state can be reached due to the allowance of single-spin flips.

36 A constant attempt frequency Γ o is widely applied in the literature. The justification of this approach is given in Refs. 7 and 8, where Γ o is calculated using different approaches. Γ o is found to vary only weakly with temperature and local fields. Note that the applied transition rates depend only weakly on the exact value of Γ o .

[Figure caption fragment, presumably FIG. 4: "... Fig. 3(b). Results for two coverages Θ below the percolation coverage Θ P are presented."] FIG. 5: Long-range ferromagnetic ordering due to exchange interaction for different coverages Θ above the percolation coverage Θ P . The equilibrium magnetization |M eq | is shown as a function of temperature T. For comparison, for Θ = 1.0 and 1.2 ML also the remanent magnetization |M rem | after t = 1000 MCS is depicted for an anisotropy K = 0.01 meV/atom.
"year": 2002,
"sha1": "15d9c30b860f30474d4411dc8bb4d3495bd14c60",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0211290",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5b691745d3b9e65ae9ad2407e528fda328f67326",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
182824082 | pes2o/s2orc | v3-fos-license | A prospective study to compare the efficacy of tacrolimus vs cyclosporine in vernal keratoconjunctivitis in children in India
Vernal keratoconjunctivitis (VKC) is a chronic condition that affects children and young adults. This condition appears before 10 years of age and lasts for 2 to 10 years, with spontaneous recovery during puberty. The diagnosis is essentially clinical. Symptoms include intense itching, tearing, mucous secretion and photophobia, and conjunctival signs include hyperaemia, papillae and Horner-Trantas dots. VKC is characterised by conjunctival infiltration
with eosinophils, degranulated mast cells, basophils, plasma cells, lymphocytes and macrophages; conjunctival scrapings have yielded Th2 clones. 3 Increased production of Th2 cytokines may contribute to tissue remodelling and papillary formation on the tarsal conjunctiva. 4 Drug therapy for allergic conjunctival disease often utilises topical and oral antiallergic agents or steroids. However, antiallergic agents often have insufficient efficacy, and long-term patient management is usually required with this therapy. 5 The use of ocular steroids is associated with a seriously increased risk of ocular hypertension, cataracts and/or glaucoma. 6,7 Additionally, the risk of steroid-induced ocular hypertension is particularly high in children less than 10 years old, and a cataract-induced visual acuity reduction during infancy or early childhood markedly affects a patient's quality of life. 8 Immunomodulatory drugs like cyclosporine have been successfully used for the treatment of moderate to severe vernal keratoconjunctivitis. 9,10 There are studies suggesting the use of tacrolimus, another immunomodulatory drug, in the treatment of various eye conditions. However, there are fewer studies in which tacrolimus and cyclosporine treatment for VKC are compared in terms of efficacy and safety, and none of these studies was done in an Indian setup.
This study was conducted with the objectives of evaluating the efficacy of tacrolimus and comparing it with that of cyclosporine in the treatment of vernal keratoconjunctivitis (VKC). Safety, in terms of the side effects associated with tacrolimus treatment, was also assessed.
METHODS
A prospective open parallel randomized study was carried out on 60 patients in the age group of 2-15 years at
Visit 1 (at week 0)
Eligible candidates were enrolled into the study. Written informed consent was obtained. Each patient underwent an ophthalmic examination, along with a proper history of medications used earlier, family history of the same illness, and any other illness associated with VKC; complete demographic characteristics at presentation, including age, sex, age at onset of disease and duration of illness, were recorded. Patients were randomized into two groups. Patients in group A received 0.05% cyclosporine eye drops.
The patients' guardians or parents were instructed to instill the drops four times a day and not to use any other medication along with them. Patients in group B received 5 g of tacrolimus 0.03% ophthalmic ointment. Their guardians or parents were instructed to apply the ointment to the conjunctival fornix every 12 hours and to refrain from direct sunlight exposure soon after application of the ointment. Slit lamp examination and external ocular photography were done. Patients were instructed to come after 2 weeks for follow-up and to report any side effects or adverse events during the treatment immediately.
Visit 2 (week 2) and visit 3 (week 4)
For each patient, the five major subjective symptom scores and ocular sign scores of both eyes were again noted. Patients were asked about any possible side effects or adverse effects they had experienced on administration of the drugs or during treatment. Slit lamp examination and external ocular photography were done.
Visit 4 (week 8)
The patient returned for a final examination. Symptom and sign scoring were done along with ocular photography.
Clinical scoring system
Objective signs and subjective symptoms were observed at baseline (before treatment) and at 2, 4, 6 and 8 weeks after treatment initiation. Five objective signs were assessed as shown in Table 1 (clinical evaluation criterion of allergic conjunctivitis), which included the bulbar conjunctiva (hyperemia, oedema), palpebral conjunctiva (papillae), limbus (Trantas dots) and corneal involvement, using 4 grades (0=normal, 1=mild, 2=moderate, 3=severe). 11 In addition, each of five symptoms, including itching, discharge, tearing, photophobia and foreign body sensation, was scored on a four-grade scale: 0=none, 1 or mild (occasional symptoms), 2 or moderate (frequent symptoms), 3 or severe (constant symptoms). Scoring was done at baseline (therapy initiation) and at 2, 4, 6 and 8 weeks into treatment. In cases when therapy was discontinued, observations were not included for statistical evaluation. Demographic variables were collected and examined. The primary outcome was the change in total sign and symptom scores from baseline. The severities of the total subjective symptoms (TSSS) and objective ocular signs (TOSS) at each visit were summed. The maximal values of TSSS and TOSS were 15. These scores were used for comparisons within and between groups.
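As a concrete illustration of this scoring arithmetic, the short sketch below (hypothetical grades, not patient data) sums five 0-3 graded items into a total score with the stated maximum of 15; the same function serves for TSSS and TOSS.

```python
SYMPTOMS = ("itching", "discharge", "tearing", "photophobia", "foreign body sensation")
SIGNS = ("hyperemia", "oedema", "papillae", "Trantas dots", "corneal involvement")

def total_score(grades, items):
    """Sum five items graded 0 (none) to 3 (severe); maximum total is 15."""
    assert set(grades) == set(items) and all(0 <= g <= 3 for g in grades.values())
    return sum(grades.values())

# hypothetical patient at baseline
tsss = total_score({"itching": 3, "discharge": 2, "tearing": 2,
                    "photophobia": 1, "foreign body sensation": 2}, SYMPTOMS)
print(tsss)  # -> 10 (out of a maximum of 15)
```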
Data were summarized as Mean±SE (standard error of the mean). Groups were compared by the independent Student's t test. Groups were also compared by repeated measures analysis of variance (ANOVA) using general linear models (GLM), and the significance of mean differences within (intra) and between (inter) the groups was assessed by Tukey's post hoc test. Discrete (categorical) groups were compared by the chi-square (χ2) test. A two-tailed p value less than 0.05 (p<0.05) was considered statistically significant. All analyses were performed with SPSS software (Windows version 17.0).
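For readers without SPSS, the same battery of tests can be reproduced with open-source tools; the sketch below runs the independent t test, the chi-square test and Tukey-style pairwise comparisons in Python on simulated, hypothetical scores (scipy and statsmodels are assumed available).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# hypothetical baseline TSSS scores for the two arms (maximum 15)
cyclosporine = rng.integers(8, 15, size=21).astype(float)
tacrolimus = rng.integers(8, 15, size=22).astype(float)

# independent Student's t test between the groups
t, p = stats.ttest_ind(cyclosporine, tacrolimus)
print(f"t = {t:.2f}, p = {p:.3f}  (significant if p < 0.05, two-tailed)")

# chi-square test for a categorical outcome, e.g. main adverse effect yes/no
table = [[17, 4],   # cyclosporine: with / without burning sensation
         [2, 20]]   # tacrolimus:   with / without ocular irritation
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Tukey post hoc comparison of the group means
scores = np.concatenate([cyclosporine, tacrolimus])
groups = ["cyc"] * 21 + ["tac"] * 22
print(pairwise_tukeyhsd(scores, groups))
```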
RESULTS
Out of the 60 patients enrolled, 17 did not appear for participation in this study and were excluded. A total of 43 patients (cyclosporine=21 and tacrolimus=22) were found evaluable in the present study. The total subjective symptom scores (TSSS) and total ocular sign scores (TOSS) of both eyes were subjected to statistical analysis.
Comparing the mean ages of the two groups, the t-test revealed similar ages between the two groups.
Outcome measures
Total subjective symptom score (TSSS)
The pre- (0 weeks) and post-treatment (2, 4, 6 and 8 weeks) total subjective symptom scores (TSSS) of the two groups are summarized
Total ocular sign score (TOSS)
The pre- (0 weeks) and post-treatment (2, 4, 6 and 8 weeks) total ocular sign scores (TOSS) of the two groups are summarized in Table 5. The mean TOSS in both groups decreased (improved) after treatment, and the decrease (improvement) was evidently higher in the tacrolimus group than in the cyclosporine group. Evaluating the effects of group and period on TOSS, ANOVA revealed an insignificant effect of group (F=3.58, p=0.065) but a significant effect of period (F=152.72, p<0.001) on TOSS. However, the interaction (group × period) effect of both on TOSS was found to be insignificant (F=1.02, p=0.398).
Side effects
No serious adverse effect was reported during the study. The most frequent treatment-related ocular adverse event with tacrolimus was mild ocular irritation, in 9.09% of cases (2/22), while the most frequent adverse effect in the cyclosporine group was a burning sensation on instillation of the eye drops, reported in 80.95% of cases (17/21). The second adverse effect reported in the cyclosporine group was redness of the eyes, found in 19.04% of cases (4/21). No ocular infection was reported during the treatment period.
DISCUSSION
In this study, both tacrolimus 0.03% ophthalmic ointment and cyclosporine 0.05% eye drops were found to be effective for the treatment of VKC. The authors did not include a placebo or control group in the study, since it seemed unethical to leave patients with symptomatic VKC untreated for such a prolonged period of 8 weeks.
The age at presentation is almost the same in this study and other studies. VKC is more common in the younger age group, and its occurrence decreases with increasing age due to desensitization of the receptors responsible for allergic reactions and decreased production of other pathological mediators.
The frequency of males was higher than that of females in both groups: 11 males (52.4%) and 10 females (47.6%) in the cyclosporine group, and 14 males (63.6%) and 8 females (36.4%) in the tacrolimus group. In total, 25 males and 18 females presenting with VKC were enrolled in this study.
Other studies also show VKC is more common in males than females. 12
Medications
Tacrolimus 0.03% ophthalmic ointment and cyclosporine 0.05% eye drops were used for the study. Tacrolimus dermal ointments are easily available on the market, but the only ophthalmic ointment available in the setup after an extensive search was the 0.03% formulation. Cyclosporine eye drops were available in 0.05% and 0.1% concentrations, and the 0.05% formulation was more easily available. Studies justify the use of these concentrations in the treatment of vernal keratoconjunctivitis. In the study by Vichyanond P et al, 0.1% tacrolimus eye ointment and 2% cyclosporine eye drops were used. 13 Tacrolimus is a hydrophobic molecule, which means its aqueous solutions at clinically useful concentrations are likely to be unstable. Attempts to overcome this problem were made by preparing ophthalmic solutions in castor oil, olive oil and dextrin. However, burning, redness, itching and epithelial keratitis limit the use of such oil vehicles. Dermal ointments were used in some studies. Muller GG et al used 0.03% tacrolimus dermal ointment (Protopic) applied directly to the conjunctival fornix and, according to them, there are sufficient reports in the literature of its good tolerance and low toxic effects on the ocular surface. 14 Pucci N et al conducted studies using cyclosporine 1% and 2% concentrations; 15 in another study, in 2015, they used 0.1% concentrations for patients who failed to respond to 1% cyclosporine eye drops. 16 Akpek EK et al used 0.05% cyclosporine eye drops for the treatment of severe steroid-resistant vernal keratoconjunctivitis. 10
Outcome measures
For the total subjective symptom score (TSSS) at the final evaluation (mean change from baseline to 8 weeks), the improvement was 5.2% greater in the tacrolimus group (83.7%) than in the cyclosporine group (78.5%).
Side effects
For the assessment of safety, evaluation of visual acuity, IOP, pupil diameter and other clinical findings was taken into consideration. No serious adverse effect was reported during the study. The only treatment-related ocular adverse event with tacrolimus was mild ocular irritation. The most frequent side effect seen with cyclosporine treatment is a burning sensation soon after instillation of the eye drops, and it is reported in almost all studies done. Tacrolimus is associated with transient ocular irritation only. The study conducted by Gupta et al mentions the occurrence of fungal infections with the use of cyclosporine ophthalmic preparations. 17 No such events were reported in any other studies or in this study.
The tacrolimus ophthalmic preparation appears to be a safe drug, as there are no reports of any serious side effects associated with its use from earlier studies or even from this study, although safety could have been better commented on if blood analysis to detect any systemic absorption of the drugs had been carried out. Ebihara N et al found that topical tacrolimus led to minimal systemic absorption of the compound. 18
CONCLUSION
Cost of treatment with tacrolimus is minimal in comparison to that of cyclosporine.
The study found tacrolimus to be a better drug than cyclosporine for the treatment of vernal keratoconjunctivitis, although the findings of the present study may need further validation on a larger sample size.
The major limitations of this study are the small sample size and the lack of blinding or a placebo group; as a result, there is a chance of statistical error. Larger and multicentric studies need to be conducted in an Indian setup to confirm or add to these findings.
The authors have not taken recurrence of disease after discontinuation of medication into consideration, the reason being the limited and short duration of this study. Primary outcomes in terms of efficacy and side effects should be determined by conducting long-term studies, using different concentrations of tacrolimus and taking recurrence of disease after discontinuing medication into consideration, as recurrence is a major problem in present treatment options for VKC. | 2019-06-07T21:13:22.738Z | 2019-05-23T00:00:00.000 | {
"year": 2019,
"sha1": "2fcc82ac5382e5163d24e12afff6ab6fb8eb9bc7",
"oa_license": null,
"oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/3319/2436",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d904cc78e8c30699ad8c1ae0e92fa549a54bcc48",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259273309 | pes2o/s2orc | v3-fos-license | Hexahexyloxycalix[6]arene, a Conformationally Adaptive Host for the Complexation of Linear and Branched Alkylammonium Guests
Hexahexyloxycalix[6]arene 2b leads to the endo-cavity complexation of linear and branched alkylammonium guests showing a conformational adaptive behavior in CDCl3 solution. Linear n-pentylammonium guest 6a+ induces the cone conformation of 2b at the expense of the 1,2,3-alternate, which is the most abundant conformer of 2b in the absence of a guest. In a different way, branched alkylammonium guests, such as tert-butylammonium 6b+ and isopropylammonium 6c+, select the 1,2,3-alternate as the favored 2b conformation (6b+/6c+⊂2b1,2,3-alt), but other complexes in which 2b adopts different conformations, namely, 6b+/6c+⊂2bcone, 6b+/6c+⊂2bpaco, and 6b+/6c+⊂2b1,2-alt, have also been revealed. Binding constant values determined via NMR experiments indicated that the 1,2,3-alternate was the best-fitting 2b conformation for the complexation of branched alkylammonium guests, followed by cone > paco > 1,2-alt. Our NCI and NBO calculations suggest that the H-bonding interactions (+N–H···O) between the ammonium group of the guest and the oxygen atoms of calixarene 2b are the main determinants of the stability order of the four complexes. These interactions are weakened by increasing the guest steric encumbrance, thus leading to a lower binding affinity. Two stabilizing H-bonds are possible with the 1,2,3-alt- and cone-2b conformations, whereas only one H-bond is possible with the other paco- and 1,2-alt-2b stereoisomers.
Introduction
Molecular recognition [1] is a fundamental process in living systems, and due to our understanding of the secondary interactions that stabilize the ligand@protein complexes, it has been possible to design novel biomimetic guest@host supramolecular systems. A natural process such as protein-substrate binding often involves conformational changes, which can occur prior to the binding event ('conformational selection model' [2,3]) or during the binding event ('induced-fit model' [4,5]). According to the conformational selection model, the protein conformational changes may take place prior to ligand binding, and then the stabilization of a specific protein structure is caused by its complexation with the substrate. In contrast, in the induced-fit binding model, the conformational change takes place upon substrate binding [6].
The design of artificial ammonium receptors is an exciting topic of research in supramolecular chemistry, which takes inspiration from natural systems [7,8]. Thus, the recognition of ammonium guests by macrocycles such as calixarenes [9], pillararenes [10,11], prismarenes [12][13][14], cucurbiturils [15], naphthotubes [16], oxatubarene [17], cycloparaphenylenes [18], and saucerarenes [19], has received substantial attention in recent years. In particular, very recently, Jiang [17] reported a new class of macrocycles named oxatub [4]arenes, which showed biomimetic conformational adaptation behavior [20,21]. In fact, oxatubarenes can take on four interconvertible conformations through the flipping of naphthalene rings. Jiang showed that according to the "conformational selection model", a specific ammonium guest can select the best-fitting oxatubarene conformer, altering the initial equilibrium distribution of the conformers. Calix [2]naphth [2]arene [22] macrocycle, which we reported in 2020, is composed of two phenol and two naphthalene rings and can adopt five potential conformations, but the 1,2-alternate conformation is the only one that achieves the best binding when alkali metal cations are present.
Additionally, calix [5]arene macrocycles can show conformational response to ammonium guests [23]. In fact, very recently, we showed that the cone and partial cone are the best-fitting conformers of a calix [5]arene for secondary ammonium cations.
As previously reported by us [30], the 1H NMR spectrum of hexahexyloxycalix[6]arene 2b in CDCl3 at 298 K (Figure 2a) shows broad signals indicative of a slow conformational mobility of the macrocycle with respect to the NMR time scale. We showed that after lowering the temperature to 233 K, the 1H NMR resonances of 2b decoalesced to form sharp signals compatible with the presence of 1,2,3-alternate (favored) and cone conformations of 2b. Based on these considerations, hexahexyloxycalix[6]arene 2b is the ideal candidate for studying the conformational response of the calix[6]arene skeleton to the presence of ammonium guests. The question to address now is specifically whether linear and branched alkylammonium cations 6a-c+ can form complexes with 2b, as well as the possibility of complexation-induced selection of the 1,2,3-alternate/cone or alternative conformations of hexahexyloxycalix[6]arene 2b.

Figure 1. The eight basic conformations of calix[6]arene derivatives.

Consequently, prompted by these considerations, we decided to investigate the molecular recognition properties of 2b toward linear and branched alkylammonium ions 6a-c+ as barfate salts [B(ArF)4]− (Chart 1).
Binding Ability of 2b toward n-Pentylammonium Guest 6a + [B(ArF)4] -
The complexation ability of 2b toward 6a+[B(ArF)4]− (Chart 1) was investigated at 298 K via 1D and 2D NMR experiments. The 1H NMR spectrum of an equimolar (3 mM) solution of 2b and 6a+[B(ArF)4]− in CDCl3 at 298 K showed typical features [24] of the presence of the endo-cavity 6a+⊂2b complex (Scheme 1). In particular, the formation of the complex was ascertained based on the appearance of a new set of slowly exchanging signals in the up-field negative region of the spectrum (Figure 2) attributable to the n-pentyl chain of 6a+ shielded inside the cavity of the calix[6]arene macrocycle.

Scheme 1. Formation of the 6a+⊂2bcone complex.
The NMR signals of 6a+ complexed inside the cavity of 2b were assigned via a COSY-45 experiment (Figure 2e). As a result, the NH3+ signal at 5.99 ppm correlated with the α-protons at −0.06 ppm, which coupled with the β-methylene group at −0.81 ppm. This, in turn, showed a cross peak with the γ-protons at −0.99 ppm. The γ-protons were coupled with the δ-methylene group at −0.81 ppm, which was correlated with the ε-methyl group at −0.28 ppm. The COSY-45 spectrum of the 6a+⊂2b complex revealed the presence of an AX system (Figure 2f) at 3.47/4.48 ppm (Δδ = 0.99 ppm), which correlated in the HSQC spectrum with a 13C resonance at 28.4 ppm, attributable to the ArCH2Ar groups. In agreement with Gutsche's "1H NMR" rule [28,30] and the "13C NMR single rule" of de Mendoza [29,30], these results are only compatible with the formation of the endo-cavity 6a+⊂2bcone complex, in which the calix[6]arene adopts a cone conformation. In conclusion, the pentylammonium cation 6a+ selected the cone as the best-fitting 2b conformation at the expense of the 1,2,3-alternate, which was the most abundant species in the initial conformational equilibria of 2b [30].

The calculation of the binding constant for the formation of the 6a+⊂2bcone complex through direct peak integration was not possible, as the 1H NMR signals of the free 2b and guest 6a+ were not detected in the 1H NMR spectrum of their 1:1 mixture, shown in Figure 2b. Consequently, a binding constant calculation was performed by means of a competition experiment in which 1 equiv of pentylammonium 6a+ was mixed with a 1:1 mixture of 2b and n-butylammonium (in CDCl3) as barfate salt. Previously, we reported on the formation of the n-BuNH3+⊂2bcone complex in CDCl3 with a Kass value of 8.3 ± 0.1 × 10^6 M^-1 [12,13,24,31]. After the mixing of 2b, n-BuNH3+, and 6a+ (1/1/1 molar ratio), the n-BuNH3+⊂2bcone complex was preferentially formed over the 6a+⊂2bcone one in a 1.6:1.0 ratio. Thus, from these data, a binding constant of 5.2 ± 0.1 × 10^6 M^-1 was calculated for the formation of the 6a+⊂2bcone complex in CDCl3 at 298 K (see the Supplementary Materials for further details).
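The competition arithmetic can be verified in one line; the sketch below assumes, as a simplification, that the two binding constants scale with the observed ratio of the two complexes under identical conditions, which reproduces the reported value.

```python
# Competition experiment: 2b + n-BuNH3+ + 6a+ mixed in a 1:1:1 ratio.
K_nBu = 8.3e6          # M^-1, reference Kass for n-BuNH3+ ⊂ 2b_cone
ratio = 1.6            # n-BuNH3+⊂2b_cone : 6a+⊂2b_cone observed by NMR

# Simplifying assumption: Kass values scale with the complex ratio.
K_6a = K_nBu / ratio
print(f"Kass(6a+ ⊂ 2b_cone) ~ {K_6a:.2e} M^-1")   # ~5.2e6 M^-1, as reported
```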
The DFT-optimized structure of the 6a+⊂2bcone complex (Figure 3), calculated at the B3LYP/6-31G(d,p) theoretical level, revealed that the N+ atom of 6a+ sits 0.30 Å above the mean plane of the ethereal oxygen atoms of 2b. H-bonding interactions were detected between the +NH3 ammonium group of 6a+ and the oxygen atoms of 2b, with a +N···O2b average distance of 2.83 Å and an average +N−H···O2b angle of 165.1°. In addition, CH···π interactions were detected between the aromatic rings of 2b and the pentyl chain of 6a+ confined inside the cavity of 2b (Figure 2a-c), with an average C−H···π centroid distance of 2.99 Å. A second-order perturbation theory (SOPT) analysis [32] of the Fock matrix in the NBO [33] basis and NCI (non-covalent interactions) studies were conducted in order to rationalize the energy contribution of secondary interactions.
With these results in hand, we focused our attention on a branched alkylammonium guest such as tert-butylammonium 6b+[B(ArF)4]−, which revealed a very different behaviour compared to 6a+[B(ArF)4]−. Close inspection of the 1H NMR spectrum (CDCl3, 600 MHz, 298 K) of the equimolar mixture 2b/6b+ (3 mM) in Figure 4 showed typical signals indicative of the endo-cavity complexation of 6b+ inside the cavity of 2b (Scheme 2). These values clearly demonstrated that the binding of the branched tert-butylammonium cation is generally less favored than that of linear n-pentylammonium, probably due to the greater steric encumbrance of the t-Bu group.

For the sake of clarity, hereafter, we report on the analysis of the 1D and 2D NMR spectra in Figure 4, supporting the identification of the four complexes formed upon mixing 2b and 6b+ in an equimolar ratio (3 mM) in CDCl3 at 298 K.

6b+⊂2b1,2,3-alt: The presence of an ArCH2Ar AX system at 3.49/4.49 ppm and an AB system at 3.86/3.77 ppm provides evidence of the 6b+⊂2b1,2,3-alt complex, in which the calix[6]arene adopts a 1,2,3-alternate conformation. The HSQC correlations of these ArCH2Ar signals with carbon resonances at 28.3 and 33.7 ppm (Figure 4e) confirmed the presence of the calixarene host in the 1,2,3-alternate conformation, according to Gutsche's "1H NMR" rule [28,30] and the "13C NMR single rule" of de Mendoza [29,30]. Finally, the tert-butyl singlet of 6b+ shielded inside the cavity of 2b1,2,3-alt was detected at −1.15 ppm (marked in green), a value significantly up-field-shifted as compared to the analogous signals of 6b+ hosted inside the cone-shaped cavity of 6b+⊂2bcone (−0.92 ppm, marked in red) and the 6b+⊂2b1,2-alt complex (−0.91 ppm, marked in blue) (vide infra).
6b+⊂2bcone: The methylene region of the 1H NMR spectrum in Figure 4a,b showed the presence of an AX system (COSY spectrum, indicated in red in Figure 4d) at 3.52/4.34 ppm (Δδ = 0.82 ppm), which correlated in the HSQC spectrum (Figure 4e) with a carbon resonance at 28.0 ppm, attributable to carbon atoms between syn-oriented aromatic rings. These signals can be assigned to the 6b+⊂2bcone complex, in which calix[6]arene 2b adopts the cone conformation. Interestingly, the tert-butyl signal of 6b+ shielded inside the aromatic cavity of 2bcone was found at −0.92 ppm (marked in red in Figure 4c).

6b+⊂2bpaco: The 1H NMR spectrum in Figure 4a,b showed the presence of two less intense AX systems (COSY spectrum, Figure 4d) in a 1:1 ratio at 3.48/4.45 ppm (Δδ = 0.97 ppm) and 3.51/4.35 ppm (Δδ = 0.84 ppm), which correlated in the HSQC spectrum (Figure 4e) with carbon resonances at 28.3 and 28.6 ppm, respectively, attributable to ArCH2Ar carbon atoms between syn-oriented aryl rings. In addition, an AB system (COSY spectrum) was detected at 3.86/3.99 ppm, attributable to ArCH2Ar groups between anti-oriented aryl rings, which correlated with a carbon resonance at 33.8 ppm. According to the application of Gutsche's and de Mendoza's rules, these data were indicative of the presence of a 6b+⊂2bpaco complex in which the calix[6]arene adopted the partial cone conformation. Additionally, in this case, a singlet was detected at −1.51 ppm (marked in yellow in Figure 4), attributable to the −C(CH3)3 group of 6b+ shielded inside the cavity of 2bpaco.

6b+⊂2b1,2-alt: The presence of the 6b+⊂2b1,2-alt complex was confirmed by three ArCH2Ar AX systems (COSY spectrum, Figure 4d) at 3.49/4.30, 3.51/4.20, and 3.51/4.36 ppm (Δδ = 0.81, 0.69, and 0.85 ppm, respectively) in a 1:1:2 ratio, which correlated in the HSQC spectrum with carbon signals at 28.5, 28.6, and 28.4 ppm, respectively, and an AB ArCH2Ar system at 3.86/3.96 ppm, which correlated with a carbon resonance at 33.6 ppm. In this case, in the negative region of the 1H NMR spectrum, we observed the tert-butyl singlet of 6b+ at −0.90 ppm (marked in blue in Figure 4c).
To investigate the energy contribution of noncovalent interactions, a SOPT analysis of the Fock matrix in the NBO basis was carried out on the DFT-optimized structures of the four 6b + ⊂2b complexes.
The DFT-optimized structure of the 6b+⊂2b1,2,3-alt complex indicated the presence of stabilizing H-bonding and C−H···π interactions between the guest 6b+ and 2b1,2,3-alt. A SOPT analysis indicated that the stabilization energy was mainly due to the formation of two +N−H···O2b H-bonding interactions, which contributed 80% of the total binding energy (Table 1). This value is significantly higher than that calculated for the +N−H···O2b H-bonding interactions of the 6b+⊂2bcone complex, which was 61% of the total energy of non-covalent interactions (Table 1) for this complex. The DFT-optimized structure of the 6b+⊂2bcone complex calculated at the B3LYP/6-31G(d,p) theoretical level (Figure 5a) showed the presence of two +N−H···O interactions with an average +N···O2b distance of 2.95 Å, being longer and weaker than that calculated for the 6b+⊂2b1,2,3-alt complex (+N···O2b average distance of 2.80 Å), while an average +N−H···O2b angle of 165.1° was calculated for the 6b+⊂2bcone complex (163.3° calculated for 6b+⊂2b1,2,3-alt). Additionally, C−H···π interactions were detected between the tert-butyl group of 6b+ and the aromatic rings of 2b, with an average C−H···π centroid distance of 2.95 Å and an average C−H···π centroid angle of 153.3°.
Similarly, the DFT-optimized structure of the 6b + ⊂2b 1,2-alt complex (Figure 5c) suggested the existence of a single H-bonding interaction between the ammonium group of 6b+ and an oxygen atom of 2b 1,2-alt . Here, the SOPT analysis also revealed a lower 36% contribution of the + N-H···O 2b hydrogen-bonding interaction (Table 1) to the total binding energy.
These results clearly indicate that the tert-butylammonium guest 6b + prefers the 1,2,3-alt-2b as the best-fitting host conformation to a greater extent than the other conformations. In addition, the NCI analysis suggests that the stabilization induced by the H-bonding interactions between the ammonium group of 6b + and the oxygen atoms of the calixarene 2b ( + N-H···O) plays a crucial role in determining the thermodynamic stabilities of the four complexes shown in Scheme 2. A careful comparison of the DFT-optimized structures of the 6a + ⊂2b and 6b + ⊂2b complexes reveals significant differences in the binding modes of the two guests. In particular, the greater steric requirements of the tert-butyl group of 6b + force it to occupy a deeper position inside the calix [6]arene cavity (in each of the four conformations), leading to greater deformation of the host, which, in turn, implies a higher energetical cost and a lower binding constant. In this way, the deeply positioned 6b + guest is able to form two stabilizing H-bonds with the 1,2,3-alt-and cone-2b conformations, whereas only one H-bond is possible with the other paco-and 1,2-alt-2b isomers.
These results clearly confirmed the role of steric encumbrance, since the binding of the more branched tert-butylammonium cation is less favored than that of the less branched i-propylammonium, which, in turn, is less favored with respect to linear n-pentylammonium.
The H-bonding contribution to the total binding energy calculated for the four 6c + ⊂2b complexes ( Table 2) is in agreement with their thermodynamic stability order evaluated based on K ass values in CDCl 3 at 298 K: 6c + ⊂2b 1,2,3-alt > 6c + ⊂2b cone > 6c + ⊂2b paco > 6c + ⊂2b 1,2-alt . These results clearly indicate that, once again, the 1,2,3-alt-2b was selected as the best-fitting host conformation in the presence of a branched alkylammonium guest such as 6c + . The stability order of the 6c + ⊂2b complexes is in full agreement with that observed in the presence of the tert-butylammonium guest 6b+. The natural bond orbital and noncovalent interaction analyses indicate that H-bonding interactions between the ammonium group of 6c + and the oxygen atoms of the calixarene 2b ( + N-H···O) provide the key stabilization factor for the 6c + ⊂2b complexes. In fact, they account for 87%, 77%, 66%, and 52% of the total binding energy for the 6c + ⊂2b 1,2,3-alt , 6c + ⊂2b cone , 6c + ⊂2b paco , and 6c + ⊂2b 1,2-alt complexes, respectively (Table 2). At this point, it was important to verify whether the thermodynamic stability of the individual conformational complexes was in accordance with the Expanding Coefficient (EC) parameter recently proposed by our group [34]. In fact, it was found that the EC parameter can be conveniently correlated with the thermodynamic stability of supramolecular complexes obeying the induced-fit or conformational selection models and governed by weak secondary interactions. EC is defined as the ratio between the volume of the host cavity after complexation and that of the host cavity before complexation [34].
The EC values were thus calculated from the cavity volumes of the complexed and free host, using the above DFT-optimized structures and that of the 1,2,3-alternate ground-state 2b host, for all the 6b+⊂2b and 6c+⊂2b complexes (SI) with the Caver software [35][36][37]. In detail, taking the 6c+⊂2b complexes as an example, EC values of 6.03, 6.37, 4.84, and 6.03 were found for 6c+⊂2b1,2,3-alt, 6c+⊂2bcone, 6c+⊂2bpaco, and 6c+⊂2b1,2-alt, respectively, whose log K_app values were 4.49, 3.96, 3.79, and 3.08, respectively (SI). As is clearly evident, the previously observed linear correlation between the EC and log K_app [34] does not hold here [30]. This can be easily explained by considering the number of H-bonding interactions, which, as stated above, is the main determinant of the thermodynamic stability of these complexes. Thus, if we only consider those complexes with two H-bonding interactions, 6c+⊂2b1,2,3-alt (EC = 6.03, log K_app = 4.49) and 6c+⊂2bcone (EC = 6.37, log K_app = 3.96), we find a good correlation between an increasing EC and a decreasing log K_app. In the same way, considering only those complexes with one H-bonding interaction, 6c+⊂2bpaco (EC = 4.84, log K_app = 3.79) and 6c+⊂2b1,2-alt (EC = 6.03, log K_app = 3.08), we again find a good correlation between an increasing EC and a decreasing log K_app. Therefore, these findings are in accordance with our previous statement [34] that "the EC parameter can be considered of general applicability in all those instances in which no new strong intermolecular interactions (e.g., H-bonds) are generated during the induced-fit process".
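The grouping argument above is easy to check programmatically; the sketch below tabulates the four 6c+⊂2b complexes with their EC, log K_app and number of +N-H···O bonds (values taken from the text) and verifies that, within each H-bond group, a larger EC goes with a smaller log K_app.

```python
# (EC, log K_app, number of +N-H···O hydrogen bonds) for the 6c+⊂2b complexes
complexes = {
    "1,2,3-alt": (6.03, 4.49, 2),
    "cone":      (6.37, 3.96, 2),
    "paco":      (4.84, 3.79, 1),
    "1,2-alt":   (6.03, 3.08, 1),
}

for n_hb in (2, 1):
    # sort each H-bond group by increasing EC, then check logK decreases
    group = sorted((ec, k, name) for name, (ec, k, h) in complexes.items() if h == n_hb)
    inverse = all(group[i][1] >= group[i + 1][1] for i in range(len(group) - 1))
    print(f"{n_hb} H-bond(s): {[g[2] for g in group]}, EC up -> logK down: {inverse}")
```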
Conclusions
This study clearly shows that hexahexyloxycalix [6]arene 2b can complex alkylammonium guests and shows a conformational adaptive behavior. Thus, in the presence of n-pentylammonium 6a + , the cone-2b is the best-fitting conformation at the expense of the 1,2,3-alternate-2b, which is the most abundant conformer in the absence of a guest. In a different way, branched alkylammonium guests, such as tert-butylammonium 6b + and isopropylammonium 6c + , select a combination of conformers of 2b. Four complexes were revealed and characterized using 1D and 2D NMR spectra, in which 2b adopted different conformations, namely, 6b + /6c + ⊂2b 1,2,3-alt , 6b + /6c + ⊂2b cone , 6b + /6c + ⊂2b paco , and 6b + /6c + ⊂2b 1,2-alt . The binding constant values determined through NMR experiments indicated that the 1,2,3-alternate was the best-fitting 2b conformation for the complexation of branched alkylammonium guests, followed by cone > paco > 1,2-alt. The NCI and NBO calculations suggest that the H-bonding interactions between the ammonium group of the guest and the oxygen atoms of the calixarene 2b ( + N-H···O) played a crucial role in determining the thermodynamic stability order of the four complexes. In addition, it was found that an increase in branching from n-Pent to i-Pr and in t-Bu groups leads to a corresponding decrease in binding affinity. Therefore, it can be concluded that higher steric encumbrance leads to weaker H-bonding interactions and, hence, to a lower binding energy.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-06-29T06:15:55.106Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "7565bc2952e611908984ce11435c1eff6977d11d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/molecules28124749",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3735a6465fa5f84692f69a204156f0624d029b88",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
923462 | pes2o/s2orc | v3-fos-license | Application of two passive strategies on the load mitigation of large offshore wind turbines
This study presents the numerical results of two passive strategies to reduce the support structure loads of a large offshore wind turbine. In the first approach, an omnidirectional tuned mass damper is designed and implemented in the tower top to alleviate the structural vibrations. In the second approach, a viscous fluid damper model which is diagonally attached to the tower at two points is developed. Aeroelastic simulations are performed for the offshore 10MW INNWIND.EU reference wind turbine mounted on a jacket structure. Lifetime damage equivalent loads are evaluated at the tower base and compared with those for the reference wind turbine. The results show that the integrated design can extend the lifetime of the support structure.
Introduction
In recent years, the size of offshore wind turbines has been further increased beyond the 5 MW class to reduce the cost of energy. The cost-effective design of support structures for such large machines is still a challenging task, as the rotor diameter and tower height exceed 100 m. The support structure eigenfrequencies are analysed in the early stage of the design procedure to prevent significant resonances between the structural eigenfrequencies and excitations from waves, the rotor frequency and its multiples. This can be acquired via the Campbell diagram, which plots the eigenfrequency of the entire wind turbine system against the rotor speed, including the harmonic excitations 1P, 3P, etc. Figure 1 illustrates qualitatively the trends for the rotor rotational frequency (1P) and blade passing frequency (3P) ranges along with the first eigenfrequency of the support structure as a function of the wind turbine size. In the design procedure of large offshore wind turbines, i.e. 7.5+ MW, with jacket structures, a strong and severe blade passing (3P) resonance is expected at low rotor speeds [1]. Figure 2 shows the Campbell diagram for the 10MW INNWIND.EU reference turbine [2]. It can be seen that at a rotor speed of 5.7 rpm, the blade passing frequency (3P) coincides with the first natural frequency of the system. The rotor frequency (1P) is, however, not problematic, as it is found far outside of the operational region. In order to reduce the dynamic excitation, a rotor speed exclusion zone between 5.2 rpm and 6.3 rpm is considered. Compared to the initial exclusion zone chosen in the preliminary stage [3], the exclusion zone is shifted downward by a reduction of the cut-in speed from 5 rpm to 4.5 rpm.
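The crossing rotor speed quoted above follows from the blade-passing relation f_3P = 3·Ω/60; the sketch below locates it numerically. The 0.285 Hz first eigenfrequency is an assumed value, inferred from the quoted 5.7 rpm crossing rather than taken from the design documentation.

```python
# First support-structure eigenfrequency (Hz); assumed value, chosen so
# that the 3P harmonic crosses it at the quoted 5.7 rpm.
f1 = 0.285

def harmonic(rpm, p=3):
    """Frequency in Hz of the pP rotor harmonic at a given rotor speed."""
    return p * rpm / 60.0

rpm_cross = f1 * 60.0 / 3.0
print(f"3P crosses f1 at {rpm_cross:.1f} rpm")          # -> 5.7 rpm

# exclusion zone quoted in the text
lo, hi = 5.2, 6.3
print(f"3P range inside the zone: {harmonic(lo):.3f}-{harmonic(hi):.3f} Hz")
```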
The larger the blades and the support structure of the wind turbine, the higher the bending loads. Consequently, strongly increased fatigue loads are experienced by critical components, e.g. the tower base. Innovations are required in lowering the loads experienced by the support structure. This goal can be met through a variety of load mitigation strategies, e.g. the implementation of passive or (semi-)active damping devices and different control and regulation concepts. Nowadays, the application of a tuned mass damper (TMD) is becoming increasingly practical for the load mitigation of offshore wind turbines. Previous works have shown the potential for the integration of TMDs in the tower top location [4]. In addition, different solutions have been proposed for the application of semi-active or active dampers, e.g. the toggle-brace viscous fluid damper (VFD), tuned liquid column damper (TLCD), magnetorheological (MR) damper and hybrid mass dampers, see [5][6][7].

Figure 1. Design trends on the first design eigenfrequency and both the rotor and the blade resonance ranges for three-bladed offshore turbines in the 5 to 20MW class [1].

Figure 2. Campbell diagram of the INNWIND.EU 10MW reference turbine. The yellow marked areas represent the operational ranges of the turbine. The cut-in speed is lowered to avoid overlapping with the exclusion zone. The first fore-aft and side-side modes are closely spaced.
To control the dynamic excitation of the support structure, two innovative concepts are designed and integrated in the INNWIND.EU 10MW reference wind turbine (RWT). In the first study, the results of the integration of a passive structural damper to reduce the support structure loads are presented. A passive tuned mass damper (TMD) is designed to realize the best configuration according to the calculated tower base lifetime and damage equivalent loads (DELs) for the reference number of cycles of N_ref = 10^7 and an S-N curve slope of m = 4. In the second analysis, the numerical model of a toggle-brace VFD is developed. Aeroelastic simulations are performed in the DNV GL Bladed software [8] for the 10MW RWT mounted on a jacket structure. The loads obtained from the two concepts are then compared with those for the reference turbine. We show that the applied loads can be effectively mitigated by the use of damping strategies, which would result in an extension of the lifetime of the support structure.
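Since all load comparisons below are expressed as DELs, it is worth spelling out the underlying Palmgren-Miner equivalence, DEL = (Σ n_i S_i^m / N_ref)^(1/m); the sketch below evaluates it for hypothetical rainflow-counted cycles with the stated N_ref = 10^7 and m = 4.

```python
def damage_equivalent_load(cycles, m=4.0, n_ref=1.0e7):
    """DEL = (sum(n_i * S_i**m) / N_ref)**(1/m), with S_i the load range and
    n_i the cycle count of bin i (rainflow counting assumed already done)."""
    return (sum(n * s**m for s, n in cycles) / n_ref) ** (1.0 / m)

# hypothetical rainflow result: (load range in kNm, number of cycles)
cycles = [(5.0e3, 2.0e6), (1.2e4, 4.0e5), (3.0e4, 1.0e4)]
print(f"DEL = {damage_equivalent_load(cycles):.3e} kNm")
```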
Description of the performed studies
The performed studies are based on the offshore 10MW INNWIND.EU RWT [9]. A schematic picture of the jacket and the reference wind turbine (RWT) structures as well as the considered coordinate system are displayed in Figure 3. The design water depth is applied according to the North Sea site, with a water depth of 50 m. The support structure consists of a 4-legged, X-braced jacket structure. The 3D turbulent wind field with the Kaimal spectrum is generated for six random seeds, while the wave kinematics are modelled based on the irregular wave model with the JONSWAP spectrum. No yaw misalignment angle is assumed in this analysis. The design load case (DLC) 1.2 based on wind class IA according to the IEC61400-1 standard [10] is considered, and fatigue loads are calculated over the full operating wind speed range. The wind and wave characteristics are listed in Table 1.

Figure 3. The global coordinate system (left) and a schematic drawing of the RWT and jacket structures (right).
Description of the load mitigation concepts
Two load mitigation concepts, namely the tuned mass damper and the viscous fluid damper, which have already been introduced in [11], are considered. These concepts are integrated into the final design of the support structure to ensure an optimised and affordable design. Figure 4 illustrates schematic models of these two concepts. Simulations are performed for the turbine in operational conditions. In the current study, no wind-wave misalignment is considered. Fatigue loads are evaluated at the tower base and compared with the reference design. In the first concept, the numerical modeling of a passive tuned mass damper mounted on the tower top is performed (see Figure 4). This consists of a mass-spring-damper system which is characterised by its mass, resonance frequency and damping factor. The damper is omnidirectional, meaning that it can vibrate in all directions. Tuned mass dampers are widely implemented and tested in recent commercial wind turbines, e.g. the Vestas V90-3MW. However, the initial INNWIND.EU 10MW reference jacket is designed without considering any damper in the tower. As a consequence, the initial jacket design is not optimised and needs further improvements. The integration of a TMD has the potential to improve the jacket design by reducing interface loads. The interface spot, as shown in Figure 3, is the connection point between the tower and the jacket foundation, i.e. the tower base.
According to [4], the TMD mass is approximately either 8% of the nacelle mass or 6% of the tower top mass. Other studies propose using a value between 2-4% of the modal mass associated with the fundamental lowest eigenfrequency, which is commonly used for civil engineering structures [4]. In this analysis, simulations are performed with a TMD with mass ratios of 1% and 2% with respect to the modal mass. The resonance frequency of the TMD is tuned to the first fundamental mode of the support structure (the first fore-aft mode) and is able to reduce the amplitude of vibrations around this frequency.
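To turn the chosen mass ratio and tuning frequency into physical damper parameters, one possible route is sketched below; the Den Hartog optimal-tuning formulas and the modal mass value are assumptions for illustration, since this paper does not state its tuning rule.

```python
import math

def tmd_parameters(modal_mass, f1_hz, mu):
    """Spring stiffness and damping of a TMD tuned to the first mode.
    Den Hartog's classical optimum (an assumed design rule) is used:
        f_d/f1 = 1/(1+mu),  zeta = sqrt(3*mu / (8*(1+mu)**3))."""
    m_d = mu * modal_mass
    w_d = 2.0 * math.pi * f1_hz / (1.0 + mu)        # tuned angular frequency
    zeta = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    k = m_d * w_d**2                                # spring stiffness
    c = 2.0 * zeta * m_d * w_d                      # damping coefficient
    return m_d, k, c

# placeholder modal mass; f1 = 0.285 Hz as inferred earlier, mu = 2%
m_d, k, c = tmd_parameters(modal_mass=2.0e6, f1_hz=0.285, mu=0.02)
print(f"m_d = {m_d:.3e} kg, k = {k:.3e} N/m, c = {c:.3e} N*s/m")
```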
The second concept describes the mathematical representation of a VFD. VFDs have already been used in civil engineering structures to dissipate seismic energy. Up to now, the application of VFDs in wind turbines has not come into practice, and further investigations are required to accommodate a damper network system of this type in the tower. The optimal location of the damper is a challenging task. In this study, we assume that the VFD is installed at the tower top, where maximum vibrations occur and hence a higher damping efficiency is desired. The design formulas for a VFD system are based on the study described in [14]. For a system represented in Figure 4, the following relation exists:

u_D = f u, (1)

where u_D is the relative displacement along the axis of the damper, u is the relative displacement of the attachment points and f is the magnification factor. The magnification factor depends on the configuration of the bracing system and is usually larger than 1. For a diagonal toggle-brace-damper system, it equals cos θ, where θ is the inclined angle of the damper [14]. The horizontal component of the force exerted by the damper on the structure, F, is obtained from:

F = f F_D, (2)

with F_D the damper force. For a VFD, the damper force along its axis is proportional to the velocity and can be written as below:

F_D = C |u̇_D|^α sgn(u̇_D), (3)

where C is the damping coefficient of the damper, u̇_D represents the relative velocity between the attachment points of the damper and α is the damper nonlinearity. In this paper, we assume α = 1, which models a linear damper. Combining Eqs. 1 and 3 and knowing that any real number can be expressed as the product of its absolute value and its sign function, the magnitude of the damping force can be expressed as:

F_D = C f^α |u̇|^α sgn(u̇) = C_0 |u̇|^α sgn(u̇). (4)

The effective damping coefficient, C_0 = C f^α, has an important impact on the damper force. The damper force is applied to the structure and mitigates the external forces exerted on the structure. Several strategies can be used to calculate the viscous damper force. One of the most practically available parameters is the acceleration, which can be easily measured using accelerometers. Figure 5 demonstrates the mechanism to calculate the viscous damper force using the simulated tower top accelerations. A Simulink model of a VFD is developed and connected to Bladed via an external Dynamic-Link Library (DLL). The simulated time series of accelerations at the ending points of the damper, where it is attached to the tower, are recorded. Velocity signals can be attained by integrating the simulated accelerations. The damper force, F_D, is then obtained in Simulink using both velocities and the effective damping coefficient. This force is divided in two and returned to Bladed, where it is applied as reaction forces at the damper ending points (see Figure 5). For multiple dampers, the damper force is split up into several components based on the orientation of the viscous dampers. The damper model performs as a passive device if a constant damping coefficient is assumed. Alternatively, for a semi-active device, the effective damping coefficient can be provided as a lookup table which is calculated and scheduled based on external or operating conditions. Such a damper acts semi-actively, as the damper force is adapted based on the environmental conditions. In this study, however, a constant effective damping coefficient is chosen at all wind speeds.
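The acceleration-to-force mechanism of Figure 5 can be prototyped outside Simulink; the sketch below (hypothetical signals and coefficient values) integrates the two attachment-point accelerations to a relative velocity, applies Eq. (4), and returns half of the damper force as the reaction at each end.

```python
import numpy as np

def vfd_reaction_forces(acc_top, acc_bottom, dt, c0, alpha=1.0):
    """Viscous-fluid-damper reactions from attachment-point accelerations.
    Velocities are obtained by cumulative (trapezoidal) integration and the
    axial force follows F_D = C0 * |u_dot|**alpha * sign(u_dot), Eq. (4)."""
    rel_acc = acc_top - acc_bottom
    rel_vel = np.concatenate(
        ([0.0], np.cumsum((rel_acc[1:] + rel_acc[:-1]) / 2.0 * dt)))
    f_d = c0 * np.abs(rel_vel) ** alpha * np.sign(rel_vel)
    return f_d / 2.0, f_d / 2.0   # split between the two attachment points

# hypothetical tower-top signals: 0.285 Hz sway sampled at 20 Hz
t = np.arange(0.0, 60.0, 0.05)
acc_top = 0.30 * np.sin(2.0 * np.pi * 0.285 * t)   # m/s^2, placeholder
acc_bot = 0.05 * np.sin(2.0 * np.pi * 0.285 * t)
f_top, f_bot = vfd_reaction_forces(acc_top, acc_bot, dt=0.05, c0=5.0e6)
print(f"peak reaction at each end: {np.abs(f_top).max():.3e} N")
```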
Design Load Verification
In this section, the results of the load verification with the integration of the two concepts are discussed. Figure 6 demonstrates the DELs of the tower base moment in the fore-aft (M_y) and side-to-side (M_x) directions, respectively. M_x and M_y are represented in the global coordinate system, with the X and Y axes pointing towards the north and west, respectively, when the nacelle is faced into the wind inflow. The results are only for one load setup, where the wind-wave misalignment angle is zero. The simulations are carried out with six random seeds for the turbulent wind, and DELs are averaged at each wind speed. For all results shown here, the exclusion zone is activated. The influence of the TMD is not significant in the fore-aft direction, especially near the rated wind speed, while the TMD can effectively mitigate the fatigue loads in the sideways direction. It should be noted that although the rotor speed exclusion zone is active, a resonance can be seen for the reference design, which is further mitigated by the TMD even in the fore-aft direction. This is in agreement with the findings of Kuhnle [11], where it has been shown that the DELs can be improved by the application of a tuned mass damper at the tower top location. In addition, the TMD is more effective in the sideways direction, which is due to the low aerodynamic damping in this direction. The aerodynamic damping is the dominant phenomenon in the fore-aft direction, and therefore the performance of the TMD is marginal in this direction. Except for DELs in the fore-aft direction in a region around the rated wind speed, the TMD effectively dissipates loads in the whole operational range.

Figure 5. Numerical modeling of the VFD shows the mechanism to calculate the damper forces using the tower accelerations.
The maximum reduction of DELs occurs in the sideways direction for a TMD with a mass ratio of 2% at a wind speed of 6 m/s, which corresponds to a 60% reduction with respect to the reference turbine configuration. For the same TMD, the lifetime-weighted DELs are reduced by 4% and 24.3% in the fore-aft and side-to-side directions, respectively.
As explained in the previous section, the second concept is a viscous fluid damper installed at the tower top location. Three toggle-brace viscous fluid dampers with 120° phase shifting, i.e., oriented at 0° (towards north), 120°, and 240°, are considered. The VFD model is created in Simulink and called through a DLL at each time step. Since the damper force is proportional to the effective damping coefficient, C_0, a larger damper force is achieved by increasing the damping coefficient. Therefore, the effective damping coefficient is increased step by step up to the value at which the simulations become unstable; this corresponds to the damping coefficient that yields the maximum load reduction.
A typical time interval of the nacelle displacements in the fore-aft and side-to-side directions is shown in Figure 7. The tower top vibration, particularly in the sideways direction, is considerably dissipated compared to the reference configuration. The tower base DELs of the bending moments in the fore-aft and sideways directions are plotted in Figure 8. It should be noted that these plots are for a loading setup where the wind-wave misalignment is zero. It can be seen that the DELs are remarkably decreased in both directions. The maximum load reduction, i.e., around 69%, occurs in the sideways direction at a wind speed of 6 m/s. In addition, the lowest load reduction occurs near the rated wind speed, where the highest variations of the aerodynamic forces take place. Although these results are calculated without a wind-wave misalignment angle, the interface loads and moments calculated for the full load setup are likely to result in an optimized jacket geometry.
The lifetime-weighted equivalent loads of the tower base moments for the reference turbine and both strategies are compared in Figure 9. The values are normalised with respect to the DEL of the reference turbine in the fore-aft direction. It can be concluded that the DELs are clearly improved with passive viscous dampers. The application of the VFD is, however, more effective than that of the TMD.
Conclusions
This paper presents the results of the integration of two damping concepts in the 10MW INNWIND.EU reference wind turbine. These two concepts are a passive tuned mass damper (TMD) and a toggle-brace viscous fluid damper. The improved tower base loads are obtained and compared with the reference design. It can be concluded that, for the load setup where the inflow wind is aligned with the wave direction, the DELs in the sideways direction, where no aerodynamic damping is active, can be lowered by up to 29% and 69% compared to the reference case for a TMD and a viscous fluid damper, respectively. Under this condition, the impact of the TMD in the fore-aft direction is not significant, while the viscous damper gives remarkably lower loads in both directions.
The integration of the damping concepts could have a positive impact on the lifetime of the system, and the developed strategies could be considered for an optimized jacket geometry. | 2017-11-29T07:45:39.046Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "e1d35edffef05eef565fbaefe37f00c2a65bf78c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/749/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "048cced8081ae6ea600db390864d461c87dac5ca",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
16094644 | pes2o/s2orc | v3-fos-license | Theoretical foundations of the sound analog membrane potential that underlies coincidence detection in the barn owl
A wide variety of neurons encode temporal information via phase-locked spikes. In the avian auditory brainstem, neurons in the cochlear nucleus magnocellularis (NM) send phase-locked synaptic inputs to coincidence detector neurons in the nucleus laminaris (NL) that mediate sound localization. Previous modeling studies suggested that converging phase-locked synaptic inputs may give rise to a periodic oscillation in the membrane potential of their target neuron. Recent physiological recordings in vivo revealed that owl NL neurons changed their spike rates almost linearly with the amplitude of this oscillatory potential. The oscillatory potential was termed the sound analog potential, because of its resemblance to the waveform of the stimulus tone. The amplitude of the sound analog potential recorded in NL varied systematically with the interaural time difference (ITD), which is one of the most important cues for sound localization. In order to investigate the mechanisms underlying ITD computation in the NM-NL circuit, we provide detailed theoretical descriptions of how phase-locked inputs form oscillating membrane potentials. We derive analytical expressions that relate presynaptic, synaptic, and postsynaptic factors to the signal and noise components of the oscillation in both the synaptic conductance and the membrane potential. Numerical simulations demonstrate the validity of the theoretical formulations for the entire frequency range tested (1–8 kHz) and potential effects of higher harmonics on NL neurons with low best frequencies (<2 kHz).
INTRODUCTION
Synchronized neural activity underlies various types of information processing in the brain. A diversity of sensory neurons encode temporal information via phase-locked spiking (Carr and Friedman, 1999). Phase-locking, or the generation of action potentials at a certain phase of the reference signal, is prevalent in the auditory system (Oertel, 1999;Ashida et al., 2010;Brette, 2012). In the auditory brainstems of mammals, reptiles, and birds, neurons involved in sound localization convey precise temporal information of sound using phase-locked spikes (cats: Joris et al., 1994;gerbils: Dehmel et al., 2010;caimans: Carr et al., 2009;owls: Sullivan and Konishi, 1984;Köppl, 1997;chickens: Warchol and Dallos, 1990;Fukui et al., 2006;redwing blackbirds: Sachs and Sinnott, 1978). Among various animal species tested, auditory neurons in the barn owl show the highest temporal acuity with a precision of less than 0.1 ms (Köppl, 1997). The degree of phase-locking, measured as the vector strength (VS) (Goldberg and Brown, 1969), is significant for frequencies up to about 8 kHz in the owl's nucleus magnocellularis (NM) (Sullivan and Konishi, 1984;Köppl, 1997).
Both mammals and birds have specialized neural circuits to compute the interaural time difference (ITD), which is one of the most important cues for sound localization (see Joris and Yin, 2007; Grothe et al., 2010; Ashida and Carr, 2011, for reviews). In the avian brainstem, axons from the NM form delay lines and provide phase-locked spike outputs, while their target neurons in the nucleus laminaris (NL) detect coincident synaptic inputs and change their spike rates with ITD (Carr and Konishi, 1990; Köppl and Carr, 2008). Previous modeling results suggested that a convergence of phase-locked spikes creates an oscillatory synaptic input whose period is the same as that of the stimulus tone (Figure 1A; Gerstner et al., 1996; Reyes et al., 1996; Kempter et al., 1998; Ashida et al., 2007; Slee et al., 2010). Recent in vivo intracellular recordings revealed that the barn owl's NL neurons indeed show oscillating membrane potentials (Funabiki et al., 2011). This oscillation was termed the "sound analogue potential" because its waveform resembled the waveform of the stimulus tone delivered to the owl's ears. Both physiological (Funabiki et al., 2011) and modeling (Ashida et al., 2007) results showed that the amplitude of the sound analog potential changes periodically with ITD, and that the NL neurons vary their spike rates almost linearly with this oscillation amplitude. In the following text, the main oscillatory component is therefore referred to as the "signal" or "AC," whereas the average input level is called the "DC." The DC component was shown to be irrelevant to the ITD computation in NL (Funabiki et al., 2011). All the frequency components other than the AC and DC are regarded as "noise" because they do not encode ITDs (see Reyes et al., 1996; Ashida et al., 2007; Slee et al., 2010 for related discussion).

Figure 1 (caption fragment): (B) The half peak width W determines the speed of rise and decay, while H is the peak height of the curve (see text for equations). (C) Single-compartment NL neuron model (Funabiki et al., 2011); leak and low-voltage-activated potassium (K_LVA) conductances are included in the membrane. (D) Linear membrane impedance of the model neuron; introduction of the K_LVA conductance greatly reduces the membrane impedance below 1-2 kHz.
Our previous simulations demonstrated that, if appropriate parameters are chosen, sound analog potentials can be quantitatively reproduced by the NM-NL model (Ashida et al., 2007; Funabiki et al., 2011). In this model, phase-locked spikes of NM fibers (Figure 1A) are described by an inhomogeneous Poisson process (Gerstner et al., 1996; Kempter et al., 1998; Shimokawa et al., 1999; Burkitt and Clark, 2001; Kuhlmann et al., 2002; Grau-Serrat et al., 2003); unitary synaptic inputs (Figure 1B) are modeled by an alpha function (Gerstner and Kistler, 2003); and the responses of the NL membrane are simulated by a conductance-based single-compartment model (Figures 1C,D) (Ashida et al., 2007; Funabiki et al., 2011) with leak and low-threshold potassium conductances (K_LVA), which have been shown to benefit fine temporal coding (e.g., Svirskis et al., 2002; Gai et al., 2009; Jercog et al., 2010; Mathews et al., 2010). In this paper, we analyze the model in detail and theoretically formulate how phase-locked NM inputs lead to the sound analog potentials in NL. The primary goals of this paper are two-fold: (1) to relate the model parameters to the DC, AC, and noise components of the synaptic input and membrane potential, combining the Poisson process with linear membrane impedance analysis techniques (e.g., Hutcheon and Yarom, 2000); and (2) to test the validity of the theoretical descriptions using numerical simulation of the NM-NL model. In the accompanying paper (Ashida et al., 2013), we apply the theoretical results obtained in the present paper to investigate how presynaptic, synaptic, and postsynaptic factors may affect ITD coding in the NL neuron.
PHASE-LOCKED SPIKING OF PRESYNAPTIC FIBERS
Following previous studies, we use the inhomogeneous Poisson process to model phase-locked spiking activity (Gerstner et al., 1996; Kempter et al., 1998; Shimokawa et al., 1999; Burkitt and Clark, 2001; Kuhlmann et al., 2002; Grau-Serrat et al., 2003; Ashida et al., 2007; Kuokkanen et al., 2010). Output spikes of each NM neuron are modeled as an inhomogeneous Poisson sequence n(t) with a periodic intensity function

$\lambda(t) = \lambda_0 \left[ 1 + \sum_{k=1}^{\infty} a_k \cos(2\pi k\nu t + \eta_k) \right],$

where $\lambda_0$ is the mean intensity, $a_k$ (k = 1, 2, ...) is the strength of the k-th frequency component, $\nu$ is the fundamental frequency (i.e., $1/\nu$ is the period), and $\eta_k$ is the phase of the k-th component. The spike train n(t) is regarded as a sum of delta functions:

$n(t) = \sum_{j=1}^{N} \delta(t - t_j),$

where N is the total number of spikes in the sequence and $t_j$ is the timing of the j-th spike. The degree of phase-locking of a spike sequence is measured by the vector strength r (Goldberg and Brown, 1969), which is defined as

$r = \frac{1}{N} \sqrt{\left( \sum_{j=1}^{N} \cos(2\pi f t_j) \right)^2 + \left( \sum_{j=1}^{N} \sin(2\pi f t_j) \right)^2},$

with f being the reference frequency. In the following text, we assume that $f = \nu$ (i.e., we focus on locking to the fundamental frequency) unless otherwise mentioned. For the inhomogeneous Poisson sequence introduced above, the VS is related to the intensity function as $r = a_1/2$.
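A minimal sketch of the vector strength computation, directly from its definition (the example spike train is synthetic):

```python
import numpy as np

def vector_strength(spike_times, f):
    """Vector strength r of spike times relative to the reference frequency f."""
    phases = 2.0 * np.pi * f * np.asarray(spike_times)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(phases)

# Perfectly locked spikes (one per stimulus cycle) give r = 1;
# uniformly spread spike phases give r close to 0.
print(vector_strength(np.arange(100) / 4000.0, 4000.0))   # 1.0
```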
The power spectral density (PSD) $P_n(f)$ of the sequence n(t) can be calculated as

$P_n(f) = \lambda_0 + \lambda_0^2 \delta(f) + \frac{\lambda_0^2}{4} \sum_{k=1}^{\infty} a_k^2 \left[ \delta(f - k\nu) + \delta(f + k\nu) \right],$  (1)

with $\delta(f)$ being the delta function (Wiesenfeld et al., 1994; Hohn and Burkitt, 2001; Kuokkanen et al., 2010). The first term $\lambda_0$ corresponds to the noise, or the randomness, of the sequence; the second term $\lambda_0^2 \delta(f)$ corresponds to the mean strength of the sequence; and the remaining terms correspond to the fundamental frequency component and higher-order harmonics. If the sequence is not infinitely long but has a time length T, the PSD becomes

$P_n(f) = \lambda_0 + \lambda_0^2 T \delta_f + \frac{\lambda_0^2 T}{4} \sum_{k=1}^{\infty} a_k^2 \left( \delta_{f - k\nu} + \delta_{f + k\nu} \right),$  (2)

where $\delta_f = 1$ (for f = 0) and $\delta_f = 0$ (otherwise).
The von Mises distribution and the wrapped Gaussian distribution have, in general, very similarly shaped curves (Figures 2A,D). Their higher harmonics decrease rapidly for VS < 0.7 (Figures 2B,E). If the VS is higher than 0.7, higher harmonics need to be considered in estimating the noise component (see also Discussion). Especially in the case of perfect phase-locking (VS = 1.0), these distributions become a delta function and all the higher harmonics have vector strengths of 1.0. The possible effects of higher harmonics will be discussed later. Fisher (1993) points out that the above two distributions are hard to distinguish in practical applications. Prior modeling studies of phase-locking used either the von Mises distribution (e.g., Grau-Serrat et al., 2003; Ashida et al., 2007) or the wrapped Gaussian distribution (e.g., Gerstner et al., 1996; Kempter et al., 1998; Kuhlmann et al., 2002). Nevertheless, a comparison of these models in terms of neuronal coding will be a subject of future study. In this paper, we use the von Mises distribution for our simulation.
SIMULATING PHASE-LOCKED SPIKE SEQUENCES
In our simulations, we modeled phase-locked input from each NM fiber using an inhomogeneous Poisson process with a time-dependent periodic intensity function

$\lambda(t) = \frac{\lambda_0}{I_0(\kappa)} \exp\left[ \kappa \cos(2\pi f_s t) \right],$

where $f_s$ is the frequency of the stimulus tone, $\lambda_0$ is the mean intensity (= mean spike rate), $\kappa$ is the concentration parameter of the von Mises distribution, and $I_0$ is the modified Bessel function of order zero. The degree of phase-locking measured by the vector strength r is related to the concentration parameter $\kappa$ as $r = I_1(\kappa)/I_0(\kappa)$. We assumed that all the NM fibers were mutually independent but locked to the same phase of the stimulus tone with a single VS (Kuokkanen et al., 2010). Note that we considered only the "best ITD" situation, where all the ipsi- and contralateral NM inputs arrive perfectly in phase, because the ITD dependence of the phase-locked synaptic input has already been examined in our previous study (Ashida et al., 2007). The parameters used in the model are summarized in Table 1.
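For concreteness, one way to draw such spike trains is by thinning a homogeneous Poisson process, as sketched below; the rate, concentration parameter, and seed are illustrative choices, not values taken from Table 1.

```python
import numpy as np
from scipy.special import i0, i1

def phase_locked_spikes(lam0, kappa, f_s, duration, rng):
    """Inhomogeneous Poisson spikes with the von Mises intensity
    lambda(t) = lam0 * exp(kappa * cos(2*pi*f_s*t)) / I0(kappa),
    generated by thinning (Lewis-Shedler algorithm)."""
    lam_max = lam0 * np.exp(kappa) / i0(kappa)        # peak of lambda(t)
    n_cand = rng.poisson(lam_max * duration)          # homogeneous candidates
    t_cand = np.sort(rng.uniform(0.0, duration, n_cand))
    lam = lam0 * np.exp(kappa * np.cos(2 * np.pi * f_s * t_cand)) / i0(kappa)
    keep = rng.uniform(size=n_cand) < lam / lam_max   # probabilistic thinning
    return t_cand[keep]

rng = np.random.default_rng(seed=1)
kappa = 1.5                                 # r = I1(kappa)/I0(kappa) ~ 0.6
spikes = phase_locked_spikes(500.0, kappa, f_s=4000.0, duration=1.0, rng=rng)
print(len(spikes), i1(kappa) / i0(kappa))   # ~500 spikes, theoretical VS ~0.6
```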
SYNAPTIC INPUT
The excitatory postsynaptic conductance (EPSG) in the NL neuron induced by each presynaptic NM spike was modeled by an alpha function

$\alpha(t) = (Ht/\tau) \exp(1 - t/\tau) \quad (t \ge 0),$

with $H = \alpha(\tau)$ being the peak height and $\tau$ being the time constant (Figure 1B). The half peak width W of the alpha function can be calculated by solving $\alpha(t) = H/2$. The two solutions of this equation are $t_0 = -\tau W_0(-1/(2e))$ and $t_1 = -\tau W_{-1}(-1/(2e))$, where $W_0$ is the principal real branch and $W_{-1}$ is the other real branch of the Lambert W function (Corless et al., 1996). Therefore, the half peak width of the alpha function is obtained as $W = t_1 - t_0 \approx 2.446\tau$. Note that the half peak width W is linear in the time constant $\tau$ (i.e., if the time constant $\tau$ is doubled, then the half peak width W is also doubled). The Fourier transform $F_\alpha(f)$ of the alpha function $\alpha(t)$ satisfies the equation

$|F_\alpha(f)| = \frac{S}{1 + (2\pi f \tau)^2},$  (3)

where $S = eH\tau$ is the area between the alpha function and the t-axis.
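The half-peak-width relation can be checked numerically with the two Lambert W branches available in SciPy; the value of tau below is arbitrary, since the ratio W/tau is a constant:

```python
import numpy as np
from scipy.special import lambertw

tau = 1.0e-4                          # any synaptic time constant [s]
z = -1.0 / (2.0 * np.e)
t0 = -tau * lambertw(z, k=0).real     # principal branch W_0
t1 = -tau * lambertw(z, k=-1).real    # lower real branch W_{-1}
print((t1 - t0) / tau)                # ~2.446, independent of tau
```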
The compound synaptic input conductance $g_{syn}(t)$ is the sum of all the NM spikes filtered by the alpha function:

$g_{syn}(t) = \sum_{m=1}^{M} \sum_{i=1}^{I_m} \alpha(t - t_m^i),$  (4)

where $t_m^i$ denotes the timing of the i-th spike of the m-th NM fiber, M is the number of NM fibers, and $I_m$ is the number of spikes of the m-th fiber. These values, except for the stimulus frequency, are the same as those used in our previous study (Funabiki et al., 2011) and are fixed in this paper. The number and the mean spike rate of the NM fibers are taken from previous anatomical (Carr and Boudreau, 1993a) and physiological (Peña et al., 1996) studies. How each of these parameters affects the formation of the oscillatory potential and ITD coding is examined in the accompanying paper (Ashida et al., 2013).
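A direct simulation of $g_{syn}(t)$ can be sketched by binning all spikes and convolving with the alpha kernel. The values of h and tau in the commented usage line are placeholders, and phase_locked_spikes refers to the hypothetical generator sketched above:

```python
import numpy as np

def compound_epsg(spike_trains, h, tau, duration, dt):
    """g_syn(t): all spikes of all M fibers filtered by the alpha function
    alpha(t) = (h*t/tau) * exp(1 - t/tau)."""
    t = np.arange(0.0, duration, dt)
    kt = np.arange(0.0, 10.0 * tau, dt)              # kernel support
    kernel = (h * kt / tau) * np.exp(1.0 - kt / tau)
    counts = np.zeros_like(t)                        # binned spike counts
    for train in spike_trains:
        idx = (np.asarray(train) / dt).astype(int)
        np.add.at(counts, idx[idx < len(t)], 1)
    return np.convolve(counts, kernel)[: len(t)]     # causal filtering

# Example usage with assumed values:
# g = compound_epsg([phase_locked_spikes(500.0, 1.5, 4000.0, 1.0, rng)
#                    for _ in range(80)], h=2e-9, tau=1e-4,
#                   duration=1.0, dt=1e-6)
```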
SIMULATING UNITARY SYNAPTIC INPUTS
In our simulations, the values of the half peak width W (= 2.446τ) and height H (Table 1) were determined so as to reproduce the sound analogue potentials observed in experiments (Funabiki et al., 2011). With these parameter values, the average total conductance is $D_G = SM\lambda_0 = eH\tau M\lambda_0 = 21.7$ nS [see Equation (5) in Results]. Note that, since we are focusing on the steady-state ITD computation in NL, transient effects such as short-term synaptic plasticity (Kuba et al., 2002; Cook et al., 2003) are not explicitly included in the model; the value of H is assumed to be at the corresponding steady-state input level.
SIMULATING NL MEMBRANE
A Hodgkin-Huxley-type conductance-based single-compartment model (Hodgkin and Huxley, 1952; Koch, 1999; Gerstner and Kistler, 2003) was used to simulate the membrane potential dynamics of the NL neuron (Figure 1C). The model equations and parameters (Table 2) are the same as those we used in our previous study (Funabiki et al., 2011). The single somatic compartment has leak and K_LVA conductances. The amounts of these conductances were determined so that the membrane resistance of the soma at −61 mV would be about 4.4 MΩ (the membrane time constant was about 0.1 ms), similar to the experimental data (Funabiki et al., 2011). The membrane capacitance was determined from the reported size and shape of the NL neuron (Carr and Konishi, 1990; Carr and Boudreau, 1993a).

Table 2 (selected parameter values): reversal potential of the leak current, E_L = −60 mV; reversal potential of the potassium current, E_K = −75 mV; reversal potential of the synaptic current, E_syn = 0 mV; temperature coefficient, Q_10 = 2.5; temperature, T = 40 °C. The model consists of membrane capacitance, leak conductance, and K_LVA conductance. The kinetics of the K_LVA conductance were taken from a study of chicken NM (Rathouz and Trussell, 1998). Parameter values are the same as those used in our previous study (Funabiki et al., 2011) and are fixed in this paper.

The slow GABAergic input (Funabiki et al., 1998; Kuo et al., 2009), which does not lock to high-frequency stimuli (Yang et al., 1999; Coleman et al., 2011), and other slow conductances such as I_h (Yamada et al., 2005; Khurana et al., 2011) were implicitly included in the constant leak conductance. Sodium and high-voltage-activated potassium conductances, which are required for spike generation, were not included in the model, because spikes in the NL neuron are considered to be generated at the first node of Ranvier (Funabiki et al., 2011), located about 60 μm away from the soma (Carr and Boudreau, 1993b), and because spike generation at the node does not significantly affect the integration of the synaptic input at the soma (Ashida et al., 2007). All the synaptic input is considered to occur at the cell body because the dendrites surrounding the soma of the owl's NL are short and stubby (Carr and Konishi, 1990; Carr and Boudreau, 1993a; Kuokkanen et al., 2010). Numerical integration was performed using the forward Euler method with a time increment of 0.1 μs.
ANALYSIS OF SIMULATION DATA
We obtained 1100-ms-long simulated traces of the conductance input and membrane potential for each parameter set. Discarding the first and the last 50 ms, we used 1000-ms traces for further analyses. To extract the component that oscillates at the stimulus frequency, a trace x(t) was fitted by a cosine function $y(t) = D_0 + A_0 \cos(2\pi f_s t + \phi)$, with $f_s$ being the stimulus frequency, t being time, and $\phi$ being the phase shift. $D_0$ and $A_0$ of the fitting function were, respectively, regarded as the "DC amplitude" and the "AC amplitude" of the trace. By subtracting the fitted cosine function y(t) from the original trace x(t), we obtained the "noise trace" z(t) = x(t) − y(t). The time-averaged standard deviation of the noise trace z(t) was regarded as the "noise amplitude" (see Figures 3A,B for an example).
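Since $A_0 \cos(2\pi f_s t + \phi) = a\cos(2\pi f_s t) + b\sin(2\pi f_s t)$, the fit is linear in $(D_0, a, b)$ and reduces to ordinary least squares. A minimal sketch:

```python
import numpy as np

def fit_cosine(t, x, f_s):
    """Least-squares fit of x(t) by y(t) = D0 + A0*cos(2*pi*f_s*t + phi).

    Returns D0 (DC amplitude), A0 (AC amplitude), and the standard
    deviation of the residual noise trace z(t) = x(t) - y(t)."""
    design = np.column_stack([np.ones_like(t),
                              np.cos(2 * np.pi * f_s * t),
                              np.sin(2 * np.pi * f_s * t)])
    coef, *_ = np.linalg.lstsq(design, x, rcond=None)
    d0, a, b = coef
    a0 = np.hypot(a, b)                    # AC amplitude A0
    noise = np.std(x - design @ coef)      # residual "noise amplitude"
    return d0, a0, noise
```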
In the frequency analyses, the 1000-ms trace was broken into ten 100-ms segments and resampled at 327,680 Hz. Each segment, consisting of 32,768 (= 2^15) data points, was Fourier-transformed, with the frequency resolution being 10 Hz and the Nyquist frequency being 160 kHz. To derive the PSD, the absolute values of the Fourier transform were squared and averaged over the 10 segments to reduce jitter in the PSD curve (Bair et al., 1994).
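A generic sketch of this segment-averaged PSD estimate (the exact resampling to 327,680 Hz used in the paper is omitted here):

```python
import numpy as np

def averaged_psd(x, fs, n_seg=10):
    """Segment-averaged PSD: split the trace into n_seg pieces, Fourier
    transform each, and average the squared magnitudes to reduce jitter."""
    seg_len = len(x) // n_seg
    seg = np.reshape(x[: n_seg * seg_len], (n_seg, seg_len))
    psd = (np.abs(np.fft.rfft(seg, axis=1)) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return freqs, psd
```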
RESULTS
Phase-locked spiking activity of converging presynaptic fibers gives rise to an oscillatory membrane potential in the target neuron (Gerstner et al., 1996; Reyes et al., 1996; Kempter et al., 1998; Ashida et al., 2007; Slee et al., 2010). In the following sections, we derive analytical expressions that relate the model parameters to the DC (average input), AC (signal at the locked frequency), and noise (other frequency components) levels of the model synaptic input and the membrane potential. Then we test our theoretical results using simulations.
DC, AC, AND NOISE OF THE SYNAPTIC INPUT
We first consider the inhomogeneous Poisson spike sequence filtered by the synaptic process modeled by an alpha function (see Materials and Methods for definitions). The filtered synaptic input x(t) of each input fiber spiking at an average rate of $\lambda_0$ can be written as the convolution of the spike sequence and the alpha function, i.e., $x(t) = n(t) * \alpha(t)$. The power spectrum $P_x(f)$ of the filtered sequence is $P_x(f) = P_n(f)|F_\alpha(f)|^2$. Using Equations (1) and (3), the integration of the noise part of $P_x(f)$ over the entire frequency range $(-\infty, \infty)$ can be calculated as

$\int_{-\infty}^{\infty} \lambda_0 |F_\alpha(f)|^2 \, df = \frac{\lambda_0 S^2}{4\tau}.$

Thus the standard deviation of the noise is $\frac{S}{2}\sqrt{\lambda_0/\tau}$, and the DC component of the sequence is $\lambda_0 S$ [see Equation (1) and the following text]. The AC component (at the fundamental frequency $\nu$) is $\frac{2r\lambda_0 S}{1 + (2\pi\nu\tau)^2}$, with r being the VS (note that Peak = √2 RMS). If there are M independent sources of inhomogeneous Poisson spike sequences locked to the same phase, $\lambda_0$ is replaced by $M\lambda_0$.
Therefore the average magnitude $D_G$ of the compound synaptic input conductance $g_{syn}(t)$ (Equation 4) is

$D_G = S M \lambda_0,$  (5)

where $S = eH\tau$. The magnitude $A_G$ of the signal component of the input conductance $g_{syn}(t)$ is

$A_G = \frac{2 r M \lambda_0 S}{1 + (2\pi f_s \tau)^2},$  (6)

with r being the VS of the input spike sequences, which are phase-locked to the stimulus frequency $f_s$. Similarly, the magnitude $L_k$ of the k-th harmonic is

$L_k = \frac{2 r_k M \lambda_0 S}{1 + (2\pi k f_s \tau)^2},$  (7)

where $r_k$ is the VS at the k-th harmonic frequency (e.g., Figure 2B). Note that $A_G = L_1$. The magnitude $N_G$ of the noise, measured by its standard deviation, is

$N_G = \frac{S}{2} \sqrt{\frac{M \lambda_0}{\tau}}.$  (8)

Equations 5, 6, and 8 relate the DC, AC, and noise components of the synaptic input to the model parameters. Both $A_G$ and $N_G$ are linear in the average input level $D_G$. $A_G$ is also linear in the VS, denoted by r, and decays with input frequency $f_s$ due to the low-pass property of the synaptic input.
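Equations 5, 6, and 8 can be evaluated directly. The sketch below uses placeholder parameters chosen only so that $D_G$ matches the reported 21.7 nS; they are not the actual values of Table 1:

```python
import numpy as np

# Placeholder parameters (NOT the values of Table 1):
M, lam0, r = 80, 500.0, 0.6        # fibers, spikes/s, vector strength
H, tau = 2e-9, 1e-4                # EPSG peak [S] and time constant [s]
f_s = 4000.0                       # stimulus frequency [Hz]

S = np.e * H * tau                 # area under the alpha function
D_G = S * M * lam0                 # Eq. (5): comes out as 21.7 nS here
A_G = 2 * r * M * lam0 * S / (1 + (2 * np.pi * f_s * tau) ** 2)   # Eq. (6)
N_G = (S / 2) * np.sqrt(M * lam0 / tau)                           # Eq. (8)
print(D_G, A_G, N_G)
```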
LINEARIZED RESPONSE OF THE SINGLE COMPARTMENT NL MEMBRANE
Following Mauro et al. (1970), Koch (1999), and Richardson et al. (2003), we derive the linear membrane impedance of the RC membrane with the K_LVA conductance. The dynamics of the membrane potential V(t) and the K_LVA activation variable d(V, t) are, respectively, written as:

$C \, dV/dt = -g_L (V - E_L) - g_K \, d \, (V - E_K) + I_{ext},$
$\tau_d(V) \, dd/dt = d_{\infty}(V) - d,$

with $I_{ext}$ being the external input. We linearize these equations around the holding potential V = V*. By denoting $v(t) := V(t) - V^*$ and $\delta(t) := d(t) - d_\infty(V^*)$, and assuming that the displacement from the holding potential V* is small, we fix $\tau_d$ at V*, drop the second-order term $\delta v$, and use the linear approximation of $d_\infty$ around V*. Introducing a new variable $w := \delta/d'_\infty(V^*)$ and new parameters $g_v := g_L + g_K d_\infty(V^*)$ and $g_w := g_K d'_\infty(V^*)(V^* - E_K)$, the linearized dynamics of v and w take the form of a two-variable linear system. To obtain the linearized membrane impedance, we set $I_{ext} = I_{DC} + I_{AC}\cos(2\pi f t)$ and solve the linear equations to yield $v(t) = v_{trans}(t) + |Z(f)| I_{AC} \cos(2\pi f t - \eta(f))$, with $v_{trans}(t)$ being the transient response and $\eta(f)$ being the phase lag. The magnitude of the impedance can be calculated as

$|Z(f)| = \left| g_v + \frac{g_w}{1 + 2\pi i f \tau_d} + 2\pi i f C \right|^{-1},$  (9)

which reduces to $\zeta(f) = [g_v^2 + (2\pi f C)^2]^{-1/2}$, the impedance of the simple RC membrane, when $g_w = 0$. For large f, the membrane impedance |Z(f)| decays according to $1/(2\pi C f)$ (see Figure 1D).
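Equation 9 is straightforward to evaluate numerically. The following sketch uses this standard quasi-active form with illustrative parameter values (not the fitted NL parameters); at f = 0 it gives a few megaohms, of the order of the 4.4 MΩ membrane resistance quoted above:

```python
import numpy as np

def impedance(f, c_m, g_v, g_w, tau_d):
    """Quasi-active impedance Z(f) of an RC membrane with one slow
    K_LVA-like gating variable (complex-valued)."""
    return 1.0 / (g_v + g_w / (1.0 + 2j * np.pi * f * tau_d)
                  + 2j * np.pi * f * c_m)

f = np.logspace(2, 4, 5)                 # 100 Hz .. 10 kHz
z = impedance(f, c_m=23e-12, g_v=150e-9, g_w=80e-9, tau_d=1e-3)
print(np.abs(z) / 1e6)                   # |Z(f)| in megaohms
# A_V then follows as A_G * (E_syn - V*) * |Z(f_s)| (Equation 10).
```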
AC AND NOISE OF THE MEMBRANE POTENTIAL
In the preceding sections, we obtained equations for phase-locked synaptic inputs and the effects of the membrane filter. Using these results, we next derive analytical expressions that relate the AC and noise components of the membrane potential to the input parameters, such as the stimulus frequency (locking frequency) $f_s$, the number M of presynaptic NM fibers, their mean spike rate $\lambda_0$, their vector strength r, the synaptic time constant $\tau$, and the membrane impedance Z(f).
To calculate the magnitudes of the AC ($A_V$) and noise ($N_V$) components of the membrane potential, we incorporate the linear effects of the driving voltage $(E_{syn} - V^*)$ and the membrane impedance Z(f). Using Equations 1, 3, 6, and 9, we have

$A_V = A_G \, (E_{syn} - V^*) \, |Z(f_s)|,$  (10)

$N_V = (E_{syn} - V^*) \sqrt{\int_{-\infty}^{\infty} M \lambda_0 \, |F_\alpha(f)|^2 \, |Z(f)|^2 \, df}.$  (11)

The holding potential V* here satisfies the steady-state equation

$g_L (V^* - E_L) + g_K d_\infty(V^*)(V^* - E_K) + D_G (V^* - E_{syn}) = 0.$

Equations 10 and 11 describe the AC and noise components of the (sound analogue) membrane potential. Both $A_V$ and $N_V$ are linear in the average input $D_G$. The AC amplitude of the membrane potential $A_V$ (i.e., the sound analogue potential) is also linear in that of the synaptic input $A_G$, because of the linear membrane response. The validity of the linear approximation will be examined in the next section. Although the membrane response is assumed to be linear at each frequency, the noise amplitude of the membrane potential $N_V$ is not linear in that of the synaptic input $N_G$, because the effect of the membrane filter differs between frequencies (i.e., high-frequency noise components are reduced more strongly than low-frequency components; see Figure 1D).
NUMERICAL SIMULATIONS
In order to test the validity of the theoretical results obtained above, we carried out numerical simulations. The basic settings of our simulation are the same as those in our previous study (Funabiki et al., 2011). Our model consists of NM fibers and an NL cell body, while the phase-locked spiking activity of each NM fiber is modeled as the von Mises distribution. In the following simulations and analyses, we assume that ipsi-and contralateral NM inputs arrive perfectly in-phase. The NL neuron is modeled as a non-excitable single compartment with leak and K LVA conductances. The large K LVA conductance greatly reduces the membrane impedance in the low frequency region (Figure 1D), yielding a very short membrane time constant of about 0.1 ms. Since the roles of the K LVA conductance have been studied and discussed extensively (Manis and Marx, 1991;Reyes et al., 1994;Svirskis et al., 2002;Rothman and Manis, 2003;Day et al., 2008;Gai et al., 2009;Jercog et al., 2010;Mathews et al., 2010), we do not investigate its effects further in this study. The kinetics of the K LVA conductance was adopted from a study of the chick NM (Rathouz and Trussell, 1998).
The simulated synaptic input (Figure 3A) is oscillatory and can be decomposed into a signal (AC) component and a noise component. The amplitudes of the DC, AC, and noise components of the simulated synaptic conductance were 21.7, 12.7, and 4.6 nS, respectively. These simulation results agreed well with the theoretical predictions of $D_G$ = 21.7, $A_G$ = 12.7, and $N_G$ = 4.4 nS (Equations 5, 6, and 8). The periodic synaptic input induces an oscillatory membrane potential (Figure 3B). The magnitudes of the AC and noise components of the simulated potential traces were 1.25 and 0.94 mV, respectively. These values matched the theoretical predictions of $A_V$ = 1.25 mV (Equation 10) and $N_V$ = 1.03 mV (Equation 11).
The power spectral densities of the simulated input ( Figure 3C) and the membrane potential ( Figure 3D) show large peaks at the signal frequency and smaller peaks at higher harmonics. The peak height of the second harmonic of the simulated membrane potential is over two orders of magnitude smaller than the main peak ( Figure 3D). The simulated power spectral densities are in excellent agreement with the theoretical prediction (gray curves and filled circles in Figures 3C,D). Due to the low-pass property of the membrane (Figure 1D), noise in the membrane potential consists mainly of the frequency components below the signal frequency. In the accompanying paper (Ashida et al., 2013), we systematically examine the roles of the number of converging NM fiber on the NL neuron, their average spike rate, their degree of phase-locking and the synaptic time constant to investigate how these parameters affect the formation of sound analogue potential in the NL neuron.
FREQUENCY DEPENDENCE
Simulated sound analogue potentials are frequency dependent (Figure 4A), even when all the other parameters including VS, the synaptic time constant and membrane properties are fixed. Effects of these parameters are studied in the accompanying paper (Ashida et al., 2013). For low frequencies (1-2 kHz), AC components are generally large, while for high frequencies (6-8 kHz), the simulated AC amplitudes are less than 1 mV (Figures 4A,D). This dramatic decrease in the AC component is due to the filtering properties of the synapse (Equation 3) and the membrane (Equation 9, Figure 1D). The higher the signal frequency, the more the AC component is diminished by the effect of these lowpass filters ( Figure 4B). To obtain a sound analogue potential exceeding 1 mV at over 6 kHz, both the synaptic and membrane time constants must be a few times smaller than the values used in our simulation. Membrane time constants of mammalian outer hair cells decrease with their characteristic frequency (Johnson et al., 2011). Similar frequency dependence may exist in auditory brainstem neurons.
Since all the simulation parameters except frequency are fixed, the baseline noise level of the PSD curve does not change with frequency (Equation 11, Figure 4B). For low frequencies (1-2 kHz), however, the total amount of noise is slightly higher than for other frequencies because of the second harmonic (Figures 4C,D). At the level of the synaptic conductance (Figure 4C), the effect of the second harmonic is more prominent than at the level of the membrane potential (Figure 4D), where the membrane filter (Figure 1D) further reduces high-frequency components. The overall contribution of the second harmonic to the membrane potential noise is therefore limited to frequencies below 2 kHz (Figure 4D). It should also be noted that, for these low frequencies (e.g., Figure 4A, 1 kHz), the simulated traces do not resemble pure sinusoids, because higher harmonics skew the waveform.

Figure 4 (caption fragment): Simulations of the synaptic input in NL with different stimulus frequencies. All the parameters except the stimulus frequency are fixed (see Table 2).

Our analytical calculations for the DC conductance (Equation 5), AC conductance (Equation 6) and AC potential (Equation 10) match the simulation results well (Figures 4C,D) for frequencies of 2 kHz and above. For frequencies below 2 kHz, however, there is a slight discrepancy between the theoretical prediction of the membrane AC (7.43 mV) and its simulated value (6.67 mV). Also, for low frequencies, the second harmonic needs to be considered to predict the conductance noise precisely (Figure 4C). As mentioned above, the low-pass membrane filter effectively reduces higher harmonics in the membrane potential, resulting in a smaller disagreement between the theoretical prediction and the simulation of the noise components (compare the noise amplitudes in Figures 4C,D).
DISCUSSION
The sound analogue membrane potential, which is created by a "volley" of phase-locked inputs (Wever and Bray, 1930;Joris and Smith, 2008), underlies coincidence detection in the owl's NL neurons (Funabiki et al., 2011). In principle, phase-locked input sequences from the NM axons are filtered by synaptic and membrane processes, inducing oscillatory membrane potentials in NL ( Figure 1A). The NL neuron linearly converts the AC signal component of the oscillatory potential into output spike rates (Funabiki et al., 2011). In the present paper, we derive theoretical equations that relate presynaptic, synaptic, and postsynaptic factors with the DC, AC, and noise components of the sound analogue potential, and test the agreement between theoretical predictions and numerical simulations. In the accompanying paper (Ashida et al., 2013), we carry out further simulations and analyses to examine how these factors affect the ITD coding in NL.
THEORETICAL FORMULATIONS
The main aim of this paper is to provide a detailed theoretical description of how phase-locked synaptic inputs lead to oscillatory membrane potentials. Phase-locked spiking activity was modeled as an inhomogeneous Poisson process with a periodic intensity function, and the PSD of the spike sequence was analytically calculated (Equations 1, 2). The presynaptic spikes were summed and then filtered by the synaptic conductance (Equation 3) and the membrane (Equation 9), resulting in the oscillatory membrane potential ( Figure 3B). Our model parameters are based on previous results on the owl's auditory system, but the analysis technique used here can be applied to other systems where phase-locking plays a role in information processing. These systems may include the electrosensory lateral line lobe (Kawasaki and Guo, 1996), olfactory system (Stopfer et al., 2003), barrel cortex (Ewert et al., 2008), visual cortex (Gray and Singer, 1989), and the hippocampus (Harris et al., 2002;Diba and Buzsáki, 2008;Mizuseki et al., 2009).
AGREEMENT OF THEORY AND SIMULATION
The power spectral densities of the simulated waveforms also showed excellent agreement with the theoretical predictions, including the peak heights and the overall noise levels (Figures 3C,D). In general, predictions for the membrane potential are worse than those for the synaptic conductance because the effects of the membrane (Equation 9) are further included in the calculation (compare Equations 6, 8 with 10, 11). Especially for sound analogue potentials of over 5 mV (e.g., Figures 4A,D, 1000 Hz), the assumptions for the linear membrane approximation no longer hold, resulting in a discrepancy between the analytical value and simulation results (see Koch, 1999, chapter 10 and references therein for related discussion on the validity of the linear approximation). Thus, the theoretical formulation, which can predict the property of the oscillatory membrane potential without doing computationally-demanding simulations, is most useful when the AC amplitude is within the range of a few mV.
The theoretical predictions for the DC and AC components agreed well with the simulation results (Figures 4C,D). The analytical predictions for the noise amplitudes were also comparable to the simulated values, but slightly worse than the predictions for the DC and AC, because not one but all frequency components contribute to the calculation of the noise amplitude (Equation 11). The prediction performance was also poorer for the low-frequency AC (2 kHz or below; Figure 4D). These disagreements stem from the fact that the effects of the K_LVA conductance are most prominent at low frequencies below 2 kHz (Figure 1D). Violating the assumptions of the linear approximation is more likely to affect the low-frequency parts of the analytical results.
FREQUENCY DEPENDENCE
The degree of phase-locking of the presynaptic NM fibers generally decreases with frequency (Köppl, 1997). In order to facilitate comparisons between different frequencies, however, we fixed this parameter to a typical value of 2-4 kHz NM neurons (VS = 0.6) in our simulation (Figure 4). Therefore, the amplitude of the AC component in NL neurons from the high best frequency (>6 kHz) regions could be much smaller than a few mV (Figure 4D), implying that high frequency NL neurons should be extremely sensitive to small AC signals. Simulations suggest that axonal Na conductance may amplify high frequency signals (Ashida et al., 2007). However, how and what cellular and synaptic properties of these neurons enable high frequency ITD computation in vivo remains to be investigated. In the accompanying paper (Ashida et al., 2013), effects of changing VS on ITD coding are examined in more detail.
HIGHER HARMONICS AND NOISE
In our definition, all the higher harmonics are considered noise, because their ITD dependence is different from that of the main AC signal (Ashida et al., 2007; Slee et al., 2010). Previous studies of the chicken NL in vitro pointed out that these higher harmonics could hinder ITD coding in NL neurons (Reyes et al., 1996; Slee et al., 2010). Our simulation results suggest that higher harmonics should be considered for frequencies below 2 kHz (Figures 4C,D). For higher-frequency NL neurons, the low-pass filter properties of the synaptic and membrane processes effectively cut off the higher harmonics, minimizing their effects on ITD computation. Furthermore, owls' NL neurons with mid-to-high best frequencies (over 3 kHz) recorded in vivo do not show a clear second harmonic (Funabiki et al., 2011), suggesting that higher harmonics play little or no role in the owl's computation of ITD at these frequencies. The best frequency of NL neurons in owls ranges up to 7.5-8 kHz (Carr and Konishi, 1990; Peña et al., 2001), whereas the frequency limit of the chicken NL is 3.5-4 kHz (Rubel and Parks, 1975). Thus, the effects of higher harmonics on ITD coding would be more salient in the chicken than in the owl.
The amplitudes of higher harmonics increase non-linearly with the amplitude of the fundamental frequency (Figures 2C,F). For large VS values (e.g., VS > 0.7), higher harmonics increase more rapidly than the signal, resulting in a faster increase in noise. In our simulation settings, the amplitude of the second harmonic of 1.3 mV at 2 kHz for VS = 0.6 increases up to 6.4 mV for VS = 1.0, showing a three times faster increase than the AC component at 1 kHz. These results suggest that perfect phase-locking may not always be beneficial to ITD coding. The owl's auditory nerve recordings show a prominent plateau of VS at about 0.7 at 1.5-3 kHz (Köppl, 1997). This plateau might thus be related to an optimization strategy for the noise level in ITD computation. Noise may affect frequency tuning, temporal coding, and information capacity (e.g., Brunel et al., 2001; Richardson et al., 2003; Butts and Goldman, 2006; Gai et al., 2009; Rossant et al., 2011; see also Faisal et al., 2008, and McDonnell and Abbott, 2009, for recent reviews). Further investigation is necessary to conclude how higher harmonics and other neuronal noise positively or negatively contribute to high-frequency ITD detection through oscillatory synaptic inputs. | 2015-07-17T21:52:16.000Z | 2013-11-08T00:00:00.000 | {
"year": 2013,
"sha1": "298f4a629ce5cfcf370fc80b52f41384ad21a690",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fncom.2013.00151/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "298f4a629ce5cfcf370fc80b52f41384ad21a690",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
3609933 | pes2o/s2orc | v3-fos-license | Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware
The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
Introduction
The well-known IoT paradigm comprises numerous devices that connect to the Internet and contribute to world-wide interconnectivity. Low energy consumption is generally expected of the connected devices, which are at the same time confronted with challenges such as low expected manufacturing cost, mobility while being connected, and deployment in often difficult-to-reach places. This makes minimizing the energy consumption, while still fulfilling strict reliability demands, one of the major challenges of IoT communications.
To achieve high reliability with minimal power consumption, much research has been conducted on MAC protocols featuring these requirements [1]. An important development was the IEEE 802.15.4 MAC layer and, more specifically, the IEEE 802.15.4e MAC amendment that proposed the TSCH mode. TSCH-enabled networks achieve a reliability of 99.999% with minimal power consumption, proving to be a promising solution for wireless industrial networks. TSCH uses channel hopping to improve reliability by minimizing the effects of external interference and multi-path fading. In order to limit the power consumption, it uses a time-synchronized schedule that tells a node exactly when to send and receive data and thus avoids wasting energy during contention periods and idle listening.
Background and Related Work
In this section, we briefly introduce TSCH and the open-source OpenWSN project that is used as firmware. Afterwards, the used OpenMote hardware is discussed. Finally, we compare our work to existing energy consumption models.
Time-Slotted Channel Hopping
In TSCH networks, every node follows a time-synchronized schedule. This schedule instructs every node about exactly what to do and avoids wasting valuable energy. The TSCH schedule is divided into time slots. The duration of a time slot is typically 10 ms or 15 ms and sufficient to transmit a packet of the maximum size of 127 bytes, immediately followed by an optional acknowledgment frame indicating that the packet was successfully received. Multiple time slots are grouped into a slot frame, and the size of a slot frame defines the width of the schedule. These slot frames repeat continuously over time. TSCH also allows one to use multiple frequencies, leading to a two-dimensional matrix of cells. The number of available frequencies actually determines the height of the schedule.
A schedule can contain four possible cell types: TX, RX, shared and off. The first two indicate that the node should send or receive, respectively. Shared cells can be used by any node, and a contention-based back-off mechanism manages the access to it. They are used to synchronize, join and boot up the network [6]. An off cell indicates that the radio of the node should be turned off. The cells in a schedule are updated dynamically by a so-called scheduling function that takes into account the necessary resources to handle the traffic load and prevents wasting resources.
TSCH also uses channel hopping to combat multi-path fading and external interference [7]. This channel hopping depends on the Absolute Sequence Number (ASN) and the number of channels. The exact frequency on which two nodes will communicate is determined by

frequency = F((ASN + channelOffset) mod nFreq),

where F is a lookup table containing the set of available channels, channelOffset is the channel offset of the time slot in the schedule, and nFreq is the number of available frequencies. The slot frame size (i.e., the number of slots in a single slot frame) should be a prime number in order to ensure that every frequency is used. Figure 1 illustrates an example of a TSCH schedule with a slot frame size of 101 cells and 16 channel offsets. It represents a combination of the schedules of each individual node. Each cell in the schedule represents a specific time slot and channel offset in which directed communication between nodes can be assigned. These assigned cells can either be dedicated to a single transmitter (e.g., from node W to node Z in the cell with a slot offset of one and a channel offset of two), or they can be shared between multiple nodes (e.g., the shared cell with a slot offset of zero and a channel offset of one). All other cells are considered off cells.
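The hopping rule reduces to a one-line table lookup. In the sketch below, the 16-entry channel template is an assumption (the 2.4 GHz IEEE 802.15.4 channels 11-26), and the assertions illustrate how the same cell hops to a different physical channel in consecutive slot frames:

```python
# Assumed 16-entry hopping template: IEEE 802.15.4 channels 11-26.
HOP_TEMPLATE = list(range(11, 27))

def cell_frequency(asn, channel_offset, template=HOP_TEMPLATE):
    """frequency = F((ASN + channelOffset) mod nFreq)"""
    return template[(asn + channel_offset) % len(template)]

# With a 101-slot frame (coprime with the 16 channels), the same cell
# maps to a different channel in the next slot frame:
assert cell_frequency(asn=0, channel_offset=2) == 13
assert cell_frequency(asn=101, channel_offset=2) == 18
```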
This article focuses on TSCH in IEEE 802.15.4e, but the research is also easily transferable to other protocols using TSCH, e.g., WirelessHART and ISA100.11a [8,9].
OpenWSN
OpenWSN is an open-source project that implements the IPv6 over the TSCH mode of IEEE 802.15.4e (6TiSCH) architecture [10]. The 6TiSCH network architecture tries to standardize IPv6 on top of the TSCH mode of IEEE 802.15.4e and, as such, bridge the gap between deterministic industrial networks and traditional IP networks [11]. It aims to provide low latency and high reliability for low-power, critical wireless applications. As such, the OpenWSN firmware provides a complete protocol stack based on IoT standards such as IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN), the Routing Protocol for Low-power and Lossy networks (RPL), and the Constrained Application Protocol (CoAP) [12][13][14], as shown in Figure 2. The newest version of the OpenWSN firmware was used when rebuilding and extending the energy model [15]. The hierarchical design of the project makes it relatively easy to port the project to new hardware platforms. Hardware drivers for most common IoT hardware are already available as part of the OpenWSN project itself. Next to the firmware, useful software such as the OpenVisualizer is also provided. Although the main use of the OpenVisualizer project is to connect the OpenWSN network to the Internet, it also provides the ability to monitor the network. The tool shows the internal state of all the motes that are physically connected to the computer running the OpenVisualizer, e.g., the neighbor table, scheduling table and packet queue. It also has the ability to run simulated motes and to debug the communication with Wireshark [16].
OpenMote Hardware
The measurements presented in this paper are performed using OpenMote, a modular open-hardware ecosystem designed for the industrial IoT [3]. The platform was developed at UC Berkeley and is designed to efficiently implement IoT standards such as 6TiSCH.
OpenMote-CC2538 is the core of the OpenMote hardware ecosystem. It is the most important component, and other components (e.g., the OpenBattery) are considered to be extensions of it. It features a Texas Instruments CC2538 System-on-a-Chip (SoC) that consists of a 32-MHz micro-controller with 32 kB of RAM and an IEEE 802.15.4-compliant 2.4 GHz radio.
The OpenUSB version used in this article has a CC1200 radio chip. Unlike the CC2538, which has a 2.4 GHz radio, the CC1200 is a radio transceiver that operates in the 900-MHz range, e.g., the 868 MHz band in Europe. This allows for longer-range communication between the motes. As OpenUSB only holds the CC1200 radio transceiver, it needs to be connected to OpenMote-CC2538, which holds the microprocessor to control it.
Currently, a new board called OpenMote B is being released. OpenMote B will be a next-generation, dual-band OpenMote device [4]. It provides a dual-radio interface for short-and longer range communication, combined on one board.
TSCH Energy Modeling
As minimizing the energy consumption is one of the major challenges of IoT networks, much research has already been conducted on this topic. Some of the research already focused on TSCH energy modeling.
Some works target specific features in TSCH. De Guglielmo et al. proposed an analytical model of the IEEE 802.15.4e TSCH CSMA-CA algorithm that is used in shared time slots [17]. The authors also observed that the capture effect has a significant impact on the performance of the CSMA-CA algorithm. Papadopoulos et al. investigated the impact of the guard time in TSCH [18]. The authors decreased the guard time duration when motes were closer to their sink and concluded that this results in significant savings in energy consumption without compromising network reliability. While these works only aim at specific TSCH elements, the proposed work provides an energy consumption model for the whole of the IEEE 802.15.4e TSCH mode. Other works such as Juc et al. compared the performance of the TSCH and Deterministic and Synchronous Multichannel Extension (DSME) modes of 802.15.4e [19]. The authors do not propose a model themselves. They observed that TSCH mode tends to consume more energy than DSME mode. This is due to the large fixed guard time in TSCH and because DSME can aggregate multiple acknowledgments and transmit a single group of acknowledgments.
Finally, Vilajosana et al. presented an energy model for TSCH networks, using the OpenMote and OpenWSN for their experimental validation [2]. The values from the model were compared to measurements on the GINA and OpenMote-STM32 platforms. Our paper continues the work of Vilajosana, but explores several differences and improvements. As such, we propose a model with an extra time slot type (i.e., TxDataRxNoAck), provide an extended and a more up-to-date set of states per time slot and extend the model to support variable packet sizes. Furthermore, the OpenWSN firmware has been continuously updated, and the current software version has changed substantially since the version used by Vilajosana in 2013. Finally, by using the OpenMote-CC2538 and OpenUSB board, this paper focuses on state-of-the-art hardware. This allows us to consider the TSCH energy consumption in both the 868 MHz and 2.4 GHz band. To the best of our knowledge, we are the first to do this. We also explicitly look at the difference in power consumption between using a SoC and the case with a separate micro-controller and radio chip. All the steps in developing the model are explained in detail, allowing it to be used for different types of hardware by simply changing the measured consumption values.
TSCH Energy Model
In this section, the proposed TSCH energy model is introduced. First, all types of time slots are discussed, followed by a more detailed examination of the states in these time slots. Afterwards, the time slot energy model is presented. Finally, we explain how the slot model could be adapted for use with different hardware. The implementation of the proposed model can be found in [20].
TSCH Time Slots
A TSCH schedule can contain different types of time slots, i.e., cell types, to indicate that a node should transmit, listen, or put its radio to sleep. In IEEE 802.15.4e, seven different types of time slots can be identified:
- TxDataRxAck: the mote sends a frame and receives the corresponding ACK.
- TxData: the mote sends a frame for which no ACK is expected (e.g., a broadcast).
- RxDataTxAck: the mote receives a frame and answers it with an ACK.
- RxData: the mote receives a frame for which no ACK is required.
- RxIdle: the mote listens for a frame, but nothing is received.
- Sleep: the mote keeps its radio off during the whole slot.
- TxDataRxNoAck: the mote sends a frame and expects an ACK, but no ACK is received. This could be caused by a collision of the data frame.
The proposed model divides each time slot into different states. Figure 3 illustrates this and presents a general overview of the activity of a transmitter and receiver during a TxDataRxAck time slot and RxDataTxAck time slot, respectively. Some of the states seen in Figure 3 consist of two parts: one part where the CPU is active and one part where the CPU is sleeping. These two parts are considered as separate states in our model. The state of the radio in our model only changes at moments when the CPU state changes. This is a simplification as in the real world, the radio state changes slightly before or after this moment, typically while the CPU is active. The remainder of this section explains the TxDataRxAck time slot in full detail. The other slots are modeled similarly, and we limit the discussion to highlighting the differences with the TxDataRxAck slot.
Time Slot TxDataRxAck
The different states of the TxDataRxAck time slot are shown in Figure 3, and Table 1 lists the exact CPU and radio states at each moment. As can be seen in the table, the CPU has two states, i.e., Sleep and Active, while the radio has five states, i.e., Sleep, Idle, Listen, Transmit (TX) and Receive (RX). At the beginning of each time slot, the CPU wakes up and performs the tasks required for any slot. This includes incrementing the ASN and scheduling the next state depending on the type of the slot. The CPU then sleeps again during TxDataOffset until the moment the radio is needed.
During TxDataPrepare, the radio wakes up, the channel is set and the bytes to transmit are loaded into the radio. The duration of this state is variable, mainly because the time necessary to load the bytes depends on the frame size. Since this state always starts at the same offset and has a variable duration, there is some time left between the TxDataPrepare and the actual transmission. During this TxDataReady state, the radio is in Idle mode, while waiting until it is time to transmit. To minimize the energy consumption of the mote, the duration of the TxDataReady state should thus be as short as possible.
The first byte behind the Start-of-Frame Delimiter (SFD) has to be transmitted exactly TxOffset ms after the start of the time slot. In order to do so, the time required to switch the radio from Idle to TX mode has to be taken into account. The duration of the TxDataDelay equals the time between the TX command being sent to the radio and the moment the SFD has been transmitted.
After the RxAckOffset state that follows, during which the mote sleeps, the RxAckPrepare state prepares the radio again by waking it up and setting the correct channel. Any time less than the maximum duration of RxAckPrepare is then spent in the RxAckReady state.
The ACK is transmitted TxAckDelay ms after the end of the TxData state. Because the clocks of the transmitting and receiving node may not be perfectly synchronized, the ACK might arrive slightly earlier or later than expected. The radio is thus turned on at the start of the RxAckListen instead of just in time for the data. If no ACK is received during the Acknowledgment Guard Time (AGT) period, the mote turns off the radio and considers the transmission failed. The duration of the AGT is defined as 1000 µs in OpenWSN. When the clocks between the motes are perfectly synchronized, the RxAckListen state has a duration of AGT/2 plus the time to change the radio from Idle mode to RX mode (which is considered to be instantaneous in OpenWSN).
During the TxProc state, the ACK is read from the radio and the transmission is considered successful when the ACK is valid. The mote also synchronizes its clock based on the offset between TxAckDelay and the actual data reception time, if the ACK came from its parent in the network routing graph. For the remaining part of the time slot, both the CPU and radio are in Sleep mode.
Time Slot RxDataTxAck
This time slot can be considered the opposite of TxDataRxAck. The states used to handle the data in TxDataRxAck correspond to the states handling the ACK in RxDataTxAck, and vice versa. All states of the RxDataTxAck time slot can be found in Table 2.
The guard time for the data is however larger than the AGT that is used for ACKs. The Packet Guard Time (PGT) determines how long the radio listens for the data before the radio is turned off. When no data are received during the PGT period, we classify the time slot as RxIdle instead of RxDataTxAck. In OpenWSN, the PGT is defined as 2600 µs.
Time Slot TxData and RxData
When no ACKs are required (e.g., for broadcasts), only the first half of the time slot is used. During the TxData and RxData slots, the mote sleeps once the data have been transmitted or received. The states for both TxData and RxData are shown in Tables 3 and 4, respectively.
Time Slot RxIdle
When the transmitter has no data to send, the slot that could have been a TxDataRxAck becomes a Sleep slot. However, on the receiver side, a different type of slot is needed to represent the behavior of the node: the RxIdle slot occurs when the receiver expects data, but does not receive anything. The states of RxIdle are shown in Table 5. The behavior of RxIdle is not an error; it simply means that a slot was reserved, but the transmitter did not have any data to send at that moment.
Time Slot Sleep
In time slots where no data have to be transmitted or received, the node sleeps during the whole duration of the slot. The CPU of the node only briefly wakes up at the start of the slot, e.g., to increment the ASN. The states of the Sleep time slot are shown in Table 6.

Time Slot TxDataRxNoAck

There are many error states in OpenWSN. The code goes into an error state when, for example, the radio remains active too long or when the prepare state lasts longer than the maximum allowed duration. It is unlikely that the code would end up in most of these error states unless there is a configuration issue. However, there is one error state that is likely to occur eventually: a missing ACK. In the TxDataRxAck slot, data are transmitted and an ACK is received, but in the slot that we refer to as TxDataRxNoAck, the ACK is expected but not received. In this case, the node stays in the RxAckListen state during the AGT period and does not enter the RxAck state. After the AGT period, the radio goes to sleep during the TxProc and Sleep states, as can be seen in Table 7.
TSCH Energy Consumption Model
Having identified all states per time slot, the model for the charge drawn during a time slot can be constructed. The resulting charge (in coulombs) drawn from the battery during a slot, $Q_{\text{Slot}}$, is represented by:

$$Q_{\text{Slot}} = \sum_{\text{State}} \Delta t_{\text{State}} \cdot I_{\text{State}}, \qquad (1)$$

with $\Delta t_{\text{State}}$ and $I_{\text{State}}$ the state duration and current drawn in each state, respectively. The unit of the duration is milliseconds (ms), while the unit of the current is milliamperes (mA), meaning that the unit of the resulting charge is microcoulombs (µC). Equation (1) can be used to calculate the total charge drawn for each of the slot types discussed in Section 3.1. Subsequently, the model previously proposed by Vilajosana et al. [2] can be employed to calculate the total charge drawn across a slot frame, which in turn can be used to compute the lifetime of a mote. That model, however, has one major shortcoming: it does not consider the actual packet size when calculating the charge drawn by a slot. Instead, it takes the consumed charge values for the maximum packet size and scales those linearly based on the actual packet size:

$$Q^{N_{\text{sent}}}_{\text{slot}} = Q^{\text{maxPktSize}}_{\text{slot}} \cdot \frac{N_{\text{sent}}}{\text{maxPktSize}}, \qquad (2)$$

with $N_{\text{sent}}$ the number of bytes being sent in the packet and maxPktSize the maximum packet size for which measurements were performed. However, this leads to highly inaccurate estimates, especially for small packet sizes, as the duration of most states within the slot is independent of the packet size. In contrast, we propose a more accurate estimation of the charge drawn in a slot, $Q^{N_{\text{sent}}}_{\text{slot}}$, based on actual measurements with different packet sizes. This is achieved by expressing the duration of each state that depends on the packet size as a linear function of the packet size, rather than using a fixed value for the maximum packet size. This is elaborated on in Section 4.
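As a concrete illustration, the following minimal Python sketch evaluates Equation (1) for a hypothetical TxDataRxAck slot; all state names, durations, and currents are illustrative placeholders, not the measured values reported later in this article:

```python
# Minimal sketch of the per-slot charge model: Q_slot = sum(dt_state * I_state).
# Durations (ms) and currents (mA) are illustrative placeholders; note that
# ms * mA yields microcoulombs (uC) directly.

def slot_charge(states):
    """Return the charge drawn in one slot, in microcoulombs (uC)."""
    return sum(dt_ms * i_ma for dt_ms, i_ma in states.values())

# Hypothetical TxDataRxAck slot: {state: (duration_ms, current_mA)},
# summing to the 15 ms slot length used in this article.
tx_data_rx_ack = {
    "TxDataOffset":  (2.0, 0.6),    # CPU sleeping, radio idle
    "TxDataPrepare": (0.9, 4.5),    # CPU active, loading packet into radio
    "TxData":        (4.3, 24.0),   # radio transmitting
    "RxAckListen":   (1.0, 20.0),   # radio listening for the ACK
    "RxAck":         (1.1, 20.0),   # radio receiving the ACK
    "TxProc":        (0.8, 4.5),    # CPU processing the ACK
    "Sleep":         (4.9, 0.013),  # remainder of the slot
}

print(f"Q_slot = {slot_charge(tx_data_rx_ack):.2f} uC")
```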
Different Hardware Support
Since the model has an elaborate set of parameters, adapting the model to different hardware while maintaining an equal level of accuracy is a burdensome task. However, at the cost of a slight decrease in accuracy, the model can easily be simplified in order to apply it to different hardware. For example, one can set the duration of short states to zero (e.g., TxDataDelayStart and RxAckOffsetStart) and only update the states that have the most impact on consumption. Alternatively, the duration can be estimated instead of measured as most durations will be very similar to the ones presented in this article. Furthermore, the consumption of the CPU and radio does not have to be measured: these values can be found in the data sheet of the manufacturer. The resulting model will be slightly less accurate, but no or only a few additional measurements have to be made to use this model to simulate the charge drawn by other TSCH hardware.
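A minimal sketch of this simplification, with purely illustrative state names and a hypothetical data-sheet lookup, could look as follows:

```python
# Sketch of the simplification described above: the duration of short states
# is set to zero and data-sheet currents replace measured ones. State names
# and the lookup structure are illustrative placeholders.

SHORT_STATES = {"TxDataDelayStart", "RxAckOffsetStart"}

def simplify(states, datasheet_current_ma):
    """Return a simplified state table for hardware without measurements.

    states: {name: (duration_ms, measured_current_ma)}
    datasheet_current_ma: {name: current_ma} taken from the manufacturer's
    data sheet for the device mode active in that state.
    """
    simplified = {}
    for name, (dt_ms, measured_ma) in states.items():
        dt = 0.0 if name in SHORT_STATES else dt_ms  # drop short states
        simplified[name] = (dt, datasheet_current_ma.get(name, measured_ma))
    return simplified
```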
Measurements
This section first presents the setup used to measure the duration and energy consumption of each state of each slot type. Afterwards, the measurements of the time slot state durations are discussed together with how the duration values are affected by the packet size. Finally, the consumption of each device state is presented with a detailed discussion for each of the two evaluated radios.
Methodology
In this section, the necessary adaptations to the OpenWSN firmware, that allowed performing the measurements, are briefly explained. Additionally, the two measurement setups for both the state duration and energy consumption measurements are discussed.
Firmware Changes
To perform valid measurements, the firmware code that toggles debug pins and LEDs on the OpenUSB board was disabled. Furthermore, the serial communication code was also completely disabled because even when the OpenUSB is not connected to a computer, the code would still try to output data, unnecessarily increasing power consumption.
In order to prepare the 2.4 GHz driver for the measurements, only small adaptations had to be made to the original firmware. The firmware for the 868 MHz driver however required additional implementation effort, as there were no working drivers for the CC1200 radio chip on the OpenUSB. Based on a branch of the official OpenWSN repository [21], we implemented a working CC1200 radio driver, which can be found in [15].
State Duration Measurements
All state duration measurements were done using the EFM32GG-STK3700 Giant Gecko Starter Kit from Silicon Labs [22]. The setup is shown in Figure 4. Using the Gecko board, an OpenUSB pin was connected to pin PB9 of the Gecko board, enabling the Gecko to measure how long the connected OpenUSB pin was made low. The OpenMote firmware would then make the connected pin low at the beginning of the measurement and high at the end of the measurement. The output was sent over Serial Wire Output (SWO) to the console in the proprietary software Simplicity Studio on the connected computer, where post-processing of the duration data was applied [23]. The duration measurements were averaged in case variability between different measurements was noticed.
Energy Consumption Measurements
In order to perform the different energy consumption measurements, a setup different from the Gecko setup, described in Section 4.1.2, needed to be used. As the consumption of the OpenMote hardware happened to exceed 50 mA (i.e., the maximum of the Gecko measuring range), we switched to using the Keysight N6705B DC Power Analyzer [24]. Using the two-wire mode, the Voltage Common Collector (VCC) and Ground (GND) pins of the OpenMote-CC2538, in the 2.4 GHz measurement setup, and of the OpenUSB with the OpenMote-CC2538 attached, in the 868 MHz measurement setup, were connected to the power supply output of the N6705B, which was configured to provide an input voltage of 3.0 V. This is the nominal voltage of two serially-connected AA batteries, which can be used to power an OpenMote via an OpenUSB or OpenBattery module. The measurement setups are shown in Figure 5a,b, for the 2.4 GHz and 868 MHz measurements, respectively. For the 868 MHz measurements, the OpenUSB has to be attached to the OpenMote-CC2538 because the former only holds a CC1200 radio transceiver and needs the microprocessor on the latter to control it. For most device states, the consumption was averaged over a period of 500 ms. However, some states (e.g., RX and TX states) only last as long as the radio takes to send all bytes. For these states, the average was taken over a period between 3 ms and 4 ms.
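The windowed averaging described above can be sketched as follows; the sample rate and the synthetic trace are assumptions for illustration only:

```python
import numpy as np

# Sketch of the averaging applied to the power-analyzer traces: long-lived
# device states are averaged over 500 ms, short RX/TX states over a 3-4 ms
# window. The sample rate and the trace below are illustrative.

def mean_current(trace_ma, fs_hz, t_start_s, t_end_s):
    """Average current (mA) of a trace between two time instants."""
    i0, i1 = int(t_start_s * fs_hz), int(t_end_s * fs_hz)
    return float(np.mean(trace_ma[i0:i1]))

fs = 50_000                                 # 50 kS/s, assumed
trace = np.random.normal(13.97, 0.05, fs)   # 1 s of synthetic "radio off" data
print(mean_current(trace, fs, 0.1, 0.6))    # 500 ms averaging window
```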
Time Slot State Durations
We measured the duration of each state in every time slot where the CPU is active. The durations in which the CPU is sleeping can then be trivially calculated, using the active durations and the timing constants found in OpenWSN firmware. The total length of a time slot was set to 15 ms. The state durations for all time slots are shown in Tables 8-14.
States do not always have the exact same duration for a variety of reasons. There can be multiple code branches (i.e., different execution paths); the packet size can vary and have an influence; or the duration of an operation can simply be variable (e.g., waking up the CC1200 chip). Therefore, multiple measurements were executed to find a single duration that could be associated with the state.
Changing the mode of the CC1200 radio from Sleep to Idle takes between 246 µs and 343 µs, which causes every state where the radio wakes up to have a variable duration. It only required a few measurements to find that the median for waking up is 268 µs. However, we decided to use the average value instead of the median because it resulted in a slightly more accurate energy consumption prediction. To avoid being susceptible to outliers, the wakeup time was measured over ten thousand times, and an average duration of 273 µs was observed.
For states with multiple code branches, the median value of multiple measurements was chosen. For example, in state TxDataOffsetStart, an Enhanced Beacon (EB) might be sent if and only if there are no data to send. Another example is state TxProc in which the execution path is different when a data packet has no retries left. However, these small variations of the duration only have a limited impact on the total slot consumption.
The durations of states where packets are loaded to and read from the radio were measured before and after the radio was accessed. Afterwards, the communication with the radio for different packet sizes (from 0 to 125 bytes in steps of 25 bytes) was measured. Linear interpolation was applied to the measured durations to derive a formula that fits all packet sizes well. The difference in durations in states where data are transferred between the radio and CPU (e.g., TxDataPrepare and TxProc) is caused by the way these bytes are transferred: the CC2538 radio is combined with the CPU in one chip, so data can simply be copied in and out of memory, while the CC1200 needs to use the slower Serial Peripheral Interface (SPI) to transfer data to and from the CPU in the CC2538 chip, resulting in longer durations.
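As a sketch of this step, a linear duration-versus-size function can be fitted to such measurements as follows; the duration values below are illustrative, not the measured ones:

```python
import numpy as np

# Sketch of deriving a duration-vs-packet-size line from measurements taken
# at 0-125 bytes in steps of 25 bytes; the durations below are illustrative.

sizes = np.array([0, 25, 50, 75, 100, 125])               # payload bytes
durations_us = np.array([180, 260, 340, 420, 500, 580])   # state duration (us)

slope, intercept = np.polyfit(sizes, durations_us, deg=1)
print(f"duration(N) ~= {slope:.2f} us/byte * N + {intercept:.1f} us")
```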
The duration of transmitting and receiving also depends on the packet size. Since the radio has a baud rate of 250 kbps, the time it takes to transmit one bit is 4 µs, which makes the time to transmit a byte 32 µs. To calculate the duration, the number of transmitted bytes is multiplied by 32 µs. The PHY header byte and the two-byte Cyclic Redundancy Check (CRC) also have to be included, as they are sent with the packet. To verify that this calculation is valid, the time between the start-of-frame interrupt and the end-of-frame interrupt was measured: the average error was only 0.13%.
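This calculation can be expressed compactly; the function below follows the byte accounting described above (payload plus one PHY header byte plus two CRC bytes at 32 µs per byte):

```python
# Air time at 250 kbps: 4 us per bit, hence 32 us per byte. The PHY header
# byte and the 2-byte CRC are transmitted along with the payload.

def tx_duration_us(payload_bytes: int) -> int:
    PHY_HEADER = 1  # bytes
    CRC = 2         # bytes
    return (payload_bytes + PHY_HEADER + CRC) * 32  # us

print(tx_duration_us(125))  # 4096 us for a maximum-size packet
```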
To model the guard time, we assumed that the clocks are synchronized. Our model thus assumes that the packet always arrives exactly in the center of the guard interval.
Device State Current Consumption
The consumption of the OpenMote-CC2538 connected to the OpenUSB was measured during all possible device states. Since the CPU and radio are the two components responsible for the majority of the current consumption, these device states are all combinations of CPU and radio modes. Instead of measuring the consumption of the CPU and radio separately, we measured the consumption of the entire device. As a result, any current consumption not related to the CPU or radio (e.g., SPI or timers) is measured as part of the CPU usage. This allows for a slightly more accurate prediction of the charge drawn compared to models that ignore these other components.
2.4 GHz CC2538 Radio
In Table 15, the consumption values of the different device states when using the CC2538 radio, i.e., the 2.4 GHz radio, are shown. The values for the TX state were measured when the transmit power of the radio was set to 0 dBm. When the transmit power was set to 3 dBm, i.e., the current default in OpenWSN, the consumption values of the TX states are 33.04 mA and 29.01 mA, for the CPU in Active and Sleep state, respectively. The CC2538 radio has an identical consumption of 13.97 mA when the radio is in Sleep or Idle state because the OpenMote-CC2538 consists of both the CPU and radio, and the radio itself does not have a separate Idle or Sleep state. Instead, it has a single Off state for which the consumption was used for both the Sleep and Idle states. Thus, the CC2538 radio has only four states: TX, RX, Listen and Off.
As expected, the difference in the consumption between an active or a sleeping CPU is nearly identical for all radio states: the CPU in active mode consumes on average 3.92 mA more than when being in sleep mode, with a standard deviation of only 0.07 mA.
When switching the CPU of the CC2538 chip to the deeper sleep mode PM2 instead of PM_NOACTION (http://www.ti.com/product/CC2538/datasheet/), while the radio was in the Sleep and Idle state (which are actually both the Off state in the CC2538 radio), the consumption dropped to 1.56 µA.
868 MHz CC1200 Radio
The device state consumption values when using the CC1200 radio, i.e., 868 MHz, are shown in Table 15. The values for the TX state were measured when the transmit power of the radio was set to 0 dBm. When the transmit power is set to 14 dBm, i.e., the current default in OpenWSN, the consumption of the TX states is 91.94 mA for an active CPU and 88.25 mA for a sleeping CPU. The CC1200 radio Sleep state is the Idle state with the crystal oscillator turned off (http://www.ti.com/product/CC1200/datasheet/). Consumption is expected to be lower when the Sleep state of the CC1200 chip is used or when the CC1200 is turned completely off.
As expected, the difference in the consumption between an active or a sleeping CPU is nearly identical for all radio states: the CPU in active mode consumes on average 3.81 mA more than when being in sleep mode, with a standard deviation of only 0.15 mA.
When both the CPU and the radio are put in the Sleep state, the consumption is still high. This is caused by the high current consumption of the CPU, which is put in the shallowest sleep mode, PM_NOACTION. When putting the CPU in a deeper sleep, i.e., the PM2 power mode, while the CC1200 radio is in the Sleep and Idle state, the consumption dropped to 0.27 mA and 2.64 mA, respectively.
Evaluation
In this section, the accuracy of the model is verified. First, the charge drawn per slot type for both radios is calculated and compared to the measured values. Afterwards, the accuracy of the charge drawn during a slot frame is validated using a small-scale test network. The developed packet size-aware model is also compared to a state-of-the-art model to show the accuracy improvement when including the packet size in the model. Finally, using the measured charge consumption values for both frequency bands' communication, several TSCH network simulations were conducted to observe the energy consumption effects in an end-to-end context.
Slot Charge Consumption
Using the duration and consumption of each state, the charge drawn during each type of slot is calculated using the formula shown in Equation (1). To verify the accuracy of our model, the entire consumption of each type of slot was also measured separately. Table 16 compares the measured and calculated values for both types of radio. Both radios are configured with a transmit power of 0 dBm and a packet size of 127 bytes, i.e., the maximum packet size when including the CRC bytes.
As seen in Table 16, the difference between the measured and calculated values is close to negligible. Among the main contributors to these differences are measurement errors and the variations in guard time duration. In the measured data, the guard time can be smaller or larger than in the calculated data, which assumes perfectly synchronized clocks. On average, the difference is limited to 5.08 µC or 1.55% with a standard deviation of 3.3 µC and a maximum difference of 14.89 µC, which proves the accuracy of our model.
More specifically, for the CC2538, the average relative difference is 0.75%, while for the CC1200 chip, it is 2.3%. In the case of the CC1200 chip, the larger relative difference is explained by the fact that a specific device state (e.g., CPU active and radio sleeping) does not always result in exactly the same current drawn, which we abstracted in Table 15.

Figures 6 and 7 show the current drawn in all time slots (except for the error time slot TxDataRxNoAck) over time, according to both the model and the measurements, when using the CC2538 and CC1200 radio, respectively. For all time slots, the measured graphs and their modeled counterparts look very similar. The peaks on the graphs do not match perfectly, however, because the model simplifies certain states: the radio state may be changed while the CPU is active, causing the CPU and radio to be active at the same time, while the model might only consider the radio as active once the CPU goes to sleep. This results in a peak in the measured time slot where there is no peak in the model.
Slot Frame Charge Consumption
When the charge consumption of the different time slots is known, the charge consumption of a slot frame can be calculated. To further verify the accuracy of our model, the calculated slot frame charge consumption of a small-scale, real-world 6TiSCH network is compared with the measured values, for both 868 MHz and 2.4 GHz.
The experiment network topology, depicted in Figure 8, used the OpenMote-CC2538 and the OpenMote-CC2538/OpenUSB board combination as hardware nodes for the 2.4 GHz and 868 MHz measurements, respectively. The root node is connected to a computer using OpenVisualizer to monitor the network. The leaf mote was configured to send a packet of 127 bytes (including CRC) every two seconds. The slot frame size was 51 time slots; since every time slot lasts 15 ms, the duration of each slot frame is 765 ms. The first time slot in every slot frame was reserved for management messages, e.g., EBs, RPL DIOs, RPL Destination Advertisement Objects (DAOs) and 6TiSCH Operation Sublayer (6top) messages, but these were not considered here; the first time slot in each slot frame is therefore treated as being of type RxIdle. The slot frame of the leaf mote always consists of one RxIdle slot and at least 49 Sleep slots. The last slot is either of type TxDataRxAck when there are data to send or another Sleep slot when there are none. As such, two slot frame types were considered for the leaf node: a slot frame where no data were sent and a slot frame where the packet was sent in a TxDataRxAck slot. The charges consumed in these two slot frame types are represented by:

$$Q^{\text{leaf, no data}}_{\text{frame}} = Q_{\text{RxIdle}} + 50 \cdot Q_{\text{Sleep}} \qquad (3)$$

and

$$Q^{\text{leaf, data}}_{\text{frame}} = Q_{\text{RxIdle}} + 49 \cdot Q_{\text{Sleep}} + Q_{\text{TxDataRxAck}}. \qquad (4)$$

For the relay node, a slot frame was considered where the packet coming from the leaf was received in the first slot of the slot frame; subsequently, the relay forwarded the packet to the root, but no acknowledgment was received, followed by a successful retransmission. As such, there are RxDataTxAck, TxDataRxNoAck and TxDataRxAck slots when a packet was received and forwarded, while the remaining 48 slots are Sleep slots. The charge drawn during the slot frame of the relay node is represented by:

$$Q^{\text{relay}}_{\text{frame}} = Q_{\text{RxDataTxAck}} + Q_{\text{TxDataRxNoAck}} + Q_{\text{TxDataRxAck}} + 48 \cdot Q_{\text{Sleep}}. \qquad (5)$$

The charge consumption of the root node is not considered, as the root device is typically connected to a computer using OpenVisualizer, serving as a gateway to the Internet; the root therefore typically does not run on batteries. Additionally, the serial communication between the root and OpenVisualizer cannot be disabled, making a comparison between the measured consumption and the proposed model invalid.
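For illustration, the slot frame charges of Equations (3)-(5) can be evaluated as below; the per-slot charge values are placeholders standing in for the measured values of Table 16:

```python
# Sketch of the slot-frame charges in Equations (3)-(5) for the 51-slot
# frame described above. The per-slot charges (uC) are illustrative
# placeholders, not the measured values of Table 16.

Q = {"RxIdle": 120.0, "Sleep": 6.0, "TxDataRxAck": 330.0,
     "TxDataRxNoAck": 250.0, "RxDataTxAck": 320.0}

q_leaf_no_data = Q["RxIdle"] + 50 * Q["Sleep"]                     # Eq. (3)
q_leaf_data    = Q["RxIdle"] + 49 * Q["Sleep"] + Q["TxDataRxAck"]  # Eq. (4)
q_relay        = (Q["RxDataTxAck"] + Q["TxDataRxNoAck"]
                  + Q["TxDataRxAck"] + 48 * Q["Sleep"])            # Eq. (5)

print(q_leaf_no_data, q_leaf_data, q_relay)  # uC per slot frame
```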
We measured the charge consumed over the length of an entire slot frame for both the leaf and relay node and compared these values to those calculated using the proposed model and Equations (3)-(5). Table 17 shows the results. On average, the error between the calculated and measured values is lower than 1%. The differences between the measured and calculated CC2538 consumptions, for the leaf and relay nodes, are limited to 1.3% and 1.03%, respectively. For the CC1200 radio, the differences are even slightly smaller: 0.88% and 1.01%, respectively. The consumption comparison results again show that our model is accurate, even when measuring across an entire slot frame.
Energy Model Comparison
In order to indicate the accuracy gain of the proposed packet size-aware model over the model introduced by Vilajosana et al., the two models are compared in Figure 9. They are compared for packet sizes ranging from 58 bytes, the minimum 6TiSCH packet size without additional payload, up to the maximum packet size, i.e., 127 bytes (125 bytes plus 2 CRC bytes). The estimates of both models are compared against the packet size-aware model using the exact duration measurements for packet sizes of 75, 100 and 125 bytes. The device state current values of Table 15 are used for this comparison. In the states where the CPU is sleeping and the radio is sleeping or in idle mode, the PM2 values were preferred over the PM_NOACTION values. As can be seen in the graphs, the proposed model accurately represents the charge consumption for all packet sizes. The model of Vilajosana et al., however, becomes highly inaccurate, especially as the packet size decreases. For a packet size of 75 bytes, the average errors of the Vilajosana et al. model are 32.03 µC (σ = 21.05 µC) and 17.05 µC (σ = 12.41 µC) for the CC1200 and CC2538 radio, respectively. For the maximum packet size, of course, both models estimate the consumption correctly.
The reason for this large inaccuracy in the model of Vilajosana et al. is that their approach linearly scales the entire slot consumption. This is not the correct approach, as only the states in which data are transmitted over the radio or copied between the radio and the CPU scale with the packet size. Since a time slot consists of many more states than these data-handling states, the remaining states should not be scaled. Because the proposed model differentiates between state durations that depend on the packet size and durations that are independent of it, as can be seen in Tables 8-14, it accurately models the slot consumption for different packet sizes.
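The difference between the two scaling approaches can be sketched as follows, assuming a hypothetical split of the slot charge into a size-independent part and a per-byte part (both values illustrative):

```python
# Sketch contrasting the two estimation approaches for a slot whose charge
# splits into a size-independent part and a per-byte part. Both constants
# are illustrative placeholders, not measured values.

Q_FIXED_UC = 200.0    # charge of states independent of the packet size (uC)
Q_PER_BYTE_UC = 1.2   # charge per transferred/transmitted byte (uC)
MAX_PKT = 127

def proposed(n_bytes):
    # only the size-dependent portion scales with the packet size
    return Q_FIXED_UC + Q_PER_BYTE_UC * n_bytes

def vilajosana(n_bytes):
    # the whole slot charge is scaled linearly from the max-size measurement
    return proposed(MAX_PKT) * n_bytes / MAX_PKT

for n in (58, 75, 100, 127):
    print(n, round(proposed(n), 1), round(vilajosana(n), 1))
# Both agree at 127 bytes; the linear scaling diverges for smaller packets.
```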
Frequency Band Consumption Comparison
Using the measured energy consumption values for both 868 MHz and 2.4 GHz, we conducted several TSCH network simulations to analyze the end-to-end network performance and energy consumption at these frequency bands.
Simulation Setup
To perform the experiments, the 6TiSCH simulator is used: an open-source, event-driven Python simulator developed by the 6TiSCH Working Group (WG) [5]. The simulator supports IEEE 802.15.4e TSCH experimentation with straightforward parameter configuration. The configuration parameters for the simulation experiments discussed in this article are listed in Table 18. To be able to compare the energy consumption at both 868 MHz and 2.4 GHz, we changed the default propagation model of the simulator (i.e., the so-called Pister-hack) to the International Telecommunication Union Radiocommunication Sector (ITU-R) Rural Macro model, which is applicable to both frequency bands [25]. To obtain a realistic low-power energy consumption comparison between 868 MHz and 2.4 GHz, we re-calculated the charge consumption values of Table 16, using the device state consumption values of Table 15, and adjusted them so that the measured CC2538 PM2 power mode consumption value of 1.56 µA was used in all states where the CPU was sleeping. When both the CPU and radio were sleeping, the Idle state (with the crystal oscillator turned off) consumption value of the CC1200 chip, i.e., the radio Sleep state, was also replaced by the power-down Sleep state consumption value from the CC1200 datasheet, which is 0.5 µA (http://www.ti.com/lit/ds/symlink/cc1200.pdf). The resulting slot consumption values are listed in Table 19. The 6TiSCH simulator implementation used for these simulation experiments can be found in [26].
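For reference, a parameter set along these lines is sketched below; the key names are placeholders and do not necessarily match the 6TiSCH simulator's actual configuration schema:

```python
# Illustrative parameter set for a TSCH simulation run; the key names below
# are placeholders and do not necessarily reflect the 6TiSCH simulator's
# real configuration keys.

sim_config = {
    "num_motes": 25,             # e.g., a grid of 25 nodes
    "slotframe_length": 51,      # slots per slot frame
    "slot_duration_ms": 15,
    "packet_period_s": 2,        # one 127-byte packet every two seconds
    "propagation_model": "ITU-R Rural Macro",
    "frequency_band": "868MHz",  # or "2.4GHz"
    "run_duration_s": 300,
}

print(sim_config["frequency_band"])
```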
Simulation Results
In the first TSCH network experiment, the number of nodes in a random topology varies from 2 to 32. Figure 10 shows the average hop count and the total energy consumed per node over a period of 300 s. As seen in Figure 10a, for 2.4 GHz, the average hop count increases as the number of nodes in the network increases, while for 868 MHz, the hop count stabilizes at one. Communication at 868 MHz is stable over longer ranges than communication at 2.4 GHz, resulting in a lower hop count to reach the root node. For the shorter range 2.4 GHz communication, the increase in consumption is significant when the average hop count increases: since more nodes have to relay additional packets towards the root, the total consumption per node increases. Figure 10b also clearly shows that the total consumption per node is higher for 868 MHz than for 2.4 GHz, as explained by the absolute slot consumption values in Table 19.

In the second TSCH network experiment, the average charge drawn per node per cycle was observed over a period of 300 s. The results are shown in Figure 11, which differentiates between two grid topologies of different sizes: Figure 11a,b shows the Cumulative Distribution Function (CDF) for nine and 25 nodes, respectively. In the network of nine nodes, all eight non-root nodes connect directly to the root node for both 868 MHz and 2.4 GHz. The CDF in Figure 11a shows that almost all of the nodes consume less charge when using 2.4 GHz than when using 868 MHz. The difference in consumption is explained by the measured slot consumption values shown in Table 19, which indicate that 2.4 GHz consumes less energy than 868 MHz. However, for the nodes between the 0.8 and 0.9 quantiles of the CDF, we observe that 868 MHz consumes less. The same effect is observed in the results for 25 nodes: 60% of the nodes that use 2.4 GHz consume less than the nodes using 868 MHz. These nodes represent leaf nodes and nodes that do not have to forward many data packets originating from children in the routing graph. Apart from these nodes, there are also intermediate nodes, as indicated by the 2.4 GHz hop count of 2.49 (σ = 1.21) for the grid scenario with 25 nodes, which have to relay many more packets towards the root and consume more energy. When using 868 MHz, the average hop count was 1.01 (σ = 0.1), meaning that nearly all nodes are directly connected to the root and thus do not relay other packets. For both the grid networks of nine and 25 nodes, the root nodes for 2.4 GHz consume less than those for 868 MHz, which again can be expected by looking at the consumption values in Table 19. Looking at the absolute energy consumption values for both 868 MHz and 2.4 GHz, an increased energy consumption for all sub-1-GHz communication is expected. However, these simulation results show that, due to the longer range capabilities of sub-1-GHz communication, there can be nodes that consume less energy than when using 2.4 GHz communication.
In the third TSCH simulator experiment, we observe the lifetime of all TSCH nodes in a grid of 25 nodes for different packet periods. Each node is assumed to be running on two AA batteries, i.e., a battery capacity of 2000 mAh. Figure 12 shows the results. The total number of children of a node counts all of its descendants; e.g., the root node has 24 children. It is clear that in the case of 2.4 GHz communication, there is much more variability in the number of children a node has, compared to 868 MHz communication. This is due to the longer range of 868 MHz, which allows nodes to connect directly to the root over longer distances. In this 25-node grid topology, however, it is still possible that a 868 MHz leaf node needs multiple hops to reach the root: as observed in Figure 12, there are some nodes that have one, two or three children, which indicates that the signal of those children to their parent was better than the signal of their link to the root. With 2.4 GHz communication, which lacks such longer range capability, a packet typically has to traverse more hops to reach the root. For 2.4 GHz, there is also more variability in the lifetime of nodes with the same number of children; for 868 MHz, we do not observe this effect. This is because the quality of the different links between the 2.4 GHz nodes differs in every experiment, resulting in a variable number of transmission cells and retransmissions needed to deliver packets, which in turn also influences the energy consumption. Most 868 MHz nodes, however, are directly connected to the root with good link quality, resulting in almost no variability.
The results show that for a higher packet frequency, the average number of days a node lasts decreases, e.g., the average lifetime for 1 packet/s is 204 days compared to 487 days when having a frequency of 1 packet/h, for 2.4 GHz. The graph also shows that on average, the lifetime in a 868 MHz network is lower, because of the higher consumption values shown in Table 19. However, the results in Figure 11 showed that this does not necessarily hold for all nodes in a TSCH network.
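As a back-of-the-envelope check of such lifetime figures, the battery capacity can be divided by the average charge per slot frame; the frame charge below is an illustrative placeholder:

```python
# Back-of-the-envelope node lifetime: two AA batteries hold about
# 2000 mAh = 7200 C; with an average charge per 765 ms slot frame, the
# lifetime in days follows directly. The frame charge is illustrative.

BATTERY_CAPACITY_C = 2.0 * 3600  # 2000 mAh expressed in coulombs
FRAME_DURATION_S = 0.765         # 51 slots x 15 ms

def lifetime_days(avg_frame_charge_uc: float) -> float:
    frames = BATTERY_CAPACITY_C / (avg_frame_charge_uc * 1e-6)
    return frames * FRAME_DURATION_S / 86_400

print(f"{lifetime_days(330.0):.0f} days")  # e.g., ~330 uC per frame
```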
Conclusions
In this paper, we propose a more accurate energy model for IEEE 802.15.4e TSCH using dual-band OpenMote hardware. The model differs from previous work in several ways. First, it includes an elaborate and up-to-date set of time slots and states and accurately models variable packet sizes. Second, we present state durations and energy consumption measurements for both the 868 MHz and 2.4 GHz frequency bands, using the CC1200 and CC2538 radio, respectively. We have experimentally verified the accuracy of the proposed model by comparing measured values of all time slots to their modeled counterpart. Furthermore, the energy consumption of a small-scale TSCH network was compared with its modeled consumption. For both the time slot comparison and the small-scale network experiment, the average error was less than 3%, including measurement inaccuracies and variations of the guard time. Using the measured energy slot consumption for both 868 MHz and 2.4 GHz communication, we also conducted several TSCH network simulations to observe the energy consumption effects for both frequency bands in an end-to-end context. We have also shown that the proposed model can accurately model all packet sizes, a feature absent in current TSCH energy consumption models, which only consider the maximum packet size. These results prove that our model is suitable to accurately predict the energy consumption of TSCH networks.
"year": 2018,
"sha1": "a9d81129a4a4b777a46290ffaad8ac2748a75106",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/18/2/437/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9d81129a4a4b777a46290ffaad8ac2748a75106",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Disrupted in renal carcinoma 2 (DIRC2/SLC49A4) is an H+-driven lysosomal pyridoxine exporter
DIRC2 is an H+-driven lysosomal pyridoxine exporter involved in the regulation of cellular pyridoxine storage.
Introduction
The gene for disrupted in renal carcinoma 2 (DIRC2) was identified as a gene that spans a recurrent breakpoint, the translocation t(2;3)(q35;q21) on chromosome 3q21, which occurs in affected members of families with hereditary renal carcinoma (Bodmer et al, 2002). A massively parallel paired-end transcriptome sequencing study described another translocation, t(2;3)(p14;q21), which is also located in the DIRC2 gene, in a prostate cancer cell line (Maher et al, 2009). These studies suggest that disruption of the DIRC2 gene plays a role in carcinogenesis. In a proteomic analysis of lysosomal membranes from human placenta, DIRC2 was identified as a putative transporter protein enriched in lysosomal membranes (Schroder et al, 2007). Accordingly, DIRC2 has been classified as a solute carrier (SLC) 49 family member, the fourth member (SLC49A4) of the family that consists of four members paralogous to the major facilitator superfamily transporters (Khan & Quigley, 2013). However, the functional characteristics of DIRC2 remain unknown, and no substrates have been identified. Of the other SLC49 family members, SLC49A1 is known as feline leukemia virus subgroup C receptor 1 (FLVCR1), a plasma membrane heme exporter that protects erythroid progenitors from heme toxicity during the heme synthesis phase of erythropoiesis (Quigley et al, 2004). SLC49A2 (FLVCR2), which is highly homologous to SLC49A1, but less so to SLC49A4, may be a plasma membrane heme importer, and it is mutated in patients with Fowler syndrome, a rare proliferative vascular disorder of the brain (Duffy et al, 2010). SLC49A3 is known as major facilitator superfamily domain containing 7 (MFSD7), which is associated with risk of ovarian cancer (Peedicayil et al, 2010), although its functional characteristics are mostly unknown.
Lysosomes are membrane-bound organelles responsible for the hydrolysis of various compounds during the turnover of small molecules primarily resulting from endocytosis, phagocytosis, and autophagy (Saftig & Klumperman, 2009). Recent studies on lysosomal enzymes have revealed that degradation in the lysosomal lumen is mediated by various hydrolases. Lysosomal hydrolases are tightly segregated by the lysosomal membrane, whereas degraded compounds are exported from the lysosomal lumen to the cytosol and can be reused to maintain cell homeostasis. The export process may be mediated, in part, by various secondary active transporters present in the lysosomal membrane. Various bioactive compounds, such as amino acids, peptides, nucleic acids, and vitamins, are exported from lysosomes by transporters (Sagne & Gasnier, 2008). Regarding the turnover of small molecules, lysosomes also play a role in cellular nutrient storage. For example, cellular storage of folate, also known as vitamin B9, is regulated by a transporter present in the lysosomal membrane for polyglutamated folate, which is an inactive retention form hydrolyzed by a lysosomal enzyme (Barrueco et al, 1992). Functional studies of lysosomal transporters have been reported; however, the underlying molecular mechanisms have not been fully elucidated, and many of the lysosomal transporters remain unidentified.
In this study, we attempted to identify a lysosomal transporter for pyridoxine (vitamin B6). Pyridoxine is converted to the coenzyme pyridoxal-5′-phosphate and plays various physiological roles together with cytosolic enzymes, although its turnover and storage mechanism associated with lysosomes have not been elucidated. The attempt was conducted in conjunction with our recent efforts to identify pyridoxine transporters at the plasma membrane, in which we found that SLC19A2 and SLC19A3, known as thiamine transporter 1 (THTR1) and THTR2, respectively, also transport pyridoxine (Yamashiro et al, 2020). We searched bioinformatically for candidate lysosomal transporter-like proteins with unknown function to test for pyridoxine transport activity. DIRC2 was among the candidate proteins. Here, we describe our successful attempt to characterize DIRC2 as an H+-driven lysosomal pyridoxine exporter. To characterize DIRC2 function, we used a recombinant DIRC2 protein (DIRC2-AA), in which the N-terminal dileucine motif involved in its lysosomal localization was removed by replacing it with dialanine by site-directed mutagenesis, redirecting the protein to the plasma membrane (Savalas et al, 2011). The dileucine motif is located at the 14th and 15th positions, with an accompanying glutamic acid at the 10th position (E¹⁰RQPL¹⁴L¹⁵). As lysosomes cannot be readily used for transport experiments, a cellular model in which DIRC2-AA localizes to the plasma membrane was used to examine the ability of DIRC2 to transport pyridoxine. Furthermore, we confirmed the contribution of DIRC2 to cellular pyridoxine accumulation, suggesting that DIRC2 plays a role in the turnover and cellular storage of pyridoxine.
DIRC2 is a lysosomal pyridoxine transporter
To examine the presumed transport function of human DIRC2, we used the DIRC2-AA mutant, because several lysosomal transporters have been functionally characterized in whole cells using recombinant proteins ectopically expressed at the plasma membrane, exposing intralysosomal segments to the extracellular space (Kalatzis et al, 2001; Sagne et al, 2001; Morin et al, 2004). The advantage of this approach is that it replaces poorly tractable lysosomal efflux with simple whole-cell influx. We first reconfirmed the contribution of the N-terminal dileucine motif to lysosomal membrane localization in COS-7 cells by using DIRC2 fused to a FLAG tag on the N-terminus (FLAG-DIRC2) and similarly FLAG-tagged DIRC2-AA (FLAG-DIRC2-AA). In transiently transfected COS-7 cells, although FLAG-DIRC2 was primarily observed in the intracellular compartment immunocytochemically, FLAG-DIRC2-AA was predominantly observed at the plasma membrane, indicating that lysosomal targeting depends on this motif (Fig 1A).
Increased plasma membrane expression of FLAG-DIRC2-AA compared with FLAG-DIRC2 was confirmed biochemically by cell surface biotinylation using the membrane-impermeable reagent sulfo-NHS-SS-biotin and streptavidin-agarose pull-down for the recovery of the biotinylated protein. In Western blots (Fig 1B), a small amount of FLAG-DIRC2 was also detected in the biotinylated protein fraction (plasma membrane fraction), which was also observed in an earlier study using HeLa cells (Savalas et al, 2011). However, in comparison, an increased amount of FLAG-DIRC2-AA was detected in the plasma membrane fraction, whereas the amount of FLAG-DIRC2-AA was comparable with that of FLAG-DIRC2 in total cell lysates.
An earlier study indicated that when expressed in HeLa cells, the DIRC2 protein undergoes extensive proteolytic processing and resides mostly as a cleaved product at the lysosomal membrane, whereas redirected DIRC2-AA undergoes little of this processing (Savalas et al, 2011). However, we found that FLAG-DIRC2-AA, as well as FLAG-DIRC2, is mostly present as the cleaved product when expressed in COS-7 cells (Fig 1B). Therefore, although the reason for this difference in the proteolysis of DIRC2-AA is unknown, we deemed it possible to use DIRC2-AA to assess the function of DIRC2, which is mostly present in the cleaved form, in this study.
Using COS-7 cells transiently expressing FLAG-DIRC2-AA, we examined the uptake of pyridoxine (10 nM) under an acidic extracellular condition (pH 5.0), which mimics the natural environment for DIRC2 in the lysosome (Fig 1C). COS-7 cells were used for this assessment as a host cell line with low pyridoxine uptake activity. Pyridoxine was efficiently transported in FLAG-DIRC2-AA-transfected COS-7 cells, exhibiting a fivefold greater uptake compared with that in mock cells, but not in FLAG-DIRC2-transfected COS-7 cells. This indicates that the increased pyridoxine uptake is associated with the presence of FLAG-DIRC2-AA at the plasma membrane. In an earlier study (Savalas et al, 2011), a metabolite mixture supplemented with 5% Bacto Yeast Extract (BYE) was applied to clamped oocytes expressing DIRC2-AA fused to an EGFP tag on the C-terminus (DIRC2-AA-EGFP) in an acidic extracellular medium (pH 5.0). BYE immediately induced a small, but significant, electrogenic current in DIRC2-AA-EGFP-expressing oocytes but not in water-injected or DIRC2-EGFP-expressing oocytes, indicating that DIRC2 functions as an electrogenic transporter. It appears that the BYE contained pyridoxine, and that its plasma membrane transport by DIRC2-AA-EGFP induced the electrogenic current.
To further investigate the physiological role of DIRC2, we examined the expression of DIRC2 in human tissues by quantitative real-time PCR. As shown in Fig 1D, although DIRC2 mRNA was found to be ubiquitously expressed, high expression was observed in the placenta, brain, and heart. We further examined the expression in various human cell lines, in which DIRC2 mRNA was also found to be ubiquitously expressed (Fig 1E). However, it should be noted that its expression tended to be low in cell lines derived from blood cancer cells, such as HL60, HPB-ALL, HuT78, and MOLT4. An earlier study showed by Northern blot analysis that tumor cells with a t(2;3) chromosomal translocation had normal DIRC2 mRNA transcripts, indicating that the remaining intact chromosomal allele is normally transcribed, and had no additional abnormal transcripts, suggesting that disrupted transcripts of DIRC2 were absent or expressed at very low levels (Bodmer et al, 2002). Based on this, DIRC2-disrupted cells could have normal transcripts from the remaining intact allele, although the expression levels would be low because the disrupted allele cannot be transcribed. Therefore, taking into consideration that DIRC2 disruption can lead to carcinogenesis, there is a possibility that the low DIRC2 expression in these blood cancer cell lines reflects the presence of a disrupted DIRC2 allele, but further studies are needed to clarify this association.
Functional characteristics of DIRC2-AA in pyridoxine transport
The pH-sensitive characteristics of pyridoxine transport mediated by DIRC2-AA were examined over a range of extracellular pHs between 5.0 and 8.0. As shown in Fig 2A, the uptake of pyridoxine (10 nM) in transiently DIRC2-AA-transfected COS-7 cells was highest at pH 5.0, at which regular uptake assays were performed. The uptake of pyridoxine decreased with increasing pH and, at neutral pH and above, reached a low level comparable to that in mock cells, which was low and nearly unchanged over the entire pH range. Thus, the specific uptake of pyridoxine by DIRC2-AA was found to be highly sensitive to extracellular pH. The enhancement of the specific uptake under acidic conditions may depend on the pH of the extracellular medium or, because cells maintain their cytosol at around neutral pH, it may have resulted from an inward transmembrane H+ gradient. To clarify whether DIRC2 uses an H+ gradient as a driving force for transporting pyridoxine, we examined the specific uptake of pyridoxine by DIRC2-AA in transiently transfected COS-7 cells in the presence of the protonophores carbonyl cyanide m-chlorophenylhydrazone and carbonyl cyanide p-trifluoromethoxyphenylhydrazone in the uptake solution at pH 5.0, to dissipate the H+ gradient across the plasma membrane (Fig 2B). Treatment with these protonophores significantly reduced the specific uptake, indicating that an inwardly directed H+ gradient is required for pyridoxine transport. This observation suggests that DIRC2 transports pyridoxine in an H+-coupled manner. It should be noted, however, that pyridoxine is increasingly cationized with a decrease in pH in the pH range where the specific uptake was highly pH-sensitive, because it contains a pyridine structure whose nitrogen at the first position can be protonated with a pKa of 5.1 (Dos Santos et al, 2010). Therefore, there is a possibility that cationized pyridoxine is preferred by DIRC2-AA for transport and that a pH-dependent change in the extent of cationization is also involved in the pH-sensitive uptake.
To determine whether other extracellular ions influence DIRC2-AA-mediated pyridoxine transport, NaCl in the uptake solution was replaced by an isotonic concentration of KCl, Na-acetate, K-acetate, or mannitol (Fig 2C). The specific uptake of pyridoxine (10 nM) by DIRC2-AA in transiently transfected COS-7 cells was almost completely abolished when Cl− was removed by replacement with acetate or mannitol, suggesting that DIRC2 requires Cl− for the efficient transport of pyridoxine. According to an earlier study (Li et al, 2019), the Cl− concentration is higher in the lysosomal lumen (60-80 mM) than in the cytosol (10-40 mM). Therefore, there is a large Cl− gradient that provides favorable conditions for DIRC2 to transport pyridoxine from the lysosomal lumen to the cytosol in the intracellular environment. The requirement of H+ and Cl− for pyridoxine transport suggests that DIRC2 is involved in the export of lysosomal pyridoxine to the cytosol.
To delineate the kinetic characteristics of DIRC2-AA-mediated pyridoxine transport, we examined the time courses of pyridoxine uptake in transiently DIRC2-AA-transfected COS-7 cells and mock cells. As shown in Fig 2D, the uptake of pyridoxine (10 nM) increased in proportion to time up to 1 min in DIRC2-AA-transfected COS-7 cells; however, it remained very low in mock cells. Based on this, an uptake period of 1 min was set for the evaluation of pyridoxine transport across the plasma membrane in the initial uptake phase. Kinetic analysis indicated that the DIRC2-AA-specific uptake of pyridoxine was saturable with a V max of 6.16 nmol/min/mg protein and a K m of 522 μM (Fig 2E). This Michaelis constant is much higher than the serum concentration of vitamin B6, which was previously reported to be 60 nM (Naurath et al, 1995). This suggests that DIRC2 is capable of transporting pyridoxine without saturation even when the concentration in lysosomes increases because of endocytosis, phagocytosis, or autophagy, although pyridoxine concentration in lysosomes has not been fully evaluated to date. In addition, the uptake of pyridoxine at a low concentration (10 nM) in DIRC2-AAtransfected COS-7 cells rapidly increased, reaching a ninefold higher level compared with that in mock cells at 4 min, when equilibrium was almost achieved (Fig 2D). Based on these results, pyridoxine transport by DIRC2 should be fully and efficiently functional in the body.
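As an aside, kinetic parameters of this kind are obtained by fitting the Michaelis-Menten equation to the specific-uptake data; a minimal sketch of such a fit, using synthetic data generated around the reported values, is shown below:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the kinetic analysis: fitting v = Vmax * S / (Km + S) to
# specific-uptake data. The data points are synthetic, generated around the
# reported Vmax (6.16 nmol/min/mg protein) and Km (522 uM) for illustration.

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s_um = np.array([10, 50, 100, 250, 500, 1000, 2000], dtype=float)  # uM
v = michaelis_menten(s_um, 6.16, 522.0)           # nmol/min/mg protein
v_noisy = v * (1 + 0.05 * np.random.randn(len(v)))  # add 5% noise

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, s_um, v_noisy, p0=(5, 300))
print(f"Vmax = {vmax_fit:.2f} nmol/min/mg, Km = {km_fit:.0f} uM")
```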
We also examined the effect of pyridoxine analogs and cationic vitamins (1 mM) on the DIRC2-AA-specific uptake of pyridoxine (10 nM) to probe the possibility that DIRC2 may also be involved in their transport ( Fig 2F). Among the tested compounds, only 4-deoxypyridoxine other than pyridoxine significantly inhibited the DIRC2-AA-specific uptake, suggesting that it may also be recognized and transported by DIRC2, although 4-deoxypyridoxine is not responsible for a coenzyme function like vitamin B6. Surprisingly, pyridoxal and pyridoxamine, which are B6 vitamins, had no inhibitory effects on the DIRC2-AA-specific uptake, suggesting that they are not transport substrates of DIRC2. Notably, THTR1 and THTR2, which have recently been found to transport pyridoxine in an H + -coupled manner (Yamashiro et al, 2020), also exhibited affinity for pyridoxal and pyridoxamine. This suggests that even among compounds classified as vitamin B6 because of their similar bioactivity, there are differences in disposition characteristics. It is also notable that THTR1 and THTR2 both transport thiamine and pyridoxine, whereas thiamine was suggested not to be a transport substrate of DIRC2 because of the lack of its inhibitory effect on DIRC2-AA-specific pyridoxine uptake. Similarly, nicotinamide, which is a form of vitamin B3, may not be a transport substrate because it also did not inhibit DIRC2-AA-specific pyridoxine uptake.
DIRC2 regulates cellular pyridoxine storage
Because the results of this study suggest that DIRC2 exports lysosomal pyridoxine to the cytosol, we determined the contribution of DIRC2 to cellular pyridoxine storage. As shown in Fig 3A, DIRC2 fused to an HA tag on the N-terminus (HA-DIRC2) was transiently overexpressed in human embryonic kidney 293 (HEK293) cells, and the effect on the cellular accumulation of pyridoxine (10 nM) was examined after 30 min of incubation at pH 5.0. HEK293 cells were used as a human cell line with moderate DIRC2 expression (Fig 1E), in which the putative lysosomal storage system for pyridoxine could be in operation and introduced HA-DIRC2 could have an impact on it. However, transient expression of HA-DIRC2 alone did not alter the cellular accumulation compared with that in mock cells. The unchanged cellular accumulation may have resulted from the low cellular uptake of pyridoxine because of the absence or minimal expression of endogenous plasma membrane transporters for pyridoxine in HEK293 cells. To investigate the role of lysosomal DIRC2 in pyridoxine storage, the cellular uptake of pyridoxine was increased by transient introduction of human THTR2 fused to a FLAG tag on the N-terminus (FLAG-THTR2). THTR2 is the aforementioned H+-driven pyridoxine transporter, which operates at the plasma membrane. Although the cellular accumulation was increased by the introduction of FLAG-THTR2, coexpression of HA-DIRC2 significantly reduced it. The protein level of FLAG-THTR2, assessed by Western blotting, was unchanged following coexpression with HA-DIRC2 (Fig 3B), and, in addition, FLAG-THTR2 was almost exclusively observed at the plasma membrane immunocytochemically, clearly separated from HA-DIRC2 localized in the intracellular compartment (Fig 3C). Therefore, the decrease in cellular accumulation was suggested not to be caused by a decrease in the expression or plasma membrane localization of FLAG-THTR2; it could instead result from an increase in lysosomal export upon the introduction of HA-DIRC2, which reduces the lysosomal accumulation. HA-DIRC2 was also confirmed to be mostly present as the cleaved product when expressed in HEK293 cells (Fig 3B). These results indicate that DIRC2 regulates pyridoxine storage by operating at the lysosomal membrane.
The overexpression of HA-DIRC2 at the lysosomal membrane significantly decreased the cellular accumulation of pyridoxine induced by FLAG-THTR2, as described above. To support a physiological role of DIRC2 in cellular pyridoxine storage, we compared the values of V max/K m, an index of transport activity, by kinetic analysis using COS-7 cells transiently expressing N-terminal EGFP-tagged DIRC2-AA (EGFP-DIRC2-AA) and THTR2 (EGFP-THTR2). Kinetic analysis indicated that the EGFP-DIRC2-AA-specific uptake of pyridoxine was saturable with a V max of 10.0 nmol/min/mg protein and a K m of 476 μM, resulting in a V max/K m of 21.1 μl/min/mg protein (Fig 3D), and that the EGFP-THTR2-specific uptake was likewise saturable with a V max of 414 pmol/min/mg protein and a K m of 20.0 μM, resulting in a V max/K m of 20.7 μl/min/mg protein (Fig 3E). Thus, the V max/K m values were comparable, indicating that the pyridoxine transport activities of EGFP-DIRC2-AA and EGFP-THTR2 were almost the same. A Western blot analysis of the expression of EGFP-DIRC2-AA and EGFP-THTR2 indicated that their protein expression levels were also comparable (Fig 3F). Therefore, it is possible that a decrease in the lysosomal accumulation of pyridoxine owing to the addition of HA-DIRC2-mediated export has an impact on the THTR2-induced cellular accumulation (the total amount in the cells) and results in its significant decrease (Fig 3G), as demonstrated in Fig 3A. EGFP-DIRC2-AA was also confirmed to be mostly present as the cleaved product when expressed in COS-7 cells, although there were minor Western blot bands that may represent aggregated EGFP-DIRC2-AA (Fig 3F).
To gain further evidence for the involvement of DIRC2 in cellular pyridoxine storage, we determined the effect of DIRC2 knockdown on pyridoxine accumulation in Caco-2 cells, which exhibited the highest DIRC2 expression among the human cell lines (Fig 1E) and, hence, could be highly responsive to DIRC2 knockdown. To measure cellular pyridoxine storage, the cellular accumulation of pyridoxine (10 nM) was determined after 30 min of incubation at pH 5.0. The acidic condition was set in accordance with our previous finding that the uptake of pyridoxine in Caco-2 cells occurs by H+-driven transport that involves THTR1 and THTR2 (Yamashiro et al, 2020). As a result, the cellular accumulation was significantly increased in Caco-2 cells in which the expression of endogenous DIRC2 was knocked down by RNA interference (DIRC2-KD Caco-2 cells) compared with negative control cells (Fig 4A). The localization of HA-DIRC2 was also observed immunocytochemically in the intracellular compartment of transiently transfected Caco-2 cells (Fig 4B), indicating the lysosomal localization of DIRC2. Furthermore, the increase in the cellular accumulation of pyridoxine by DIRC2 knockdown was demonstrated to be associated with an increase in the lysosomal accumulation of pyridoxine (Fig 4C), which was evaluated by fractionating lysosomes after incubating the Caco-2 cells for pyridoxine accumulation. The lysosomal fraction was confirmed to be enriched with lysosomal-associated membrane protein 1 (LAMP1), a lysosomal marker protein, whereas its level was low in the total cell lysate (Fig 4D). To check the specificity of the DIRC2 knockdown, we measured the expression levels of the mRNAs of DIRC2 (Fig 4E), THTR1 (Fig 4F), and THTR2 (Fig 4G) by real-time PCR. In DIRC2-KD Caco-2 cells, the expression of DIRC2 was significantly decreased, whereas that of THTR1 and THTR2 was not. Taken together, the increase in cellular accumulation upon DIRC2 knockdown could result from the suppression of DIRC2-mediated export at the lysosomal membrane (Fig 4H), which increases the lysosomal accumulation.
Concluding remarks
This study represents an important contribution to understanding the molecular mechanism that involves DIRC2 as a newly identified H+-driven lysosomal pyridoxine exporter, especially with respect to the turnover of pyridoxine, which has not been fully explored. A decrease in the function or expression of DIRC2 may cause impaired pyridoxine turnover, which reduces cytosolically available pyridoxine while retaining pyridoxine excessively in lysosomes. This could be a causative factor for the renal carcinogenesis known to be linked to DIRC2 disruption (Bodmer et al, 2002). It is also notable that vitamin B6 preparations containing pyridoxine are reportedly effective for suppressing and preventing various cancers, such as colorectal cancer (Schernhammer et al, 2008), lung cancer (Johansson et al, 2010), and breast cancer (Zhang et al, 2003). Therefore, there is a possibility that a decrease in the supply of vitamin B6, which might be caused in various organs by the impairment of ubiquitously expressed DIRC2, could also be a risk factor for the onset and development of such cancers. More importantly, the ubiquitous presence of DIRC2 in various organs may suggest that DIRC2 could be broadly involved in the regulation of pyridoxine disposition in the body. Although the role of the suggested lysosomal pyridoxine storage system, including the involvement of DIRC2, in the physiological processes that require pyridoxine remains to be verified, the newly identified function of DIRC2 as a lysosomal pyridoxine exporter should help guide future studies to verify this role and explore its clinical relevance. This is the first report to examine the molecular mechanism that controls the cellular accumulation and reuse of vitamins, and it will contribute to an understanding of the relationship between vitamin dynamics and certain diseases.
Materials
[³H]Pyridoxine (20 Ci/mmol) was obtained from American Radiolabeled Chemicals. Unlabeled pyridoxine was obtained from Tokyo Chemical Industry. DMEM was obtained from Wako Pure Chemical Industries, and FBS was from Invitrogen. All other reagents were of analytical grade and obtained commercially.
Cell culture
COS-7, Caco-2, and BeWo cells were obtained from the RIKEN BioResource Research Center, and other cells were obtained from the Cell Resource Center for Biomedical Research, Tohoku University. The cells were maintained at 37°C in a 5% CO2 atmosphere in DMEM supplemented with 10% FBS, 100 U/ml penicillin, and 100 μg/ml streptomycin as described previously (Mimura et al, 2017).
Preparation of plasmids
The cDNA for human DIRC2 (GenBank accession number, NM_032839.3) was cloned using an RT-PCR-based method as described previously (Mimura et al, 2017). Briefly, an RT reaction was carried out to obtain a cDNA mixture from total human placental RNA (BioChain Institute) using 1 μg of total RNA, an oligo(dT) primer, and the ReverTra Ace reverse transcriptase (Toyobo). The cDNA for DIRC2 was amplified by PCR using PrimeSTAR Max DNA Polymerase (Takara Bio). A second PCR reaction was performed using the amplified product as a template to incorporate restriction sites. The PCR primers are listed in Table S1. The cDNA for DIRC2-AA was generated by site-directed mutagenesis using a PrimeSTAR Mutagenesis Basal Kit (Takara Bio) and the KOD ONE polymerase. The PCR primers for this step are listed in Table S2. The cDNA for human THTR2 (GenBank accession number, NM_006996.3) was prepared as described in our previous study (Yamashiro et al, 2020). All final cDNA products were incorporated into the pCI-neo vector (Promega) to prepare plasmids for transfection, and their sequences were determined with an automated sequencer (ABI PRISM 3130; Applied Biosystems), as described previously (Yamashiro et al, 2019).
The plasmids for HA- or FLAG-tagged transporters were generated by transferring their coding regions into the pCI-neo vector that was modified to fuse the HA or FLAG tag to the N-terminus. For DIRC2-AA and THTR2, plasmids using the pEGFP-C1 vector (Promega) were similarly prepared for the generation of EGFP-tagged transporters.
Knockdown of DIRC2 in Caco-2 cells
In experiments to evaluate cellular accumulation of pyridoxine, Caco-2 cells (1.0 × 10^4 cells/ml, 0.5 ml/well) were cultured in 24-well coated plates for 6 h, transfected with 20 pmol/well of Silencer Select siRNA specific to DIRC2 mRNA (Thermo Fisher Scientific) using 1.5 μl/well of Lipofectamine RNAiMAX (Thermo Fisher Scientific), and cultured for 5 d. In experiments to evaluate lysosomal accumulation of pyridoxine, Caco-2 cells (1.0 × 10^4 cells/ml, 2.0 ml/well) were cultured in six-well coated plates for 6 h, transfected with 80 pmol/well of Silencer Select siRNA specific to DIRC2 mRNA using 6.0 μl/well of Lipofectamine RNAiMAX, and cultured for 5 d. The sequences of the siRNAs are listed in Table S3. As a negative control, DsiRNA (Integrated DNA Technologies) was used.
Quantitation of DIRC2 mRNA by real-time PCR
Total RNA samples from various human tissues (BioChain Institute) and those from human cell lines, which were isolated by a guanidine isothiocyanate extraction method (Chomczynski & Sacchi, 1987), were used to obtain cDNA using the ReverTra Ace reverse transcriptase. Real-time quantitative PCR was done using a Luna Universal qPCR Master Mix (New England Biolabs) on a CFX Connect Real-Time PCR Detection System (Bio-Rad Laboratories) with gene-specific primers (Table S4). The mRNA expression levels were normalized to that of ubiquitin C for tissues and GAPDH for cell lines.
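The normalization step above can be made concrete with a small numerical sketch. The paper does not state the exact quantification formula, so the widely used 2^-ΔCt relative-expression method is assumed here; the Ct values and sample identities below are invented for illustration.

```python
# Hypothetical sketch of reference-gene normalization for qPCR data.
# The 2^-dCt method is an assumption, not necessarily what was used here;
# it presumes ~100% amplification efficiency (2-fold per cycle).

def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Expression of the target gene relative to a reference gene."""
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)

# Example: a DIRC2 Ct versus a GAPDH Ct in a cell line (made-up numbers).
print(relative_expression(ct_target=27.3, ct_reference=18.1))
```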
Isolation of lysosomal fraction
The lysosomal fraction was isolated by a method reported previously (Harikumar et al, 1989). Briefly, Caco-2 cells cultured in six-well plates were collected and homogenized in ice-cold homogenization solution containing 50 mM mannitol and 20 mM Hepes (pH 7.4). The homogenized sample was centrifuged at 1,500g for 10 min to remove cellular components such as nuclei, cytoskeleton, and mitochondria. The supernatant was centrifuged at 24,000g for 30 min to obtain the lysosomal fraction as the precipitate. All procedures were performed at 4°C.
Western blot analysis
Western blot analysis was done to examine protein expression in transiently transfected COS-7 and HEK293 cells cultured in 24-well coated plates. To obtain the total cell lysate, the cells were lysed with ice-cold lysis buffer 1 (50 mM Tris-HCl, 1% SDS, 4 M urea, 1 mM EDTA, 150 mM NaCl, pH 8.0). To collect the plasma membrane fraction, transiently transfected COS-7 cells were washed twice with ice-cold Hanks' solution, overlaid with 10 ml of ice-cold Hanks' solution containing 2.5 mg of Biotin-SS-Sulfo-Osu (Dojindo), and shaken on ice for 30 min. The solution was removed, ice-cold quench buffer (50 mM Tris-HCl, 0.1 mM EDTA, 150 mM NaCl, pH 8.0) was added to the cells, and the sample was shaken on ice for 10 min. The buffer was removed, and the cells were washed once with ice-cold Hanks' solution. The cells were solubilized with 1 ml of lysis buffer 2 (1% Triton X-100, 25 mM Tris-HCl, 100 mM NaCl, pH 8.0) and gently scraped with a scraper. The whole lysate was collected in a tube, ultrasonicated, and centrifuged at 16,000g for 15 min at 4°C. The supernatant was transferred to a new tube, Avidin-Agarose from egg white (Sigma-Aldrich) was added, and the mixture was mixed by rotation overnight at 4°C. The sample was centrifuged at 220g for 3 min at 4°C, the supernatant was removed, and the residue was resuspended in 0.5 ml of lysis buffer 2. This step was repeated twice more, and the supernatant was completely removed following a final centrifugation at 220g for 3 min at 4°C. Finally, lysis buffer 1 was added to obtain a solubilized sample representing the plasma membrane fraction.
For examination of LAMP1 and histone H3 in Caco-2 cells cultured in six-well coated plates and treated for DIRC2 knockdown, the cells were processed to obtain the total cell lysate similarly or to isolate the lysosomal fraction.
The total cell lysates, solubilized samples representing the plasma membrane fraction, and the lysosomal fraction were processed for protein detection as described previously (Furumiya et al, 2015). The primary antibodies for the detection of tagged transporters were mouse monoclonal anti-FLAG antibody (FUJIFILM Wako Pure Chemical), mouse monoclonal anti-HA antibody (Medical & Biological Laboratories), and mouse monoclonal anti-GFP antibody (Proteintech). Mouse monoclonal anti-β-actin antibody (Proteintech) and rabbit polyclonal anti-ATP1A1 antibody (Proteintech) were used as the primary antibodies for the detection of β-actin and ATP1A1, respectively, as loading controls. The primary antibodies for the detection of LAMP1 and histone H3 were rabbit polyclonal anti-LAMP1 antibody (Proteintech) and rabbit monoclonal anti-histone H3 antibody (Cell Signaling Technology), respectively. The primary antibodies were all used at a dilution of 1:1,000. The secondary antibodies, which were conjugated to horseradish peroxidase, were goat anti-mouse IgG antibody (Jackson ImmunoResearch) for the mouse-derived primary antibodies and goat anti-rabbit IgG antibody (Jackson ImmunoResearch) for the rabbit-derived primary antibodies. Both were used at a dilution of 1:10,000. Following signal development using Luminata Forte Western HRP Substrate (Merck Millipore), the protein levels were determined by enhanced chemiluminescence using a ChemiDoc Touch imaging system (Bio-Rad Laboratories).
Immunofluorescence staining
Transiently transfected COS-7 and HEK293 cells cultured in 35-mm glass-bottom dishes were washed twice with PBS and incubated for 20 min at −20°C with methanol. After washing three times with PBS, the cells were incubated for 1 h at room temperature with 1 mg/ml BSA in PBS. After removing the BSA solution, the cells were incubated for 2 h at room temperature in PBS with the required primary antibodies at a dilution of 1:500. The primary antibodies were mouse monoclonal anti-FLAG antibody, rabbit polyclonal anti-HA antibody (Proteintech), and rabbit polyclonal anti-ATP1A1 antibody, for the detection of FLAG-tagged transporters, HA-DIRC2, and ATP1A1, respectively. The cells were washed three times with PBS. Then, the cells were incubated for 1 h at room temperature in PBS with goat polyclonal anti-mouse IgG antibody coupled to Alexa Fluor Plus 488 (Invitrogen) and goat polyclonal anti-rabbit IgG antibody coupled to Alexa Fluor 594 (Jackson ImmunoResearch) at a dilution of 1:500 and visualized using a confocal laser-scanning microscope (LSM510-META; Zeiss). Caco-2 cells transiently transfected with HA-DIRC2 and cultured in 35-mm glass-bottom dishes were similarly treated, using rabbit polyclonal anti-HA antibody and mouse monoclonal anti-ATP1A1 antibody (Abcam) as the primary antibodies for detection.
Uptake study
Uptake assays were done as described previously (Mimura et al, 2017), using transiently transfected COS-7 and HEK293 cells cultured in 24-well coated plates. Briefly, uptake solutions were prepared using Hanks' solution supplemented with 10 mM MES (pH 6.5 and below) or 10 mM Hepes (pH 7.0 and above), to which [3H]pyridoxine was added as the substrate. The cells were preincubated for 5 min in 1 ml of substrate-free uptake solution. Uptake assays were initiated by replacing the substrate-free uptake solution with the one containing [3H]pyridoxine (0.25 ml). All procedures were conducted at 37°C. In experiments involving intracellular acidification by protonophores, carbonyl cyanide p-trifluoromethoxyphenylhydrazone (FCCP) and carbonyl cyanide m-chlorophenylhydrazone (CCCP) were added to the solutions for preincubation and uptake. To examine the effect of ionic conditions, NaCl was replaced as indicated. For inhibition experiments, test compounds were added to the solution only during the uptake period. After termination of [3H]pyridoxine uptake into the cells, the cells were solubilized and the associated radioactivity was determined by liquid scintillation counting. Uptake was normalized to cellular protein content, which was determined by the bicinchoninic acid method (Protein Assay BCA Kit; FUJIFILM Wako Pure Chemical) using BSA as a standard. Uptake assays were similarly conducted using Caco-2 cells cultured in 24-well coated plates.
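As a worked example of the normalization just described, the sketch below converts measured radioactivity into pyridoxine uptake per mg of protein using the stated specific activity of 20 Ci/mmol. The dpm and protein values are invented, and any isotopic dilution of [3H]pyridoxine with unlabeled pyridoxine (which would lower the effective specific activity) is ignored.

```python
# Hypothetical arithmetic for expressing uptake as pmol per mg protein.
# Assumption: specific activity of 20 Ci/mmol as stated in Materials.

DPM_PER_CI = 2.22e12  # disintegrations per minute in one curie

def uptake_pmol_per_mg(dpm: float, protein_mg: float,
                       specific_activity_ci_per_mmol: float = 20.0) -> float:
    # dpm per pmol = Ci/mmol * dpm/Ci / (1e9 pmol per mmol)
    dpm_per_pmol = specific_activity_ci_per_mmol * DPM_PER_CI / 1e9
    return dpm / dpm_per_pmol / protein_mg

print(uptake_pmol_per_mg(dpm=8_880.0, protein_mg=0.05))  # -> 4.0 pmol/mg
```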
To examine lysosomal accumulation of pyridoxine in Caco-2 cells, uptake assays were conducted using cells cultured in six-well coated plates, 4 ml of substrate-free uptake solution for preincubation, and 1 ml of uptake solution containing [3H]pyridoxine. After termination of [3H]pyridoxine uptake into the cells, the lysosomal fraction was isolated for determination of the associated radioactivity and protein content.
Data analysis
The saturable uptake of pyridoxine by each transporter was analyzed using the Michaelis-Menten model equation: v = Vmax × s/(Km + s). Vmax and Km were estimated by fitting this equation to the experimental profile of the uptake rate (v) versus concentration (s) of the substrate (pyridoxine) using a nonlinear least-squares regression analysis program, WinNonlin (Certara).
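A minimal sketch of this fitting procedure, using scipy.optimize.curve_fit as a stand-in for WinNonlin, is given below; the substrate concentrations and uptake rates are illustrative, not data from this study.

```python
# Nonlinear least-squares fit of the Michaelis-Menten equation.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.5, 1, 2, 5, 10, 20, 50])           # substrate conc. (made up)
v = np.array([0.9, 1.6, 2.6, 4.2, 5.3, 6.1, 6.7])  # uptake rate (made up)

(vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[7.0, 5.0])
print(f"Vmax = {vmax:.2f}, Km = {km:.2f}")
```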
Unless otherwise indicated, the data are presented as the mean ± SD with the number of experiments conducted using different preparations of cells. Each experiment was conducted in duplicate as biological replicates. Statistical analysis was performed using a t test or when multiple comparisons were needed, ANOVA followed by Dunnett's test or Bonferroni test. A level of P < 0.05 was considered statistically significant.
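The following is a hedged sketch of this statistical workflow in Python, as an alternative to whatever software was actually used: a one-way ANOVA followed by Dunnett's test against the control. It requires SciPy 1.11 or later for stats.dunnett, and the group values are made up.

```python
# One-way ANOVA, then Dunnett's test comparing each group to the control.
import numpy as np
from scipy import stats

control = np.array([1.0, 1.1, 0.9])
group_a = np.array([1.8, 2.1, 1.9])
group_b = np.array([1.2, 1.0, 1.3])

f_stat, p_anova = stats.f_oneway(control, group_a, group_b)
if p_anova < 0.05:
    res = stats.dunnett(group_a, group_b, control=control)
    print(res.pvalue)  # one adjusted p-value per treated group
```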
Data Availability
All data are contained within the article.
"year": 2022,
"sha1": "cb354578572fc69f7d2fda476577b89b47756085",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "187aa4ea67ff7a61ebb868f364eb6ef765a0a31b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Safety of preoperative carvedilol in a patient with recent atenolol-induced pheochromocytoma crisis and cardiomyopathy: A case report
Introduction Beta-adrenergic blockade without adequate alpha blockade is an established trigger of pheochromocytoma crisis (PC). Carvedilol is a nonselective beta-adrenergic and alpha-1-adrenergic blocking agent, and its use for preoperative preparation of pheochromocytoma patients with prior cardiomyopathy secondary to PC resulting from unopposed beta-blocker therapy has never been reported. Case presentation A 48-year-old woman was admitted to the Urology Department for evaluation of a huge right upper abdominal mass. She developed hypertensive crisis with acute pulmonary edema resulting in respiratory failure after administration of atenolol to treat hypertension and tachycardia. Transthoracic echocardiography revealed global hypokinesia. The patient was managed with intravenous nicardipine, furosemide, and prazosin because of the clinical suspicion of pheochromocytoma, which was subsequently confirmed by elevated plasma and urine catecholamine levels. Within 3 days of alpha-adrenergic blocker treatment, there was rapid amelioration of hypertension and pulmonary congestion, as well as normalization of left ventricular function by echocardiography. However, tachycardia persisted after 1 month of adequate alpha-adrenergic blockade. Given the benefit of beta-adrenergic blockers in patients with systolic dysfunction, we slowly titrated carvedilol while carefully monitoring the patient's condition in the intensive care unit. Tachycardia was controlled without inducing PC. Surgical resection was successful without perioperative complications. Conclusion Clinicians should be cautious when prescribing a beta-adrenergic blocker in patients with hypertension and an upper quadrant mass of unknown etiology. The mass may be a pheochromocytoma. Preoperative use of carvedilol after sufficient alpha-adrenergic blockade for control of tachycardia in a patient with prior cardiomyopathy associated with atenolol-induced PC is safe and effective.
Introduction
The classic symptoms of pheochromocytoma are paroxysmal headache, diaphoresis, and palpitations. However, the clinical presentations of these tumors are extremely variable, ranging from an absence of symptoms to hemodynamic instability and end-organ dysfunction to pheochromocytoma crisis (PC), which has a significant mortality rate. PC can manifest spontaneously or be triggered by certain factors, including surgical procedures, general anesthesia, and beta-adrenergic blockade without prior adequate alpha-adrenergic blockade [1].
Transient cardiomyopathy secondary to hypertensive crisis after beta-blocker therapy is a rarely reported presentation of pheochromocytoma [2]. Furthermore, using carvedilol, a nonselective beta-blocker and selective alpha-1 blocker, to treat tachycardia in a patient who had previously presented with hypertensive crisis after beta-blocker therapy has never been reported. We present a pheochromocytoma patient who presented with transient left ventricular dysfunction secondary to hypertensive crisis after receiving a single dose of atenolol. After the hypertensive crisis had been successfully treated by alpha-adrenergic blockade, carvedilol was administered to control the patient's heart rate (HR) before tumor resection. The operation was performed successfully without any complications. This study has been reported in line with the SCARE 2018 criteria [3].
Case report
A 48-year-old Thai woman, a farmer, complained of epigastric pain lasting 4 years. She was treated with proton pump inhibitors without further abdominal imaging, but her symptoms progressively worsened. Her past medical history revealed that she had completed treatment for pulmonary tuberculosis 12 years earlier. She was a non-smoker and had no other medical history. A month earlier, she had been admitted to another hospital for severe headache and forceful heartbeat. Her blood pressure (BP) was 224/190 mmHg and HR was 120 beats/min. On physical examination, her cardiopulmonary and neurological systems were normal. She was treated with amlodipine 10 mg/day and hydralazine 75 mg/day. The symptoms improved and the systolic BP decreased to 160-170 mmHg. An abdominal computed tomography (CT) scan was performed to evaluate the chronic epigastric pain. The CT scan showed a large heterogeneously enhancing mass (9.9 × 12.7 × 14.7 cm) with central necrosis in the right upper abdomen. The differential diagnosis included renal or suprarenal tumors (Fig. 1). Therefore, antihypertensive agents were continued and she was referred to the Department of Urology for further management.
On this admission, physical examination revealed a BP of 170/100 mmHg and an HR of 120 beats/min with regular rhythm. She appeared well, with mildly pale conjunctivae and no peripheral edema. The point of maximal impulse was at the 5th intercostal space in the left midclavicular line. There was no heaving or thrill, and S1 and S2 were normal. On bimanual palpation, no abdominal mass was detected. Initial blood laboratory test results indicated mild anemia and hyponatremia (Table 1). The cardiothoracic ratio on the chest radiograph was 0.55 (Fig. 2A). Electrocardiogram (ECG) showed sinus tachycardia, left ventricular hypertrophy, and a normal axis.
Twelve hours after admission, she complained of palpitations. Her BP and HR were 160/90 mmHg and 120 beats/min, respectively. She was treated with 50 mg of atenolol. Twelve hours after receiving this single dose of atenolol, she presented with sudden dyspnea and agitation. Her BP rose to 220/120 mmHg, HR was 150 beats/min, respiratory rate was 40 breaths/min, and oxygen saturation was 85% on ambient air. High jugular venous pressure was observed, and she had bilateral lung crepitations. An urgent chest radiograph (Fig. 2B) showed new bilateral patchy infiltrates compatible with acute pulmonary edema. Urgent ECG revealed sinus tachycardia without definite ST-T changes. Therefore, she was diagnosed with hypertensive crisis and acute pulmonary edema. Intubation and positive pressure ventilation were initiated immediately. She also received a total dose of nicardipine 30 mg and furosemide 160 mg intravenously.
Transthoracic echocardiography showed global hypokinesia of the left ventricular wall, and the overall estimated left ventricular ejection fraction was 36%. There was no significant valvular disease. She had elevated cardiac markers (Table 1). Based on the onset of hypertensive crisis after a single dose of atenolol and the huge right upper quadrant mass, a diagnosis of pheochromocytoma was suspected. Therefore, an alpha-adrenergic antagonist (prazosin) was administered and titrated until the BP normalized, approximately 12 hours after the onset of hypertensive crisis. Seventy-two hours later, her clinical status had improved dramatically. She was extubated and able to breathe without support. She received prazosin 6 mg/day and furosemide 40 mg/day, and her BP was 130/80 mmHg with an HR of 86 beats/min. Intravenous nicardipine was tapered and eventually discontinued. Repeat echocardiography 72 hours after the hypertensive crisis showed complete resolution of the wall-motion abnormality and a normal ejection fraction of 63%.
The biochemical markers of pheochromocytoma were significantly elevated, except for urine metanephrine (Table 1). Iodine-131 meta-iodobenzylguanidine (I-131 MIBG) scintigraphy revealed a sizable, intense, heterogeneous uptake in the right upper abdomen. Single-photon emission CT images also showed inhomogeneous tracer uptake in the right retroperitoneal mass. This was suggestive of pheochromocytoma without evidence of distant metastases. We switched from prazosin to doxazosin for its more favorable pharmacokinetics (a longer half-life permitting once-daily dosing).
One week after doxazosin administration, the BP was well controlled at 120/80 mmHg with doxazosin 8 mg/day. The patient had neither orthostatic hypotension nor paroxysmal symptoms. However, she had sinus tachycardia (100 beats/min). Thus, carvedilol was added to control her HR. We slowly titrated the dose of carvedilol and closely monitored her symptoms in the intensive care unit. Two weeks later, the carvedilol dose reached 25 mg/day, and her HR was 60-70 beats/min. The patient had no signs or symptoms of heart failure and was ready to undergo surgery by an experienced urological surgeon.
During open right adrenalectomy, the intraoperative BP varied between 120/80 and 160/90 mmHg and was controlled with nitroprusside. She had no congestive symptoms or hypoglycemia. The operative findings revealed a large right adrenal mass (Fig. 3). The left adrenal gland appeared normal. No extra-adrenal nodules were seen. The pathological report of the right adrenal gland revealed a mass weighing 819 g. Focal hemorrhage and geographic necrosis were noted without evidence of extra-adrenal extension or positive margins. Furthermore, the mass stained positive for chromogranin, compatible with pheochromocytoma. The immediate postoperative BP was 110/80 mmHg without antihypertensive agents. Seventy-two hours postoperatively, the BP rose to 150/90 mmHg without any symptoms, and she received doxazosin 2 mg/day. Subsequently, the urinary vanillylmandelic acid (VMA) level returned to normal within 4 weeks after the tumor resection. Moreover, follow-up I-131 MIBG scintigraphy revealed no I-131-avid lesions. Genetic testing for pheochromocytoma susceptibility genes, including VHL, RET, and SDH, was negative. She has remained healthy since surgery. To date, the patient has been treated with doxazosin 1 mg/day and has not presented with any spells or congestive symptoms. She has been followed up at the endocrinology clinic for three years and has normal urine metanephrine and normetanephrine results.
Discussion
Pheochromocytoma is a catecholamine-producing tumor of the adrenal medulla with an annual incidence of 2-8 per million [4]. Clinical manifestations of pheochromocytoma are highly variable, ranging from asymptomatic forms to life-threatening cardiovascular events. This tumor can mimic other disorders because of its gastrointestinal, neurological, and metabolic manifestations [2,5]. Because pheochromocytoma can lead to hypertensive crisis and is curable by surgical resection, it is important to maintain clinical suspicion when faced with a potential case, and to confirm the diagnosis and resect the tumor promptly. Catecholamines affect many cardiovascular and metabolic processes by activation of three types of adrenergic (α, β, and dopamine) receptors.
The α1-adrenoreceptors mediate vascular smooth muscle contraction; their activation causes vasoconstriction and increased BP. The vasodilatation in skeletal muscle from stimulation of β2-adrenoreceptors is important because it protects against catecholamine excess. If this protection is blocked, unopposed α1-adrenoreceptor stimulation may lead to a hypertensive crisis [1,6,7]. The present case illustrates the onset of hypertensive crisis and pulmonary edema within a 12-h period after the administration of atenolol, a β1-antagonist, without previous administration of an α-adrenergic blocker. Furthermore, left ventricular dysfunction and elevated cardiac markers are consequences of acute catecholamine excess. Several mechanisms of acute myocardial damage associated with catecholamines have been proposed. One mechanism is that catecholamines have a direct toxic effect on the myocardium through enhanced lipid mobility, calcium overload, and free radical production [2]. Another is catecholamine-induced myocardial stunning [8] and coronary vasoconstriction leading to focal myocardial necrosis [9]. The left ventricular dysfunction patterns associated with pheochromocytoma are heterogeneous, including reversible or irreversible dilated cardiomyopathy [2], hypertrophic cardiomyopathy, Takotsubo, and atypical (inverted) Takotsubo cardiomyopathies [2,8,10]. Fortunately, most of these are reversible after treatment with alpha-adrenergic antagonists or complete excision of the tumor. Reportedly, it may take a few days to several months for complete resolution after treatment [2]. In this patient, the global left ventricular hypokinesia resolved to normal function after 72 h of alpha-adrenergic blockade.
The diagnosis of pheochromocytoma must be confirmed by the presence of high concentrations of fractionated catecholamines (metanephrine, normetanephrine, VMA) in urine or plasma. According to a literature review, there is a positive linear correlation between the level of 24-h urine VMA and the size of tumor mass [11] because the metabolism of catecholamines is primarily intratumoral and VMA is its final metabolite. This patient had high levels of 24-h urine VMA and slightly elevated 24-h urinary normetanephrine excretion corresponding to the remarkable tumor size.
Surgical removal is the gold standard of treatment for pheochromocytoma. There were approximately 140 cases of pheochromocytoma in our hospital during the past ten years. Every case was managed by a multidisciplinary team and operated on by experienced surgeons. Before the operation, alpha-adrenergic blockers should be administered to control BP and restore vascular volume. Beta blockers may be used as an alternative in patients unable to achieve the target BP with at least 3 days of treatment with alpha blockers [1,12]. Reportedly, patients with PC triggered by the administration of beta blockers alone, without alpha blockade, can be rechallenged with beta blockers after adequate alpha-adrenergic blockade to control the increased HR, which may result from either the high levels of circulating catecholamines or the alpha-adrenergic blockade therapy itself [13,14]. Further, no hypertensive crisis occurred during rechallenge with beta blockers. In general, combined alpha- and beta-adrenoreceptor antagonists (labetalol and carvedilol) are not recommended as the first choice for preoperative adrenergic blockade because they have a ratio of alpha to beta antagonist activity of approximately 1:5, which may precipitate hypertensive crisis through relatively unopposed alpha-adrenergic activity [1,12]. However, carvedilol has been proven to reduce the risk of death and hospitalization from cardiovascular causes in patients with heart failure [15]. Therefore, we prescribed the nonselective beta and selective alpha-1 blocker (carvedilol) to control HR after the hypertensive crisis had been controlled with an adequate dose of alpha-blocker therapy. There were no complications because the α1-adrenoreceptors were blocked completely before starting carvedilol. Postoperative BP is usually reduced to normal limits; however, our patient's BP remained elevated. Probable reasons include coincident essential hypertension, or long-standing hypertension with structural changes of blood vessels and resetting of baroreceptors.
Conclusion
Beta-adrenergic blockers should be used cautiously in patients with hypertension and an upper quadrant mass. The mass may be a pheochromocytoma, for which beta-blockers without appropriate alpha-receptor blockade are contraindicated. Preoperative use of carvedilol after sufficient alpha-adrenergic blockade for control of reflex tachycardia in a patient with prior cardiomyopathy associated with PC was safe and effective.
Data availability
The data that support the findings of this case report are available from the corresponding author on reasonable request.
Patient confidentiality
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Funding
This case report did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
Declaration of competing interest
All authors have no financial or non-financial conflicts of interest related to this report.
This case report was presented as a poster at the International Symposium on Pheochromocytoma and Paraganglioma 2014 in Japan.
Ethical approval
The case report is exempt from ethical approval in our institution.
Author contribution
Dr. Wannachalee conceptualized, drafted, reviewed and revised the manuscript. Dr. Chunharojrith contributed to the concept of the report, critical manuscript review and approval. All authors approved the final manuscript as submitted and agree to be accountable for all aspects of the work.
Registration of research studies
1. Name of the registry:
2. Unique Identifying number or registration ID:
3. Hyperlink to your specific registration:
"year": 2020,
"sha1": "458143f24cfa26e62cbe05d1ca7079996d0afdc7",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.amsu.2020.11.014",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0208c5fa841a59ba0f864f060df4e598ebae7e50",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Liver resection in hepatitis B-related hepatocellular carcinoma: clinical outcomes and safety in elderly patients
AIM
To compare the morbidity and mortality in young and elderly hepatocellular carcinoma (HCC) patients undergoing liver resection.
METHODS
We retrospectively enrolled 1543 consecutive hepatitis B (HBV)-related HCC patients undergoing elective hepatic resection in our cohort, including 207 elderly patients (≥ 65 years) and 1336 younger patients (< 65 years). Patient characteristics and clinical outcomes after liver resection were compared between the two groups.
RESULTS
Elderly patients had more preoperative comorbidities and lower alanine aminotransferase and aspartate aminotransferase levels. Positive rates for hepatitis B surface antigen (P < 0.001), hepatitis B e antigen (P < 0.001) and HBV DNA (P = 0.017) were higher in younger patients. Overall complications and their severity classified using the Clavien system were similar in the two groups (33.3% vs 29.6%, P = 0.271). Elderly patients had a higher rate of postoperative cardiovascular complications (3.9% vs 0.6%, P = 0.001), neurological complications (2.9% vs 0.4%, P < 0.001) and mortality (3.4% vs 1.2%, P = 0.035), and had a longer hospital stay (13 d vs 12 d, P < 0.001) and more frequent intensive care unit admission (36.7% vs 27.8%, P = 0.008) compared with younger patients. However, postoperative hepatic insufficiency was more common in the younger group (7.7% vs 3.4%, P = 0.024).
CONCLUSION
Hepatectomy can be safely performed in elderly patients. Age should not be regarded as a contraindication to liver resection, although higher complication and mortality rates should be expected.
INTRODUCTION
Hepatocellular carcinoma (HCC) is the fifth most common cancer and the third most common global cause of cancer-related deaths [1]. Fifty to fifty-five percent of HCC cases worldwide are attributed to chronic hepatitis B virus infection, and up to 80% in China [2]. There will be more elderly patients as people live longer, especially in China, which has the world's largest population. Moreover, aging itself is a risk factor for HCC carcinogenesis and development [3]. Therefore, the number of elderly HCC patients will increase [3], which may also create social problems. The elderly tend to be considered clinically 'fragile' because of comorbidity and a poorer performance status, which make them less amenable to and tolerant of resection [4]. Liver resection is the treatment of choice in HCC patients; however, elderly HCC patients with comorbidity may have an increased surgical risk and may have higher morbidity and mortality [5]. These age-related contraindications have prevented elderly HCC patients from receiving optimal surgical treatment. With the refinement of surgical techniques and perioperative management in liver surgery during the last few decades, liver resection in elderly HCC patients has become safer [3].
Many studies have reported the safety of liver resection in elderly HCC patients, but have drawn inconsistent conclusions. The aim of our study was to evaluate the safety of liver resection in a large sample of elderly HCC patients, by comparing the outcome of liver resection performed in patients younger and older than 65 years.
Population and study design
We carried out a retrospective study. Between January 2009 and March 2013, 1543 consecutive hepatitis B virus (HBV)-related HCC patients undergoing elective hepatic resection were included in this study. All included patients were diagnosed with HCC by histology and had current or a history of HBV infection. All patients underwent surgery only when the Child-Turcotte-Pugh (CTP) class was A. Patient data on pre-, intra-, and postoperative parameters were collected prospectively from the West China Hospital of Sichuan University HCC database (HCCWCHSU System). The protocol was approved by the West China Hospital Ethics Committee and written informed consent was obtained from all patients before inclusion. Based on the age distribution, the patients were divided into the elderly group (≥ 65 years) and the younger group (< 65 years). The primary outcomes were perioperative mortality and postoperative complications in the elderly group compared with the younger group.
Perioperative management
All the included patients were managed by the same surgical team. All patients underwent a thorough history enquiry, physical examination and routine preoperative laboratory measurements. Echocardiography, chest radiography or computed tomography, pulmonary function tests and coronary angiography were carried out if necessary. Routine preoperative imaging examinations to evaluate the tumor included contrast-enhanced computed tomography or magnetic resonance imaging of the abdomen. The American Society of Anesthesiologists (ASA) category was used for anesthetic assessment. Patients were explored through an extended right subcostal incision and intraoperative ultrasonography was performed routinely. Hemihepatic vascular inflow occlusion [6] or the Pringle maneuver [7] was used according to the surgeon's preference in most patients. Liver parenchymal transection was performed using the hooking ligation technique or an ultrasonic dissector with coagulator [6]. Based on preoperative and intraoperative conditions, patients were transferred to the intensive care unit for treatment if necessary.
Definition of the parameters used
Mortality was defined as death within 30 d after surgery or death before discharge involving a hospital stay of more than 30 d. The Clavien-Dindo classification system [8] was used to grade postoperative complications. Liver resection of 3 or more segments was defined as major resection, and resection of fewer than 3 segments was defined as minor resection [5]. Portal hypertension was defined as esophageal varices detected by endoscopy or splenomegaly (major diameter > 12 cm) with a platelet count < 100,000/mm³ according to the Barcelona Clinic Liver Cancer Group criteria [9]. For individual pre-existing diseases, we used the Charlson index [10,11] to quantify comorbidities. Liver failure was defined by the 50-50 criteria [12]: prothrombin time < 50% and serum bilirubin level > 50 µmol/L on day 5 after liver resection. Hepatic insufficiency was defined as serum bilirubin > 60 µmol/L on postoperative day 5. Extrahepatic procedures included all operations other than liver resection, such as bowel resection, adrenalectomy, diaphragm resection, biliary tract exploration and adhesion separation due to reoperation.
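Because these outcome definitions are simple threshold rules, they are easy to encode; the sketch below is a hypothetical illustration only (the function and variable names are invented, and the units follow the definitions above).

```python
# Hypothetical encoding of two rule-based outcome definitions given above.

def liver_failure_50_50(prothrombin_pct: float, bilirubin_umol_l: float) -> bool:
    """'50-50 criteria' on postoperative day 5."""
    return prothrombin_pct < 50.0 and bilirubin_umol_l > 50.0

def hepatic_insufficiency(bilirubin_umol_l: float) -> bool:
    """Serum bilirubin > 60 umol/L on postoperative day 5."""
    return bilirubin_umol_l > 60.0

print(liver_failure_50_50(prothrombin_pct=42.0, bilirubin_umol_l=55.0))  # True
print(hepatic_insufficiency(bilirubin_umol_l=58.0))                      # False
```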
Statistical analysis
Statistical analysis was performed using SPSS Version 17 statistical analysis software and significance was set at P < 0.05. The Student t and Mann-Whitney U tests were used to compare continuous variables when appropriate.
The χ² test and Fisher's exact test were used to compare categorical variables.
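A minimal sketch of these comparisons using open-source tools (SciPy rather than SPSS) is shown below; all numbers are illustrative, not the study's data.

```python
# Group comparisons: continuous variables by t test / Mann-Whitney U,
# categorical variables by chi-square / Fisher's exact test.
import numpy as np
from scipy import stats

elderly = np.array([13, 15, 12, 14, 16])   # e.g., hospital stay in days
younger = np.array([12, 11, 13, 12, 11])

t, p_t = stats.ttest_ind(elderly, younger)        # normally distributed data
u, p_u = stats.mannwhitneyu(elderly, younger)     # non-normal data

# 2x2 table, e.g., deaths vs survivors in each age group (made-up counts)
table = np.array([[7, 200], [16, 1320]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)        # for small expected counts
print(p_t, p_u, p_chi2, p_fisher)
```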
Patient clinical characteristics
Clinical characteristics of the elderly and younger groups are shown in Table 1. All 1543 patients were diagnosed with HBV-related HCC. Of these patients, 13.4% were elderly, with a median age of 68 years (interquartile range: 66-73 years), and the median age of the younger group was 47 years (interquartile range: 40-56 years). A similar distribution in gender and portal hypertension was seen in both groups. No significant differences were found for platelets, white blood cells and body mass index.
With regard to comorbidities (Table 2), 122 of 207 (58.9%) elderly patients had more than one comorbidity, compared with 326 of 1336 (24.4%) in the younger group (P < 0.001). In addition, elderly patients had a significantly higher Charlson index, a higher proportion of Charlson index > 3 (17.9% vs 8.5%, P < 0.001) and of ASA grade Ⅲ-Ⅵ (64.3% vs 7.1%, P < 0.001). In the elderly group, the most common comorbid conditions were hypertension, diabetes mellitus, pulmonary disease, cardiovascular disease and renal-related disease, all of which, with the exception of renal-related disease, were significantly more common than in the younger patients.
Intraoperative data
The same proportion (37.2%) of patients undergoing major liver resection was found in both groups (Table 3). For the elderly and younger groups, respectively, 38 (18.4%) and 266 (19.9%) patients underwent a simultaneous non-hepatic procedure, most commonly adhesiolysis, portal vein tumor thrombus resection, biliary tract exploration, diaphragm resection, splenectomy and bowel resection. There were no significant differences between the two groups with regard to the parameters analyzed (Table 3). Information on the Ishak score was available in only 713 patients and there was no significant difference in the rate of cirrhosis (Ishak score ≥ 5) between the elderly and younger groups (P = 0.404).
Postoperative outcome
Postoperative complications and their severity are shown in Table 4. The elderly group had similar morbidity and levels of complications (from grade Ⅰ to Ⅴ) to those in the younger group. However, the elderly patients had a higher mortality than the younger group (3.4% vs 1.2%, P = 0.035). Cardiovascular and neurological complications were more frequent in the elderly patients (P = 0.001). In addition, the incidence of hepatic insufficiency was higher in younger patients (P = 0.024). The most common complications in elderly patients were pulmonary complications, followed by infectious complications, cardiovascular disease, hepatic insufficiency, ascites, and neurological complications.

DISCUSSION

The incidence rate of HCC among the elderly is progressively increasing [3]; however, only a minority undergo curative procedures [13]. Historically, there were biases against cancer treatment for the elderly because the life expectancy of elderly patients is determined by medical comorbidities rather than malignancy [14]. Moreover, aging leads to a number of structural and functional changes in the liver, including a decline in liver volume, a reduction in the mass of functional hepatocytes, and alterations in hepatic microcirculation, which may make liver resection less tolerable [15]. With the refinement of surgical techniques and perioperative management in liver surgery during the last few decades, some studies have suggested that liver resection is a safe and effective treatment in elderly patients, even for those over 80 years old [16]. However, there is still controversy as to whether age influences the postoperative outcome of HCC patients. Some studies reported no significant differences in morbidity and mortality; however, other studies [20-22] found that elderly patients had more complications.

The present study assessed the safety of liver resection in a large sample of elderly patients enrolled in a retrospective cohort. Our elderly and younger HCC patients differed with regard to several features. Elderly patients had a higher rate of comorbidity, lower AST and ALT levels, and lower positive rates for HBsAg, HBeAg and HBV DNA, but higher rates of cardiovascular and neurological complications. Overall morbidity was similar in the two groups, but elderly patients had higher mortality, a longer hospital stay, and more intensive care unit (ICU) stay requirement.

The cut-off age for elderly HCC patients varies widely in the literature from 65 to 80 years [16,23-25]. However, most studies [13,17,20,21,24,26-28] used 70 years as the cut-off age, and these studies included HCC due to various etiologies, such as HBV, hepatitis C virus (HCV) and nonalcoholic fatty liver disease. The average age at onset of HBV-related HCC was reported to be 10 years younger than that of HCV-related HCC [3]. In our cohort, HCC was related to HBV infection and we defined elderly patients as those aged more than 65 years. This was because the mean age at HCC diagnosis was found to be 55-59 years in China [29] and in South Korea [2], whereas the mean age at HCC diagnosis is 63-65 years in Europe and North America and 75 or older in low-risk populations [29]. Thus, 65 years is a more suitable cut-off for elderly patients in China.
Based on the cut-off age of 65 years, 13.4% of the patients in our cohort were elderly, which was less than in other studies [13,14,17,20,23,24]. This may be because elderly patients were highly selected for liver resection based on preoperative general condition and assessment of hepatic reserve in our center.
In agreement with previous reports [17,20] , elderly patients showed a higher rate of comorbidities, ASA grade ≥ Ⅲ and higher quantitative comorbidity (Charlson index). The higher prevalence of hypertension and cardiovascular disease may be the reason for the higher rate of postoperative cardiovascular complications in elderly patients. The function of most organs usually deteriorates with age [16] and this could explain why elderly patients had more neurological complications. Cho et al [17] reported that confusion after liver resection was far more common in the elderly than in younger patients.
Although both elderly and younger patients had preserved liver function with CTP class A, the AST and ALT levels were significantly higher in younger patients. Approximately 80% of HBV-related HCC cases occur in patients with cirrhosis [1], and cirrhosis severity influences liver function and postoperative complications. The rate of cirrhosis was not different between the two groups in our cohort. We compared the rate of portal hypertension, which reflects the underlying liver damage, but no difference was observed between the two groups. The positive rates of HBsAg, HBeAg and HBV DNA were significantly lower in the elderly group, which means that the younger patients had worse underlying liver damage resulting from HBV infection. This study revealed an age-related difference in HBV infection status (current or previous infection), and more elderly patients had a history of HBV infection. This difference may also indicate that inflammation of the liver due to HBV infection was less active in elderly patients, as younger patients had higher AST and ALT levels. The same phenomenon was observed in the study by Oishi et al [30], in which elderly patients > 75 years with HCC had better liver function than younger patients as assessed by prothrombin time, AST and ALT. Several studies have also found better preoperative liver function in elderly patients [23,24,30]. In addition, postoperative hepatic insufficiency was found to be more common in younger patients, and Yau et al [31] also found that young patients had a significantly higher rate of liver derangement after TACE than elderly patients. Several reasons could explain this result. Firstly, the preoperative AST and ALT levels and HBV infection status may influence postoperative liver function. Secondly, HCC is less frequently associated with cirrhosis in elderly patients [32]. It is possible that patients with cirrhosis and HCC died before reaching elderly status and the surviving patients had well-preserved hepatic function [32]. In addition, this result may be due to elderly patients being highly selected for liver resection based on the assessment of hepatic reserve in our center. Thus, considering postoperative liver-related complications, age is not a contraindication to liver resection, although aging may lead to a number of structural and functional changes in the liver.

Compared with younger patients, overall complications and their severity, classified using the Clavien system, were similar in elderly patients, but these patients had higher mortality (3.4%). Despite the higher mortality in elderly patients, a mortality rate of 3.4% suggests that liver resection was relatively safe in the elderly, compared with a mortality of 3.15% in a meta-analysis that included 35,000 hepatic resections [33]. Many studies [13,14,17-19] have drawn the same conclusion in that there were no significant differences in postoperative complications. These data suggest that hepatic resections can be safely performed in elderly patients. However, elderly patients had a longer hospital stay and more ICU stay requirement. Therefore, although the elderly were not predisposed to postoperative complications, recovery in these patients may be slower due to less physiologic reserve compared with younger patients [14]. Thus, advanced age is not the major determinant of the incidence and severity of postoperative complications.
The results of our study should be interpreted cautiously, as our analysis was restricted to patients with HBV-related HCC and may not be generalizable to other etiologies. Moreover, it is important to point out that the elderly patients in our study were highly selected for surgical safety.
In conclusion, liver resection can be safely performed in carefully selected elderly patients. Although elderly patients had more cardiovascular and neurological complications, age should not be regarded as a contraindication to liver resection.

Background

There will be more elderly patients in the future and this may cause social problems, as elderly patients have more comorbidity and a poorer performance status. Hepatocellular carcinoma is a common cancer and usually occurs in older patients. The safety of liver resection in elderly patients is still a concern.

Research frontiers

Aging not only results in more numerous and severe comorbidities, but usually leads to a number of structural and functional changes in the liver, including a decline in liver volume, a reduction in the mass of functional hepatocytes, and alterations in hepatic microcirculation. These may make liver resection less tolerable. The research hotspot is to evaluate the safety of liver resection in elderly patients.

Innovations and breakthroughs

The authors' study found that elderly patients had more preoperative comorbidities compared with younger patients. However, elderly patients not only had better liver function but also had less active hepatitis B infection. Overall complications were similar in the two groups. However, elderly patients had more postoperative cardiovascular complications and higher mortality, and less hepatic insufficiency.

Applications

In general, hepatectomy can be safely performed in elderly patients, with expectedly higher complication and mortality rates. Aging should not be regarded as a contraindication to liver resection when surgeons make decisions before surgery.

Peer review

The authors present a series of 1543 liver resections in patients diagnosed with hepatitis B virus-related hepatocellular carcinoma. There were 1336 young patients and 207 elderly patients. It is a series collected over a period of four years. The article is well written and its conclusions are very interesting for the international literature.
"year": 2014,
"sha1": "301f43d5dc9dc1335d70d3d65e1f617b7842fad9",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v20.i21.6620",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a54d6daa5683658beb107025004675a5b0248285",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Modeling sea-level change using errors-in-variables integrated Gaussian processes
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The input data to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. These data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. The model we propose places a Gaussian process prior on the rate of sea-level change, which is then integrated and set in an errors-in-variables framework to take account of age uncertainty. The resulting model captures the continuous and dynamic evolution of sea-level change with full consideration of all sources of uncertainty. We demonstrate the performance of our model using two real (and previously published) example data sets. The global tide-gauge data set indicates that sea-level rise increased from a rate with a posterior mean of 1.13 mm/yr in 1880 AD (0.89 to 1.28 mm/yr 95% credible interval for the posterior mean) to a posterior mean rate of 1.92 mm/yr in 2009 AD (1.84 to 2.03 mm/yr 95% credible interval for the posterior mean). The proxy reconstruction from North Carolina (USA) after correction for land-level change shows the 2000 AD rate of rise to have a posterior mean of 2.44 mm/yr (1.91 to 3.01 mm/yr 95% credible interval). This is unprecedented in at least the last 2000 years.
Introduction
Sea-level rise poses a hazard to the intense concentrations of population and infrastructure that are increasingly located at the coast [Nicholls and Cazenave, 2010]. Effective mitigation and management of this hazard are reliant upon accurate estimation of historic, current, and future rates of sea-level rise. Data for estimating such rates generally arise in two forms: instrumental data (tide gauges and satellites) and proxy data (derived from organisms or chemical deposits). The former are generally more precise but span a much shorter time period, whilst the latter are much less precise but cover a much longer time interval. In this paper we use both data sources to estimate rates of sea-level change with thorough quantification of uncertainty.
The instrumental data we use provides a historic time series of fixed and known ages with estimated sea levels and associated measurement errors. Although there are now more than 1000 operational tide gauges worldwide [Jevrejeva et al., 2006], most were installed since the 1950s. Thus the global compilation relies on fewer gauges further back in time. The most widely used global tide-gauge compilation spans the period since 1880AD (Church and White, 2011; Figure 2A). Since late 1992AD satellite altimetry measurements have further provided a near global record of sea-level change [Nerem et al., 2010]. Church and White [2011] demonstrate that there is good agreement (within uncertainty bounds) between their global sea-level record based on tide gauges and satellite altimetry measurements over the period from 1993AD to 2009AD. Thus we use only the tide-gauge data as our instrumental record.
The proxy data provide a time series of sea level measurements going back hundreds to thousands of years. These data place modern rates of sea-level change in an appropriate context and characterize the long-term relationship between climate and sea level. In our case study we use proxy data that have been pre-processed from their raw form (counts of fossilised species living in the tidal range) into estimates of sea level. We do not explore the pre-processing in this paper; see Birks [1995], Horton and Edwards [2006], Juggins and Birks [2012] for a discussion of how this is done. The resulting processed data are comprised of sea-level estimates that are irregularly spaced in time and have uncertain ages in addition to sea-level uncertainties.
Instrumental and proxy reconstructions both estimate relative sea level (RSL), which is the product of simultaneous land- and ocean-level changes. In the absence of tectonics, land-level changes primarily arise from the ongoing, slow rebound of the solid Earth in response to deglaciation [Peltier, 2004], which is called glacio-isostatic adjustment (GIA). Regions that were under the thickest ice at the last glacial maximum (between 26,000 and 19,000 years ago) are experiencing uplift (RSL fall), while areas that were peripheral to the ice sheet are experiencing subsidence (RSL rise). To compare sea-level measurements or reconstructions from different locations and to isolate the climate-related component of RSL change it is necessary to estimate and remove the contribution from GIA [Engelhart et al., 2009]. Since GIA is a rate (mm/yr), it affects older sediments more than younger sediments. This has repercussions for our model as it introduces correlation between the individual age and sea-level reconstructions. We defer full discussion of this to Section 4.
To accurately estimate rates of sea-level change and reliably compare instrumental compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Previous studies used simple regression models (most commonly polynomial regression), resulting in overly precise rate estimates. We develop models to estimate instantaneous rates of sea-level change and account for all available sources of uncertainty in instrumental and proxy-reconstruction data. Our response variable is sea level after correction for GIA. Our models place a Gaussian process prior on the rate of sea-level change and model the data as the integral of this rate process. By embedding the integrated process in an errors-in-variables (EIV) framework (which takes account of time uncertainty), and removing the estimate of GIA, we quantify rates with better estimates of uncertainty than previously possible.
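To make the integrated Gaussian process (IGP) idea concrete, the sketch below draws a smooth rate process from a GP prior and integrates it to obtain a sea-level curve. This illustrates the prior construction only; the full EIV-IGP model, with errors-in-variables ages, GIA correction, and Bayesian inference, is not reproduced here, and the hyperparameter values are arbitrary.

```python
# Draw a rate process from a squared-exponential GP prior, then integrate
# it (trapezoid rule) to obtain a corresponding sea-level trajectory.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2000, 400)  # calendar years

def sq_exp_cov(t, sigma=1.0, ell=300.0):
    d = t[:, None] - t[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

K = sq_exp_cov(t) + 1e-8 * np.eye(t.size)            # jitter for stability
rate = rng.multivariate_normal(np.zeros(t.size), K)  # mm/yr, a GP draw

dt = np.diff(t)
sea_level = np.concatenate(
    [[0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * dt)]  # mm
)
```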
To demonstrate the application of these models we apply them to an example global tide-gauge dataset [Church and White, 2011]. Our analysis of this record indicates that the rate of sea-level rise has increased continuously since 1880AD and is currently 1.92mm/yr (95% credible interval of 1.84 to 2.03mm/yr). We also apply the model to a late Holocene proxy reconstruction from North Carolina [Kemp et al., 2011]. Such reconstructions are important for understanding the response of sea level to known climate variability such as the Medieval Climate Anomaly and the Little Ice Age [Mann et al., 2008]. Application of our model to the North Carolina proxy reconstruction indicates that the mean rate of rise in this locality since the middle of the 19th century (current rate of 2.44mm/yr with a 95% credible interval of 1.91 to 3.01mm/yr) is in agreement with results from the tide-gauge analysis and is unprecedented in at least the last 2000 years. The two examples show the importance and utility of the new models in estimating dynamic rates of sea-level change with full and formal consideration of the uncertainties that characterize instrumental and proxy datasets.
The remainder of this paper is structured as follows. Section 2 describes how the example datasets were produced. Section 3 discusses previous work in estimating rates of sea-level change, and in stochastic processes for rate estimation. In Section 4 we introduce our new Errors-In-Variables Integrated Gaussian Process (EIV-IGP) model. We fit our model to both a tide-gauge and proxy data set in Section 5 and discuss the results. Conclusions are presented in Section 6.
Sea-level Datasets
This section describes how the global tide-gauge record [Church and White, 2011] was compiled and how RSL in North Carolina was reconstructed using proxies preserved in salt-marsh sediment [Kemp et al., 2011]. The methods for data collection are specific to our case studies, but the resulting records are typical of available sea-level datasets.
Tide gauges
Monthly RSL averages for individual locations are held by the Permanent Service for Mean Sea Level [Woodworth and Player, 2003]. To reliably estimate rates and trends against a background of annual to decadal variability, analysis is commonly restricted to records with more than ∼ 60 years of data [Douglas et al., 2001]. Global sea level is estimated by spatially averaging tide-gauge records after individual records were corrected for GIA. The most commonly used dataset is that of Church and White [2011] which includes annual sea-level data between 1880AD and 2009AD from up to 235 individual locations. This dataset employed the spatial variability in sea level observed by satellites to interpolate between tide gauge locations and estimate global sea level.
Salt-marsh reconstructions
Salt marshes keep pace with sea-level rise by accumulating sediment [Morris et al., 2002]. As a result, modern salt marshes may be underlain by several meters of sediment, which is an archive of past sea-level changes. Cores are used to recover this material for analysis. The ages of discrete depths in the core are estimated using techniques such as radiocarbon dating to provide a history of sediment accumulation. Radiocarbon dates are calibrated into calendar ages and assimilated with other chronological constraints (e.g. pollution markers of known age) using an age-depth model.
For the North Carolina reconstruction, age errors for the RSL data were calculated using Bchron [Parnell et al., 2011], a Bayesian statistical age-depth model that estimates uncertain interpolated ages between radiocarbon dated levels. This tool is particularly useful in reconstructing RSL from a core of coastal sediment because most levels in the core were not directly dated. Bchron assumes that the calibrated radiocarbon ages arise as realisations of a Compound Poisson-Gamma (CPG) process, which enforces the rule of superposition. Bchron calibrates the radiocarbon (and non-radiocarbon) dates, estimates the parameters of the CPG and identifies outliers. Other age-depth models are available (see Parnell et al. [2011] for a review), but Bchron was designed specifically for use in palaeoenvironmental reconstructions [Parnell et al., 2012].
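For readers who want to reproduce this kind of chronology, the sketch below shows how an age-depth run might be set up with the Bchron R package. This is illustrative only and not the original analysis code: the data frame `core`, its column names, the calibration curve, and the prediction grid are all assumptions.

```r
library(Bchron)

# Hypothetical core data: dated depths with one-sigma errors (names are assumed)
run <- Bchronology(
  ages             = core$age,                    # 14C determinations
  ageSds           = core$age_sd,                 # dating uncertainties (1 sigma)
  positions        = core$depth,                  # depths of the dated levels
  calCurves        = rep("intcal20", nrow(core)), # one curve name per date
  predictPositions = seq(0, 200, by = 1)          # undated depths to interpolate
)
summary(run)  # marginal posterior ages for the predicted depths
```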
The age errors used for the North Carolina proxy reconstruction are the Bchron marginal means and standard deviations for each layer in the core that was used to reconstruct RSL, which we approximate as being normally distributed. This would be a poor assumption for individual calibrated radiocarbon dates that are skewed and multi-modal. However, the CPG produces slightly more regular ages, and the effect is further reduced when combined with our smoothing approach. For a justification of the use of this Gaussian assumption in smoothing methods see Parnell and Gehrels [2014]. We can envisage a superior model, where our smoothing approach is combined with an age-depth model and fitted simultaneously, but we do not explore such a model here, and view our modelling assumptions as providing a conservative estimate of uncertainty.
Core sediment contains the preserved remains of microorganisms such as foraminifera. The distribution of foraminifera is controlled by tidal elevation (i.e. sea level) because some species are more tolerant of submergence than others [Scott and Medioli, 1978]. The modern, observable relationship between counts of foraminifera and sea level provides an analogy for interpreting similar assemblages preserved in core material. This analogy is exploited to reconstruct relative sea level (e.g. Kemp et al., 2013) using a transfer function [Birks, 1995, Horton and Edwards, 2006, Juggins and Birks, 2012]. The calibration of counts of foraminifera into estimates of RSL (via these transfer functions) requires further statistical modeling techniques that we do not discuss here. The output of the transfer function includes a one-sigma uncertainty estimate in RSL, which we use as an input to our model. The validity of this approach has been demonstrated by comparison between reconstructions and instrumental measurements from nearby tide gauges [Kemp et al., 2009]. To extract climate-driven rates of sea-level rise, the reconstructions are corrected for GIA, which over the last 2000 years is assumed to be linear because of the slow response time of the solid Earth [Peltier, 2004].
Previous Work
This section reviews how rates of sea-level change were estimated from uncertain data in existing literature. We also review papers that describe and/or utilize the stochastic methods that we have employed in this paper.
Sea-level rise: rates and accelerations
The motivation for analyzing tide-gauge records and reconstructing sea level is to establish how unusual modern rates of sea-level rise are in comparison to longer term trends and for understanding the role of climate variability as a driver of sea-level change (e.g. Donnelly et al., 2004, Engelhart et al., 2009, Shennan and Horton, 2002). Comparisons of past and present rates are only complete and fair if all sources of uncertainty are accounted for. The global tide-gauge record is the primary source of historic and current sea-level data. The record includes sea-level uncertainty that is greater earlier in the record. The age of each annual sea-level observation is fixed and known (Figure 2A). Tide-gauge records are commonly analyzed using simple linear regression to estimate a rate of sea-level rise for the entire record or a shorter segment (e.g. Barnett, 1984, Church and White, 2006, Douglas et al., 2001, Gornitz et al., 1982, Holgate and Woodworth, 2004, Jevrejeva et al., In Press, Peltier and Tushingham, 1991, Sallenger et al., 2012). For example, Church and White [2011] calculated the rate of global sea-level rise to be 1.6mm/yr ± 0.3mm/yr from 1880AD to 2009AD compared to 1.1mm/yr ± 0.7mm/yr between 1880AD and 1936AD, and 1.8mm/yr ± 0.3mm/yr after 1936AD. A similar approach has been widely employed to characterize acceleration or deceleration of sea-level rise, where a polynomial rather than linear function is fitted to the tide-gauge record (e.g. Boon, 2012, Houston and Dean, 2011, Jevrejeva et al., 2008, Woodworth et al., 2009). For example, Church and White [2011] estimated a sea-level acceleration of 0.009mm/yr² ± 0.003mm/yr² for the period 1880AD to 2009AD. In contrast, Houston and Dean [2011] obtained a small sea-level deceleration (−0.0123 ± 0.0104mm/yr²) by analyzing U.S. tide gauges from 1930AD to 2010AD and suggested similar decelerations for the global dataset over the same time interval. A limitation of sub-dividing the tide-gauge record into segments identified by visual inspection is that individual data points are ascribed undue importance and information is lost in the autocorrelated dataset by discarding earlier and/or later intervals.
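For concreteness, a minimal sketch of the conventional approach described above: a linear fit for the rate over a chosen window, and a quadratic fit whose second derivative gives the acceleration. The data frame `gmsl` with columns `year` and `sl` (sea level in mm) is hypothetical; fitting such models to visually chosen sub-periods is exactly the practice the models in this paper are designed to avoid.

```r
# Rate from simple linear regression over the full record (mm/yr)
fit_lin <- lm(sl ~ year, data = gmsl)
rate    <- coef(fit_lin)["year"]

# Acceleration from a quadratic fit: sl = a + b*year + c*year^2,
# so the (constant) acceleration is 2c (mm/yr^2)
fit_quad <- lm(sl ~ year + I(year^2), data = gmsl)
accel    <- 2 * coef(fit_quad)["I(year^2)"]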
In considering proxy reconstructions with bivariate uncertainties, some studies divided the data series into sections based on changes in slope that were qualitatively positioned by the researcher at a single time point (e.g. Gehrels and Woodworth, 2013). Consequently, a rate of change was calculated for each segment of the sea-level reconstruction by simple linear regression of mid points with no formal consideration of age and sea-level uncertainty or their covariance. Other studies used an errors-in-variables change point approach to objectively place changes in slope across a range of timings and to estimate linear rates for each segment with consideration of uncertainty [Kemp et al., 2011, 2013, Long et al., 2014]. A limitation of this approach is that phases of persistent sea-level behavior are approximated by linear trends that do not accurately represent the underlying physics of sea-level change and mask (to some degree) the continuous evolution of sea level through time.
Stochastic processes and rate estimation
The model we propose makes use of the errors-in-variables (EIV) approach, where we do not assume that the explanatory variable (which we denote as x) is known, but that it is instead measured with some error [Dey et al., 2000]. EIV models have been successfully applied to a multitude of different subject areas. Most recently, for example, de Castro et al. [2013] deal with Bayesian inference in the EIV model for replicated data. Xiaoshuang et al. [2013] investigate partially linear EIV models with longitudinal data. The EIV approach can be used with multivariate and hierarchical models, including our application to proxy sea-level reconstructions with age and sea-level errors. We embed our EIV regression within a non-parametric model.
We use a Gaussian Process (GP) as a prior on the rate process, which is then integrated to produce estimates of sea level. The opposite approach, whereby a GP is placed on the data itself and then differentiated to produce rates, has a long literature [Cramer and Leadbetter, 1967, O'Hagan, 1992], both where the derivatives were observed and where they are to be estimated. A fuller description of GPs is found in Williams and Rasmussen [1996] and Rasmussen [2006]. Most recently in sea-level research, Kopp [2013] uses an empirical Bayesian analysis that involves the use of Gaussian processes to assess the statistical significance of the sea-level acceleration 'hot spot' in the mid-Atlantic region. However, GPs are not the only means for creating rate estimates. Other work exists in the field of splines (e.g. Chaniotis and Poulikakos, 2004, Mardia et al., 1994, Yue et al., 2012) or in diffusion processes and differential equation models (e.g. Liang and Wu, 2008).
We focus on a novel Errors-in-Variables Integrated Gaussian Process approach. The GP has advantages over the other methods mentioned in the previous paragraph due to its simplicity and flexibility despite using only a small number of parameters. The Integrated GP we employ is an inverse model where the GP is applied to the rate process rather than the observed data. Holsclaw et al. [2013] outline a method for posterior computation of such models, which we employ in the next section.
Methods
In this section we outline the EIV-IGP model we use for estimating past sea level whilst accounting for age uncertainty. We apply this model in our second case study (North Carolina proxy reconstruction) presented in Section 5. Our first case study (global tide-gauge record) requires a slightly simplified version of this model (which we term S-IGP) as age uncertainty is not present. The raw data available from the reconstruction are scalars (y_i, σ_{y_i}, x_i, σ_{x_i}) for i = 1, ..., n data points, where y_i is the reconstructed raw relative sea-level measurement and σ_{y_i} is the sample-specific estimate of uncertainty for the measurement, which is treated as one standard deviation; x_i is the estimated age measurement from the chronology model, and σ_{x_i} is the age standard deviation, also taken from the chronology model. Ignoring the GIA correction for the moment, we can write:

$$y_i \mid \chi_i \sim N\left(h(\chi_i),\; \sigma_{y_i}^2 + \tau^2\right), \qquad x_i \sim N\left(\chi_i,\; \sigma_{x_i}^2\right), \tag{1}$$

where τ² is a variance term included in the model error to account for any unexplained variation that may be present in the data, h(·) is a stochastic process in continuous time, and χ_i is the true unobserved age for observation i. The likelihood for the observed data is dependent on the stochastic process that we want to estimate and the model is set up to have a classical errors-in-variables structure. The key parameters are those in h and the model error τ². The estimated true ages χ_i are nuisance parameters. Our focus lies in posterior inference about h and, most importantly, its derivative.
As discussed in Section 3.2, there are numerous non-parametric priors on functions that provide stochastic derivatives, though our situation is complicated by the inclusion of age uncertainties. If the data were modeled directly with a Gaussian process we would write y_i = α + g(χ_i) + ε_i, where g(χ_i) is a Gaussian process with a mean function μ_g (which we set to 0) and a covariance function denoted υ²C_g(χ_i, χ_j). Since our focus is directly on the rate process, g′(χ_i) = w(χ_i), we prefer to place a GP prior distribution on this and integrate to create estimates of the observed data process, which we now denote h. Writing

$$h(\chi) = \int_0^{\chi} w(u)\,du,$$

we place a GP prior on w. For the purposes of our model we chose to use a GP with a mean function μ_w (which we set to 0) and a stationary powered-exponential covariance function, which we denote C_w(χ_i, χ_j). We use a re-parameterized version,

$$C_w(\chi_i, \chi_j) = \rho^{\,|\chi_i - \chi_j|^{\kappa}},$$

with ρ ∈ (0, 1) and κ ∈ (0, 2]. The distribution of h is also available as a GP, though it involves a difficult double integration to obtain the covariance terms in h, namely the covariance of the integrated process:

$$\mathrm{Cov}\left(h(\chi_i), h(\chi_j)\right) = \upsilon^2 \int_0^{\chi_i}\!\!\int_0^{\chi_j} C_w(u, v)\,dv\,du.$$

Thus, the covariance function for the observed data process (h) can be obtained by integrating the covariance function for the derivative process (w) twice. In many cases, this double integral is expensive to compute and numerically unstable, especially over small ranges (i.e. small values of χ_i and/or χ_j). However, an advantage is that the resulting h created from such a situation will be non-stationary.
A solution to the problematic double integration is provided by Holsclaw et al. [2013]. They used an approach given in Yaglom [2011] to bypass the calculation of the double integral by approximating the integrated process on a grid x* = (x*_1, ..., x*_m) for arbitrarily large m. This yields:

$$h(\chi_i) \approx K^*_{hw}\left(\upsilon^2 C^{**}_w\right)^{-1} w_m,$$

where C**_w is an m × m matrix containing the covariance function for the derivative process evaluated on the grid, and w_m ∼ GP(μ_w, υ²C**_w). Most usefully, K*_{hw} is the covariance between the rate process and the integrated process, and so only involves integrating the covariance once, i.e.

$$K^*_{hw}(\chi_i, x^*_j) = \mathrm{Cov}\left(h(\chi_i), w(x^*_j)\right) = \upsilon^2 \int_0^{\chi_i} C_w(u, x^*_j)\,du.$$
This integral is calculated numerically using Chebyshev-Gauss quadrature [Abramowitz and Stegun, 1965]:

$$\int_{-1}^{1} \frac{f(t)}{\sqrt{1-t^2}}\,dt \approx \frac{\pi}{N}\sum_{k=1}^{N} f(t_k), \qquad t_k = \cos\!\left(\frac{2k-1}{2N}\,\pi\right). \tag{4}$$

The use of Chebyshev-Gauss quadrature necessitates the transformation of the integral of the covariance function so that the integration limits are between -1 and 1:

$$\int_0^{\chi_i} C_w(u, x^*_j)\,du = \frac{\chi_i}{2}\int_{-1}^{1} C_w\!\left(\frac{\chi_i(t+1)}{2},\; x^*_j\right) dt. \tag{5}$$
Applying Chebyshev-Gauss quadrature, (5) can be approximated by:

$$\int_0^{\chi_i} C_w(u, x^*_j)\,du \approx \frac{\chi_i}{2}\,\frac{\pi}{N}\sum_{k=1}^{N} \sqrt{1-t_k^2}\; C_w\!\left(\frac{\chi_i(t_k+1)}{2},\; x^*_j\right).$$

The Holsclaw et al. [2013] approach replaces the integral estimate with the conditional mean of the integrated process given the derivative process, whilst ignoring any conditional variance. In fact this approach is strongly related to that of the predictive processes (PP; Banerjee et al., 2008). In the PP approach, a spatial covariance matrix is approximated onto a smaller grid, also by its conditional mean, resulting in smaller matrix manipulations for large spatial problems. However, the PP has serious disadvantages, as the low-rank approximation can yield poor estimates of the correlation structure. By contrast, our processes are one dimensional and we can set the grid size m large so as to arbitrarily reduce the approximation error with little computational cost. In that sense it has elements in common with high-rank approximations such as Lindgren [2012].
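To make the approximation concrete, the following R sketch (our reading of the scheme above, not the authors' released code) builds the grid covariance C**_w, evaluates K*_hw by Chebyshev-Gauss quadrature, and forms the conditional-mean approximation to h. The common variance υ² cancels in K*_hw(υ²C**_w)⁻¹, so it is omitted; ρ, κ, N and the grids are illustrative values.

```r
# Powered-exponential correlation for the rate process
pexp_cov <- function(u, v, rho, kappa) rho^(abs(u - v)^kappa)

igp_approx <- function(x, xstar, w, rho = 0.2, kappa = 1.5, N = 30) {
  tk <- cos((2 * seq_len(N) - 1) * pi / (2 * N))  # Chebyshev-Gauss nodes
  qw <- (pi / N) * sqrt(1 - tk^2)                 # weights for plain integrals
  Cww <- outer(xstar, xstar, pexp_cov, rho = rho, kappa = kappa)
  # K*_hw[i, j] = integral_0^{x_i} C_w(u, xstar_j) du, via the (-1, 1) transform
  Khw <- sapply(xstar, function(xs)
    sapply(x, function(xi)
      (xi / 2) * sum(qw * pexp_cov(xi * (tk + 1) / 2, xs, rho, kappa))))
  # Conditional-mean approximation h = K*_hw (C**_w)^-1 w_m; in practice a
  # small nugget on Cww may be needed for numerical stability.
  Khw %*% solve(Cww, w)
}
```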
Lastly, we must account for GIA in our model. This introduces an extra fixed parameter γ (measured here in mm per year) to account for isostatic (land-level) movements at an individual site. The GIA correction involves subtracting x_i from the year of core collection, denoted t_0. This is then multiplied by the rate of GIA γ and added to y_i for each observation i. The introduction of the GIA parameter has the effect of raising or lowering the sea level associated with each data point, and additionally introduces a correlation between the age and sea-level reconstructions since older sea-level observations are raised/lowered to a greater degree. As an illustration, consider the single data point given in Figure 3(A), taken from the North Carolina data set (see Section 5.2). The data point is given by the quadruple (y_i, x_i, σ_{x_i}, σ_{y_i}) with the density of this data point represented as contours, and samples shown for illustration. Once the GIA effect is removed we obtain Figure 3(B), where the left side of the density has been raised to a greater degree than the right hand side because it is older.
Algebraically, the GIA effect can be removed via an affine transformation of the data and the variance matrix by the matrices

$$A = \begin{pmatrix} 1 & 0 \\ -\gamma & 1 \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 0 \\ \gamma t_0 \end{pmatrix}.$$

The GIA-corrected model is now

$$A z_i + b \,\big|\, \chi_i \sim N\!\left( \begin{pmatrix} \chi_i \\ h(\chi_i) \end{pmatrix},\; A V_i A^{T} + \begin{pmatrix} 0 & 0 \\ 0 & \tau^2 \end{pmatrix} \right), \tag{7}$$

where z_i = (x_i, y_i)^T and V_i = diag(σ²_{x_i}, σ²_{y_i}). Since Az_i and AV_iA^T are both deterministic functions of the data they can be calculated off-line prior to any analysis.
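As a small worked illustration of this affine correction (the function and variable names are ours, and the numbers are invented), the following removes the GIA signal from a single data point:

```r
# gia = gamma (mm/yr), t0 = year of core collection; Vx, Vy are the squared
# one-sigma age and sea-level errors of a single observation
gia_correct <- function(x, y, Vx, Vy, gia, t0) {
  A <- matrix(c(1, -gia, 0, 1), nrow = 2)  # rows: (1, 0) and (-gia, 1)
  b <- c(0, gia * t0)
  z <- as.vector(A %*% c(x, y) + b)        # corrected (age, sea level)
  V <- A %*% diag(c(Vx, Vy)) %*% t(A)      # corrected covariance, now correlated
  list(z = z, V = V)
}

# Example: y (in mm) gains gia * (t0 - x), here 0.9 * (2010 - 1000) mm
gia_correct(x = 1000, y = -800, Vx = 30^2, Vy = 50^2, gia = 0.9, t0 = 2010)
```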
The rate of GIA to be applied is spatially variable because of the underlying physical process [Engelhart et al., 2009]. For our North Carolina case study we apply the rate used in the original publication. Equation 7 forms the likelihood for the observed data based on the EIV-IGP model. This completes our model specification. All the models we outline were fitted in the JAGS (Just Another Gibbs Sampler) language [Plummer, 2003]. JAGS is a tool for analysis of Bayesian hierarchical models using Markov chain Monte Carlo (MCMC) simulation. Although writing customized MCMC sampling algorithms can in some cases be relatively straightforward, it has become more common practice to make use of Bayesian MCMC fitting software such as the Bayesian inference Using Gibbs Sampling (BUGS) software. JAGS is an engine for running BUGS and allows users to write their own functions, distributions and samplers. JAGS offers cross-platform support, and a direct interface to R using the package rjags. We validated our model using two methods. Firstly, we simulated data under ideal and non-ideal conditions. The ideal scenario is one where the parameters are simulated from the same distributions as the priors that are placed on the parameters. The non-ideal scenarios lead to the prior distributions over/under estimating the mean and the variance of the parameters. The aim was to determine for each scenario how often the model was capable of estimating the true rate process within 95% and 68% credible intervals. Secondly, we performed a 10-fold cross-validation on our case study data. Results were highly satisfactory for both validation methods and we are confident that using this model for instrumental and proxy sea-level data allows us to estimate the underlying rates of sea-level change with a high degree of accuracy. Further details of how the validation was carried out, along with results, can be found in the Appendix.
All code is available in the supplementary materials. In the next section we outline our prior distributions in further detail for each of our case studies. In the first case study we use tide-gauge measurements, whose age uncertainties are small and therefore ignored; this effectively removes the EIV structure and allows us to demonstrate the IGP aspect of the model. Our second case study, the proxy data, contains all the elements outlined in this section.
Case Studies
To illustrate the utility of the S-IGP and EIV-IGP models we apply them to the global tide-gauge record since 1880AD [Church and White, 2011] and a proxy sea-level reconstruction spanning the last 2100 years [Kemp et al., 2011]. The goal is to obtain the posterior distribution of sea-level and of the rate process of interest. To determine the degree to which the models are identifiable and fit the data, we use simulated data which violate our model assumptions, and produce posterior predictive fits for hold-out data sets created through 10-fold cross-validation. See Appendix for further details.
For both case studies we initially ran the appropriate model for 5000 iterations with a burn-in of 500 and we thinned by 3. In both cases we saw good convergence. We then ran the model for a long run of 50,000 iterations to ensure convergence remained and results were consistent. The R package 'coda' was used to run diagnostics. We used autocorrelation plots, Geweke plots [Geweke, 1992], the Gelman and Rubin diagnostic [Gelman and Rubin, 1992] and the Heidelberger and Welch diagnostic [Heidelberger and Welch, 1983], which all indicated model convergence. We also ran multiple chains from different starting values to ensure good mixing.
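To illustrate this workflow (burn-in, thinning, multiple chains, coda diagnostics) without reproducing the full EIV-IGP code, here is a self-contained toy example; the model is deliberately simple and merely stands in for the real one.

```r
library(rjags)  # also loads coda

toy_model <- "model {
  for (i in 1:n) { y[i] ~ dnorm(mu, prec) }  # JAGS uses precision, not variance
  mu ~ dnorm(0, 1.0E-4)
  prec ~ dgamma(0.01, 0.01)
}"

set.seed(1)
dat  <- list(y = rnorm(50, mean = 2, sd = 1), n = 50)
jm   <- jags.model(textConnection(toy_model), data = dat, n.chains = 3)
update(jm, 500)                                             # burn-in of 500
post <- coda.samples(jm, c("mu", "prec"), n.iter = 5000, thin = 3)

gelman.diag(post)    # Gelman and Rubin diagnostic
geweke.diag(post)    # Geweke diagnostic
heidel.diag(post)    # Heidelberger and Welch diagnostic
autocorr.plot(post)  # autocorrelation plots
```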
Global tide-gauge record
For the period since 1880AD, Church and White [2011] generated an annual record of global sea-level change by compiling tide-gauge records from sites around the world. A complete description of the approach and methods employed is presented in Church and White [2006] and Church and White [2011]. The original data are held by the Permanent Service for Mean Sea Level and are comprised of monthly relative sea-level averages for each location [Woodworth and Player, 2003]. Data from up to 235 individual locations were corrected for datum changes and GIA and centered to set sea level in 1990AD as 0m. The reported uncertainty (one sigma) includes 0.1mm/yr for the GIA correction applied and an estimated contribution from the incomplete global coverage of tide-gauge stations and the procedure used to average them. The data file includes 3 columns: time in years AD, Global Mean Sea Level (GMSL) in meters, and a one-sigma sea-level error in meters.
The Simple Integrated Gaussian Process (S-IGP) model was used to analyze this dataset. The likelihood for the data is

$$y_i \sim N\left(\alpha + h(x_i),\; \sigma_{y_i}^2 + \tau^2\right), \qquad h(x_i) \approx K^*_{hw}\left(\upsilon^2 C^{**}_w\right)^{-1} w_m,$$

where C**_w is an m × m matrix containing the covariance function for the derivative process and w_m ∼ GP(μ_w, υ²C**_w). Recall, K*_{hw} is the covariance between the rate process and the integrated process, i.e.

$$K^*_{hw}(x_i, x^*_j) = \upsilon^2 \int_0^{x_i} C_w(u, x^*_j)\,du.$$
Prior distributions were specified for each unknown parameter. The correlation parameter ρ was defined on the interval (0,1). The tide-gauge record [Church and White, 2011] spans a relatively short period of time, during which there was a single mode of climate warming and sea-level rise [Rahmstorf, 2007]. So even though this record is highly correlated, climate forcing, as opposed to time change, is the driver for sea-level change over this instrumental period. Therefore, we set a mildly informative prior for ρ that favors low values of the correlation parameter close to 0.2, where p(ρ) = Beta(2, 8). Another somewhat informative prior was used for τ². To determine a prior for this parameter we considered other global tide-gauge compilations such as Jevrejeva et al. [2008], where error estimates range from 0.01-0.07m (assumed to be one sigma). In choosing our prior we used this information but chose to conservatively place a prior on τ² that favors values for τ close to 0.1. We decided on a prior for υ², the variance of the rate process, by looking at the information currently available regarding the rate of global sea-level rise. Between 1950AD and 2000AD the rate of global sea-level rise varied from 0 to 4mm/yr [White et al., 2005]. Over multi-centennial timescales during the last 2000 years (prior to industrialization), global sea level was likely close to stable after correction for land-level movements (i.e. rate ∼ 0mm/yr). Alternatively, at decadal to multi-decadal time scales, higher regional rates (up to 4mm/yr) are observed in instrumental records after correction for land-level movements. A Gaussian process prior centered on 0 was used to describe the rate process for our model. The prior information suggests that rates can reach up to 4mm/yr. Therefore, we deemed the range −4 to 4mm/yr appropriate for the rate of sea-level change. If this range is treated as a 95% confidence interval, it is reasonable to assume that the standard deviation is ∼2mm/yr. Hence, we set up the prior for υ² to favor values close to 4, where υ² ∼ Gamma(80, 20). An uninformative normal prior is placed on the unknown intercept parameter α.
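As a quick sanity check on what these priors imply, the quantiles are easily computed in R:

```r
# rho ~ Beta(2, 8): mass concentrated near 0.2
qbeta(c(0.025, 0.5, 0.975), 2, 8)

# upsilon^2 ~ Gamma(80, 20): mean 80/20 = 4, i.e. a rate sd of about 2mm/yr
qgamma(c(0.025, 0.5, 0.975), shape = 80, rate = 20)
sqrt(80) / 20  # prior sd of upsilon^2, roughly 0.45
```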
The results from the analysis of the global tide-gauge record are presented in Figure 2, which shows the original RSL data (A), our RSL estimates (B) and our rate estimates (C). The 20th century rate of sea-level rise estimated from linear regression of the global tide-gauge record was 1.7mm/yr ± 0.3mm/yr [Church and White, 2006]. There is close agreement between this estimate and the rate of sea-level rise from approximately 1965AD to 1975AD estimated from the S-IGP model (Figure 2C).
Our analysis of the tide-gauge record using the S-IGP model indicates that the mean rate of global sea-level rise constantly increased (accelerated) from 1.13mm/yr in 1880AD to 1.92mm/yr in 2009AD (Figure 2C). The recognition of accelerating sea-level rise agrees with projections for the 21st century that can only be realized with continued acceleration [IPCC, 2013]. Houston and Dean [2011] analyzed both U.S. and global tide-gauge records and found small sea-level decelerations. It has been suggested that this is due to their focus on acceleration since the year 1930, which represents a unique minimum in the acceleration curve and is correlated with the mid-twentieth-century plateau in global temperature. Church and White [2011] analyzed the changing rate of sea-level rise between 1880 and 2010 by calculating linear trends for adjacent 16 year time intervals. This analysis showed oscillations in the rate of sea-level rise, which is fundamentally different to our analysis that shows a continuous increase in the rate of rise. This difference occurs because calculating linear trends for sub-periods of the dataset does not utilize all available information, resulting in conclusions that may not accurately represent the underlying process. We demonstrate that use of the S-IGP model negates the need to analyze sections of temporal data and consequently provides more accurate and representative results.
North Carolina proxy reconstruction
The example dataset from North Carolina is a proxy reconstruction spanning the last ∼2100 years that was developed from cores of salt-marsh sediment located at two sites (Tump Point and Sand Point) ∼120km apart [Kemp et al., 2011]. As such it provides a regional record of RSL change. For each core the history of sediment accumulation was constrained by radiocarbon ages, Cs-137 and Pb-210 activity, and changes in pollen reflecting the arrival of Europeans and the onset of widespread land clearance. A Bchron age-depth model allowed the age of any sample in the cores to be estimated with 95% confidence. Foraminifera were employed as sea-level indicators and RSL was reconstructed with a sample-specific (one-sigma) error using a transfer function. The correction for GIA was estimated from local and U.S. Atlantic coast databases of late Holocene sea-level index points [Engelhart et al., 2009]. The rate of GIA is 0.9mm/yr at Tump Point and 1.0mm/yr at Sand Point. The data file includes 4 columns: RSL in metres, age in year AD, a one-sigma RSL error, and a two-sigma age error.
The Errors-in-Variables Integrated Gaussian Process (EIV-IGP) model was used to analyze this dataset. The likelihood for the data is given in detail in Section 4. Prior distributions were specified for each unknown parameter. As with the S-IGP model, the correlation parameter ρ is defined on the interval (0,1). The chosen prior, p(ρ) = Beta(2, 8), suggests a mean of approximately 0.2 with a standard deviation of approximately 0.1. This assumes that data points more than 1000 years apart have minimal effect on one another. This is a reasonable assumption given that the reconstruction spans a 2100 year time period and includes multiple phases of sea-level and climate behavior, including the Medieval Climate Anomaly and Little Ice Age [Mann et al., 2008]. We used the same prior for the variance parameter τ² as for the previous case study. Following the same reasoning as for the tide-gauge data in Section 5.1, a gamma prior, υ² ∼ Gamma(80, 20), was used for the variance of the derivative process. An uninformative normal prior was placed on the unknown intercept parameter α.
Application of the EIV-IGP model to the proxy sea-level reconstruction from North Carolina shows four persistent phases of sea-level behavior (Figure 3C). The model predictions are a good fit to the proxy reconstructed data, which gives confidence in the model. From the start of the record at approximately 100BC to 1000AD there is little change in sea level following correction for GIA. The period from 1000AD to 1400AD is characterized by sea-level rise. Between 1400AD and about 1850AD there is a fall in sea level, and since 1850AD sea level has risen rapidly in North Carolina. This evolution in sea level is reflected in the modeled rate of sea-level rise (Figure 3D), where the first period has a mean sea-level change of approximately 0mm/yr. The second period saw a maximum rate of rise of 0.53mm/yr with a 95% credible interval of (0.39, 0.68), which Kemp et al. [2011] attributed to a warmer climate during the Medieval Climate Anomaly. The sea-level fall between 1400AD and 1850AD occurred at a maximum rate of 0.3mm/yr with a 95% credible interval of (−0.43, −0.16) and was likely a sea-level response to the cooler Little Ice Age [Kemp et al., 2011]. The transition from the Little Ice Age is marked by a dramatic increase in the rate of sea-level rise that continues to a mean rate of 2.44mm/yr in 2000AD with a 95% credible interval of (1.91, 3.01). The rate of sea-level rise since the middle of the 19th century is without precedent in North Carolina for at least the previous 2000 years. The modeled mean rate of rise departs from earlier 95% credible intervals at around 1845AD.
Conclusion
We propose and validate a model that allows for the direct estimation of rates of sea-level change whilst quantifying uncertainties more thoroughly than previously possible. The method involves a non-parametric reconstruction of the derivative process. A GP prior is placed on the derivative process and we view the likelihood of the observed data to be the integral of this process. For our case study data, the derivative at a particular time point is representative of the rate of sea-level change at that time point. This enables us to estimate instantaneous rates of change at any given time point, thus observing how rates evolved through time. The model also provides a flexible fit and allows us to estimate the uncertainty about the rate process of interest.
Taking into account all sources of uncertainty when estimating trends is essential to allow instrumental measurements and proxy reconstructions of sea level to be compared directly and fairly. Previous analyses ignored some or all of these uncertainties. An important result of our analysis of the global tide-gauge record is that not only is sea level rising, but the rate of rise is continuously increasing over time. Another significant result is that the mean rate of sea-level rise in North Carolina increased continuously over the 20th century and at 2000AD was 2.44mm/yr with a 95% credible interval of 1.91 to 3.01mm/yr.
We do not cover spatio-temporal modelling of sea-level rates in this paper, focusing instead on individual sites. The behavior of sea level in space is highly irregular and relates to numerous physical features and processes that are beyond the range of the statistical models we discuss.
A.1 Simulated Scenarios
In this section we determine the validity of our model. Through the use of simulated data, the parameters β_0 and σ²_y, which are associated with the data likelihood, proved to be robust. Within reason, there was no difficulty in estimating the values of these parameters, regardless of prior choice. We found that the parameters related to the Gaussian process, i.e. σ²_g and ρ, are the more sensitive parameters in the model, and as a result the validation focuses on these.
For the purposes of this validation we have used a simpler likelihood, i.e. the version of the model that is not set in the errors-in-variables framework. The parameters that are introduced in cases where an errors-in-variables approach is necessary are all estimated directly from the data, and thus we can exclude this component of the model in the validation process in order to simplify things. Therefore, the data were simulated using the following likelihood:

$$y_i \sim N\left(\beta_0 + h(x_i),\; \sigma_y^2\right), \qquad h(x_i) \approx K^*_{hw}\left(\sigma_g^2 C^{**}_w\right)^{-1} w_m,$$

where C**_w is an m × m matrix containing the covariance function for the derivative process and w_m ∼ GP(μ_w, σ²_g C**_w). K*_{hw} is the covariance between the rate process and the integrated process.
To validate the model we consider several different scenarios under ideal and non-ideal conditions. For each scenario we simulate values for the unknown parameters, which in turn are used to simulate data from an integrated Gaussian process model. The simulation of the data requires simulation of the underlying rate process, which, based on our model assumptions, is a Gaussian process. Therefore, we know the true underlying rate process. As the focus of this work is in establishing rates of sea-level change, our primary concern is whether or not our model is successful in estimating the true underlying rate process. We will observe how often the true rate falls within the 95% and 68% credible intervals for the rate predicted from the model.
For the purposes of this validation, the priors placed on the parameters σ²_g and ρ are:

$$\sigma^2_g \sim \mathrm{Gamma}(10, 10), \qquad \rho \sim \mathrm{Beta}(2, 8).$$

Therefore, σ²_g will be centered around 1 with a variance of 0.1, and ρ will be centered around 0.2 with a variance of approximately 0.01. The following scenarios (a) to (g) outline the different conditions under which we run our simulations:
(a) We simulate parameter values from the same distributions as our prior distributions for the parameters. This is the ideal case and we expect the model to perform best under these conditions.
(b) We simulate parameter values from distributions whose means are larger than those of the priors, so that the priors underestimate the parameter means.
(c) We simulate parameter values from distributions whose means are smaller than those of the priors, so that the priors overestimate the parameter means.
(d) We simulate parameter values from distributions whose variances are larger than those of the priors, so that the priors underestimate the parameter variances.
(e) We simulate parameter values from distributions whose variances are smaller than those of the priors, so that the priors overestimate the parameter variances.
(f) We simulate parameter values such that the priors underestimate both the means and the variances.
(g) We simulate parameter values such that the priors overestimate both the means and the variances.
500 simulations were run for each scenario. The percentage of time the true rate was found inside the 95% and 68% credible intervals was observed, and an average over the 500 simulations was taken for our validation results. Scenario (a) is the ideal case, where the means and variances for the simulated parameter distributions are the same as the prior distributions. The remaining scenarios are those described in (b)-(g) above. Overall it appears that the model is capable of estimating the rate process well, even if the prior distributions for the parameters over/under estimate the means and variances. For the ideal scenario the true rate falls within the 95% credible interval and 68% credible interval of the estimated rate approximately 95% and 68% of the time, as expected. For scenarios (b) and (e) the rate falls into the credible intervals a higher proportion of the time. This suggests that underestimating the mean values of our parameters or overestimating the variance of our parameters will result in wider than expected credible intervals for the rate. For scenarios (c) and (d) the rate falls into the credible intervals a lower proportion of the time. This suggests that overestimating the mean values of our parameters or underestimating the variance of our parameters will result in narrower credible intervals than expected for the rate.
For scenario (f) the rate fell into the 95% and 68% credible intervals more than 95% and 68% of the time. For this scenario the priors underestimate both the mean and variance of the parameters of interest. When we look at the results from the cases where the mean and variance were underestimated separately, i.e. (b) and (d), the results are wider credible intervals and narrower credible intervals respectively. Comparing these results with case (f), it appears that underestimating the mean dictates the results here and causes the credible interval for the rate to be wider than expected. For scenario (g) the true rate falls into the 95% and 68% credible intervals less than 95% and 68% of the time. In this case, when we look at the scenarios where the means and variances are overestimated, i.e. (c) and (e), the results indicate narrower and wider credible intervals respectively. When compared with scenario (g), this suggests that overestimating the means dictates the results and causes the credible intervals to be narrower than expected.
To conclude, this validation makes us aware of some of the consequences of mis-specifying prior distributions for the parameters σ²_g and ρ in our model. The results indicate that although in all cases other than the ideal scenario we overestimate or underestimate our confidence around the rate, we do not over/under estimate the credible intervals by enough to cause concern; in fact, the model appears to perform reasonably well in all cases and we are satisfied that we can estimate the rate process successfully.
A.2 10-Fold Cross Validation
A second method we used to validate our model was a 10-fold cross validation. We performed this on both our case study datasets. Each observation was numbered 1 to N, where N is the total number of observations. A random permutation of these numbers was taken using a function in R. The first 10% of these numbers were taken and the corresponding observations were removed from the data. The model was run on the data with these observations missing. We then used the model results to predict values for our missing observations. By definition the observed data is an integral of the rate process. Therefore, by integrating the rate process that we obtain from running the model at the points where we have missing data, we can obtain predictions for our missing data. If the predicted values correlate well with the true values then we can have confidence that our model is performing well. This process is repeated by taking the next 10% from the permutation vector of observation numbers, and so on, until every observation has been left out and predicted.
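The fold construction just described can be written compactly in R; `fit_model` and `predict_heldout` below are placeholders for the EIV-IGP fit and the integration of the estimated rate process at the held-out ages, and `dat` is an assumed data frame with one row per observation.

```r
set.seed(42)
N     <- nrow(dat)                 # total number of observations
perm  <- sample(N)                 # random permutation of observation numbers
folds <- split(perm, cut(seq_len(N), breaks = 10, labels = FALSE))

for (k in seq_along(folds)) {
  held_out <- folds[[k]]
  # fit  <- fit_model(dat[-held_out, ])            # placeholder for model run
  # pred <- predict_heldout(fit, dat[held_out, ])  # integrate the rate process
}
```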
The cross validation was first carried out on the Church and White tide-gauge data. The results are shown in Figure 4. The cross validation was repeated for the North Carolina proxy data. The results for this dataset are shown in Figure 5.
Looking at the results for both case studies we have confidence that our model is performing well. It is worth noting that, for North Carolina, the error associated with each sea-level observation is relatively large when compared to the errors associated with the sea-level observations for Church and White. North Carolina also had age errors to consider. The credible intervals for the rate process for the North Carolina data are wider than those for Church and White, as one would expect. As a result, for North Carolina, we have wider credible intervals for the estimates of the missing data values. For Church and White our estimated values are much more precise. The true values do not always fall into the credible intervals; however, the estimates are extremely close to the true values.
"year": 2013,
"sha1": "0184cacb41589713041e00cc0b63b4f4c4e50125",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1214/15-aoas824",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "0184cacb41589713041e00cc0b63b4f4c4e50125",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Geology",
"Mathematics"
]
} |
NPS from the customer's perspective: The influence of the recent experience
The Net Promoter Score (NPS) is a popular metric for measuring customer loyalty and is claimed by Reichheld to predict a company's growth. However, various academic studies provide controversial results regarding its reliability and predictive power. This study analyzes how respondents answer the likelihood-to-recommend (LTR) question in different pre-described and validated situations. One thousand participants are presented with situation descriptions that consist of previous and recent experiences with a bank and are asked how they would respond to the LTR question after such an experience. The results indicate that respondents do not always give a high score for good experiences and a low score for bad experiences. However, with a high number of respondents, the different answering approaches even out, and the NPS results are higher for good than for bad experiences. Additionally, we notice that whereas negative experiences are evaluated as low by all respondents, Generation X and Boomers tend to give lower scores for neutral and positive experiences. Those with lower income and basic education give lower scores for neutral experiences. The recent experience influences the customer's likelihood of recommending more than the previous experiences with the company.
Introduction
Customer satisfaction and loyalty are effective ways for a company to improve future sales (Baehre et al., 2022;Gruca & Rego, 2005;Marsden et al., 2005), market share (Morgan & Rego, 2006), and customer profitability (Jahnert & Schmeiser, 2021;Korneta, 2018). After Reichheld (2003) proposed that the only number a company needs to grow is the Net Promoter Score (NPS), it has gained immense popularity amongst companies for its straightforward and easy application in various sectors (Florea et al., 2018), and has drawn some critique from researchers regarding the methodological and other approaches of the original NPS study (Baehre et al., 2022;Fisher & Kordupleski, 2019).
Even if the company has provided a decent service level, this does not mean that the customer's intention to recommend, which the NPS measures, would consistently translate into an actual recommendation, or that a detractor would criticize the company (Stahlkopf, 2019). Therefore, it is important to understand customers' views on how they answer the likelihood-to-recommend (LTR) question.
This study examines the NPS methodology from the customer's perspective. The NPS, one of the most popular measures of customer satisfaction and loyalty, defines whether the customer is a promoter, a detractor, or has a neutral attitude. However, the LTR scale does not explain how to choose the score or what these scores mean; therefore, customers must interpret the scale. When calculating the NPS value, scores of 9-10 on the LTR scale are defined as promoters, 7-8 as neutrals, and 0-6 as detractors. The NPS is calculated as the difference between the percentages of promoters and detractors. However, this rationale for grouping the scores has been challenged by several researchers (Grisaffe, 2007;Kristensen & Eskildsen, 2014;Lewis & Mehmet, 2020;Pingitore et al., 2007).
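For reference, the standard calculation is easy to state in code; the R sketch below is our illustration of the classification just described and is not tied to any particular survey tool.

```r
# scores: a vector of 0-10 answers to the LTR question
nps <- function(scores) {
  promoters  <- mean(scores >= 9)   # scores of 9-10
  detractors <- mean(scores <= 6)   # scores of 0-6
  100 * (promoters - detractors)    # NPS in points, from -100 to 100
}

nps(c(10, 9, 8, 7, 6, 3, 10))  # 3 promoters, 2 detractors of 7 -> about 14.3
```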
We conducted a pre-study of semi-structured interviews with 20 respondents in September 2022 to determine how they answered the LTR question. The logic for answering the questions differed among respondents. Some respondents were optimistic, giving a score of 10 when nothing bad happened, whereas others stated they would never give a score of 10. Interpretations of the different scores on the scale differed among the respondents. Additionally, the influence of the most recent experience with the company varied. Seven respondents were willing to recommend a company even after a bad experience, whereas six stated that one bad experience meant no positive recommendation.
Customers' views on how they answer the LTR question have been analyzed in previous literature only briefly, mainly by categorizing customer comments (Følstad & Kvale, 2018) and analyzing their explanations for the selected score (Lewis & Mehmet, 2020). Nevertheless, in all previous studies, conclusions have been made about the scores while the actual customer experience has been unknown. Therefore, the interpretation is biased by customer experiences being different, and studying the impact of the recent experience is impossible as the recent and previous experiences are unknown.
The aim of this study is to define the influence of the recent experience on the LTR score. We also describe the logic behind answering the LTR questions in different scenarios and how different sociodemographic factors influence the NPS score.
This study analyzes customer views by comparing answers to the LTR questions in the same imaginative situations. To compare the subjective evaluations of individual respondents for the same questions and circumstances, we created nine different situations wherein positive, neutral, and negative previous and recent experiences are combined. This enables us to understand customer reactions after knowing the underlying situation and enrich the current literature by understanding how respondents react to different situations and how the recent experience impacts NPS results. We use an ISO-certified panel of 1000 respondents to understand the influence of education, gender, income level, generation of respondents, and the previous and recent experiences on the likelihood of a recommendation. In the literature review, we discuss NPS classification logic, individual differences, circumstances influencing NPS answers, and NPS relations with actual customer sentiment.
Problems with NPS Classification Logic and its Relation with the Customer's Real Sentiment
Several authors have discussed the issues with the classification logic of NPS. Fisher & Kordupleski (2019) criticize the NPS logic of defining passives, classifying them as customers who do not recommend the brand. Many studies have upheld the idea of concentrating only on promoters because companies with the highest share of top-2 scores are the most successful, regardless of their detractors or passives (Baehre et al., 2022;Morgan & Rego, 2006;van Doorn et al., 2013).
In contrast, Grisaffe (2007) questions the logic of counting a score of 6 as a detractor, as some customers can take it as being on the positive side of the scale's midpoint. Lewis & Mehmet (2020) find that customers who are defined as passive according to the NPS classification expressed as much appreciation towards the company as customers classified as promoters; hence, they propose that from the perspective of customers' attitudes, the promoter definition should be extended to scores of 7-10. They show that customers with a highly positive attitude towards the company mainly choose a score of 10, whereas customers with a moderately positive attitude towards the company respond between 7-10, with 10 and 8 being the most selected options. Seal & Moody (2008) indicate that the classification, in general, loses the "shades of difference in the strength of perception," as both 0 and 6 mean the same, whereas they do not necessarily show the same level of customer loyalty and respondent's perceived likelihood to recommend the company or product. Fisher & Kordupleski (2019) propose focusing on NPS numbers rather than classifications, as customers who choose scores of 5-6 can potentially be converted into passives, and passives can potentially be improved into promoters when customer needs are understood more precisely. Eskildsen & Kristensen (2011) propose calculating only the average of all LTR scores rather than categorizing them to obtain better insight into different answers.
In some cases, the selected score does not correspond to customer sentiment about the company. Considering that the respondent may only have brief exposure to the brand, as pointed out by Fisher & Kordupleski (2019), the decision made about recommendation likelihood can be shallow and change completely during the business relationship. Kristensen & Eskildsen (2014) find that customers who would have preferred not to answer mostly selected 0-5 as their LTR score. The share of low-score evaluations instead of "no answer" is especially significant for men (57.9%), but much lower for women (Eskildsen & Kristensen, 2011).
Customers can also choose a low score on the LTR question not owing to a negative sentiment but because they know they would not be talking about the subject. Schulman and Sargeant (2013) show this in an example from the non-profit sector, where donors often chose a low score because they found the subject not discussable but remained loyal donors.
Additional complexity is added by Stahlkopf (2019), who shows that a person could be a promoter and detractor simultaneously: recommending the company to one friend but not another. Customers who have experienced this may struggle to find the "right" answer when answering the LTR question. This leads to the following research question: RQ1: How many customers provide logical LTR scores in accordance with the NPS classification, considering the scenarios and experiences described to them?
Individual Differences and Circumstances Influencing NPS Answering
Few studies have been conducted on intergenerational differences in the NPS context. Situmorang (2017) finds that customers born between 1976-1997 (Generation Y) give, on average, a lower NPS to companies than those born from 1998 onwards (Generation Z). Additionally, Katz (2017) finds that Boomers have a high sense of self-importance and believe in the power of individual choice, which may make them less willing to recommend companies. Generation X is a transitional generation; it has seen both material abundance and economic hardship, making them hardworking, busy, and ambitious (Katz, 2017;Miller & Laspra, 2022). Digitalization has significantly changed generations Y and Z. Generation Y is more self-confident and optimistic, coping well with constant change, but is less independent than previous generations (Laor & Galily, 2022). Generation Z has fewer emotional connections with brands and expects more personalized and efficient customer service than previous generations (Gutfreund, 2016). They expect authentic brand recommendations and prefer microinfluencers to macroinfluencers and celebrities (Pradhan et al., 2022). Generation Z consumers generally conduct more online research before purchasing than previous generations (Grigoreva et al., 2021); hence, recommendations play an important role for them.
In terms of gender differences, Eskildsen & Kristensen (2011) show that women are more likely to be promoters than men, and that these women promoters are more likely to choose a score of 10. However, men are more likely to provide high detractor ratings, such as 5 and 6, whereas women may select a score of 0.
Some studies show that more educated, high-earning, and younger customers are more satisfied with their internet banking services than less educated, lower-earning, and older customers (Seyal & Rahim, 2011). Service convenience is most valued by customers with high incomes (Benoit et al., 2017); hence, the general experience may have a significant effect on their LTR score. This leads to the second research question:
RQ2: To what extent is the likelihood of a recommendation influenced by individual differences of customers, such as generation, gender, income level, and education?
The NPS answers can also be influenced by the situation. Følstad & Kvale (2018) find that customers who assess their experience based on a concrete transaction are more likely to recommend the company than those who base their assessment on general brand perception, product experience, and the entire customer journey. As this field is rarely examined, our third and most important research question is as follows: RQ3: How do different situations, especially the recent experience, influence the likelihood of a recommendation?
Data Collection
Participants are presented with a description of an imaginative situation that consists of either a positive, neutral, or negative previous experience with a bank and a positive, neutral, or negative recent experience with this bank (nine combinations in total). The participants are then asked to evaluate how likely they are to recommend this bank to their friends or colleagues.
The situations were created based on the principles of the SERVQUAL model (Parasuraman et al., 1991) while also acknowledging the results of previous research by Benoit et al. (2017); Darzi & Bhat (2018); Shamsi et al. (2023); and Teeroovengadum (2022). Previous experience included descriptions of reliability (technical reliability of cards and online banking), tangibles (online banking channel appearance and simplicity of use), and responsiveness (speed and friendliness of responses to questions). Recent experiences were based on responsiveness (speed and friendliness of responses), assurance (knowledge of employees), and empathy (friendliness and attention) in a service situation wherein customers were trying to change their payment card limits. Online banking is referred to as the main banking channel because it is used by 91% of the customers in Estonia (Remmelg, 2022).
Before the main study was conducted, the tone of the situations was tested using a separate sample of 194 respondents. Table 1 shows that positive and negative experiences are mainly defined as such, but neutral situations are also defined as either positive or negative.
The research sample of the main study is sourced from Norstat, an online panel provider in Estonia. The panel used is ISO-certified and offers a representative sample of 1000 respondents. The questions were presented in Estonian or Russian, depending on the respondents' preferred language. Online surveys and public sample lists have also been used in similar research (Lewis & Mehmet, 2020).
Data Analysis
The respondents are divided into generations based on their age. Table 2 illustrates how different sources and research papers define generations differently. The authors underline the definitions used in this study.
The respondents are divided into groups based on their net income: below the Estonian median net income and above the Estonian median net income. Those who have no income or do not wish to answer are excluded from the analysis. Information about gender, generation, education, and net income groups is coded and analyzed using the SPSS statistical analysis tool. Customers without prior education are excluded from the analysis. NPS values are calculated using Microsoft Excel.
Our research reveals some patterns in respondents' logic. Grouping is performed based on the patterns observed in the answers. Some respondents give a score of 7 or higher to all situations, including negative ones; we call them "optimists". Some respondents give a score between 4 and 6 to all situations; we call them "averagers". Some respondents give a score of 6 or lower to all situations, including scores below 4; we call them "pessimists". "Averagers" and "pessimists" are classified as detractors in all cases, but their logic of answering is different. As shown in Table 3, we additionally define some respondents as "fair", who are neutrals or promoters in positive situations and detractors in the fully negative situation, and some respondents as "almost fair", who are neutrals or promoters in the fully positive situation and detractors in the fully negative situation but have a somewhat different approach than "fairs" in other situations. We failed to classify some respondents, and we group them as "others".

Table 3. Definitions of the answering logics.
Optimist: Gives a score of 7 or higher to all situations.
Pessimist: Gives a score of 6 or lower to all situations, including scores below 4 (not an averager).
Averager: Gives a score between 4 and 6 to all situations.
Fair: Gives a score of 7 or higher to all situations consisting of positive and neutral cases (POS + POS, POS + NEUT, NEUT + POS) and a score of 6 or less for the fully negative (NEG + NEG) experience.
Almost fair: Gives a score of 7 or more to the fully positive (POS + POS) experience and a score of 6 or less for the fully negative (NEG + NEG) experience.
Other: All other answering logics not defined above; the authors could not describe the pattern.
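The grouping rules in Table 3 can be expressed directly as code; the sketch below is ours, with assumed scenario labels, and classifies one respondent's nine scores by checking the rules in order of precedence.

```r
# s: named vector of one respondent's LTR scores for the nine scenarios,
# e.g. s["POS+POS"], s["NEG+NEG"], ...
classify_logic <- function(s) {
  pos_cases <- c("POS+POS", "POS+NEUT", "NEUT+POS")
  if (all(s >= 7))               return("optimist")
  if (all(s >= 4 & s <= 6))      return("averager")
  if (all(s <= 6) && any(s < 4)) return("pessimist")
  if (all(s[pos_cases] >= 7) && s["NEG+NEG"] <= 6) return("fair")
  if (s["POS+POS"] >= 7 && s["NEG+NEG"] <= 6)      return("almost fair")
  "other"  # no recognizable pattern
}
```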
When analyzing the results, we calculate the NPS for different scenarios and customer groups. Using the Mann-Whitney U test, we test different groups to understand which groups have statistically different distributions of answers. We describe the statistically relevant differences based on the answers' distribution and the share of promoters, neutrals, and detractors. We also conduct a regression analysis to determine the factors influencing the LTR scores in different situations.
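Expressed in R rather than SPSS (a hedged sketch; the data frame `ltr` and its columns are hypothetical), the two analysis steps look like this:

```r
# Mann-Whitney U test: do two groups have different score distributions?
# (wilcox.test performs the Wilcoxon rank-sum test, equivalent to Mann-Whitney U;
# gender is assumed to have exactly two levels)
wilcox.test(score ~ gender, data = ltr)

# Regression of the LTR score on sociodemographic factors for one scenario
summary(lm(score ~ age + gender + education + income, data = ltr))
```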
General Overview
Table 4 presents the key characteristics of the sample according to gender, generation, education, and income.
Table 5 shows the distribution of the NPS results for different situations. In describing the situations in the tables, the first abbreviation (NEUT, POS, NEG) indicates the previous experience and the second abbreviation the recent experience.
The resulting NPS varies from −85 points for the fully negative experience (NEG + NEG) to 22 points for the fully positive experience (POS + POS). Responses with a negative recent experience receive the lowest NPS, whether the previous experiences are positive (−80), neutral (−81), or negative (−85). However, regarding the recent neutral experience, the NPS is significantly lower (−54) when the previous experience is negative than when it is positive or neutral (−19 and −12, respectively). The effect of the recent positive experience is stronger when previous experiences are either positive (22) or neutral (14) but less so when the previous experience is negative (−24).
We also conduct a regression analysis to determine which of the analyzed factors influenced the LTR scores in different scenarios. We use age instead of generation as this resulted in a model with a better proportion of variance explained. Table 6 presents the results.
Table 6 shows that although the R² values are below 5% and age, gender, education, and income only slightly describe the variability of the answers, they have a statistically relevant influence on LTR scores in some situations. In situations involving positive and neutral experiences, younger respondents, women, and respondents with higher education are more appreciative and are willing to give a high LTR score. The customer income group does not significantly affect the given score in most situations.
Next, we conduct a regression analysis for positive, neutral, and negative recent experiences and add previous experience as an independent variable, resulting in a sample of 3000 observations. Table 7 shows the results of how the variables influenced the LTR scores. The results are statistically significant at a level of 0.05.
In the case of positive and neutral recent experiences, previous experience has a much larger effect on the LTR score than in the case of a negative recent experience. Finally, we use previous experience as the dependent variable and recent experience as the independent variable (see Table 8). As a result, the regression models have a considerably better goodness of fit.
Differences in Responses by Genders, Generations, Education, and Income Group
Table 9 shows the calculations of the NPS for respondents of different genders, generations, education levels, and income groups. The Mann-Whitney U test supports the regression analysis results that women are significantly more appreciative of positive and even neutral experiences; however, there is no statistically different distribution of answers for negative experiences. The answers from generations Y and Z have statistically similar distributions in all described situations. In addition, Generation X and Boomers have a statistically similar distribution of answers. Generation Z's answer distribution differs from that of Boomers in all situations that have a neutral or positive previous experience, and from Generation X in situations where the recent experience is positive and the previous experience is either neutral or positive, and in situations where the recent experience is negative and the previous experience is either neutral or positive.
The answers from respondents with different educational levels have a statistically similar distribution across almost all the situations; the exception is that the distributions of answers from basic- and higher-education customers differ statistically for the situation with a positive previous and a neutral recent experience.
Regarding different net incomes, a statistically different distribution of responses has emerged in scenarios with previous neutral and recent positive, previous positive and recent neutral, and previous negative and recent neutral experiences.
Regarding the neutral previous and recent experiences, significantly more men choose a score of 5 (23% of male respondents, 16% of female respondents), resulting in a significantly higher share of detractors (42% for men compared to 34% for women). Regarding neutral previous and positive recent experiences, and positive previous and neutral recent experiences, the share of detractors among men is significantly higher, as they select a score of 5 much more often. For neutral previous and positive recent experiences, 33% of men are detractors, compared to 25% of women. For positive previous and neutral recent experiences, 47% of men are detractors, compared to 37% of women. Even for positive previous and recent experiences, a score of 5 is selected by men much more often (15% of male respondents and 9% of female respondents).
Table 9 reveals that generations Z and Y define neutral situations as neutral or even positive, whereas Generation X and Boomers are more critical and have a much higher share of detractors. In situations where the previous experience is neutral or positive and the recent experience is negative, Generation Z respondents choose a score of 6, whereas Boomers and Generation X choose 0 and 1. This results in similar NPS values, but a different distribution of answers.
The scenario with a previous positive experience and a neutral recent experience is defined as negative (detractors) by 37% of respondents with higher education and 50% of respondents with basic education. Regarding previous neutral and recent positive experiences, the higher-income group has a larger share of promoters (48%) than the lower-income group (40%). Regarding previous positive and recent neutral experiences, the share of detractors is higher in the lower-income group (44%) than in the higher-income group (35%).
Differences in Logic of Answering
Table 10 describes the distribution of the different answering logics. A total of 36.4% of respondents (pessimists, optimists, averagers, and others) do not set their scores as anticipated: they do not give higher scores (classified as promoters or neutrals) to positive cases and lower scores (classified as detractors) to negative cases.
Table 10 shows that women generally answer as anticipated (fair and almost fair) across different scenarios, whereas men have a higher share of pessimists and averagers. Respondents with higher education and above-median income also have a higher share of fair and almost fair respondents. Across generations, Generation Z has the lowest share of pessimists and averagers and the highest share of fair and almost fair respondents.
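A Table 10-style breakdown is a cross-tabulation of the assigned logic against a demographic variable; a minimal sketch (with hypothetical columns), building on the answering_logic() sketch above:

```python
# Sketch: share of each answering logic per demographic group,
# as in Table 10. DataFrame columns are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "logic":  ["fair", "optimist", "averager", "fair", "pessimist", "fair"],
    "gender": ["f", "m", "m", "f", "m", "f"],
})
shares = pd.crosstab(df["gender"], df["logic"], normalize="index")
print((100 * shares).round(1))   # row-wise percentage shares
```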
Discussion
The NPS has been researched from a customer perspective by analyzing customers' comments and explanations of their scores (Følstad & Kvale, 2018; Lewis & Mehmet, 2020); however, the customer's experience prior to answering the LTR question has been unknown. Therefore, it was impossible to evaluate the extent to which differences were caused by the situation versus the respondents.
Our study provides a unique perspective on the logic of different responses to the LTR question. Nine validated descriptions of different bank experiences were presented to 1000 respondents. Because the descriptions were identical for all respondents, we could eliminate the influence of unknown experiences, concentrating on the respondents' reactions to the situations and the scores they assigned to the likelihood of a recommendation.
The Logic of Answering
In our pre-study, the respondents described a distinct logic for answering the LTR question; hence, we assumed that even though all respondents evaluated the same questions, their answering logic would differ. Our research confirms this, showing that even in identical situations, respondents' answering logic differs. A total of 16% of all respondents (pessimists and averagers) qualify as detractors in all situations. Based on the explanations from our pre-study, and the results of Schulman and Sargeant (2013), these respondents possibly know that they do not discuss banking experiences with any of their friends or colleagues or, in general, do not recommend anything to anyone. It can also mean that averagers assign scores relative to the scale's midpoint, with 6 being above average and 4 below average, as proposed by Grisaffe (2007). Approximately 7% of all respondents are optimists, which implies, based on our pre-study explanations, that they define themselves as optimists or always reply 10 to avoid additional questions.
We do not find any reasonable pattern for 13% of the respondents. This could be because people may recommend the same company to some of their friends but not to others (Stahlkopf, 2019). Our pre-study explanations suggest that people may find it acceptable to talk about their positive experiences but keep the negative ones to themselves, or vice versa.
In response to RQ1, we show that 64% of the respondents give logical LTR scores, in accordance with the NPS classification, considering the scenarios and experiences described to them. In addition, despite the various answering logics, the total NPS scores are reasonable. Overall, the scenario with positive previous and recent experiences receives the highest NPS, whereas that with negative previous and recent experiences receives the lowest NPS.
NPS by Gender, Generation, Education, and Income Group

Eskildsen & Kristensen (2011) show that women are more likely to be promoters than men and more likely to choose a score of 10. Our research shows that women are significantly more appreciative of positive and neutral experiences, whereas there is no statistically different distribution of answers for negative experiences. In none of the scenarios do we observe any tendency for women to choose a score of 10 more often than men. Eskildsen & Kristensen (2011) also find that men tend to give high detractor ratings, such as 5 and 6, while women tend to give a score of 0. Our findings indicate a clear tendency for men to choose a score of 5 in neutral and positive scenarios but not in negative ones. Additionally, our findings do not indicate that women are more likely to select a score of 0 in any situation. Therefore, we complement the previous research by stating that the tendency for men to give high detractor ratings applies only in positive and neutral situations, whereas the tendency for women to give lower ratings is not confirmed.
Based on previous research, we expect Generation Z respondents to have higher NPS scores than Generation Y (Situmorang, 2017). Our study does not find statistically significant differences in the answer distribution between Generation Y and Generation Z. One possible reason is that Generation Y is defined more broadly by Situmorang (2017) (born in 1976-1997 instead of 1981-1996). Additionally, Situmorang's (2017) study was conducted in 2016, when the respondents from Generation Z were only 18 years old or younger, which may have influenced their preferences compared to Generation Y. In our research, conducted in 2023, the respondents from Generation Z are aged 18-26, so they are more likely to have their own households and make independent consumption decisions; therefore, their answers are more similar to those of Generation Y.
Our research shows that Boomers and Generation X representatives respond similarly to the LTR question, as do generations Y and Z. Nonetheless, a dividing line is observed between generations X and Y: significant differences are observed in the scores of generations Y and Z compared with Boomers and Generation X. Boomers and Generation X give lower scores in most situations, which may mean that they are more critical in general or less likely to recommend. Our findings are in line with those of Katz (2017), Laor & Galily (2022), Pradhan et al. (2022), and Grigoreva et al. (2021), who find that Boomers believe more in the power of individual choice, that Generation Y may be less independent than Generation X and Boomers, and that for Generation Z, recommendations play a bigger role.
The education and income of the respondents influence answers in situations involving neutral previous or recent experiences. Notably, the tone of the neutral experiences is defined as neutral by the majority of the testing group, although some define it as positive or negative. We can see that respondents with basic education and up-to-median income are less likely to be promoters when their recent experience is neutral, despite positive earlier experiences. In addition, respondents with lower income are less likely to be promoters for previous neutral and recent positive experiences, and are more critical when responding to previous negative and recent neutral experiences. Unfortunately, it is not known whether critical responses also mean that these respondents are less loyal when experiencing neutral situations, but this would be an interesting subject for further research.
In response to RQ2, we find that although individual differences only partially explain the variability in LTR scores, they still have a significant influence in certain cases. For neutral or positive experiences, women are more appreciative and give higher scores, whereas men give a score of 5 to the LTR question more often. Boomers and Generation X representatives are more critical and give lower scores in most situations than representatives of generations Y and Z. Education and income influence how respondents interpret neutral situations.
The Influence of the Recent Experience
Our study reveals that the recent experience is influential when answering the LTR question. Comparing Tables 7 and 8, we find that using the recent experience as the independent variable helps the regression models better explain the variability in LTR scores. The effect of the recent experience is particularly significant when it is negative. Table 5 reveals that in the case of a negative recent experience, the NPS varies between −80 and −85 regardless of previous experience. This is also supported by Tables 7 and 8, in which the role of previous experience is much smaller in the case of a negative recent experience. Previous experience plays a more important role when the recent experience is positive or neutral.
Følstad & Kvale (2018) find that customers who assess their experience based on a concrete recent transaction are more likely to recommend the company than those who base their assessment on general brand perception, product experience, and the entire customer journey. This can also mean that problems are attributed to the company in general, whereas positive emotions are emphasized in comments as recognition of the recent good service touchpoint. Our study supports this statement when the recent experience is not negative. For a neutral previous experience and a positive recent experience, 42.6% of respondents are promoters, whereas for a positive previous experience and a neutral recent experience, only 23.3% of respondents are promoters.
In response to RQ3, our results show that the recent experience has a significantly greater effect on the LTR score than the previous experiences. This effect is particularly strong when the recent experience is negative.
Conclusions and Practical Implications
Our study analyzes the differences in customer answers to the LTR question across different pre-described scenarios. The NPS has not been studied before based on pre-described situations and with a high-quality sample. We analyze both the logic behind answering the LTR question and the influence of the recent experience. We find that respondents have distinct answering logics, but when the number of respondents is adequate, the NPS fairly evaluates the experienced situation.
Our research complements the existing literature by specifying customer reactions to scenarios that combine positive, neutral, and negative previous and recent experiences.
1. Complementing the finding of Følstad & Kvale (2018) that customers who assess their experience based on a concrete recent transaction are more likely to recommend the company, our findings confirm that the recent experience with the company largely affects NPS results compared with the general opinion and previous experiences with the company, especially for negative recent experiences.
2. Contrary to Eskildsen & Kristensen (2011), our study does not find that women tend to give a score of 10 or 0; men tend to give a score of 5 to the LTR question only in positive and neutral situations. In the case of positive and neutral experiences, women are more likely to be promoters. For negative experiences, there is no significant difference between men and women.
3. Unlike Situmorang (2017), we find no statistically relevant differences in the answer distribution between generations Y and Z. However, the line goes between generations X and Y: in most situations, Boomers and Generation X give lower LTR scores than generations Y and Z. This is in line with Laor & Galily (2022), who state that digitalization has significantly changed both generations Y and Z; thus, they are likely to act similarly.
4. We also find that in situations without negative experiences, customers who are older or have a lower education level are more likely to give a lower score when defining how likely they are to recommend a company or service to a friend or colleague.
Our study's results have several practical implications. Organizations using the NPS to understand customer feedback should consider the following aspects:
1. The NPS may not be suitable for organizations with a low number of respondents to the LTR question. Individual respondents may have completely different answering logics, which level out with a higher number of respondents.
2. Boomers and Generation X representatives may give lower scores than younger respondents to positive and neutral experiences in similar situations.
3. Customers with lower incomes may give lower scores to neutral experiences than those with higher incomes in similar situations.
4. Customers with basic education may give lower scores to neutral experiences than those with higher education in similar situations.
5. Negative experiences can disrupt the relationship: a recent negative experience with a company strongly influences the customer's recommendation probability. For a recent positive experience, previous general experience also plays a role.
Limitations
The scores are given based on situation descriptions with no real emotions attached, which is not necessarily the case when answering the LTR question in real life. However, this is the only way to ensure that all respondents provide feedback on the same situation based on their beliefs and values. In real life, emotions are involved, but experiences can also be objectively different; therefore, when responses differ, it is not possible to tell whether they differ because of the person or because of the event.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Notes to Table 9. Column headings: NEUT + NEUT, NEUT + POS, NEUT + NEG, POS + POS, POS + NEUT, POS + NEG, NEG + POS, NEG + NEUT, NEG + NEG. Differences calculated with the Mann-Whitney U test at a 0.05 significance level. NPS values are calculated based on LTR scores and their classifications. (a) Distribution is statistically significantly different from Boomers. (b) Distribution is statistically significantly different from Generation X. (c) Distribution is statistically significantly different from males. (d) Distribution is statistically significantly different from the basic education group. (e) Distribution is statistically significantly different from the up-to-median income group.
Table 2.
Definitions of generations.
Table 5.
NPS for different situations.
Table 6.
Regression analysis for different scenarios.
Table 7.
Regression analysis with previous experience as an influencing factor.
Table 8.
Regression analysis with recent experience as an influencing factor.
(a) The results are statistically significant at a level of 0.05.
Table 9.
NPS by gender, generation, education, and income group.
"year": 2024,
"sha1": "e933149ce7d57cb997ab4eaedfeddc81ccc572bc",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14707853231214188",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "5cc18e22e2293a15bf972a00f0ad7d077410a518",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Simulating galaxy clusters - II: global star formation histories and galaxy populations

Cosmological (ΛCDM) TreeSPH simulations of the formation and evolution of galaxy groups and clusters have been performed. The simulations invoke star formation, chemical evolution with non-instantaneous recycling, metal-dependent radiative cooling, strong star-burst and (optionally) AGN driven galactic super-winds, effects of a meta-galactic UV field, and thermal conduction. The properties of the galaxy populations in two clusters, one Virgo-like (T ~ 3 keV) and one (sub) Coma-like (T ~ 6 keV), are discussed. The global star formation rates of the cluster galaxies are found to decrease very significantly with time from redshift z=2 to 0, in agreement with observations. The total K-band luminosity of the cluster galaxies correlates tightly with total cluster mass, and for models without additional AGN feedback, the zero point of the relation matches the observed one fairly well. The match to observed galaxy luminosity functions is reasonable, except for a deficiency of bright galaxies (MB < −20), which becomes increasingly significant with super-wind strength. Results of a high-resolution test indicate that this deficiency is not due to "over-merging". The redshift evolution of the luminosity functions from z=1 to 0 is mainly driven by luminosity evolution, but also by merging of bright galaxies with the cD. The colour-magnitude relation of the cluster galaxies matches the observed "red sequence" very well and, on average, galaxy metallicity increases with luminosity. As the brighter galaxies are essentially coeval, the colour-magnitude relation results from metallicity rather than age effects, as observed.
INTRODUCTION
Clusters of galaxies are of great interest both as cosmological probes and as "laboratories" for studying galaxy formation. The mass function and number density of galaxy clusters as a function of redshift are a powerful diagnostic for the determination of cosmological parameters (see Voit 2004 for a recent comprehensive review). Besides, clusters represent higher-than-average concentrations of galaxies, with active interaction and exchange of material between them and their environment, as testified by the non-primordial composition of the hot surrounding gas (e.g. Matteucci & Vettolani 1988; Arnaud et al. 1992; Renzini et al. 1993; Finoguenov et al. 2000, 2001; De Grandi et al. 2004; Baumgartner et al. 2005).
There are observational and theoretical arguments indicating that clusters are not fair samples of the average global properties of the Universe. The morphological mixture of galaxies in clusters is significantly skewed toward earlier types with respect to the field population, implying star formation histories peaking at higher redshifts than is typical in the field (Dressler 1980; Goto et al. 2003; Kodama & Bower 2001; see also Section 3); this is qualitatively in line with the expectation that high density regions such as clusters, in a hierarchical bottom-up cosmological scenario, evolve at an "accelerated" pace with respect to the rest of the Universe (Bower 1991; Diaferio et al. 2001; Benson et al. 2001). Although clusters are somewhat biased sites of galaxy formation, they have the advantage of being bound structures with deep potential wells, likely to retain all the matter that falls within their gravitational influence; hence, they represent well-defined, self-contained "pools", where one can aim at keeping full track of the process of galaxy formation and evolution, and of the global interplay between galaxies and their environment.
The physics of clusters of galaxies has thus received increasing attention in the past decade, benefiting from a number of X-ray missions measuring the emission of the hot intra-cluster gas (e.g., ASCA, ROSAT, XMM, Chandra) as well as from extensive optical/NIR surveys probing the distribution of galaxies and their properties (e.g. MORPHS, SDSS, 2MASS). Understanding cluster physics is also crucial to reconstruct, from the observed X-ray luminosity function and temperature distribution, the intrinsic mass function of clusters as a function of redshift, which is a quantity of profound cosmological interest (Voit 2004).
The baryonic mass in clusters is largely in the form of a hot intra-cluster medium (ICM), which dominates by a factor of 5-10 over the stellar mass (Arnaud et al. 1992; Lin, Mohr & Stanford 2003). Consequently, early theoretical work and numerical simulations concentrated on pure gas dynamics when modelling clusters. Recently, however, attention has focused also on the role of galaxy and star formation, and related effects. Star formation locks up low-entropy gas, and supplies thermal and kinetic energy to the surrounding medium via supernova explosions and shell expansion; both processes likely contribute to the observed "entropy floor" in low-mass clusters, and the corresponding breaking of the scaling relations expected from pure gravitational collapse physics (Voit et al. 2003). Besides, star formation is accompanied by the production of new metals and the chemical enrichment of the environment; the considerable (about 1/3 solar) metallicity of the ICM indicates that a significant fraction of the metals produced (comparable to or even larger than the fraction remaining within the galaxies) is dispersed into the intergalactic medium (Renzini 2004), affecting the cooling rates of the intra-cluster gas. It is thus clear that the hydrodynamical evolution of the hot ICM is intimately connected to the formation and evolution of cluster galaxies. Only recently, however, owing to advances in computing capabilities as well as in detailed physical modelling, have cluster simulations reached a level of sophistication adequate to trace star formation and related effects in individual galaxies, and the chemical enrichment of the ICM by galactic winds, in a reasonably realistic way (Valdarnini 2003; Tornatore et al. 2004).
Indeed, theoretical predictions of the properties of cluster galaxy populations within a fully cosmological context have so far been derived mainly by means of semi-analytical models: high resolution, purely N-body cosmological simulations of the evolution of the collisionless dark matter component are combined with semi-analytical recipes describing galaxy formation and related physics (such as chemical enrichment, stellar feedback and the exchange of gas and metals between galaxies and their environment; see e.g. De Lucia, Kauffmann & White 2004 for a recent reference); with such schemes, the evolution of galaxies is "painted" on top of that of the simulated dark matter haloes and sub-haloes. The advantage of this technique is that very high resolution can be attained, since pure N-body simulations can handle larger numbers of particles than hydrodynamical simulations; besides, a wide range of parameters can be explored for baryonic physics (e.g. star formation and feedback efficiency, Initial Mass Function, etc.).
In this paper, we present for the first time (to our knowledge) an analysis of the properties of the galaxy population of clusters as predicted directly from cosmological simulations including detailed baryonic physics, gas dynamics and galaxy formation and evolution. The resolution of N-body + hydrodynamical simulations cannot reach the level of the purely N-body simulations that constitute the backbone of semi-analytical models, so we cannot resolve galaxies at the faint end of the luminosity function (MB ≳ −16). On the other hand, our simulations have the advantage of describing the actual hydrodynamical response of the ICM to star formation, stellar feedback and chemical enrichment. Although some uncertain physical processes still necessarily rely on parameters (like the star formation efficiency and the feedback strength, see Section 2), once these are chosen, the interplay between cluster galaxies and their environment follows in a realistic fashion, as part of the global cosmological evolution of the cluster.
Moreover, the intimate relation between the stellar Initial Mass Function (IMF), stellar luminosity, chemical enrichment, supernova energy input, returned gas fraction and gas flows out of/into the galaxies is included self-consistently in the simulations (while these are sometimes treated as adjustable, independent parameters in semi-analytical models). The properties we obtain for the galaxy populations (notably, global star formation rates, luminosity functions and colour-magnitude relations) are then the end result of ab-initio simulations, with a far smaller degree of parameter calibration than in semi-analytical schemes.
In a standard ΛCDM cosmology, we have performed N-body + hydrodynamical (SPH) simulations of the formation and evolution of clusters of different mass, on scales of groups to moderately rich clusters (emission-weighted temperatures from 1 to 6 keV). In Paper I of this series (Romeo et al. 2004, in preparation) we analyze the properties and distribution of the hot ICM in the simulated clusters, and discuss the effects that star formation and related baryonic physics have on the predicted X-ray properties of the hot gas. Several sets of simulations have been carried out, assuming different IMFs and feedback prescriptions (see Paper I and Section 2). The chemical and X-ray properties of the ICM are best reproduced assuming a fairly top-heavy IMF and a high, though not extreme, feedback (super-wind) efficiency (the simulations marked, hereafter, as AY-SW; see Paper I and Table 1).
In this Paper II we focus instead on the properties of the galaxy population in our simulated richer clusters (with temperatures between 3 and 6 keV), where the number of identified galaxies is statistically significant. We will mainly discuss the results from the AY-SW simulations, favoured by the resulting properties of the ICM; results from simulations with different input physics (IMF, wind efficiency, preheating) are also discussed for comparison, where relevant.
In Section 2 we briefly introduce the code and the simulations (full details are given in Paper I), as well as the procedure to identify cluster galaxies in the simulations. In Section 3 we discuss the global star formation histories of cluster galaxies, and in Section 4 we determine global luminosities of the clusters simulated with different prescriptions for baryonic physics. In Sections 5 and 6 we discuss luminosity functions and colour-magnitude relations of the galaxy population in our clusters, and, finally, in Section 7 we summarize our results.
THE SIMULATIONS
The code used for the simulations is a significantly improved version of the TreeSPH code we used previously for galaxy formation simulations (Sommer-Larsen, Götz & Portinari 2003). Full details on the code and the simulations are given in Paper I, here we recall the main upgrades over the previous version.
(1) In lower resolution regions an improvement in the numerical accuracy of the integration of the basic equations is obtained by incorporating the "conservative" entropy equation solving scheme of Springel & Hernquist (2002).
(2) Cold high-density gas is turned into stars in a probabilistic way as described in Sommer-Larsen et al. (2003). In a star-formation event a SPH particle is converted fully into a star particle. Non-instantaneous recycling of gas and heavy elements is described through probabilistic "decay" of star particles back to SPH particles, as discussed by Lia et al. (2002a). In a decay event a star particle is converted fully into a SPH particle (a toy sketch of these probabilistic conversions is given below, after this list).
(3) Non-instantaneous chemical evolution tracing 10 elements (H, He, C, N, O, Mg, Si, S, Ca and Fe) has been incorporated in the code following Lia et al. (2002a,b); the algorithm includes supernovae of type II and type Ia, and mass loss from stars of all masses. For the simulations presented in this paper, a Salpeter (1955) IMF and an Arimoto-Yoshii (1987) IMF were adopted, both with mass limits [0.1-100] M⊙. (4) Atomic radiative cooling is implemented, depending both on the metallicity of the gas (Sutherland & Dopita 1993) and on the meta-galactic UV field, modelled after Haardt & Madau (1996). (5) Star-burst driven, galactic super-winds are incorporated in the simulations. This is required to expel metals from the galaxies and reproduce the observed levels of chemical enrichment of the ICM. A burst of star formation is modelled in the same way as the "early bursts" of Sommer-Larsen et al. (2003): the energy released by SNII explosions goes initially into the interstellar medium as thermal energy, and gas cooling is locally halted to reproduce the adiabatic super-shell expansion phase; a fraction of the supplied energy is subsequently converted (by the hydro code itself) into kinetic energy of the resulting expanding super-winds and/or shells. The super-shell expansion also drives the dispersion of the metals produced by type II supernovae (while metals produced on longer timescales are returned to the gaseous phase by the "decay" of the corresponding star particles, see point 2 above). The strength of the super-winds is modelled via a free parameter f_wind, which determines how large a fraction of the new-born stars partake in such bursting, super-wind driving star formation. We find that in order to get an iron abundance in the ICM comparable to observations, f_wind ≳ 0.5 and a top-heavy IMF must be used. (6) Thermal conduction was implemented in the code following Cleary & Monaghan (1999), with the addition that effects of saturation (Cowie & McKee 1977) were taken into account. The groups and clusters were drawn and re-simulated from a dark matter (DM)-only cosmological simulation run with the code FLY (Antonuccio et al., 2003), for a standard flat Λ Cold Dark Matter cosmological model (h = 0.7, Ω0 = 0.3, σ8 = 0.9) with a 150 h^-1 Mpc box-length. When re-simulating with the hydro-code, baryonic particles were "added" to the original DM ones, which were split according to a chosen baryon fraction f_b = 0.12. This results in particle masses of m_gas = m* = 2.5·10^8 and m_DM = 1.8·10^9 h^-1 M⊙. Gravitational (spline) softening lengths of ε_gas = ε* = 2.8 and ε_DM = 5.4 h^-1 kpc, respectively, were adopted. The gravity softening lengths were fixed in physical coordinates from z=6 to z=0 and in comoving coordinates at earlier times. Particle numbers are in the range 4·10^5 - 10^6 SPH+DM particles. To test for numerical resolution effects, one simulation was run with eight times higher mass and two times higher force resolution, with 2.3·10^6 SPH+DM particles having m_gas = m* = 3.1·10^7, m_DM = 2.3·10^8 h^-1 M⊙ and softening lengths of 1.4, 1.4 and 2.7 h^-1 kpc, respectively (Section 5.1).
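To make point (2) above concrete, the following toy sketch illustrates probabilistic conversion between SPH and star particles. The probability forms are illustrative assumptions only; they are not the actual prescriptions of Sommer-Larsen et al. (2003) or Lia et al. (2002a).

```python
# Toy sketch of probabilistic star formation and star-particle "decay".
# The probability forms are assumptions for illustration, not the
# actual Sommer-Larsen et al. (2003) / Lia et al. (2002a) recipes.
import numpy as np

rng = np.random.default_rng(42)

def form_stars(is_cold_dense, dt, t_sf):
    """Eligible SPH particles become star particles with probability
    p = 1 - exp(-dt/t_sf) per timestep (t_sf: assumed SF timescale)."""
    p = 1.0 - np.exp(-dt / t_sf)
    return is_cold_dense & (rng.random(is_cold_dense.size) < p)

def decay_stars(age, dt, E_of_t):
    """Star particles 'decay' back to SPH particles so that, on average,
    a fraction E(t) of a stellar generation has decayed by age t.
    E_of_t(age) must return (E, dE/dt) at the given ages."""
    E, dE_dt = E_of_t(age)
    p = dE_dt * dt / np.clip(1.0 - E, 1e-6, None)  # per-particle rate
    return rng.random(age.size) < p
```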
We selected from the cosmological simulation some fairly relaxed systems, not undergoing any major mergers since z ∼ 1: two galaxy groups, two small-mass clusters, one "Virgo-like" cluster (T ∼ 3 keV) and a "Mini-Coma" (T ∼ 6 keV) cluster. TreeSPH re-simulations were run with different super-wind (SW) prescriptions, IMFs, either Salpeter (Sal) or Arimoto-Yoshii (AY), with or without thermal conduction, with or without preheating and, as mentioned above, at two different numerical resolutions.
As our reference runs we shall consider simulations with an AY IMF, with 70% of the energy feedback from supernovae type II going into driving galactic super-winds, zero conductivity, and at normal resolution; these will be denoted "AY-SW". Some other simulations were run with galactic super-wind feedback two times (SWx2) and four times (SWx4) as energetic as is available from supernovae; the additional amount of energy is assumed to come from AGN activity. Others were of the AY-SW type, either with additional preheating at z=3 (preh.), as discussed by Tornatore et al. (2003), or with thermal conduction included (COND), assuming a conductivity of 1/3 of the Spitzer value (e.g., Jubelgas, Springel & Dolag 2004). Finally, one series of simulations was run with a Salpeter IMF and only early (z ≳ 4), strong feedback, as in the galaxy formation simulations of Sommer-Larsen et al. (2003); as this results in overall fairly weak feedback, we denote this as Sal-WFB. We refer the reader to Papers I and III, and also Sommer-Larsen (2004), for more details.
The general features and main results for all the simulations are presented in the companion Paper I. In the present paper we mainly discuss the properties of the galaxy populations in the simulated "Virgo" and "Mini-Coma" clusters, which are in Paper III referred to as clusters "C1" and "C2", respectively. We focus on these two largest simulated objects since they contain a significant number of identified individual galaxies. The prescriptions adopted for the resimulations of these two clusters are summarized in Table 1. Lower mass objects are only included in the analysis of the luminosity scaling in Section 4.
We consider the AY-SW simulation as our "standard" run. These simulations provide a satisfactory overall match to the observed properties of the ICM (see Paper I, and also Sommer-Larsen 2004). For "Virgo" we also have simulations with higher feedback efficiency, with pre-heating or with the Salpeter IMF (weak or normal feedback); the corresponding results for the galaxy population are shown for comparison, where relevant (e.g., Figs. 4 and 10).
Because of its high computational demand, the "Coma" system was run only with the standard AY-SW prescription, with (COND) or without thermal conduction.
The identification of individual galaxies
The stellar contents of both clusters are characterized by a massive, central dominant (cD) elliptical galaxy surrounded by other galaxies orbiting in the main cluster potential, and embedded in an extended envelope of tidally stripped intra-cluster stars, unbound from the galaxies themselves. Observationally, the brightest galaxy in clusters is usually located at or near the cluster centre, where other (massive) galaxies likely tend to sink and merge by dynamical friction, mainly against the dark matter. Our simulated cDs appear very dominant, containing slightly more than half of all the star particles in the cluster, and are in this respect more prominent than the cDs observed in real clusters. This is related to a "central cooling" problem common in models and hydrodynamical simulations, both SPH and Adaptive Mesh Refinement (e.g. Ciotti et al. 1991; Fabian 1994; Knight & Ponman 1997; Suginohara & Ostriker 1998; Lewis et al. 2000; Tornatore et al. 2003; Nagai & Kravtsov 2004): after the cluster has recovered from the last major merging events, a steady cooling flow is established at the centre of the cluster, being only partially attenuated by the strong feedback. The cooled-out gas is turned into stars at the base of the cooling flow (r ≲ 10 kpc). This fairly young stellar population, accumulating at the centre of the cD, contributes very significantly to the total luminosity, and increases the total stellar mass of the cD by of the order of a factor of two. In semi-analytical models, one can alleviate this problem by artificially quenching radiative gas cooling in galactic haloes with circular velocities larger than 350 km/s (Kauffmann et al. 1999). However, over-cooling in cluster cores is not merely a technical problem in numerical simulations, but a problem with the physics in the central cluster regions. XMM and Chandra observations have revealed that cooling occurs only down to temperatures of about 1-2 keV, so that the former "cooling flow" regions are now known to be instead "cool cores" (e.g. Molendi & Pizzolato 2001; Peterson et al. 2001, 2003; Tamura et al. 2001; Matsushita et al. 2002); the mechanisms that prevent the gas from cooling further and finally forming stars at a high rate are presently not clearly identified (central AGN feedback, magnetohydrodynamic effects, thermal conduction and/or heating through gravitational interactions are some candidates; e.g. Ciotti & Ostriker 2001; Narayan & Medvedev 2001; Makishima et al. 2001; Fabian, Voigt & Morris 2002; Churazov et al. 2002; Kaiser & Binney 2003; Fujita, Suzuki & Wada 2004; Cen 2005).
The cD and its envelope of intra-cluster stars formed in our simulations are discussed in the companion Paper III of this series (Sommer-Larsen, Romeo & Portinari, 2005). Here, we study the properties of the rest of the galaxy population in the simulated clusters. The galaxies are identified in the simulations by means of the procedure detailed in Paper III. Visual inspection of the z=0 frames shows that the stars in all galaxies (except the cD) are typically located within 10-15 kpc from the respective galactic centres. In a cubic grid of cell-length 10 kpc, we identify all cells containing at least 2 star particles. Each of these is then embedded in a larger cube of length 30 kpc; if this larger cube contains a minimum of N_min = 7 gravitationally bound star particles, the system is labelled as a potential galaxy. Finally, among the various potential galaxies effectively identifying the same system, we classify as the galaxy the one containing the largest number of star particles. With the mass resolution of the simulations, N_min corresponds to a stellar mass of 2.5 × 10^9 M⊙ and an absolute B-band magnitude of MB ≈ −16, of the order of that of the Large Magellanic Cloud. The galaxy identification algorithm is adequately robust, as long as N_min ∼ 7-10 star particles. Note though that galaxies will contain gas and dark matter particles as well, in the case of small galaxies often considerably more dark matter particles, because most of the baryons have been expelled by galactic super-winds.
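A simplified sketch of this identification scheme is given below; the gravitational boundness check is omitted, since it requires the particle potential energies from the simulation.

```python
# Simplified sketch of the cell-based galaxy identification described
# above (boundness check omitted). Positions are an (N, 3) array in kpc.
import numpy as np
from collections import defaultdict

def find_galaxies(star_pos, cell=10.0, box=30.0, n_min=7):
    # 1. Bin star particles onto a cubic grid of cell-length `cell`.
    cells = defaultdict(list)
    for i, p in enumerate(star_pos):
        cells[tuple((p // cell).astype(int))].append(i)

    # 2. Around each cell with >= 2 particles, count the members of a
    #    cube of length `box`; keep potential galaxies with >= n_min.
    candidates = []
    for key, members in cells.items():
        if len(members) < 2:
            continue
        center = (np.asarray(key) + 0.5) * cell
        inside = np.all(np.abs(star_pos - center) < box / 2.0, axis=1)
        if inside.sum() >= n_min:
            candidates.append(np.flatnonzero(inside))

    # 3. Among overlapping candidates, keep the richest one.
    candidates.sort(key=len, reverse=True)
    galaxies = []
    taken = np.zeros(len(star_pos), dtype=bool)
    for idx in candidates:
        if not taken[idx].any():
            galaxies.append(idx)
            taken[idx] = True
    return galaxies
```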
Using this galaxy identification procedure, we identify for the standard runs 42 galaxies in "Virgo" and 212 in "Coma", respectively.
The computation of the luminosity
We assign luminosities and colours to the galaxies identified in our simulations, as the sum of the luminosities of the relevant star particles in the various passbands.
Each star particle represents a Single Stellar Population (SSP) of total mass 3.6 × 10^8 M⊙, with individual stellar masses distributed according to a particular IMF (either Arimoto-Yoshii or Salpeter), and we keep record of the age and the metallicity of each of these SSPs. It is quite straightforward to compute the global luminosities and colours of our simulated galaxies as the sum of the contributions of their constituent star particles. SSP luminosities are computed by mass-weighted integration of the Padova isochrones (Girardi et al., 2002).
[Table 1. Basic physical properties and prescriptions adopted for the runs of the "Virgo" and "Coma" clusters discussed in this paper. Temperatures (4th column) refer to the "standard" run AY-SW. The last two columns give quantities resulting from the simulations, at R_500.]

We must however pay special attention to the relation between star particle mass and SSP mass. In our "statistical" implementation of chemical evolution (Lia et al. 2002a,b), a fraction of the star particles, increasing with age, transforms back into gas particles (the gas-again particles of Lia et al.), to simulate the re-ejection of gas into the interstellar medium by dying stars. Following the notation of Lia et al., the re-ejected fraction increases with the age t of the SSP as

E(t) = ∫_{M(t)}^{M_u} [M − M_rem(M)] Φ(M) dM,

where Φ(M) is the IMF (normalized to unit total mass), M_u is its upper mass limit, M(t) is the mass of the star with lifetime t, and M_rem(M) is the mass of the remnant left by a star of mass M. Typical values of the global returned fraction after a Hubble time are 30% for the Salpeter IMF, and around 50% for the Arimoto-Yoshii IMF. As a consequence, out of an episode of star formation involving N star particles, after a time t on average N E(t) of them have returned to being SPH particles. We need to take this effect into account when computing the luminosities, since SSP luminosities refer to the initial mass of the SSP, namely the mass involved in the original star formation episode, not to its current mass, which is a fraction 1 − E(t) of the initial one. Each star particle of age t is effectively representative of 1/(1 − E(t)) star particles at its birth; equivalently, each star particle of mass m* corresponds to an initial SSP mass m*/(1 − E(t)). Therefore, for the computation of the luminosity each star particle is assigned a corresponding "initial SSP mass" m*/(1 − E(t)), rather than its actual present mass m*.
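As a sketch, the luminosity of a galaxy can then be assembled as follows; the SSP luminosity lookup and the tabulated returned fraction below are hypothetical stand-ins for the Girardi et al. (2002) isochrone integration and the Lia et al. (2002a) recipes.

```python
# Sketch of the galaxy luminosity computation described above.
# `ssp_luminosity(age, Z, band)` (luminosity per unit *initial* SSP
# mass) and the E(t) table are hypothetical stand-ins.
import numpy as np

# Illustrative returned-fraction table, rising to ~0.5 after a Hubble
# time as quoted above for the Arimoto-Yoshii IMF.
_t_gyr = np.array([0.0, 0.1, 1.0, 5.0, 13.7])
_E     = np.array([0.0, 0.15, 0.35, 0.45, 0.50])

def returned_fraction(age_gyr):
    return np.interp(age_gyr, _t_gyr, _E)

def galaxy_luminosity(ages, metallicities, masses, band, ssp_luminosity):
    """Sum the SSP luminosities of the star particles in one galaxy,
    weighting each particle of current mass m* and age t by its
    initial SSP mass m*/(1 - E(t))."""
    initial_mass = masses / (1.0 - returned_fraction(ages))
    return np.sum(initial_mass * ssp_luminosity(ages, metallicities, band))
```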
STAR FORMATION HISTORIES OF THE CLUSTER GALAXIES
The star formation process in the cluster environment is known to peak at higher redshifts than in the field. The morphology-density relation (e.g. Dressler 1980; Goto et al. 2003) implies that the cluster population is dominated by early-type galaxies, ellipticals and S0's, whose stellar populations have formed at redshift z > 2, as indicated by the tightness and the redshift evolution of the colour-magnitude relation and of the fundamental plane (e.g. Bower, Lucey & Ellis 1992; Kodama & Arimoto 1997; Jørgensen et al. 1999; van Dokkum & Stanford 2003). Conversely, the field galaxy population is dominated by late Hubble types, with star formation histories still presently on-going. Furthermore, compared to their cluster counterparts, field ellipticals (especially low-mass ones) display more extended star formation histories and/or minor star formation episodes at low redshifts (e.g. Bressan et al. 1996; Trager et al. 2000; Bernardi et al. 2003; Treu et al. 2005). The rapid drop in the past star formation rate in clusters is also apparent in the evolution of the fraction of star-forming galaxies, as traced by blue colours and spectroscopy (e.g. Butcher & Oemler 1978, 1984; Ellingson et al. 2001; Margoniner et al. 2001; Poggianti et al. 1999, 2004; Gomez et al. 2003). These trends are qualitatively in line with present theories of cosmological structure formation, predicting that high density regions evolve faster than low-density regions (Bower 1991; Diaferio et al. 2001; Benson et al. 2001); although there are severe quantitative discrepancies with the results of semi-analytical hierarchical models, especially concerning the formation and number density evolution of field early-type galaxies (e.g. Benson et al. 2002).
All in all, clusters display a drastic decay in the star formation rate at z < 1, much faster than the corresponding decline indicated by the Madau plot for field galaxies (Kodama & Bower 2001; Fig. 1). Such a drop in the cluster star formation rate is usually ascribed to a combination of three effects.
(1) Interaction with the cluster environment and with the ICM quenches the star formation of the infalling galaxies via mergers and interactions, harassment, ram pressure stripping, starvation/strangulation, etc. (Dressler 2004 and references therein). (2) The rate of accretion of star-forming field galaxies onto clusters decreases with decreasing redshift (Bower 1991; Kauffmann 1995). (3) The intrinsic star formation rate of the accreted galaxies decreases at z < 1 (cf. the Madau plot, Madau et al. 1998).
In this section, we analyze the global star formation history of the cluster galaxies in our simulations and compare it to the observational one derived by Kodama & Bower (2001). Just like for the empirical one, our global star formation history is obtained as the sum of the star formation histories of the individual cluster galaxies (excluding the young cD stars in the central regions, formed out of the cooling flow; see also Paper III).
For each cluster, we single out all the star particles that at the present time (z=0) belong to the identified galaxies within 1 Mpc, corresponding to the same R30 radius considered by Kodama & Bower. For each star particle we know the age t, and we derive the corresponding "initial SSP mass" involved in star formation at its birth time as m*/(1 − E(t)) (see § 2.2). Summing over all the selected star particles, we reconstruct the past SFR in the comoving Lagrangian volume.
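In practice this reconstruction is a mass-weighted histogram of stellar birth times; a sketch, with array names assumed and the toy E(t) table reused from the luminosity sketch above:

```python
# Sketch of the global SFR reconstruction from the z=0 star particles.
# Array names are assumptions; times are in Gyr since the Big Bang.
import numpy as np

_t_gyr = np.array([0.0, 0.1, 1.0, 5.0, 13.7])   # illustrative E(t) table
_E     = np.array([0.0, 0.15, 0.35, 0.45, 0.50])

def global_sfr(birth_time_gyr, masses, ages_gyr, t_bins_gyr):
    """Past SFR (Msun/yr) in the comoving Lagrangian volume: initial
    SSP mass formed per time bin, divided by the bin width."""
    initial_mass = masses / (1.0 - np.interp(ages_gyr, _t_gyr, _E))
    formed, edges = np.histogram(birth_time_gyr, bins=t_bins_gyr,
                                 weights=initial_mass)
    return formed / (np.diff(edges) * 1e9)
```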
In Fig. 1 the global normalized SFR is shown for the galaxies in "Virgo" (3 models with varying feedback strength and Arimoto-Yoshii IMF, plus a model with Salpeter IMF) and "Coma" (standard reference prescription). Since z ∼ 1.2 the SFR declines considerably more rapidly than that of the field galaxies, in line with observational reconstructions indicating a peak at z ∼ 3-4 and a subsequent significant drop from z ≃ 2 to 0, considerably more pronounced than in the field. In Fig. 1, the shape of the SFR history in our simulated clusters is in agreement with the estimates of Kodama & Bower; however, the normalization is somewhat higher (close to the field level at redshift z = 2).

[Figure 1. Normalized global star formation rates in the "Virgo" and "Coma" galaxies, excluding the youngest stars (less than 1 Gyr old) in the inner 10 kpc of the cDs; symbols are explained in Section 2. Also shown are observed rates for field galaxies (dotted, Madau et al., 1998) and those of galaxies in the inner parts of rich clusters (shaded region, from Kodama & Bower, 2001).]
From comparing "Virgo" and "Coma" in the figure, it is evident that the total star formation history normalized by cluster mass is nearly independent of the cluster virial mass, as also recently pointed out by Goto (2005) for a sample of 115 nearby clusters selected from the SDSS: this suggests that physical mechanisms depending on virial mass (such as ram-pressure stripping) are not exclusively driving galaxy evolution within clusters.
SCALING PROPERTIES OF THE CLUSTER LIGHT
The near-infrared luminosity of galaxies is only negligibly affected by recent star formation activity, thus giving a robust measure of the actual stellar content of a cluster (Kauffmann & Charlot 1998). In Fig. 2 we show the total (2.2 µm) K-band luminosity of galaxies within r_500 versus the total mass inside r_500 (which in the ΛCDM model is approximately half of the virial radius) for individual clusters in the simulated sample, including objects of lower mass than "Virgo" and "Coma". The various models match quite well the best-fit slope derived observationally by Lin, Mohr & Stanford (2004), L_K,500 ∝ M_500^0.72, but the runs with additional pre-heating and those with stronger feedback result in a normalization too low with respect to the observations. As to the Salpeter simulations, the excellent agreement seen in this plot is partly affected by the very blue colours of the galaxies (Fig. 10); in bluer bands, e.g. the B band, these simulations turn out too bright for their cluster mass (Paper I).
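The slope comparison amounts to a linear fit in log-log space; a minimal sketch, with hypothetical arrays standing in for the simulated cluster masses and K-band luminosities:

```python
# Sketch: fit L_K,500 = A * M_500^alpha to the simulated sample.
# The arrays are hypothetical (masses in Msun, luminosities in Lsun).
import numpy as np

M500  = np.array([5e13, 1e14, 3e14, 7e14])
LK500 = np.array([2e12, 3.5e12, 8e12, 1.5e13])

alpha, logA = np.polyfit(np.log10(M500), np.log10(LK500), 1)
print(f"L_K,500 ~ M_500^{alpha:.2f}")  # observed slope: 0.72 (Lin et al. 2004)
```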
[Figure 2. K-band luminosity-mass relation inside R_500 at z=0 (excluding cD stars in the innermost 10 kpc, younger than 1 Gyr); notice that "Coma" cluster results at M_500 ∼ 7 × 10^14 M⊙ are available only for the AY-SW prescription, with and without thermal conduction (see Table 1). Shaded strip: best-fit relation for a sample of 93 clusters and groups from 2MASS (Lin et al., 2004). In the insert: the same excluding all stars born since z ≃ 1 from the central 40 kpc; the observational best-fit relation has also been modified to exclude the BCG.]

The figure's insert shows the result of correcting for the excess of young cD stars by neglecting the luminosity contribution of stars in the innermost 40 kpc formed at z ≲ 1 (see Paper III for details); the observational data have been corrected as well, by excluding the cluster brightest (usually central) galaxy (BCG). The slope of the relation remains essentially the same, though observationally it steepens slightly, to L_K,500 ∝ M_500^0.83 (Lin et al. 2004). This indicates that the relative contribution of the cD/BCG to the total cluster light decreases with cluster mass.
The regular trend seen in Fig. 2 over a considerable range of mass suggests that star/galaxy formation in groups and clusters tends to be approximately a scaling process; moreover, the slope of the observational relation indicates that smaller groups produce stars with higher efficiency than larger clusters, provided that the K-band light traces stellar mass well.
THE LUMINOSITY FUNCTION OF CLUSTER GALAXIES
In this section we discuss the luminosity function of the population of galaxies in our simulated "Virgo" and "Coma" clusters (excluding the central cDs, which are discussed in Paper III). As mentioned above, we identify for the standard runs 42 galaxies in "Virgo" and 212 galaxies in "Coma", respectively, with a resolution-limited stellar mass ≳ 2.5 × 10^9 M⊙, of the order of that of the Large Magellanic Cloud. This limit corresponds to MB ∼ −16 at z=0 (Fig. 3).
In Fig. 3 we compare the B-band and K-band luminosity functions (LF) of our simulated cluster galaxies to the observational LFs by Trentham (1998; a weighted mean of 9 clusters). For the B-band LF (top panels), the number of resolved individual galaxies quickly drops for objects fainter than MB ∼ −17, both in "Virgo" and in "Coma", due to the above-mentioned resolution limits and identification procedure. Notice that, although we are missing the dwarf galaxies that largely dominate in number, we are able to describe the bulk of the stellar mass and of the luminosity in clusters, which is dominated by galaxies around L*, while dwarfs fainter than MB ≃ −17 give a negligible contribution (Cross et al. 2001; Blanton et al. 2004), though in the simulations too much of the luminosity and stellar mass expected in objects around L* is instead accumulated in the central cD (see below). Our B-band LF in Fig. 3 is normalized so that, in the luminosity bins that are significantly populated (−19 ≲ MB ≲ −17), the overall number of objects is the same as for the observed relative LF (Trentham 1998); in this magnitude range, the shape of the predicted and observed LF is directly comparable. The simulated LF is steeper than the observed one, namely we underestimate the relative number of bright galaxies. One reason for this may be our somewhat biased selection of the group and cluster sample, which consists of fairly relaxed systems, in which no significant merging takes place since z ∼ 1 (see Paper III). This means that most of the massive galaxies have already merged with the central cD by dynamical friction, which is not always the case for real clusters (one likely, nearby example of an unrelaxed cluster is Virgo; e.g., Binggeli et al. 1987).
In the K-band (bottom panels of Fig. 3), our simulated LF reaches fainter magnitudes than the limit MK = −21 (dotted line); the observational LF is considered reliable only brightward of this limit (Lin et al. 2004). Therefore, we normalize our LF to match the observed number of galaxies within the populated magnitude range and above the resolution limit (−23.2 ≲ MK ≲ −21). In the K-band, the relative lack of massive galaxies is even more evident.
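The normalization described above is a simple rescaling of a magnitude histogram; a sketch with assumed inputs:

```python
# Sketch of the LF normalization: rescale the simulated LF so that its
# count inside a populated, resolved magnitude window matches the
# observed total there. Inputs are assumptions for illustration.
import numpy as np

def normalized_lf(sim_mags, obs_total_in_window, bins, window):
    counts, edges = np.histogram(sim_mags, bins=bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    in_win = (centers >= window[0]) & (centers <= window[1])
    scale = obs_total_in_window / max(counts[in_win].sum(), 1)
    return scale * counts, centers

# Example: K-band window -23.2 < M_K < -21, as used above.
# lf, mags = normalized_lf(sim_mags, obs_total,
#                          bins=np.arange(-26.0, -15.5, 0.5),
#                          window=(-23.2, -21.0))
```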
In Fig. 4 we compare the LFs for the "Virgo" cluster simulated with different IMFs and feedback prescriptions. For the three AY simulations, increasing the feedback strength results in decreasing the masses of all galaxies, and hence, in particular, the number of bright galaxies. The results with thermal conduction (not shown in the figure) are very close to the analogous case without thermal conduction. The LF in the Salpeter case is broader in luminosity, due to the lower stellar feedback with respect to the AY IMF, which allows a larger accumulation of stellar mass in galaxies. The trend is even stronger in the Sal-WFB case. Notice that, though the Salpeter IMF simulations are more successful in forming massive galaxies and would thus compare more favourably to the observed LF, this comes at the expense of a too-large cold fraction with respect to observations (Table 1 and Paper I). Other reasons to disfavour the Salpeter IMF simulations are the low predicted metallicities in the ICM (Table 1 and Paper I), and the too-blue colours of the galaxies (§ 6).
In Fig. 5 we compare the LF of the "mini-Coma" cluster, in absolute number of galaxies, to the observed Coma LFs: besides discussing the distribution of the stellar mass in the simulations (cD vs. bright galaxies vs. dwarf galaxies), we want to test whether the actual number of galaxies formed in the simulation is sensible. The Coma LF is from Andreon & Cuillandre (2002) and Andreon & Pelló (2000); for the comparison, we compute the luminosity functions of galaxies in our simulated cluster within projected, off-centre areas comparable to the areas covered by the observations: about (0.8 Mpc)^2 in B, (1 Mpc)^2 in V and R, and (0.5 Mpc)^2 in H. There is quite good agreement with the observed number of galaxies over the luminosity range that is significantly populated by the galaxies identified in the simulation. However, as already noticed from the relative LF in Fig. 3, we are "missing" the bright-end tail of the LF, a problem which does not seem to be cured by increased resolution (see § 5.1). These results on the absolute LF confirm that the problem is not that we form too few galaxies in the simulations (whence one could statistically "miss" the fewer galaxies at the bright end of the LF), but that in fact there is a lack of bright galaxies in favour of the overgrown cD.
Resolution effects
In order to test for numerical resolution effects, one simulation of the "Virgo" system with the standard AY-SW model was run at 8 times higher mass and 2 times higher force resolution. At the time of writing, this simulation has reached z ≃ 0.8, so in Fig. 6 we can compare the Virgo LF for the two different resolutions at z ≃ 0.8. As expected, at higher resolution the LF extends to fainter magnitudes. This is due to the higher force and mass resolution, which allows an object initially identified as one large proto-galaxy to be resolved into several smaller (proto-)galaxies. However, above the resolution limit for the galaxy identification, the LF appears to be strikingly robust to resolution effects, and consequently so is all the discussion in the previous section. The global stellar content of the cluster is also essentially resolution independent: the total mass of stars at z ≃ 0.8 inside the virial radius is 3.0·10^12 M⊙ for the normal-resolution case and 3.4·10^12 M⊙ for the high-resolution one. The corresponding masses of stars in galaxies other than the cD are 9.1 and 9.2·10^11 M⊙; the total masses of stars outside of the cD (adopting r_cD = 50 kpc) are 1.7 and 2.0·10^12 M⊙, and the masses of stars in the cD are 1.3 and 1.4·10^12 M⊙, respectively. The total numbers of galaxies identified, assuming N_min = 7 in both cases, are 50 and 341, respectively.
In conclusion, at higher resolution more substructure can be identified, extending the LF at the dwarf galaxy end; yet the luminosity of the second brightest galaxy (after the cD) in the two simulations of different resolution is quite similar. Moreover, the number of galaxies brighter than ≈ L* remains fairly unaffected when going to higher resolution: enhancing resolution does not increase the number of bright cluster galaxies. Analogous results are obtained from another resolution test we performed, on a smaller system (a group of galaxies of virial mass 0.48·10^14 M⊙) evolved down to z = 0: the higher resolution neither affects the luminosity of the second-ranked galaxy nor increases the number of bright (∼ L*) galaxies. These tests indicate that the lack of bright galaxies in the normal-resolution simulations of our "Virgo" and "Coma" clusters is not an effect of numerical "over-merging".
"Classic" semi-analytical models of cluster formation also faced, to some extent, the problem that the cD became too bright and the number of lower ranked, bright galaxies was depleted. Springel et al. (2001) suggested that numerical over-merging onto the central cD in dissipationless dark matter simulations could deplete the number of galaxies at the bright end of the LF, and showed that for semianalytical models this problem can be significantly improved on by suitable identification of dark matter substructure, or of "haloes within haloes". This is not a solution in our case, however, because we identify as galaxies all bound systems containing as little as Nmin=7 star particles, and increasing resolution does not increase the number of bright galaxies. Moreover, in hydrodynamical simulations there is as of yet no physical way to quench the central, semi-steady cooling flow; such late gas accretion accounts for part of the excess of the cD masses.
Redshift evolution of the Luminosity Function
The LF in clusters can evolve due to two effects: passive luminosity evolution from the aging of the stellar populations, and mass evolution due to dynamical effects such as mergers, tidal stripping and dynamical friction. Cluster galaxies are known to consist of old stellar populations, with the bulk of their star formation occurring at z ≫ 1. However, they may not have been completely assembled by z ∼ 1, and they may still undergo one or two significant mergers since then, or grow by gas accretion.
The observed evolution of the luminosity function is a powerful probe for the assembly epoch of galaxies. From a theoretical point of view, hierarchical models of galaxy formation based on semi-analytical prescriptions tend to predict a deficit of massive galaxies at z ∼ 1, as a result of the ongoing mass assembly activity at lower redshift (Kauffmann & Charlot, 1998). More recent models, based on a flat ΛCDM cosmology and with updated star formation, feedback and wind prescriptions, perform much better and agree with the observed number of massive galaxies up to z = 1.2-1.5; however, they still face problems with a deficit at higher redshifts as well as with the colours of massive galaxies, underpredicting the number of red early-type galaxies and of Extremely Red Objects. In the deep field, the observed evolution seems to lie in between the predictions of current hierarchical models and those of the monolithic, pure luminosity evolution scenario (Stanford et al. 2004).
For clusters instead, recent results from the observations of distant objects suggest that the evolution of the LF is best modelled by pure luminosity evolution: Barger et al. (1996, 1998), De Propris et al. (1999) and Kodama & Bower (2003) found that, after correcting for this effect and for cosmological dimming, the galaxy mass function has actually evolved little over time; the K-band LF at z ∼ 1 is consistent with pure luminosity evolution with constant stellar mass and early redshifts of star formation (z_f ≳ 2). In Fig. 7 we show the evolution of the B and K band LFs of our "Virgo" and "Coma" clusters. We computed the (rest-frame) LF of the galaxies identified in the z = 1 frame; this is shifted to brighter magnitudes with respect to the distribution at z = 0, and both luminosity evolution and dynamical mass evolution are seen to play a role in our simulated clusters. In particular, since the bulk of the stars in our galaxies are formed at z ≳ 2 (see Paper III and Fig. 9, bottom panel), luminosity dimming is an important effect.
We also computed the LF at z = 1 as expected from pure passive evolution of the z = 0 LF, by considering the star particles in each of the galaxies identified at z = 0, and computing the luminosity they had back at a ∼7.5 Gyr younger age. Such "pure passive evolution" LF is displayed as a dotted line in Fig. 7, and it matches quite closely the actual LF in the z = 1 frame (apart from the absence of a few bright galaxies, see below). This indicates that luminosity dimming of the stellar populations drives most of the LF evolution and the fading to fainter magnitudes from z = 1 to z = 0 for the simulated galaxy population; this is in line with the results of Kodama & Bower (2003).
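A minimal sketch of how such a "pure passive evolution" entry can be computed for one galaxy: every star particle identified at z = 0 is simply assigned the luminosity it had ∼7.5 Gyr earlier, with no change in membership or mass. The function `ssp_mag` below is a hypothetical stand-in for the actual SSP photometric code of the paper (a toy power-law fading is used), so only the procedure, not the numbers, should be taken from this sketch:

```python
import numpy as np

DT_LOOKBACK = 7.5  # Gyr between z = 1 and z = 0, as quoted in the text


def ssp_mag(age_gyr):
    """Hypothetical stand-in for the SSP photometric code: absolute
    magnitude per unit solar mass of a single stellar population of
    the given age. A toy power-law fading is used for illustration."""
    luminosity = np.clip(age_gyr, 0.05, None) ** -0.9  # L/M, arbitrary units
    return -2.5 * np.log10(luminosity)


def galaxy_mag(star_ages_gyr, star_masses):
    """Total magnitude of a galaxy, summing its star-particle fluxes."""
    flux = star_masses * 10 ** (-0.4 * ssp_mag(star_ages_gyr))
    return -2.5 * np.log10(flux.sum())


def passive_evolution_mag(star_ages_gyr, star_masses):
    """Magnitude the same z=0 galaxy had at z=1, with every particle
    simply 7.5 Gyr younger (no mass evolution)."""
    return galaxy_mag(star_ages_gyr - DT_LOOKBACK, star_masses)


# Toy example: a galaxy whose stars formed 9-12 Gyr ago
rng = np.random.default_rng(0)
ages = rng.uniform(9.0, 12.0, size=500)
masses = np.full(500, 1.0e8)
print("M(z=0) =", round(galaxy_mag(ages, masses), 2))
print("M(z=1, pure passive) =", round(passive_evolution_mag(ages, masses), 2))
```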
In the brightest luminosity bins of our LF, some additional, dynamical effect seems to be required, since more bright galaxies are identified at z = 1 than expected from pure passive evolution; especially in the case of the "Coma" simulation, where 263 galaxies are found at z = 1 versus 212 at z = 0. To assess mass evolution effects, we plot in the lower panels of Fig. 7 the mass function of (the stellar component of) cluster galaxies at z = 1 and at z = 0. The two mass functions are very similar at the low mass end, but a few rather massive objects (above M_* ∼ 4 × 10^10 M⊙) present at z = 1 "disappear" at z = 0. Inspection of the simulations shows that these objects have merged with the cD by z = 0.
As shown in §5.1, numerical "over-merging" onto the central cD cannot be the main reason for missing galaxies at the bright end of the simulated LF. The lack of bright, massive galaxies other than the cD at z = 0 is probably largely affected by our group and cluster selection procedure, which picks out extremely relaxed objects at z = 0, as discussed in §5. All in all, "Coma" presents a less relaxed structure than "Virgo", as also inferred from the shapes of the LF at z = 0 and z = 1. Indeed, even for "Virgo" we do see a deficiency of bright galaxies at z = 0, but not at z = 1, where a gap is just due to poorer statistics of galaxy numbers. Work is in progress, however, to analyze a large sample of groups more randomly selected (hence including both relaxed and un-relaxed objects) to assess the bias induced by the selection procedure (Sommer-Larsen, D'Onghia & Romeo 2005, in preparation).
Passive evolution and the Fundamental Plane
In the previous section we computed the pure passive evolution dimming, from z = 1 to z = 0, corresponding to the cluster galaxies identified at z = 0. Constraints on the passive evolution of ellipticals in clusters are derived from the observed evolution of the Fundamental Plane (FP), indicating a dimming of about 1.2 magnitudes in the B band in the redshift range z = 1 → 0; assuming a Salpeter slope for the IMF, this implies a redshift of formation z_for ≳ 2.5 for the stellar populations (Van Dokkum & Stanford 2003; Wuyts et al. 2004; Renzini 2005; Holden et al. 2005).
A direct comparison to FP constraints is hampered by the fact that in our LF we miss the bright, massive elliptical galaxies defining the observed FP; nevertheless, it is interesting to comment on the predicted passive evolution of our galaxies. In the table below we show the B magnitude evolution ∆MB = MB(z = 0) − MB(z = 1) predicted by the SSPs in use, as a function of the assumed redshift of formation z_for of the stellar population. These values are largely independent of the SSP metallicity, at least within a factor of 3-4 of the solar value (which is certainly representative of the bright galaxies in our simulations). Since younger stellar populations dim faster, the lower the redshift of formation, the faster the magnitude evolution between z = 1 and z = 0. For the Salpeter IMF, our photometric code predicts that the observed dimming of ∼1.2 mag corresponds to z_for ≳ 2.5, in agreement with the above-mentioned studies. The rate of dimming, however, also depends on the slope of the IMF, being faster for shallower IMFs; for an AY slope, the luminosity evolution is in fact slightly faster, requiring z_for ≳ 3 (see also Renzini 2005). Taking into account progenitor bias, i.e. the fact that the progenitors of the youngest present-day early-type galaxies drop out of the sample at high redshift, the actual magnitude evolution of ellipticals might be underestimated by about 20% (van Dokkum & Franx 2001). In this case, ∆MB = 1.4 mag and z_for = 2.5 (AY IMF) are still compatible with observations. The mean redshift of formation for the stars in our simulated galaxies is 2.5 (Paper III), which is on average compatible with FP constraints, but the scatter around the mean is large. Younger star particles, where present, dominate the luminosity contribution at z = 1 and induce a much faster overall evolution (cf. a stellar population formed at z = 1.5 dims by more than 2 mag). This scatter in age makes some of our galaxies evolve faster than indicated by FP studies. This is illustrated in Fig. 8, where we show the magnitude evolution ∆MB = MB(z = 0) − MB(z = 1) for our simulated "Coma" galaxies as a function of their present-day magnitude and as a function of average stellar age. The age-dimming (i.e., z_for vs. ∆MB) relation is well defined in the bottom panel, and the FP constraint of 1.2-1.4 mag is met by objects older than 10.5-11 Gyr. The scatter above the lower envelope of the age-dimming relation is due to internal age scatter within the galaxies. We analyze in particular the six brightest objects in the simulation, with MB < −19 (large symbols in the figure), which are more relevant for comparison to the bright spheroidals on the FP. These objects are 9-10 Gyr old (average age), i.e. z_for = 1.5-2, which corresponds to ∆MB = 1.5-2. However, they also present a large internal scatter in stellar age, as shown by the errorbars in the bottom panel (stretching from the minimum to the maximum stellar age in each object). In particular, they contain stars as young as 8 Gyr, and some of them have minor tails of SF stretching to z < 1, i.e. to ages younger than 7.5 Gyr. The presence of these younger-than-average stars has negligible effects on the present-day appearance of these galaxies, since at z = 0 all the stars are already quite "old" (ages > 5 Gyr); see also Fig. 9d, showing that the luminosity-weighted age equals the actual average age for these objects, while it would be significantly younger if the younger stellar component were prominent.
However, back at z = 1 these younger-than-average stars become very bright, young populations (ages ≲ 1 Gyr) with a major contribution to luminosity, so that they ultimately drive the overall brightening of the galaxy when we trace it back to z = 1.
The median of the luminosity evolution for the whole Coma galaxy population is 1.4 mag (in agreement with the average z_for = 2.5), i.e. close to the FP constraint; however, the large internal age scatter within the galaxies, especially the brighter ones, induces an overall predicted passive evolution for the LF more extreme than that (Fig. 7).
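The sense of the age-dimming relation discussed above can be illustrated with a toy fading law. Assuming L_B ∝ t^(−α) with α ≈ 0.9 (an assumption made purely for illustration, not the paper's photometric code) and a crude lookback-time approximation, the dimming between z = 1 and z = 0 follows ∆MB = 2.5 α log10(t0/t1), and younger formation redshifts indeed give faster dimming:

```python
import numpy as np

T_UNIVERSE, DT = 13.5, 7.5   # Gyr: toy age of Universe at z=0, z=1->0 lookback
ALPHA = 0.9                  # toy B-band fading slope, L ~ t**-ALPHA (assumption)


def age_at_z0(z_form):
    """Very rough age (Gyr) at z=0 of a population formed at z_form,
    using a crude lookback-time approximation (illustrative only)."""
    return T_UNIVERSE * z_form / (1.0 + z_form)


for z_f in (1.5, 2.0, 2.5, 3.0, 5.0):
    t0 = age_at_z0(z_f)          # age today
    t1 = t0 - DT                 # age at z = 1
    dMB = 2.5 * ALPHA * np.log10(t0 / t1)
    print(f"z_f = {z_f}: age(z=0) = {t0:.1f} Gyr, dM_B = {dMB:.2f} mag")
```

With these toy numbers, z_f = 1.5 dims by ∼2.5 mag while z_f ≳ 2.5 stays near the 1.2-1.4 mag range quoted above, reproducing the trend (though not the exact values) of the SSP calculation.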
Higher resolution simulations can possibly help with this problem, by both anticipating the overall SF process (resolving smaller, hence denser, substructures) and reducing the internal Poissonian noise in the stellar age distribution of the individual galaxies. However, the high resolution test presented in §5.1 does not show major differences in the LF down to z = 0.8 (Fig. 6), suggesting that we have reached a resolution good enough to grasp the main galaxy properties. We are then probably facing a standard problem of hierarchical models of galaxy formation, namely that they predict both slightly younger average ages (Fig. 9d) and a larger (internal) age scatter for more massive objects, both trends in contrast with observational evidence (see Section 3 and references therein).
We notice however that the faint end of the present-day colour-magnitude relation of E+S0 cluster galaxies seems to be largely populated by objects which reddened onto the Red Sequence only recently, while at high z they were much bluer; an effect that is usually associated with the spiral → S0 transformation and with the Butcher-Oemler effect from intermediate redshift to z = 0 (see also Section 3 and references therein). These objects are certainly not the same probed by the FP at high redshift, which relates to the most massive ellipticals. Also, Holden et al. (2005) find a large scatter in mass-to-light ratio, and hence in magnitude evolution, for the least massive ellipticals selected at high z, with masses around 10^11 M⊙ (comparable to the most massive objects in our final galaxy population, Fig. 7). Hence, it is possible that our simulations in fact miss exactly the massive spheroidals at the bright end of the LF, which are most meaningful for FP constraints. The issue will then need to be revisited once the problem of properly populating the bright end of the LF is solved for simulated clusters.
THE RED SEQUENCE
The light and stellar mass in clusters of galaxies is dominated by bright, massive ellipticals (Abell 1962, 1965). These are known to form a tight colour-magnitude relation, or Red Sequence (Bower, Lucey & Ellis 1992ab; Gladders et al. 1998; Andreon 2003; Hogg et al. 2004; McIntosh et al. 2005). In Fig. 9ab we compare the colour-magnitude relation for our "Coma" cluster galaxies, excluding the cD, to the observed Red Sequence of Coma from Bower et al. (1992) and Terlevich, Caldwell & Bower (2001), assuming a distance modulus to Coma of 35.1, i.e. H0 ∼ 70 km s^−1 Mpc^−1 (Baum et al. 1997). The slope of the observed Red Sequence is sensitive to aperture effects: fixed-aperture observations are often used, measuring a smaller fraction of the total light in larger objects than in smaller ones; combined with colour gradients, this introduces a bias in the sense of measuring systematically redder colours in larger galaxies, i.e. a steepening of the actual colour-magnitude relation. When the analysis is instead performed on an area that scales with the size of the galaxy, e.g. within the effective radius, or within a given isophotal/photometric radius, the colour-magnitude relation is in fact flatter (Terlevich et al. 2001; Scodeggio 2001; and references therein). For Coma and Virgo, Bower et al. measured colours within fixed apertures of 11" and 60" respectively, corresponding to a fixed radius of 5 h^−1 kpc ≃ 7.15 kpc. Hence, for a proper comparison, we computed and plotted in Fig. 9 the colours for our simulated galaxies within a projected radius of 7.15 kpc. The magnitudes on the abscissa are instead total magnitudes, as adopted by Bower et al.
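A minimal sketch of the fixed-aperture colour measurement described above: star particles projected within 7.15 kpc contribute to the colour, while the total magnitude uses all particles. The particle arrays and the flat input colour are illustrative, not taken from the simulations:

```python
import numpy as np

APERTURE_KPC = 7.15  # fixed projected aperture radius used above


def aperture_colour(x_kpc, y_kpc, L_U, L_V):
    """(U-V) colour from the star particles projected within the aperture;
    x, y are projected positions relative to the galaxy centre (kpc)."""
    inside = np.hypot(x_kpc, y_kpc) < APERTURE_KPC
    return -2.5 * np.log10(L_U[inside].sum() / L_V[inside].sum())


def total_magnitude(L_V, zeropoint=0.0):
    """Total V magnitude from all star particles (abscissa of Fig. 9)."""
    return zeropoint - 2.5 * np.log10(L_V.sum())


# Toy galaxy: 1000 particles with a Gaussian projected profile
rng = np.random.default_rng(0)
x, y = rng.normal(0, 5, 1000), rng.normal(0, 5, 1000)
L_V = rng.uniform(0.5, 1.5, 1000)
L_U = L_V * 10 ** (-0.4 * 1.4)        # flat (U-V) = 1.4 for illustration
print("aperture (U-V):", round(aperture_colour(x, y, L_U, L_V), 2))
print("total V mag (arbitrary zeropoint):", round(total_magnitude(L_V), 2))
```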
The solid lines in panels (a,b) indicate the observed slope and location of the Red Sequence; the dotted line is a least-squares fit to our data. There is overall good agreement, although our Red Sequence is not as extended as the observed one (which reaches magnitudes as bright as MV = −23), due to the lack of simulated galaxies at the bright end of the luminosity function discussed in the previous section. Our slope for the (U-V) vs. V relation (Fig. 9a) is slightly flatter than the observed one, but the colours of the three brightest objects are in good agreement with observations, even within the observed small scatter. These objects are the most significant ones if we remember that the observed slope is defined only for MV < −19, extended down to MV < −18 by Terlevich et al.
Although we formally obtain the right slopes, the simulated Red Sequence also displays a much larger scatter than the observed Coma Red Sequence, which is very narrow down to MV ∼ −18 (dashed and dotted lines in Fig. 9ab; see also Fig. 2a in Gladders et al., 1998). Such large scatter is partly due to Poissonian noise from the small number of star particles in the low-luminosity galaxies. Besides, while the observed colour-magnitude relation refers to early-type galaxies (E/S0), we did not attempt any morphological classification or a priori selection of our simulated galaxies, due to limited resolution.
The colour-magnitude relation is classically interpreted as a mass-metallicity relation (Kodama & Arimoto 1997). In our simulations, the bulk of the stars in galaxies are formed at z ≳ 2 (see also Paper III), hence age trends among galaxies are only mild (Fig. 9d) and the colour-magnitude relation is mostly a metallicity effect (Fig. 9c). The metallicity-luminosity relation for our cluster galaxies is displayed in Fig. 9c, both in terms of mass-weighted metallicity and of (V-band) luminosity-weighted metallicity. The latter is systematically slightly lower than the actual mass-averaged stellar metallicity, as expected since more metal-poor populations tend to be brighter, skewing the luminosity-weighted metallicity to lower values (Greggio 1997); the trend with galaxy luminosity is however maintained. Although metal-rich objects seem to exist at all luminosities, the fraction of metal-poor galaxies (as well as the scatter in metallicity) increases with decreasing galaxy luminosity; the average stellar metallicity decreases for fainter galaxies (dotted line), which ultimately drives the simulated colour-magnitude relation.

Figure 9. (a-b) Colour-magnitude relation for galaxies in the simulated "Coma" cluster (data points and dotted linear fit line), compared to the observed relation with its scatter (solid and long-dashed lines, from Bower et al. 1992; short-dashed lines, from Terlevich et al. 2001). The extension of the dashed lines indicates the magnitude range probed observationally, while the average fit relation (solid line) has been extrapolated to lower magnitudes. (c) Metallicity-luminosity relation for the "Coma" galaxies (full symbols for mass-averaged stellar metallicity, open symbols for luminosity-weighted metallicity); the dotted line is a linear fit. (d) Age-luminosity relation for the "Coma" galaxies (mass-averaged and luminosity-weighted stellar ages); the dotted line is a linear fit.

Fig. 9d shows the average stellar ages for the galaxies. All the galaxies consist of old stellar populations, with average ages between 7 and 12 Gyr. The brighter galaxies (MV < −20) are essentially coeval, 9-10 Gyr old. The luminosity-weighted ages are generally very close to the mass-averaged ages, which implies a small scatter of stellar ages within the individual galaxies: extended tails or recent episodes of star formation would strongly skew the luminosity-weighted estimate to younger ages. The scatter in age among the galaxies is around 30%, with a mild trend of decreasing age with increasing luminosity (dotted line), but the effect is much smaller than the systematic variation in metallicity (a factor of 10, Fig. 9c). This confirms that the colour-magnitude relation of our simulated galaxies is driven by a mass-metallicity relation, in agreement with common wisdom. The mild age trend, which tends to act in the opposite direction (making fainter galaxies redder), is probably responsible for the fact that our colour-magnitude relation is not as steep as the observed one.
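The mass-weighted versus luminosity-weighted averages compared above are plain weighted means over the star particles; the sketch below (with an illustrative four-particle galaxy) shows why a brighter metal-poor component pulls the luminosity-weighted metallicity down:

```python
import numpy as np


def weighted_mean(values, weights):
    return (values * weights).sum() / weights.sum()


# Toy star particles: an old, metal-rich bulk plus a brighter metal-poor tail
mass = np.array([1.0, 1.0, 1.0, 1.0])    # particle masses (arbitrary units)
Z = np.array([0.02, 0.02, 0.02, 0.004])  # stellar metallicities
L_V = np.array([1.0, 1.0, 1.0, 3.0])     # V-band luminosities: the metal-poor
                                         # particle is the brightest

Z_mass = weighted_mean(Z, mass)          # mass-weighted metallicity
Z_lum = weighted_mean(Z, L_V)            # luminosity-weighted metallicity
print(f"mass-weighted Z = {Z_mass:.4f}, luminosity-weighted Z = {Z_lum:.4f}")
# The luminosity-weighted value comes out lower, as noted in the text,
# because the brighter populations are the more metal-poor ones.
```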
At all magnitudes, even the faintest ones, we find some simulated galaxies with large (super-solar) metallicities. For dwarf ellipticals, a very large spread in metallicities and colours, with a tail of red, metal-rich objects, is in fact observed, but at fainter magnitudes than those probed here (MB > −15, Conselice et al. 2003). Metal-rich dwarf ellipticals in clusters may possibly originate from larger (and hence more metal-rich) galaxies that have suffered considerable tidal stripping in the cluster potential. To test this hypothesis we traced the evolution of the nine galaxies with Z_* > Z⊙ and MV > −18, shown in the top panel of Fig. 8, back in time. Two of the galaxies were found to contain a significantly larger mass of stars at z ∼ 1.5-2.5 (when these galaxies reached their maximum stellar masses), whereas the stellar masses of the other seven were found not to change much since their "birth". Therefore, in our simulations, stripping of originally larger galaxies is one possible, but not predominant, channel to form dwarf galaxies of high metallicity.
In Fig. 10 we assess the dependence of the simulated Red Sequence on the parameters of the simulation, by considering the "Virgo" cluster computed with different physical prescriptions (see Section 2 and Table 1). For all the AY simulations the galaxies scatter about the observed Red Sequence, and their average location in the colour-magnitude diagram appears to be robust with respect to the inclusion of thermal conduction or preheating, and also to enhanced feedback efficiency (AY-SWx2); we do not display the AY-SWx4 case, since the total number of galaxies and their average mass are quite small (see Fig. 4), but they still scatter around the observed line. The scatter is however very large, and the magnitude range of the galaxies formed quite limited (missing the bright end of the LF), so that one cannot make meaningful comparisons to the observed slope of the Red Sequence.
Significantly different is the case of the Salpeter IMF simulations. The scatter is much reduced and the slope of the observed colour-magnitude relation is well reproduced; however, the simulated galaxies are offset to the blue of the observed Red Sequence. Let us discuss the latter problem first. In the standard SuperWind case (Sal-SW) in particular, the stellar populations are too metal poor, hence too blue, to match the observed Red Sequence. Notice that the blue colours are not due to younger stellar ages, since the star formation history in the Virgo cluster is very similar for the Salpeter and the AY simulations (Fig. 1). In the weak feedback scenario, where effective supernova energy injection is limited to the early epochs (Sal-WFB), the simulated Red Sequence falls closer to the observed one, though still on the blue side. With minor feedback the galaxies retain a much larger fraction of the metals they produce, so the stellar metallicities can grow larger; however, the price is a predicted enrichment of the ICM far below the observed levels for the Sal-WFB simulations (Table 1 and Paper I). Hence, IMFs more top-heavy than the Salpeter IMF are needed, not only for the sake of the ICM enrichment but also to reproduce the observed colours and metallicities of the stellar populations. In fact the Salpeter IMF does not produce enough metals, and/or locks too much mass in stars, to account for the observed metallicities in the ICM and in cluster galaxies at the same time (see also Paper I).
Figure 10. Red Sequence for our simulated "Virgo" cluster with different physical input. AY: Arimoto & Yoshii IMF; Sal: Salpeter IMF; SW: "standard" feedback prescription; SWx2: two times stronger feedback efficiency; WFB: weak feedback (strong feedback active only at early times); COND: thermal conduction included; ph: energy preheating at z = 3. Symbols follow the metallicity coding of Fig. 9.

As to the increased dispersion in galaxy properties for the AY vs. Salpeter simulations, this is likely induced by the much stronger feedback (a combination of the SW prescription with a top-heavy IMF). As a consequence, the Red Sequence in the AY simulations is much less tight, with no well-defined slope, although the average colours of the galaxies agree with observations much better than in the Salpeter case. If the dispersion is mostly due to numerical effects, because of the small number of particles in the individual galaxies, higher-resolution simulations should show a reduced scatter, something to be tested in future work. In summary, the zero-point of the simulated Red Sequence appears to be a robust prediction of the simulations, quite unaffected by the adopted physical prescriptions other than the chosen IMF, which sets the typical stellar metallicity attainable in cluster galaxies. The observed slope of the colour-magnitude relation is well reproduced in the Salpeter simulations (plagued, however, by an offset to too blue colours and low metallicities); for the AY simulations the scatter is much larger but the average colours are better reproduced.
CONCLUSIONS
In this paper we have presented and analyzed, for the first time, the properties of the galaxy population in clusters as predicted by full ab initio cosmological + hydrodynamical simulations.
Our results are based on cosmological simulations of galaxy clusters that self-consistently include metal-dependent atomic radiative cooling, star formation, supernova and (optionally) AGN driven galactic super-winds, non-instantaneous chemical evolution, the effects of a metagalactic, redshift-dependent UV field, and thermal conduction. In relation to modelling the properties of cluster galaxies this is an important step forward with respect to previous theoretical works on the subject, e.g. those based on semi-analytical recipes superimposed on N-body-only simulations.
The global star formation rates of the "Virgo" and "Coma" cluster galaxies are found to decrease very significantly with time from redshift z=2 to 0, in agreement with what is inferred from observations of the inner parts of rich clusters (e.g., Kodama & Bower 2001).
We have determined galaxy luminosity functions for the "Virgo" and "Coma" clusters in the B, V, R, H and K bands; the comparison to observed galaxy luminosity functions reveals a deficiency of bright galaxies (MB ≲ −20). We carried out a test simulation of "Virgo" at eight times higher mass resolution and two times higher force resolution; the results of this test, still running at present, indicate that the above-mentioned deficiency of bright galaxies is not due to "over-merging"; higher-resolution simulations of the "Coma" cluster are in progress as well to further check this point. From a suite of simulations for the "Virgo" cluster with different input physics, we find that the deficiency of bright galaxies becomes less prominent with decreasing super-wind strength, in particular for models invoking a Salpeter IMF and only early feedback; in fact, with low feedback more mass can be accumulated in stars and galaxies. Such models, however, present various drawbacks: the cold fraction is too high and the metal production is too low, as seen in the too blue colours of the galaxies and/or in the low metallicity of the ICM, which can hardly be enriched to the observed level of about 1/3 of the solar abundance; the latter point is discussed in detail in Paper I, but it also follows from more general arguments. The bright galaxy deficiency might be explained as a selection effect, in the sense that we have selected cluster haloes for the TreeSPH re-simulations which are "too relaxed" compared to an average cluster halo, so that the brightest galaxies have by now merged into the central cD by dynamical friction; we shall return to this in a forthcoming paper.
The redshift evolution of the luminosity functions from redshift z=1 to 0 is mainly driven by passive luminosity evolution of the stellar populations, but also by the above mentioned merging of bright galaxies into the cD.
The slope of the colour-magnitude relation of the simulated galaxies is in good agreement with the observed one; however, the scatter is larger than observed, partly due to Poissonian noise within the fainter galaxies, which are formed by small numbers of star particles. The internal dispersion in stellar ages within the galaxies is also responsible for a luminosity dimming between z = 1 and z = 0 that is faster than indicated by the observed evolution of the Fundamental Plane. The typical average galaxy colours are best matched when adopting a top-heavy IMF (as originally suggested by Arimoto & Yoshii 1987), while Salpeter IMF simulations yield too blue colours. Moreover, we find that the average metallicity of the simulated galaxies increases with luminosity, and that the brightest galaxies are essentially coeval. Hence, the colour-magnitude relation results from metallicity rather than age effects, as concluded by Kodama & Arimoto (1997) on the basis of its observed evolution.
"year": 2004,
"sha1": "f6c0bdbdabdeecf0b9517efddb4430bd0d6896e3",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/361/3/983/2946573/361-3-983.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "a0810dd21e9d17469a5604bed5e0dea75807f607",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
The effect of liquid smoke on the properties of Bali beef performance in the feed block during fattening
This study aimed to improve the quality of Bali beef through the application of liquid smoke technology, as an antioxidant, in feed supplements for fattening cattle in livestock groups in Tanete Riaja District, Barru Regency. Seasonal feed shortages in the raising of Bali cattle are the main obstacle to improving livestock performance, especially meat quality. Feed in the form of Urea Coconut Water Liquid Smoke Multi-Nutrient Block (UCSMB) was given to Bali cattle during 45 days of fattening. The study used 12 male Bali cattle aged 2-3 years in a completely randomized 3 × 3 factorial design, where the first factor was the liquid smoke concentration (0, 10, and 20%) and the second was the maturation time (0, 7, and 14 days). The Longissimus dorsi muscle was dissected after slaughter, and meat quality was then assessed in terms of pH, shear force value, and organoleptic attributes, i.e., tenderness and juiciness. The results showed that the quality of Bali beef increased with increasing liquid smoke concentration in the feed block and with maturation time.
Introduction
Tanete Riaja Subdistrict, whose Bali cattle population (11,664 head in 2012) is the second largest in Barru District [1], has implemented communal livestock pens as a showroom. Farmers working in groups develop Bali cattle breeding and fattening at home.
In general, the main problem raised by the partner livestock groups was feed provision. Because the cattle are kept in pens, farmers must provide forage themselves by cutting and carrying elephant grass or paddy straw to feed the animals at home.
The use of probiotic paddy straw fermentation technology and supplement feeding in the form of Urea Coconut Water Liquid Smoke Multi-Nutrient Block (UCSMB) was carried out with very satisfactory results in the Technology Application program in 2015 [2]. In Indonesia in general, and in South Sulawesi in particular, cattle raising by smallholders is dominated by Bali cattle, which account for more than 55% of the population [3]. The contribution of Bali cattle to the availability of cattle in Indonesia is immense, contributing to a cattle population of up to 15.5 million head (more than 80%) [4]. Bali cattle are known to adapt well to hostile environments with low feed quality [5]; another advantage is that their carcass percentage can reach 52-57.7% with a low fat content of around 4% [6]. Under traditional raising in South Sulawesi, the fat content of Bali beef was less than 2% [7]. Bali cattle are typically raised by smallholders in very small herds of 1-5 head, primarily as a form of savings to be sold when money is needed at particular times.
Bali cattle have outstanding reproductive and adaptive capacities, a high carcass percentage of 51-57%, a meat fat content ranging from 2 to 6.9%, and 2-year-old males weighing 210-260 kg [6,8]. The carcass percentage of Bali cattle has been reported to be as high as 52-57.7%, with a low meat fat content of about 4% [9]. Under conventional raising in South Sulawesi, the fat content of Bali beef is below 2%. The quality of Bali beef is reasonably good, especially when good management and feeding are maintained. Weight gain under extensive management is relatively small, ranging from 0.1 to 0.3 kg per day; it can be increased to 0.65 kg per day under intensive management with high-quality feed [10].
The importance of cattle farms in Indonesia can be seen from the community's need for sources of animal protein derived from fresh meat and processed meat products such as meatballs. Fresh beef demand reached 2.56 kg/capita/year, or 654,000 tons, for the 255,461,700 inhabitants of Indonesia in 2015 [11]. Regarding the need for meatballs [10], around 60% of the cattle slaughtered in slaughterhouses are estimated to supply beef meatballs for public consumption (personal communication). In 2017, 1,114,748 cattle were slaughtered in Indonesia [2]; at an average of 50 kg of meat per head to be processed, this corresponds to a meatball production of 33,442.4 tons/year, or 91.62 tons/day, in Indonesia [4,10,12].
In 2016, feed technology innovation was carried out through the manufacture and feeding of UCSMB block supplements containing several levels of liquid smoke, given as fattening feed to cattle belonging to the livestock groups. Through this technological innovation, a form of community empowerment, it was expected that the feed availability problem could be solved, thereby increasing the productivity of Bali cattle in the showroom.
Materials and methods
The application of liquid smoke in the feed block was carried out with the Sikapa farmer group during the fattening of Bali cattle.
This research utilized 12 male Bali cattle aged 2-3 years. The animals received 6 kg of probiotic-fermented rice straw and 500 g of UCSMB per day during 45 days of fattening. After the cattle were slaughtered, meat samples were taken from the outer portion (Longissimus dorsi muscle), and meat quality was then assessed at the Meat and Egg Processing Technology Laboratory, Hasanuddin University.
This study used a completely randomized 3 × 3 factorial design, where the first factor was the liquid smoke concentration (0, 10, and 20 percent) and the second factor was the maturation time (0, 7, and 14 days). The observed parameters were pH, the raw meat shear force (RMSF) value, and organoleptic attributes, i.e., meat tenderness and juiciness.
Urea Coconut Water Liquid Smoke Multi-Nutrient Block (UCSMB)
There were three types of UCSMB, differing in the concentration of the liquid smoke used (0%, 10%, and 20%), which was included at a rate of 2% of the block formulation. Table 1 shows the composition of the feed supplement blocks.
Rice straw fermented with probiotics
Fermented straw was produced from rice straw using an organic liquid supplement (SOC) probiotic over 10-12 days. A total of 150 kg of rice straw was stacked in several piles and sprinkled with a diluted SOC probiotic solution (30 mL of SOC dissolved in 45 L of clean water); very fine rice bran was sprinkled between the layers. After 10-12 days the rice straw was fermented and ready for use as cattle feed. Laboratory analysis showed a higher nutritional value than unfermented rice straw: the protein content of the fermented straw was 2 percentage points higher than before fermentation, reaching 7.87%. Previous research reported a protein content of 6.44% [2]; the type and age of the straw could explain this difference.
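For reference, the application rates implied by the quantities above can be restated as simple ratios (this is arithmetic on the numbers already given, not additional protocol detail):

```python
# Quantities quoted above for the probiotic straw fermentation
soc_ml, water_l, straw_kg = 30, 45, 150

print(f"SOC concentration: {soc_ml / water_l:.2f} mL per litre of water")
print(f"solution applied:  {water_l / straw_kg:.2f} L per kg of rice straw")
print(f"SOC applied:       {soc_ml / straw_kg:.2f} mL per kg of rice straw")
```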
pH Measurement
pH was measured using a Lutron PH-201 pocket pH meter with a spear-tip electrode (PE-06 HD) designed specifically for meat.
Measurement of raw meat shear force
Raw meat shear force was measured to assess meat tenderness using a CD shear force device. Cylindrical meat samples, 1 cm long and 0.5 inches in diameter, were placed in the hole of the device, where a 1-mm-thick blade cut through them. The higher the load required to cut the sample, the tougher the meat. Shear force values are expressed in kg/cm² [7].
Sensory test on meat cooked at 80 °C for 15 minutes
Tenderness and juiciness were the parameters measured in the sensory test. The sensory assessment involved 20 previously trained panelists, who scored the sensory quality of the meat on a scale from 1 to 6, where 1 means very tough and not juicy and 6 means very tender and very juicy.
Data analysis
Results were analyzed by analysis of variance (ANOVA), and the least significant difference (LSD) test was performed for significant effects, following Steel and Torrie [13], using SPSS software (SPSS 16.0, SPSS Ltd., West Street, Woking, Surrey, UK).
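A minimal sketch of the 3 × 3 factorial analysis described above, using Python/statsmodels rather than SPSS; the column names and response values are illustrative placeholders, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Illustrative layout: 4 replicates per cell of the 3x3 design (dummy pH values)
smoke = np.repeat(["0", "10", "20"], 12)                  # % liquid smoke in UCSMB
maturation = np.tile(np.repeat(["0", "7", "14"], 4), 3)   # days of maturation
pH = 5.7 + rng.normal(0, 0.05, 36)                        # placeholder response

df = pd.DataFrame({"smoke": smoke, "maturation": maturation, "pH": pH})

# Two-way ANOVA with interaction, matching the 3x3 factorial design
model = smf.ols("pH ~ C(smoke) * C(maturation)", data=df).fit()
print(anova_lm(model, typ=2))
```

Pairwise LSD comparisons between the cell means would then follow for the significant main effects and interactions.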
Meat Quality
The quality of meat from Bali cattle fattened on probiotic-fermented straw and UCSMB feed was characterized through several meat quality parameters, including pH, the raw meat shear force value, and organoleptic attributes, i.e., tenderness and juiciness. The significance levels of the mean meat quality values can be seen in Table 2 and Figures 1-8.
pH value of beef
Analysis of variance showed that the concentration of liquid smoke in the feed block had a significant effect on meat pH (P < 0.01). The higher the level of liquid smoke in UCSMB, the lower the final pH: at 20% it was 8.28% lower than without liquid smoke. There was, however, no significant difference between the 10% and 20% concentrations. This indicates that feeding liquid smoke in the feed block for 45 days can reduce the final pH of Bali beef. Changes in the pH value of Bali beef can be seen in Table 2 and Figure 1.
Analysis of variance showed that maturation time had a highly significant effect (P < 0.01) on meat pH. The longer the maturation time, the higher the final pH, which increased by 6.30% (0.37 points) at 14 days of maturation.
There was an interaction between liquid smoke concentration and maturation time, marked by a decrease in pH with increasing liquid smoke concentration and an increase in pH with increasing maturation time. The interaction of liquid smoke concentration and maturation time on pH can be seen in Figure 2.
Raw Meat Shear Force value (RMSF)
Changes in the raw meat shear force value with liquid smoke concentration and maturation time are shown in Table 2 and Figure 3.
With increasing liquid smoke concentration in UCSMB, the RMSF decreased: at a liquid smoke concentration of 10% the decline in RMSF reached 38.71%, and at a concentration of 20% it reached 19.85%, compared with the smoke-free treatment. This showed that liquid smoke in UCSMB given to Bali cattle for 45 days could improve meat quality through increased tenderness, which was better at a concentration of 10% than at 20%. In previous research in which liquid smoke was added directly to Bali beef, the concentration of liquid smoke (0, 5, 10, and 15%) did not significantly affect the RMSF, although there was a tendency for the RMSF to decrease [14].
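The percentage declines quoted here and elsewhere in this section are simple relative changes against the smoke-free (or 0-day) baseline; a one-line helper makes the convention explicit (the example means are dummy values chosen only to reproduce a ∼38.7% decline):

```python
def percent_change(treated, control):
    """Relative change of a treated mean against the control mean, in %."""
    return 100.0 * (treated - control) / control


# Illustrative: RMSF means for 10% liquid smoke vs. the smoke-free control
print(f"decline: {percent_change(treated=3.8, control=6.2):.2f}%")
```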
As the maturation time increased, the RMSF fell, reaching 18.94% below the start of maturation (0 days) on the 14th day of maturation; a lower RMSF indicates tenderer meat. Previous research using smoke flour on buffalo meat showed a decrease in RMSF during maturation, with the lowest RMSF value, a 30.32% decline, at 14 days of maturation [15]. The improvement in fresh meat tenderness during maturation (2-5 °C) is generally caused by proteolytic enzymes, especially cathepsins. This indicates that during maturation meat tenderness improved through the action of proteolytic enzymes (the maturation phenomenon) together with the role of liquid smoke in the meat [16,17]. There was an interaction between liquid smoke concentration and maturation time on RMSF, marked by a non-linear decrease in RMSF with increasing liquid smoke concentration (lower at 10% than at 20%) and a linear decrease with increasing maturation time. The interaction of liquid smoke concentration and maturation time on RMSF can be seen in Figure 4. Table 2 and Figure 5 show changes in the tenderness score from the panelist assessments as a function of liquid smoke concentration and maturation time.
Tenderness score
The higher the concentration of liquid smoke in UCSMB feed, the lower the cooked meat shear force value, which at a concentration of 20% was 55.34% lower than without liquid smoke. This demonstrates the strong ability of liquid smoke, given to livestock via UCSMB feed, to reduce the cooked meat shear force value. In contrast, applying liquid smoke directly to fresh meat revealed that the concentration of liquid smoke did not significantly affect the shear force value of cooked meat (80 °C for 15 min), although there was a tendency for the cooked meat shear force value to decrease with increasing liquid smoke concentration [15]. In this case, liquid smoke at up to 2% in the feed can inhibit protein oxidation during maturation [18]. It has been reported that if protein is oxidized, meat tenderness can decrease [19], indicating that protein oxidation changes the water-holding capacity (WHC) and tenderness of meat. The interaction of liquid smoke concentration and maturation time on tenderness can be seen in Figure 6. Analysis of variance showed that maturation time affected the cooked meat shear force value: the longer the maturation time, the lower the cooked meat shear force value, which at 14 days of maturation had decreased by 27.82% relative to 0 days. This is consistent with the decline in the raw meat shear force value during maturation, caused by proteolytic enzymes digesting proteins. However, on the 14th day of maturation, the decrease in the cooked meat shear force value was larger than the 18.94% decrease for raw meat. Heating meat at 80 °C causes collagen to dissolve, which may explain the larger decrease [7].
Juiciness score
Changes in the juiciness score from the panelist assessments, as a function of liquid smoke concentration and maturation time, are shown in Table 2 and Figures 7 and 8.
Analysis of variance showed that liquid smoke concentration and maturation time affected the juiciness score of cooked meat (P < 0.01): the higher the liquid smoke concentration in the feed block, the higher the juiciness score, which reached 7.92% above the value without liquid smoke.
Longer maturation time increased the juiciness score of cooked meat; at 14 days of maturation, the juiciness score was up to 29.95% higher than at 0 days. The interaction of liquid smoke concentration and maturation time on juiciness can be seen in Figure 8.
Conclusion
This study shows that the quality of Bali beef increased with increasing liquid smoke concentration in the UCSMB feed and with maturation time. Increasing liquid smoke concentration decreased the pH value and the raw meat shear force value and increased the tenderness and juiciness scores, while maturation time increased the pH value, decreased the raw meat shear force value, and increased the tenderness and juiciness scores.
"year": 2020,
"sha1": "3da1e5121ffe123e7495121bbb59a482cf782f51",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1755-1315/492/1/012040",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8f08f955b5a866b7eb439c8d5705719a7582020f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Chemistry"
]
} |
Reactions of human dental pulp cells to capping agents in the presence or absence of bacterial exposure.
An ideal pulp-capping agent needs to have good biocompatibility and promote reparative dentinogenesis. Although the effects of capping agents on healthy pulp are known, limited data regarding their effects on bacterially contaminated pulp are available. This study aimed to evaluate the reaction of contaminated pulps to various capping agents to assist clinicians in making informed decisions. Human dental pulp (HDP) cell cultures were developed from extracted human molars. The cells were exposed to a bacterial cocktail comprising Porphyromonas gingivalis, Prevotella intermedia, and Streptococcus gordonii before being cocultured with capping agents such as mineral trioxide aggregate (MTA), Portland cement (PC), and Dycal. HDP cell proliferation was assayed by the MTS colorimetric cell proliferation assay, and differentiation was evaluated by real-time PCR detecting alkaline phosphatase, dentin sialophosphoprotein, and osteocalcin expression. MTA and PC had no apparent effect, whereas Dycal inhibited HDP cell proliferation. PC stimulated HDP cell differentiation, particularly when the cells were exposed to bacteria. MTA and Dycal inhibited differentiation, regardless of bacterial infection. In conclusion, PC was the most favorable agent, followed by MTA, and Dycal was the least favorable agent for supporting the functions of bacterially compromised pulp cells.
Introduction
During pulpotomy and pulp-capping procedures, the dental pulp may be damaged because of infection, mechanical trauma, and chemical exposure. Dentin bridge formation and continuous root development are possible during vital pulp therapy. Some evidence suggests that a dentin bridge is formed as a response to pulp irritation. Pulp cells affected by trauma or carious lesions undergo an inflammatory process, ultimately stimulating osteo/orthodentin bridge formation that partially or completely occludes pulpal exposure (1,2). The cellular response driving such repair may be the proliferation and differentiation of odontoblast-like cells that are derived from dental pulp progenitors (3).
It can be anticipated that the choice of pulp-capping materials such as mineral trioxide aggregate (MTA), Portland cement (PC), or Dycal (a hard-setting calcium hydroxide paste) could influence the bioactivities of human dental pulp (HDP) cells and the repair and regeneration process of the dentin-pulp complex. MTA induces the proliferation of mouse odontoblast-like cells in undifferentiated pulp cells in vitro, and set MTA is not irritating and shows good biocompatibility in vivo (4-6). MTA facilitates faster HDP cell proliferation and induces more dentin bridge formation than Dycal (7-9). Studies have also compared the effects of MTA and PC. PC shares many components with MTA and macroscopically has almost identical properties to it (10,11). Both MTA and PC initiate dentin bridge formation after pulpotomy in dog teeth (12) and are considered to be good endodontic filling materials that facilitate tissue healing with minimal inflammatory response, as demonstrated in a guinea pig study (11).
For most previous studies, healthy and non-compromised pulp cells were used to evaluate the reactions to pulp-capping agents. In a clinical setting, pulps treated with direct pulp-capping agents or pulpotomy are commonly compromised by bacterial exposure because of caries or trauma. To better simulate a clinical setting, this study aimed to evaluate the proliferation and differentiation of HDP cells that were exposed to different pulp-capping agents in the presence or absence of bacterial infection, for which relevant oral bacterial species were used. This study is expected to help clinicians make educated choices for pulp-capping materials in relevant clinical situations.
Materials and Methods
HDP cell culture

HDP samples were collected from extracted human third molars. Teeth were collected from adults (age, 18-25 years) at the Oral Surgery Clinic of The University of Texas School of Dentistry, Houston. The study was approved by the Institutional Review Board of the University of Texas Health Sciences Center, Houston (HSC-DB-06-0618). All third molars were extracted because of impaction with no carious lesions. The surfaces of the extracted teeth were cleaned and cut around the cementoenamel junction using a sterilized fissure bur to reveal the pulp chambers. The pulp tissue was gently separated from the crown and root and digested in 3 mg/mL collagenase type I and 4 mg/mL dispase for 1 h at 37°C. A single-cell suspension was obtained by passing cells through a 70-μm strainer before being seeded (1 × 10^4/well) into six-well gelatin-coated plates. HDP cells were cultured in Dulbecco's modified Eagle's medium (DMEM) that contained 5% fetal bovine serum (Invitrogen, Carlsbad, CA, USA), 2 mM L-glutamine (Invitrogen), 100 units/mL penicillin (Invitrogen), and 100 µg/mL streptomycin (Invitrogen). The cultures were maintained at 37°C in a humidified atmosphere with 95% air and 5% CO2. The cells were fed with fresh DMEM every other day. When the culture reached 70-80% confluence, the cells were harvested for further experiments. HDP cells up to passage 3 were used in the study.
Preparing bacterial cells
Porphyromonas gingivalis (P. gingivalis) strain W83 and Prevotella intermedia (P. intermedia) strain 17 were anaerobically grown at 37°C in trypticase soy broth that was supplemented per liter with 1 g yeast extract, 5 mg hemin, and 1 mg menadione. Streptococcus gordonii (S. gordonii) strain DL1 was cultured under static conditions in brain heart infusion broth that was supplemented per liter with 5 g yeast extract and 0.5% glucose as the carbon source. All bacteria were cultured overnight to a final OD600 of >1.0. P. gingivalis, P. intermedia, and S. gordonii cells were added to prewarmed tissue culture media at 37°C without antibiotics to obtain a final concentration of 3.75 × 10^7 colony-forming units/mL for each bacterial strain. An aliquot of this polymicrobial cocktail was added to each culture well, resulting in a 750:1 (bacteria:HDP) ratio. To maintain the vitality of the anaerobes, HDP cells were incubated in the presence of the bacterial mixture for 2 h.
For groups that were exposed to bacteria, HDP cells were seeded onto 12-well culture plates at an initial density of 5 × 10^4 cells/well in 1 mL of culture medium for 24 h. The culture medium was removed, and the cells were washed to remove residual antibiotics. HDP cells were then cultured in 1 mL of antibiotic-free medium with the bacterial cocktail for 2 h. The bacteria were then removed, the cells were washed, and the HDP cells were cultured with fresh medium.
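The 750:1 ratio quoted above follows directly from these numbers, per bacterial strain; a quick check:

```python
cfu_per_ml = 3.75e7     # final concentration of each bacterial strain (CFU/mL)
cells_per_well = 5e4    # HDP seeding density (cells/well)
volume_ml = 1.0         # culture volume per well (mL)

moi = cfu_per_ml * volume_ml / cells_per_well
print(f"bacteria:HDP ratio per strain = {moi:.0f}:1")   # -> 750:1
```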
For groups that were treated with a capping agent, capping agents were first mixed according to the manufacturer's instructions, after which they were poured onto cell culture inserts with 1-µm pore size (Becton Dickinson Labware, Franklin Lakes, NJ, USA) and partially set, before placing them in cell culture dishes. The capping agents generated a capping layer of approximately 10-mm diameter and 0.5-mm thickness; this layer was not in direct contact with the cells because of the presence of the insert and culture medium. HDP cells were exposed to the capping agents for 1, 7, and/or 14 days during the experimentation. For groups exposed to both the bacteria and capping agents, HDP cells were first infected with the bacteria for 2 h, after which the bacteria were removed and the cells were cultured with the capping inserts for 1, 7, and/or 14 days during the experimentation.
MTS colorimetric cell proliferation assay
Cell proliferation was measured using the CellTiter 96 Aqueous One Solution assay kit (Promega, Madison, WI, USA) according to the manufacturer's instructions. In brief, on days 1 and 7, the medium was removed and replaced with 0.3 mL fresh medium. Then, 20 µL of the CellTiter 96 Aqueous One Solution reagent was added to each well. The plates were incubated in the dark at 37°C for 1 h, after which 200 µL medium from each well was added to 96-well plates, and the absorbance was read at 490 nm using the SPECTRAmax 190 multiplate reader with SOFTmax PRO version 3.0 software (Molecular Devices, Sunnyvale, CA, USA). The mean absorbance of the wells that contained the cell-free medium was used as the baseline and was deducted from the absorbance of cell-containing wells. Five individual wells per group were analyzed for each time point.
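A minimal sketch of the baseline correction described above (the absorbance readings are illustrative, not measured values):

```python
import numpy as np

# Illustrative A490 readings: replicate wells per group plus cell-free blanks
blank_wells = np.array([0.082, 0.079, 0.085, 0.081])         # cell-free medium
group_wells = np.array([0.431, 0.455, 0.442, 0.438, 0.449])  # one treatment group

corrected = group_wells - blank_wells.mean()   # subtract baseline, as in the text
print(f"mean corrected A490 = {corrected.mean():.3f} "
      f"(SD = {corrected.std(ddof=1):.3f}, n = {len(corrected)})")
```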
Real-time PCR
Total RNA was extracted from cultured HDP cells using the RNeasy Mini Kit (Qiagen Sciences, Germantown, MD, USA) according to the manufacturer's instructions. DNase I was added to remove remaining genomic DNA. Then, 0.5 µg total RNA was reverse-transcribed using the iScript cDNA Synthesis Kit (BioRad Laboratories, Hercules, CA, USA) according to the manufacturer's instructions.
Primers were designed using LightCycler Probe Design Software version 1.0 (Roche Diagnostics Corporation, Indianapolis, IN, USA). The specificity of the primers was confirmed by BLAST analysis. The sequences for the primers were as follows: alkaline phosphatase (ALP), forward 5′-GGACATCGCCTACCAG, reverse 5′-CCGTCACGTTGTTCCT; dentin sialophosphoprotein (DSPP), forward 5′-GGAATGGAGAGAGGACTGCT, reverse 5′-AGGTGTTGTCTCCGTCAGTG; osteocalcin (OCN), forward 5′-GCTTTTGGCGTTTGTG, reverse 5′-GGAAGCGGGGATCAGA; and glyceraldehyde 3-phosphate dehydrogenase (GAPDH; internal control), forward 5′-TCGGAGTCAACGGATTT, reverse 5′-CCACGACGTACTCAGC. Real-time PCR was performed using a BioRad MJ MiniOpticon Personal Thermal Cycler Detection System (BioRad Laboratories). Reactions were performed with a total volume of 20 µL, with 10 µL of 2× SYBR Green PCR Master Mix, 5 µL of primer (20 mM concentration), and 5 µL of cDNA template. The amplification condition was 95/5, 55/7, and 72/20 [temperature (°C)/time (s)] for 40 cycles. All reactions were conducted in triplicate for each sample. A negative control (no template cDNA) was run to reveal any potential contamination. Melting curve analysis was performed for each completed reaction to ensure single-product amplification. After calibrator normalization and amplification efficiency correction, PCR products were quantified by comparing the amplification of the target gene to that of GAPDH using the relative quantification software version 1.0 (BioRad Laboratories).
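The vendor software performs a standard efficiency-corrected relative quantification; the sketch below shows the kind of calculation (a Pfaffl-type ratio) such software implements, with illustrative Ct values and an assumed efficiency of 2.0 (perfect doubling). This is a generic reconstruction of the method class, not the exact algorithm of the BioRad software:

```python
def relative_expression(ct_target, ct_target_cal, ct_ref, ct_ref_cal,
                        e_target=2.0, e_ref=2.0):
    """Efficiency-corrected relative quantification (Pfaffl-type):
    target-gene amplification normalized to the reference gene (GAPDH
    here), each expressed relative to a calibrator sample. E = 2.0
    assumes perfect doubling per cycle; corrected efficiencies would
    come from standard curves."""
    target_ratio = e_target ** (ct_target_cal - ct_target)
    ref_ratio = e_ref ** (ct_ref_cal - ct_ref)
    return target_ratio / ref_ratio


# Illustrative Ct values: DSPP in a treated sample vs. untreated calibrator
fold = relative_expression(ct_target=24.1, ct_target_cal=26.3,
                           ct_ref=18.0, ct_ref_cal=18.2)
print(f"DSPP relative expression: {fold:.2f}-fold vs. calibrator")
```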
Statistical analysis
All experiments were repeated thrice with a minimum of three replicates. The normality of the data was tested by the Kolmogorov-Smirnov test. Normally distributed data were analyzed with the Tukey-Kramer multiple comparisons test following one-way analysis of variance, using SigmaPlot 11.0 software (Systat Software, Inc., San Jose, CA, USA). Multiple comparisons were performed between HDP cells alone and HDP cells with MTA, PC, or Dycal in the presence or absence of bacterial infection. Statistical significance was set at a P value of <0.05.
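A minimal sketch of this analysis pipeline (normality check, then one-way ANOVA with Tukey-type multiple comparisons), using Python in place of SigmaPlot; group labels and values are illustrative:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = np.repeat(["HDP", "MTA", "PC", "Dycal"], 9)
values = np.concatenate([rng.normal(m, 0.05, 9) for m in (0.45, 0.44, 0.46, 0.30)])

# Normality check (the paper used Kolmogorov-Smirnov) on the residuals
resid = values - np.array([values[groups == g].mean() for g in groups])
print(stats.kstest(resid, "norm", args=(0, resid.std(ddof=1))))

# One-way ANOVA followed by Tukey's multiple comparisons
print(stats.f_oneway(*(values[groups == g] for g in set(groups))))
print(pairwise_tukeyhsd(values, groups))
```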
Proliferation of HDP cells
The MTA and PC groups demonstrated growth patterns similar to that of HDP cells alone; i.e., all three groups showed a significant increase in cell numbers from day 1 to day 7, regardless of bacterial infection. There was no significant difference in the number of cells among the three groups at the assay time points, indicating that neither MTA nor PC affected HDP cell proliferation (Fig. 1). The Dycal group had significantly fewer cells than HDP cells alone under both the non-compromised and compromised conditions on day 7 (Fig. 1), indicating that Dycal inhibited HDP cell proliferation.
Differentiation of HDP cells
The mRNA expression of odontoblastic differentiation genes was quantified with real-time PCR on days 1 and 14. Because no cells from the Dycal group (regardless of bacterial infection) survived to day 14, samples from the two Dycal groups (with and without bacteria) were not included in the day-14 PCR assay.
Day 1
For DSPP, PC stimulated but Dycal inhibited its expression compared with HDP cells alone under compromised conditions (Fig. 2). For OCN, both PC and MTA stimulated its expression compared with HDP cells alone under compromised conditions. Dycal inhibited the expression under both non-compromised and compromised conditions (Fig. 3). For ALP, PC significantly stimulated but Dycal inhibited its expression compared with HDP cells alone under compromised conditions (Fig. 4).
Day 14
For DSPP, PC significantly stimulated its expression compared with HDP cells alone under both non-compromised and compromised conditions. MTA inhibited DSPP expression compared with HDP cells alone under compromised conditions (Fig. 2). For OCN, PC had no significant effect on its expression compared with HDP cells alone under both non-compromised and compromised conditions. MTA significantly inhibited OCN expression compared with HDP cells alone under both non-compromised and compromised conditions (Fig. 3). For ALP, both PC and MTA negatively regulated its expression compared with HDP cells alone under both non-compromised and compromised conditions (Fig. 4).
It appears that bacteria had some stimulatory effect on pulp cell differentiation. Bacterial infection transiently increased DSPP expression on day 1 in the HDP cells alone, MTA, and PC groups (Fig. 2) and increased OCN expression on day 1 in the MTA group (Fig. 3). In addition, bacterial infection demonstrated a more persistent stimulation of OCN expression until day 14 in the HDP cells alone and PC groups (Fig. 3).
Discussion
This study found that in general, MTA and PC had no effect on HDP cell proliferation, whereas Dycal significantly inhibited the proliferation. PC stimulated HDP cell differentiation, particularly when cells were exposed to bacteria. MTA and Dycal inhibited HDP cell differentiation, regardless of bacterial infection.
The human oral cavity harbors a complex polymicrobial community that comprises diverse microbial species (13). For teeth with deep caries lesions, large restorations, or open fractures, oral bacteria may come in direct contact with pulp cells. In this study, HDP cell cultures were exposed to a bacterial cocktail comprising S. gordonii, P. gingivalis, and P. intermedia. S. gordonii, a gram-positive bacterium, is considered to be an initial colonizer that is present in high numbers on supragingival tooth surfaces and has a tendency to invade deep dentinal tubules (14-16). It is cariologically significant, particularly in cases that are refractory to routine endodontic procedures (16,17). P. gingivalis and P. intermedia are black-pigmented gram-negative bacteria that colonize the subgingival crevice (18). P. gingivalis is consistently encountered in endodontic infections and is suspected to play a role in the etiology of acute abscesses (19). P. intermedia is among the most frequently detected species in primary caries infections (16). Because of the polymicrobial nature of dental caries, HDP cell cultures were exposed to the bacterial cocktail to mimic the clinical situation.
To evaluate pulp cell proliferation and differentiation, the cell number and transcript expression of odontoblast differentiation genes were monitored throughout the culture period. ALP is an early marker, whereas OCN is a later marker for odontoblast differentiation (20,21). DSPP is odontoblast specific and is expressed by terminally differentiated odontoblasts during the late stage of dentin matrix mineralization (22). Evaluating the temporal expression profile of differentiation markers will reveal how odontoblast differentiation is affected by capping agents and/or bacterial infection.
For the 7-day culture proliferation assay, HDP cells reached a maximum of 90% confluence in the MTA and PC groups. Therefore, contact inhibition does not appear to have affected the analysis. Although ALP is an early differentiation marker, its expression was monitored on day 14 of the cultures to be consistent with the analysis of the other markers.
Mineralization was observed in HDP cell cultures toward the end of the 14-day culture period, particularly in the PC groups. Preliminary microscopic observations demonstrated sparse tubular structures in the mineralized matrix. Further studies, particularly those performed in in vivo animal models, would better mimic clinical scenarios and reveal more detailed data regarding the quality of the reparative dentin formed.
The major ingredients of PC include calcium phosphate, calcium oxide, and silica. MTA shares the same base materials with PC, with the addition of bismuth oxide, which increases its radiopacity (23,24). Dycal is a rigid, self-setting material that predominantly comprises calcium hydroxide (25). PC has no cytotoxicity and facilitates pulp cell proliferation and differentiation (26-28). In our study, PC stimulated HDP cell differentiation, even when cells were exposed to bacteria, although no major effect on HDP cell proliferation was identified. MTA has good biocompatibility and some stimulatory effect on pulp cell proliferation and differentiation (6,8,28-30). Our in vitro study reveals an inhibitory effect of MTA on pulp cell differentiation and no appreciable effect on proliferation. The discrepancy between our study and previous studies could be because of differences in the types of cells used, the setting stages of the capping agents when applied to cells, and the evaluation methods. In our study, change in the medium pH owing to the setting phase of the capping agents may partially contribute to the observed inhibitory effects. Consistent with previous studies, Dycal had an overall negative effect on HDP cells in this study, exhibiting cytotoxic effects and impeding the growth and differentiation of pulp cells (8,29,31).
Between PC and MTA, our study identified that PC had a favorable effect, whereas MTA had an inhibitory effect on HDP cell differentiation. It is speculated that some drawbacks of MTA, such as its long setting time, alkaline pH, or the presence of toxic elements (32,33), contribute to the observed inhibitory effect. However, the underlying mechanisms require further exploration.
An interesting finding of this study was that bacteria had some stimulatory effect on HDP cell differentiation, as demonstrated by a transient increase in DSPP and more sustained upregulation of OCN expressions in the HDP cell alone and PC groups. The bacterial cocktail was directly added to the pulp cell culture; thus, the observed effect could have occurred because of a direct interaction between the bacteria and pulp cells and/or the influence of diffusible bacterial byproducts such as endotoxins, exotoxins, lytic enzymes, acids, and other virulence factors (34). In carious lesions without pulp exposure, bacterial byproducts can diffuse through dentinal tubules and irritate pulp cells (35). Our findings are supportive of previous observations; i.e., lipopolysaccharide, the major pathogenic factor of gram-negative bacteria, promoted odontoblastic differentiation (36) and sonicated extracts from P. gingivalis facilitated early dentinogenic differentiation in HDP cell cultures (37).
During vital pulp therapy, the dentinogenic potential of pulp cells has major significance. The goals of vital pulp therapy include preservation of vitality and promotion of the dentinogenic ability of remaining pulp cells for continued root formation in undeveloped roots. Of the three pulp-capping agents tested in this study, PC was the most favorable, followed by MTA, and Dycal was the least favorable choice. PC demonstrates good biocompatibility and a general stimulation of pulp cell differentiation, regardless of bacterial infection. Together with its low cost and wide availability, PC appears to be an ideal material for endodontic procedures such as indirect and direct pulp capping, sealing of perforations, pulpotomy, and apexification. Further studies, particularly those performed with in vivo animal models, would help validate our current findings and identify the most suitable capping agents for endodontic procedures.
"year": 2017,
"sha1": "4be97ee9dcfec48a4e2dae0b5231898460b588dd",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/josnusd/59/4/59_16-0625/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "72289787cb39f2fa204d2feb640cea6f7b533f77",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
DHODH Inhibition Exerts Synergistic Therapeutic Effect with Cisplatin to Induce Ferroptosis in Cervical Cancer through Regulating mTOR Pathway
Simple Summary
Ferroptosis exhibits potent antitumor ability, and dihydroorotate dehydrogenase (DHODH) has recently been identified as a novel ferroptosis defender acting independently of GPX4 or FSP1. DHODH has been studied in several diseases, including oral squamous cell carcinoma and small cell lung cancer. This study demonstrates that DHODH inhibition suppresses cervical cancer cell proliferation and induces cell death through ferroptosis. Moreover, the combination of DHODH inhibition and cisplatin synergistically inhibits the growth of cervical cancer cells in vitro and in vivo by ferroptosis via the mTOR pathway.

Abstract
Ferroptosis exhibits a potent antitumor effect, and dihydroorotate dehydrogenase (DHODH) has recently been identified as a novel ferroptosis defender. However, the role of DHODH inhibition in cervical cancer cells is unclear, particularly in synergy with cisplatin via ferroptosis. Herein, shRNA and brequinar were used to knock down DHODH and directly inhibit DHODH, respectively. Immunohistochemistry and Western blotting assays were performed to measure the expression of proteins. CCK-8 and colony formation assays were employed to assess cell viability and proliferation. Ferroptosis was monitored through flow cytometry, the malondialdehyde assay kit and JC-1 staining analyses. A nude mouse xenograft model was generated to examine the effect of the combination of DHODH inhibition and cisplatin on tumor growth in vivo. The expression of DHODH was increased in cervical cancer tissues. DHODH inhibition inhibited the proliferation and promoted ferroptosis in cervical cancer cells. A combination of DHODH inhibition and cisplatin synergistically induced both in vitro and in vivo ferroptosis and downregulated the ferroptosis-defending mTOR pathway. Therefore, the combination of DHODH inhibition and cisplatin exhibits synergistic effects on ferroptosis induction via inhibiting the mTOR pathway and could provide a promising way for cervical cancer therapy.
Introduction
The WHO has proposed an action to eliminate cervical cancer by 2030 as a global public health issue [1]. However, it is difficult for developing countries and regions to achieve this goal in the short term. This is mainly attributed to regional inequalities in human papillomavirus (HPV) vaccination and screening uptake in developing countries and regions [2], which have made cervical cancer continue to be one of the most common and deadly gynecologic malignancies during the past decade [3]. Moreover, the morbidity and mortality of cervical cancer show a younger annual trend [4]. Hence, cisplatin-based chemotherapy has been increasingly used for managing early-stage cervical cancer owing to the rising demand for fertility preservation [5]. However, it is limited by notorious multi-organ toxicities.

… was obtained from Solarbio (Beijing, China). DAPI solution was purchased from Solarbio (China). The TUNEL detection kit was acquired from Roche (Basel, Switzerland).
Immunohistochemistry
The procedure related to human subjects was approved by the Ethics Committee of The Second Affiliated Hospital of Wenzhou Medical University (2022-K-141-02). The cervical cancer and matched normal tissues embedded in paraffin were obtained from the Second Affiliated Hospital of Wenzhou Medical University. The tissues were cut into 5-µm-thick sections. After deparaffinization in xylene, the sections were rehydrated in graded ethanol. The sections were then microwaved for antigen retrieval and immersed in 3% hydrogen peroxide. After blocking with 5% BSA, incubation with the DHODH antibody (1:200) was performed overnight at 4 °C. A secondary antibody was incubated on the slides the following day after washing with phosphate-buffered saline (PBS). Then, the slides were stained with DAB solution. Nuclear detection was enhanced using hematoxylin solution. Finally, the slides were observed under a microscope (Leica, Wetzlar, Germany). The immune scores of each slide were evaluated and calculated as previously described [23].
Cell Culture
Human cervical adenocarcinoma HeLa cells and human cervical squamous cell carcinoma CaSki cells were obtained from ATCC (Manassas, VA, USA). HeLa cells were grown in Dulbecco's Modified Eagle Medium (DMEM; Gibco, Grand Island, NY, USA) with 10% fetal bovine serum (FBS; Gibco, Grand Island, NY, USA) and 1% penicillin-streptomycin (Pen-Strep) solution (Invitrogen, Carlsbad, CA, USA). Roswell Park Memorial Institute 1640 (RPMI-1640) medium with 10% FBS and 1% Pen-Strep solution was used to maintain CaSki cells. All cell lines were incubated at 37 °C in a humidified atmosphere with 5% CO2.
Cell Transfection
For the DHODH knockdown in HeLa and CaSki cells, the shRNA sequences were designed and synthesized by Sigma-Aldrich (Shanghai, China) and cloned into the pLKO.1-TRC vector (Origene, Rockville, MD, USA). The pLKO.1-TRC vector was employed as an empty control vector. After the gene sequence was confirmed, the packaging vectors psPAX2 and pMD2.G were used to package the virus in 293T cells. Finally, the harvested lentiviral particles were filtered and used to transduce CaSki and HeLa cells.
Cell Viability Assay
Cells were plated onto 96-well plates at 2 × 10³ cells per well and cultured overnight. Cells were incubated for a set of predefined times (0, 24, 48 and 72 h). After CCK-8 reagent (10 µL/well) was added, the cells were incubated for another 2 h at 37 °C. Cell viability was determined at 450 nm using a microplate reader (Thermo Fisher Scientific, Waltham, MA, USA).
Drug Sensitivity Analysis
In 96-well plates, cells were seeded and cultured for 24 h. Then, various concentrations of cisplatin (0, 1, 2, 4, 8 and 16 µM) and brequinar (0.01, 0.1, 1, 10 and 100 µM) were added. Subsequently, a second incubation at 37 °C was performed after exposure to 10 µL CCK-8 reagent. The absorbance was measured at 450 nm in a microplate reader. The inhibition rate was calculated using the following formula: Inhibition rate (%) = [(OD450 of control well − OD450 of test well) ÷ (OD450 of control well − OD450 of blank well)] × 100%. The combination index (CI) value was calculated using CompuSyn software to investigate the drug-drug interaction between brequinar and cisplatin. The two drugs were synergistic in killing cells when the CI value was between 0 and 1; the closer the value was to zero, the stronger the synergistic effect of the two drugs.
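The inhibition-rate formula and the interpretation of the CI can be expressed directly in code. The sketch below implements the stated formula together with the basic Chou-Talalay combination index CI = d1/Dx1 + d2/Dx2; in practice the Dx values (single-drug doses producing the same effect as the combination) come from CompuSyn's median-effect fits, so all numbers here are hypothetical.

def inhibition_rate(od_test, od_control, od_blank):
    # Inhibition rate (%) as defined above.
    return (od_control - od_test) / (od_control - od_blank) * 100.0

print(f"{inhibition_rate(od_test=0.62, od_control=1.10, od_blank=0.08):.1f}% inhibition")

def combination_index(d1, d2, dx1, dx2):
    # CI = d1/Dx1 + d2/Dx2; CI < 1 indicates synergism,
    # and smaller values indicate stronger synergy.
    return d1 / dx1 + d2 / dx2

# Hypothetical: 1 uM brequinar + 2 uM cisplatin together match the effect
# of 4 uM brequinar alone or 6 uM cisplatin alone.
ci = combination_index(d1=1.0, d2=2.0, dx1=4.0, dx2=6.0)
print(f"CI = {ci:.2f} ({'synergistic' if ci < 1 else 'non-synergistic'})")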
Colony Formation Assay
In brief, HeLa and CaSki cells (1 × 10³ cells/well) were plated into 6-well plates and cultured for a week. After fixation in 4% paraformaldehyde, the cells were stained with 0.25% crystal violet (Beyotime, China) for 10 min. Finally, the colonies were observed under a microscope.
Flow Cytometric Analysis
Propidium iodide (PI) staining (BD Biosciences, Franklin Lake, NJ, USA) was used to determine cell death by flow cytometry. CaSki and HeLa cells were plated at a density of 3 × 10⁵ cells/well into 6-well plates and cultured for 24 h. Following treatment with different reagents, the cells were harvested, washed in cold PBS and stained with 5 µg/mL PI for 15 min in the dark. The percentage of the PI-positive dead cell population was analyzed using CytoFLEX flow cytometry (Beckman, Brea, CA, USA).
Western Blotting Analysis
Cells were collected after lysis on ice with a radio-immunoprecipitation assay buffer containing 1 mM phenylmethanesulfonyl fluoride (PMSF; Beyotime, China) for 30 min. Lysates were centrifuged at 12,000× g at 4 °C for 20 min. The supernatant was collected for protein quantification by a BCA assay. Equal amounts of protein were separated by SDS-PAGE, followed by transfer onto polyvinylidene fluoride (PVDF) membranes (Millipore, Boston, MA, USA). Incubation with the primary antibodies was performed overnight at 4 °C after blocking with 5% non-fat milk. Afterwards, the membranes were incubated at room temperature for 1 h with the secondary antibody. The dilutions of the antibodies used in this study were as follows: DHODH (1:1000), p-mTOR (1:1000), mTOR (1:1000), GAPDH (1:5000), and vinculin (1:2000). Protein bands were visualized using ECL reagent (Beyotime, China). The densitometry readings of each band were calculated by ImageJ software 1.8.0 (NIH, Bethesda, MD, USA). The whole blots with all the bands and all molecular weight markers are provided in the Supplementary Materials (Figure S2).
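Densitometry readings exported from ImageJ are typically normalized in two steps: each target band is divided by its lane's loading control (GAPDH or vinculin), and the result is expressed relative to the untreated control lane. The sketch below illustrates this with hypothetical band intensities.

# lane -> (DHODH band intensity, GAPDH band intensity); values hypothetical.
bands = {
    "control":             (1050.0, 2000.0),
    "brequinar":           (900.0, 1950.0),
    "cisplatin":           (980.0, 2050.0),
    "brequinar+cisplatin": (520.0, 1980.0),
}

normalized = {lane: target / ref for lane, (target, ref) in bands.items()}
baseline = normalized["control"]
for lane, value in normalized.items():
    print(f"{lane}: {value / baseline:.2f}-fold vs. control")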
Lipid Peroxidation Measurement
Lipid peroxidation was measured using the malondialdehyde (MDA) assay kit (Solarbio, China). In brief, cells were seeded on 6-well plates at 5 × 10⁵ cells per well, after which they were treated with either brequinar (1 µM) or cisplatin (2 µM) for 48 h. Then, cells were harvested and resuspended in MDA extracting solution. The supernatant was collected by centrifugation at 8000× g for 10 min at 4 °C. The protein concentration of the lysis sample was tested by BCA, and the rest of the lysis sample was mixed with working solution and incubated at 100 °C for 1 h. All mixtures were centrifuged at 10,000× g for 10 min. The absorbance of the supernatant was measured at 532, 450 and 600 nm. The MDA content was calculated based on the manufacturer's instructions.
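For illustration, the sketch below computes an MDA content from the three-wavelength readings. The exact coefficients are kit-specific; a widely used TBA-method formula, assumed here purely for illustration, is MDA (µmol/L) = 6.45 × (A532 − A600) − 0.56 × A450, normalized to the BCA protein concentration.

def mda_content(a532, a450, a600, protein_mg_per_ml, dilution=1.0):
    # Assumed TBA-method formula; coefficients vary between kits.
    mda_umol_per_l = (6.45 * (a532 - a600) - 0.56 * a450) * dilution
    # 1 umol/L equals 1 nmol/mL, giving nmol MDA per mg protein.
    return mda_umol_per_l / protein_mg_per_ml

# Hypothetical absorbance and protein values.
print(f"{mda_content(a532=0.215, a450=0.080, a600=0.012, protein_mg_per_ml=1.6):.2f} nmol MDA/mg protein")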
JC-1 Mitoscreen Assay
A total of 2 × 10⁵ CaSki or HeLa cells were seeded on 6-well plates overnight, after which they were treated with either brequinar (1 µM) or cisplatin (2 µM). Using the mitochondrial membrane potential (MMP) kit with JC-1 (Solarbio, Beijing, China), the MMP level was detected after treatment with the reagents for 48 h. Stained cells were imaged using a fluorescence microscope or analyzed by flow cytometry.
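Quantification of the JC-1 readout rests on a simple ratio: depolarized mitochondria shift JC-1 from red aggregates to green monomers, so an increased green/red fluorescence ratio indicates MMP loss. A minimal sketch with hypothetical mean fluorescence intensities:

# sample -> (green MFI, red MFI); values hypothetical.
samples = {
    "control":             (1200.0, 5400.0),
    "brequinar":           (2600.0, 3100.0),
    "cisplatin":           (2300.0, 3500.0),
    "brequinar+cisplatin": (4100.0, 1900.0),
}

for name, (green, red) in samples.items():
    # Higher green/red ratio -> lower mitochondrial membrane potential.
    print(f"{name}: green/red = {green / red:.2f}")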
Animal Studies
Five-week-old female BALB/c nude mice (Beijing Weitonglihua Sciences Co., Ltd., Beijing, China) were housed under specific pathogen-free (SPF) conditions with sterilized food and water. After the nude mice had acclimatized for ~1 week, 2 × 10⁶ HeLa cells in 100 µL PBS were subcutaneously injected into the right flank of each mouse. Upon the tumors reaching 50 mm³, mice were randomly assigned into four groups (n = 8 per group). The treatments in each group were as follows: (i) control (sterilized saline); (ii) brequinar group (15 mg/kg BQR); (iii) cisplatin group (7 mg/kg DDP); (iv) brequinar + cisplatin group (15 mg/kg BQR + 7 mg/kg DDP). Brequinar was dissolved in DMSO and diluted in saline containing PEG400, then intraperitoneally injected every three days. DDP was formulated in saline and administered intraperitoneally once a week. Body weight and tumor volume were measured every three days. Tumor volume was calculated using the formula: volume = length × width² × 1/2. The mice were sacrificed on day 12. The tumor tissues were collected for making tissue sections. The in vivo antitumor and ferroptosis induction abilities of the different treatments were evaluated by IHC (Ki-67 (1:200), 4-HNE (1:200)) and TUNEL staining assays as directed by the manufacturer. Animal experiments were approved by the Institutional Animal Care and Use Committee of Wenzhou Medical University (wydw2022-0561).
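The stated caliper formula is straightforward to apply in code; the sketch below computes tumor volumes from hypothetical length/width measurements and flags when the 50 mm³ randomization threshold is reached.

def tumor_volume(length_mm, width_mm):
    # volume = length x width^2 x 1/2
    return length_mm * width_mm ** 2 / 2.0

# Hypothetical (length, width) measurements taken every three days.
measurements = [(5.2, 4.1), (6.0, 4.8), (7.5, 5.6)]
for day, (length, width) in zip(range(0, 9, 3), measurements):
    volume = tumor_volume(length, width)
    note = " (>= 50 mm^3: eligible for randomization)" if volume >= 50 else ""
    print(f"day {day}: {volume:.1f} mm^3{note}")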
Statistical Analysis
Prior to comparison, Kolmogorov-Smirnov analysis was performed on all data. Normally distributed data were presented as means ± SD. Differences were compared using Student's t-test (two groups) and ANOVA (three or more groups). The least significant difference (LSD) method was used if variances were homogeneous; Dunnett's T3 method was employed if variances were nonhomogeneous. Statistical significance was determined by a p-value < 0.05.
DHODH Inhibition Suppresses the Proliferation and Promotes the Death in Cervical Cancer Cells
Initially, we compared DHODH protein expression in cervical cancer tissues with that in adjacent normal tissues using an IHC assay. As illustrated in Figure 1A, a significant increase in DHODH expression was observed in cervical cancer tissues but not in normal tissues. Next, for further verification of the carcinogenic effect of DHODH, two sets of shRNA sequences (sh1 and sh2) were employed to knock down DHODH in HeLa and CaSki cells. Results from Western blotting verified the success of DHODH silencing in these two cell lines (Figures 1B and S2).
Further experiments were carried out with CCK-8 and colony formation assays to examine whether DHODH affected cell viability and proliferation. In comparison with the control group, DHODH-silenced CaSki and HeLa cells had significantly reduced cell viability and clonogenicity (Figure 1C,D). In parallel to DHODH silencing, brequinar, a specific inhibitor of DHODH, was used to suppress DHODH activity in cervical cancer cells. Brequinar-treated cells were tested for viability and death using CCK-8 and flow cytometry assays. Obviously, brequinar decreased the survival rate in both CaSki and HeLa cells (Figure 1E).
DHODH Inhibition Induces Ferroptosis in Cervical Cancer Cells
Given that DHODH inhibition repressed cervical cancer cell proliferation by promoting cell death, the role of DHODH inhibition in inducing ferroptosis was investigated. Since ferroptosis is characterized by the accumulation of lipid peroxidation, the level of the principal metabolite MDA generated during this process was measured. Genetic silencing of DHODH significantly increased the MDA level in HeLa and CaSki cells (Figure 2A). Consistently, brequinar-induced inhibition of DHODH activity also led to an elevation of the MDA level in both cell lines (Figure 2B). Moreover, liproxstatin-1, a classic ferroptosis inhibitor, was applied to brequinar-treated cells. Figure 2C indicates that the brequinar-induced decrease in cell viability was partly rescued by the supplementation of liproxstatin-1 in both CaSki and HeLa cells. Furthermore, a significant decrease in the MDA level was observed after liproxstatin-1 treatment in both cell lines, compared with the brequinar-treated group (Figure 2D), further validating the occurrence of ferroptosis under DHODH inhibition. The membrane damage caused by lipid peroxidation is a typical morphologic change in the process of ferroptosis. Considering that DHODH is expressed mainly in mitochondria, mitochondrial dysfunction was tested by evaluating MMP through JC-1 staining. As presented in Figure 2E, the MMP was dramatically reduced in DHODH-inhibited cells, as manifested by the enhanced green-to-red fluorescence ratio. Collectively, it appears that DHODH inhibition promotes cervical cancer cell death by inducing ferroptosis.
DHODH Inhibition Synergistically Increases Cisplatin-Mediated Cytotoxicity in Cervical Cancer Cells via Ferroptosis
Based on the ferroptotic role of DHODH inhibition, whether DHODH inhibition could sensitize cervical cancer cells to cisplatin through inducing ferroptosis was further explored. During a 48-h incubation, CaSki and HeLa cells were treated with cisplatin at different concentrations. Figure 3A illustrates that the inhibition rate after cisplatin treatment was remarkably increased in both CaSki and HeLa cells after DHODH downregulation. This implies that the sensitivity to cisplatin was increased by DHODH inhibition in cervical cancer cells.
In parallel, the CI value for cisplatin and brequinar was calculated using CompuSyn software. Each CI at the different drug concentrations in both CaSki and HeLa cells was less than 1, demonstrating that DHODH inhibition exerted a synergistic anticancer function with cisplatin in cervical cancer cells (Figure 3B). Furthermore, cisplatin treatment increased the MDA level in CaSki and HeLa cells dose- and time-dependently (Figure S1), suggesting that ferroptosis would be involved in the synergetic effect of DHODH inhibition and cisplatin. Thus, cell death was determined by PI staining, lipid peroxidation by an MDA assay, and mitochondrial dysfunction by JC-1 staining. As compared with the control groups, either cisplatin or brequinar monotherapy was effective in promoting cell death (Figure 3C), mitochondrial dysfunction (Figure 3D) and lipid peroxidation (Figure 3E) in both CaSki and HeLa cells. More importantly, the combination of brequinar and cisplatin produced higher PI-positive rates, JC-1 ratio changes and MDA levels than either monotherapy. Therefore, these results verify that DHODH inhibition can cooperate with cisplatin to inhibit cervical cancer in a ferroptotic way, and the combination has critical therapeutic potential in treating cervical cancer.
DHODH Inhibition Synergizes with Cisplatin to Exhibit Ferroptotic Anti-Cancer Effect In Vivo
To further confirm the synergistic therapeutic effect of DHODH inhibition and cisplatin in vivo, subcutaneous tumor xenograft models were generated using HeLa cells. After 12 days of treatment, the tumor volume in the control group administered with saline reached 740.5 ± 307.4 mm³. In contrast, mice receiving monotherapy with brequinar or cisplatin had smaller tumor volumes than those receiving saline. Moreover, tumor growth in the mice of the combination group was limited to the greatest extent, as confirmed by the smallest tumor size (Figure 4A,B). Similarly, the mice of the combination group exhibited the lightest tumor mass among the four groups (Figure 4C). Apparently, tumor growth in the combined administration group was inhibited, and surprisingly, the tumors in two nude mice of this group even disappeared after treatment. Meanwhile, it is worth noting that no significant weight loss or death occurred in the mice of any group under the administered dosages of brequinar and cisplatin (Figure 4D). This implies the safety of the administered dosages of brequinar and cisplatin in this study.
Then, IHC staining analysis was performed on the tumor tissues to evaluate the levels of cell proliferation, apoptosis and lipid peroxidation. As compared with the control group, treatment with brequinar or cisplatin as monotherapy slightly reduced Ki-67 expression and promoted cell apoptosis. The combination of these two drugs significantly increased apoptosis and downregulated Ki-67 in the cancer tissues. Consistently, the combination of brequinar and cisplatin led to a significant upregulation of 4-HNE in cancer cells, indicating more accumulation of lipid peroxidation in this group (Figure 4E). These data from the animal studies collectively demonstrate that DHODH inhibition is synergistic with cisplatin in killing cervical cancer cells by inducing ferroptosis in vivo.
Combination of DHODH Inhibition and Cisplatin Induces Downregulation of mTOR Pathway in Cervical Cancer Cells
The underlying mechanism by which ferroptosis is induced after DHODH inhibition and cisplatin treatment was further explored. First, DHODH expression in CaSki and HeLa cells was detected by Western blotting after treatment with brequinar and/or cisplatin. Notably, the combined administration significantly downregulated DHODH in both cell lines (Figures 5 and S2). Since mTOR is reported to be a key defender against ferroptosis [24], whether mTOR was involved in the synergistic effect of DHODH inhibition and cisplatin was investigated. As displayed in Figures 5 and S2, although the mTOR level in HeLa cells was slightly elevated with cisplatin treatment, the expressions of p-mTOR and mTOR were remarkably decreased after the combined administration in both CaSki and HeLa cells. In conclusion, the occurrence of significant ferroptosis in cervical cancer cells may be mediated via suppression of the mTOR pathway after the combined DHODH inhibition and cisplatin treatment.
Discussion
DHODH is a key enzyme for the de novo biosynthesis of pyrimidine-based nucleotides and serves as a known therapeutic target in various diseases [18]. Accumulated molecular cell biology studies reveal that the inhibition of DHODH depletes intracellular pyrimidine nucleotide pools, thereby leading to cell cycle arrest and sensitization to current chemotherapies in cancer cells [18]. Therefore, drugging DHODH has increasingly been proposed as part of combination therapies in cancer treatment. The current study demonstrated that DHODH inhibition suppressed the growth and promoted cell death in cervical cancer via ferroptosis. Furthermore, DHODH inhibition enhanced cervical cancer cell sensitivity to cisplatin through inducing ferroptosis in vitro and in vivo. These results offer promising approaches for the treatment of cervical cancer.
In parallel to the differential expression of DHODH in different cancers, the therapeutic efficacy of DHODH inhibition also varies [25]. For example, lung cancer cells are more sensitive to brequinar than fibrosarcoma cells, as the IC50 of brequinar in HT-1080 is 10-fold higher than that in NCI-H226 [19]. Even among different lung cancer cells, the sensitivity to brequinar varies [26]. Here, a high level of DHODH was detected in cervical cancer tissues, and DHODH knockdown or brequinar treatment suppressed proliferation and induced death in cervical cancer cells. Ferroptosis is a novel process of regulated cell death, and the mechanisms regulating ferroptosis in cells have been extensively explored, such as the antioxidant basis, which mainly includes the SLC7A11/GSH/GPX4 axis and the FSP1 ubiquinone system [27]. Mao et al. [19] reported that DHODH mediates ferroptosis defense in mitochondria via the ubiquinone system independently of the GPX4 or FSP1 systems. In our study, brequinar could directly induce ferroptosis without the assistance of any ferroptosis inducer such as erastin, further implying the independent role of DHODH in regulating ferroptosis. However, several studies have unveiled other mechanisms for controlling ferroptosis in cervical cancer cells. For instance, circEPSTI1 and circACAP2 are both involved in inhibiting ferroptosis via the SLC7A11/GPX4 axis [28,29], while circLMO1 and oleanolic acid promote the ferroptotic process through ACSL4 [30,31]. Therefore, it is proposed that the effect of DHODH inhibition is not confined to GPX4 invalidation, but possesses an apparent synergistic effect with GPX4 inactivation.
Nowadays, non-surgical curative treatment for younger patients and for metastatic or recurrent cervical cancer is also limited. Cisplatin-based chemotherapy is important for these patients, but it is accompanied by serious toxic side effects and frequent drug resistance. A combination with the angiogenesis inhibitor bevacizumab has shown an improved median overall survival in metastatic or recurrent cervical cancer patients [32]. However, the application of these targeted therapies in a clinical setting is still limited. Given that cisplatin can induce ferroptosis through downregulating GPX4 [33], it was presumed that DHODH inhibition could synergize with cisplatin through ferroptosis. The results from the in vitro and in vivo studies confirmed this hypothesis. The CI value showed that the synergistic effect in HeLa cells was much more remarkable than that in CaSki cells; therefore, HeLa cells were chosen for the animal experiments. Greater benefits were obtained with combination therapy than with monotherapy. Moreover, combination therapy did not affect the pharmacokinetic properties of either drug [34]. Brequinar is a potent inhibitor of DHODH and has been preclinically evaluated with anticancer effects in several solid tumor types [35,36]. However, it has shown low clinical efficacy and has failed to gain FDA approval for cancer treatment. The lack of clinical efficacy may be caused by suboptimal dosing regimens of brequinar, which result in a failure to inhibit DHODH sustainably [37]. Thus, combination therapy is essential for brequinar to augment its anticancer capacity. Although several clinical trials of brequinar have been performed in several cancers, none has been based on the combination with cisplatin. Our study proposes the possibility of using brequinar combined with cisplatin for treating recurrent and metastatic cervical cancer, which needs further trials to verify. Meanwhile, the underlying mechanism of the synergistic effect of the combined therapy was also examined. It is noted that the regulatory function of DHODH inhibition on mTOR signaling remains controversial. A recent study reveals that the activity of the mTOR pathway is strongly attenuated in DHODH-knockout medulloblastoma SU_MB002 cells, thereby inducing cell-cycle arrest and apoptosis [38]. Conversely, Hoxhaj et al. [39] failed to block mTOR signaling in HeLa cells by using the DHODH inhibitor leflunomide, even after 24 h of treatment. Surprisingly, the combination of cisplatin and DHODH inhibition significantly downregulated mTOR pathway activity in our study, and this pathway is critically involved in inducing ferroptosis in cervical cancer cells [40,41]. In conclusion, the ferroptotic synergy may be increased through DHODH inhibition combined with mTOR pathway inhibition.
Conclusions
In conclusion, DHODH was upregulated in cervical cancer tissues. Genetic silencing or pharmacologic inhibition of DHODH promoted ferroptotic death in cervical cancer cells. Moreover, inhibiting DHODH in cervical cancer cells both in vitro and in vivo produced a synergistic anticancer effect with cisplatin, which may be achieved by mTOR pathway suppression. Our work proposes that the combination of DHODH inhibition and cisplatin is a potential strategy for cervical cancer treatment.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers15020546/s1. Figure S1: Cisplatin induces ferroptosis in a dose- and time-dependent way in cervical cancer cells. * p < 0.05, *** p < 0.001 compared to the control group.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
All the data supporting this study are available from the corresponding author upon reasonable request.
"year": 2023,
"sha1": "1c577b89ec123569d7b7dfdbc8e235154d0182c6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/2/546/pdf?version=1673864133",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c577b89ec123569d7b7dfdbc8e235154d0182c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Recent trends in vanadium-based SCR catalysts for NOx reduction in industrial applications: stationary sources
Vanadium-based catalysts have been used for several decades in ammonia-based selective catalytic reduction (NH3-SCR) processes for reducing NOx emissions from various stationary sources (power plants, chemical plants, incinerators, steel mills, etc.) and mobile sources (large ships, automobiles, etc.). Vanadium-based catalysts containing various vanadium species have a high NOx reduction efficiency at temperatures of 350–400 °C, even if the vanadium species are added in small amounts. However, the strengthening of NOx emission regulations has necessitated the development of catalysts with higher NOx reduction efficiencies. Furthermore, there are several different requirements for the catalysts depending on the target industry and application. In general, the composition of SCR catalyst is determined by the components of the fuel and flue gas for a particular application. It is necessary to optimize the catalyst with regard to the reaction temperature, thermal and chemical durability, shape, and other relevant factors. This review comprehensively analyzes the properties that are required for SCR catalysts in different industries and the development strategies of high-performance and low-temperature vanadium-based catalysts. To analyze the recent research trends, the catalysts employed in power plants, incinerators, as well as cement and steel industries, that emit the highest amount of nitrogen oxides, are presented in detail along with their limitations. The recent developments in catalyst composition, structure, dispersion, and side reaction suppression technology to develop a high-efficiency catalyst are also summarized. As the composition of the vanadium-based catalyst depends mostly on the usage in stationary sources, various promoters and supports that improve the catalyst activity and suppress side reactions, along with the studies on the oxidation state of vanadium, are presented. Furthermore, the research trends related to the nano-dispersion of catalytically active materials using various supports, and controlling the side reactions using the structure of shaped catalysts are summarized. The review concludes with a discussion of the development direction and future prospects for high-efficiency SCR catalysts in different industrial fields.
process to comply with the emission regulations. Several types of SCR catalysts are used including, metal oxidebased, zeolite-based, alkaline-earth metal-based, and rare-earth-based catalysts. Among these, the V-W-based catalysts are the most widely used, as they exhibit high NOx removal efficiency of more than 90% at temperatures over 380 °C [2,7]. Zeolite-based catalysts are also used in some catalyst applications because they have a high specific surface area and a wide operating temperature range, but they have the disadvantage of exhibiting high activity only when pretreatment is performed in a moisture-free condition [7]. The characteristics of representative SCR catalysts are described in Fig. 1 and Table 1.
The SCR reactions are usually defined as either "standard SCR" or "fast SCR." In general, NOx is composed of 95% NO and 5% NO2; therefore, the reduction usually follows the standard reaction. In the fast SCR reaction, in which NO, NO2, and NH3 react in the absence of oxygen, the reaction is completed more rapidly owing to the rapid oxidation [5]. Sometimes, an oxidizing device such as a diesel oxidation catalyst (DOC) is connected upstream of the catalyst to oxidize NO to NO2. The catalytic activity in the low-temperature region is enhanced through fast SCR, which is faster than the standard reaction. The fast SCR reaction generally shows excellent reactivity when NO and NO2 are present in a 1:1 molar ratio [5].
Standard SCR (oxygen presence condition): 4NH3 + 4NO + O2 → 4N2 + 6H2O (1)

Fast SCR (oxygen absence condition): 4NH3 + 2NO + 2NO2 → 4N2 + 6H2O (2)

Side reactions include ammonia oxidation at high temperatures, oxidation of SO2 caused by excess vanadium content, and formation of ammonium salts by the reaction of unreacted ammonia and SO3. The formation of ammonium sulfate corrodes the post-treatment facilities, leading to reduced catalytic activity due to blocking of the catalyst surface [8].
The SCR catalysts are divided into three types: honeycomb, plate, and corrugated. Industrial V2O5-WO3/TiO2 catalysts (usually containing 0.5-3 wt% V2O5 and 5-10 wt% WO3) exhibit high de-NOx efficiency and excellent resistance to sulfur (SO2) and H2O. Nevertheless, they also exhibit several disadvantages, such as a high and narrow effective temperature range (300-400 °C) and a tendency to oxidize SO2 to SO3. The SO3 reacts with NH3 to form ammonium sulfate or bisulfate, shortening the lifetime of the catalyst [9]. As a result, in major industrial applications of SCR catalysts, such as thermoelectric power plants and vessels, frequent shutdowns are necessary due to operational and equipment limitations. Therefore, SCR catalysts with high de-NOx efficiency at low operating temperatures are required for economic benefits and reduced energy consumption [10]. In addition, operating limitations of the SCR system cause a temporary rise in the system temperature. Currently, in small and medium-sized power generation facilities and general boilers, such as heat recovery steam generator boilers, most of the horizontal reactors are strongly affected by high temperatures. Therefore, there is a need to develop an SCR catalyst with high activity and durability against thermal shock at high temperatures. Thus, it is necessary to develop a wide-temperature-range SCR catalyst that is capable of operating at conventional low temperatures (200-250 °C), exhibits enhanced catalytic properties such as high de-NOx efficiency along with low SOx conversion, and has a long lifetime.
Applications of NH3-SCR catalysts for stationary sources
In stationary sources, various problems, such as the carbon content in fly ash, sulfur, alkali metals, the concentration of catalytic poisons in the fuel, carbon monoxide (CO) emission, and corrosion, may occur depending on the application [7]. The SCR facilities in stationary sources are generally classified into high-dust configurations, low-dust configurations, and tail-end systems, and are configured according to the application requirements (see Fig. 2) [11]. The variety of emission conditions from stationary sources requires the NH3-SCR catalyst to operate under varied conditions with suitable catalytic properties. Specifically, the catalysts must resist sulfur poisoning and minimize the oxidation of SO2 to SO3 because the SO2 concentration in the flue gas is relatively high [9]. Power plants generally use a high-dust system, and, in this configuration, the SCR units are usually located directly after the boiler, followed by an electrostatic precipitator and a desulfurizer, in that order. This system has the advantage of a high operating temperature without the need for additional heating devices, as the catalysts are installed at the end of the boiler. However, in facilities using biomass fuels, the catalytic activity decreases significantly due to physical poisoning (cell plugging of the monolith catalyst and wear of the plate catalyst) by alkali (earth) metals, and particulate matter in the exhaust gas significantly decreases the catalyst lifetime [12].
In a low dust configuration, the SCR reactor is located behind the hot-side electrostatic precipitator (H-ESP), reducing catalyst degradation by particulate erosion. However, this configuration requires the installation of an expensive H-ESP and a flue gas reheat system to maintain optimum operating temperatures [11].
Finally, in the tail-end system, the catalytic system is placed downstream of the electrostatic precipitator and the desulfurization system. This system is mainly used in Western Europe, and because both dust and SO2 are removed before the catalyst, it is suitable for increasing the catalyst lifetime. However, in this system, the NOx removal efficiency is drastically decreased because of the low-temperature regime in which it operates. Although V-based catalysts are the most efficient in the temperature range of 300-400 °C, their efficiency rapidly decreases at lower temperatures, and the unreacted NH3 reacts with SO2 or H2O to form ammonium bisulfate. Therefore, additional flue gas reheating equipment is required for V-based catalysts used in this system, and reheating the flue gas to the catalytic activation temperature sharply increases the maintenance cost of the SCR system. It is therefore necessary to study the potential of low-temperature catalysts that exhibit activity at 200 °C or less for this configuration [13]. Due to economic and spatial constraints, SCR catalysts are currently positioned as tail-end systems. In particular, a temperature below 220 °C is required in incinerators, as the SCR is located behind the baghouse filter [7].
Recently, problems related to the storage and transport of ammonia have led to suggestions that it be subject to environmental regulation in the United States, and emissions of unreacted ammonia slip in the flue gas are stringently monitored in New Jersey and California. Several studies are underway to replace the reducing agent. Although hydrocarbon reducing agents have attracted considerable attention as replacements for ammonia, their commercialization is difficult due to the requirements of high reaction temperatures and noble metal catalysts [7]. Studies on "enhanced SCR" show high NO reduction efficiency in the low-temperature range of ~200-350 °C with the addition of NH4NO3 along with the reducing agent NH3. The resultant reaction is summarized in Eq. (5) [14]:

2NH3 + 2NO + NH4NO3 → 3N2 + 5H2O (5)
Variables that promote the enhanced SCR reaction at low temperatures include space velocity, reaction temperature, and injection amount of ammonium nitrate. When these parameters are optimized, a high NOx removal efficiency of 90% can be expected at a low temperature of 180 °C even with the use of a commercial V-W/TiO 2 catalyst [14].
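Two quantities recur in the catalyst-performance discussions that follow: the de-NOx (NOx conversion) efficiency computed from inlet/outlet concentrations, and the space velocity relating flue gas flow to catalyst volume. The Python sketch below shows the standard definitions; the numbers are hypothetical illustrations, not values from the cited studies.

def nox_conversion(nox_in_ppm, nox_out_ppm):
    # De-NOx efficiency (%) from inlet and outlet NOx concentrations.
    return (nox_in_ppm - nox_out_ppm) / nox_in_ppm * 100.0

def ghsv(flow_rate_m3_per_h, catalyst_volume_m3):
    # Gas hourly space velocity (h^-1).
    return flow_rate_m3_per_h / catalyst_volume_m3

print(f"NOx conversion: {nox_conversion(500.0, 50.0):.1f}%")   # 90.0%
print(f"GHSV: {ghsv(60000.0, 10.0):.0f} h^-1")                 # 6000 h^-1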
Power plants
[Fig. 2: SCR configurations with typical system temperatures: (a) high-dust system, (b) low-dust system, and (c) tail-end system [11]]

Power plants mainly use various fossil fuels, such as coal, heavy oil, and liquefied natural gas (LNG), to generate electricity. However, each fuel source poses a particular set of problems in the SCR plant; the deactivation mechanism of SCR catalysts used in coal-fired power plants is different from that of catalysts used in biomass power plants. In coal-fired power plants, catalyst deactivation by mercury and sulfate is the main factor. However, in biomass power generation, alkali (earth) metals are the main cause of catalyst life reduction [15].
Coal fuel presents several problems. First, it is a representative anthropogenic source of mercury. Among heavy metals in coal, highly volatile mercury is converted into gaseous elemental mercury under high-temperature conditions (> 1400 °C) during combustion. The Hg present in flue gas exists in three main forms: Hg 0 (elemental mercury), Hg 2+ (oxidized mercury), and HgP (particulate mercury). First, Hg 0 (g), which is the gaseous elemental mercury present in the flue gas, reacts with other gaseous components and particulate matter and is converted into either HgP or oxidized mercury (Hg 2+ ) [16]. In Fig. 3, HgP is collected in ESP facilities, and mercury oxide is removed with flue gas desulfurization (FGD) because it is soluble in water; however, gaseous elemental mercury is insoluble in water and is therefore difficult to reduce using the existing facilities.
To solve this problem, an SCR catalyst has been developed with elemental mercury oxidation capability, along with a separate oxidation catalyst. Although SCR technology is mainly used for NOx removal, many studies have shown that it can also be used to convert Hg 0 (g) into Hg 2+ . The oxidation of elemental mercury by the SCR catalyst is closely related to the chlorine content in the coal. As the chlorine content of coal increases, the oxidation of Hg 0 to Hg 2+ is promoted by the SCR catalyst, and as a result, the Hg 2+ concentration at the end of the SCR process increases [16,17].
Second, a problem is caused by the sulfur present in coal and heavy oil. During combustion, pyrite and organically bound sulfur are mostly oxidized to SO2, and a very small amount is converted to SO3 (the SO2/SO3 ratio is generally ~40:1-80:1). Meanwhile, a portion (0.3-2%) of the SO2 is oxidized to SO3 in the passage of the SCR facility [11]. The co-firing and burning of biomass fuel causes catalyst poisoning due to impurities in the fuel. The analysis of Danish cereal straw and wood chips, which are the most widely used biomass fuels for power plants (Table 2), indicated the presence of 0.2-1.9 wt% potassium, an alkali metal. Submicron aerosol particles in flue gases from straw combustion consist of almost pure potassium chloride and sulfate with small amounts of sodium, phosphorus, and calcium [19,20].
Currently, as many countries increase the use of biomass for environmental reasons, the effect of biomass firing on the SCR catalyst is attracting attention [21]. Potassium (K) in fly ash generated during biomass combustion forms KCl and K2SO4 compounds, which are sticky particles with low melting points, and blocks the micropores of the catalyst, reducing its catalytic activity. Zheng et al. reported the need for additional research on catalyst poisoning caused by small amounts of phosphorus (P), calcium (Ca), and sodium (Na) [22]. In research on catalyst deactivation by the formation of K compounds (e.g., KCl, K2SO4, and K3PO4) following the combustion of K-getter fuel, polyphosphoric acid poisoning was relatively low compared to K poisoning. However, pore blocking and fouling occurred on the surface of the catalyst, which had a greater effect on deactivation than poisoning by K [22-24]. According to a study on the effect of the addition of inorganic substances to the SCR catalyst and the resulting poisoning, catalysts doped with alkali (earth) metals exhibit deactivation due to a reduction in the NH3 storage capacity, and a strong poisoning effect occurs in the order of K > Na > Ca > Mg [25]. Lastly, LNG-fired power generation is an eco-friendly power source replacing coal-fired power in Asia, and many new power plants are being built. As LNG is composed of ~72-95% methane, ~3-13% ethane, ~1-4% propane, and ~1-18% nitrogen, it does not contain toxic substances such as sulfur and alkali (earth) metals, and there is almost no reduction in catalyst life. However, as the main components are hydrocarbons, the process emits a large amount of harmful substances, including carbon monoxide (CO) and unburned hydrocarbons (UHC).
As described above, various factors affect catalyst poisoning in power plants depending on the fuel used. Research on catalysts having anti-poisoning properties is an active field of study. In addition, a large amount of NOx is generated due to incomplete combustion during power plant startup, as the exhaust gas temperature does not reach the catalyst activation temperature and is discharged. Therefore, to control this problem, research on technology for storing NOx generated below the catalyst activation temperature by installing a NOx trap in front of the catalyst is also being conducted [26].
Incinerators
Incinerators have a high dust configuration, and the SCR catalyst is placed at the rear end of the boiler; however, catalytic efficiency is severely reduced due to physical and chemical deactivation by fly ash present in the flue gas. Catalysts used in municipal waste incineration (MSWI) plants have reduced activity due to decreases in their specific surface area and pore sizes. One of the main causes is the change of the surface acid sites by alkali metals such as Na and K present in the flue gas [27,28]. According to the findings of Jan et al., when domestic waste and sludge are incinerated simultaneously, the flue gas contains metals such as Ca, Si, Cl, S, K, Na, Pb, Zn, and P [27]. In particular, during co-incineration, a large amount of P is present in domestic sludge, and therefore, a larger amount of P is emitted than that formed during single combustion [29]. On the one hand, Castellino found that when the H 3 PO 4 concentration reached 1000 ppm, the catalyst redox performance deteriorated, and the catalyst was deactivated after 24 h due to a decrease in the number of active vanadium species [23]. On the other hand, Cao et al. suggested that as P inhibits NH 3 oxidation at temperatures above 300 °C, it prevents the formation of N 2 O and NOx, which are side reactants, and improves catalyst efficiency. To solve the problem of catalyst deactivation, the tail-end system is applied, and ESP and FGD are installed in front of the SCR to remove particulate matter, preventing catalyst poisoning. However, as the flue gas temperature rapidly decreases to approximately 160 °C, it is necessary to develop a low-temperature catalyst [30]. In addition, research on SCR catalysts for incinerators is mainly focused on the catalyst deactivation effect and reaction modeling for each element, along with studies on inhibiting the physicochemical poisoning of catalysts by fly ash and catalyst regeneration methods [27,31].
Cement, iron, and steel industry
The cement industry mainly uses the SNCR process to reduce NOx, but as air pollutant emission regulations are tightened around the world, hybrid SNCR-SCR and SCR technologies have attracted increased attention from the cement industry [32]. While the amount of dust in the flue gas of a power plant is approximately 25 g/Nm³, the dust concentrations in the precalciner kiln system of a cement plant range from 50 to 100 g/Nm³. Therefore, the high-dust configuration or semi-dust application is mainly applied to the post-treatment facility. The semi-dust application requires the development of a low-temperature catalyst because the temperature at the SCR facility decreases as dust is collected using a cyclone or ESP equipment in front of the SCR unit [33]. Therefore, there is an unfulfilled requirement for the development of catalysts for cement plants. These catalysts should be resistant to physicochemical poisoning by particulate matter such as fly ash, and exhibit high activity at low temperatures [34,35].
The iron and steel process produces pollutants such as waste gas, wastewater, and slag, the most serious of which is waste gas. Steelworks emit NOx from processes such as sintering, coking, and rolling, and a particularly large amount of pollutants is emitted from the sintering process [36]. In the sintering process, limestone is mixed with powdered iron ore and heated to process it into a sinter in the form of a homogeneous mass [37]. Steelworks generally use a high-dust configuration, and, as with the previous stationary sources, the demand for efficient low-temperature catalysts that can function below 240 °C is increasing. As SO2 is present in the exhaust gas, the poisoning of SCR catalysts by sulfur is a major problem [38]. In addition, even if a low-temperature catalyst is developed, the problem of catalyst poisoning by moisture and sulfur remains. Therefore, research on sulfur- and moisture-resistant catalysts, through the addition of a co-catalyst or by changing the catalyst support, is an active field of investigation [39,40]. According to Chu and Wang, there is considerable ongoing research related to the selection of various complex adsorbents to improve the efficiency of denitrification and desulfurization through the simultaneous removal of SO2 and NOx [41].
Vanadium-based SCR catalysts
Vanadium is the most widely used commercial active component and is considered representative because it shows high activity at ~350-400 °C even at a content of only ~1-2 wt%, in the form of various vanadium species such as monomeric, polymeric, and crystalline V2O5 [42]. Recently, the tail-end system has become more widely used, and the demand for low-temperature catalysts is increasing. However, active materials that can increase activity in the low-temperature region (<200 °C), such as Mn and Ce, form salts such as MnSO4 or CeSO4 with the SO2 present in the exhaust gas. This results in poisoning of the active catalyst sites, and the problem of activity deterioration persists. Therefore, the catalytic activity temperature range is extended by increasing the vanadium content, making vanadium the major component in low-temperature catalysts. However, vanadium catalysts still have some problems. The operating temperature range is relatively narrow at ~350-400 °C, and at high temperatures, ammonia oxidation, SO2 oxidation, and N2O formation occur as side reactions [5]. Because vanadium is easily sublimated and is biologically toxic upon long-term use, it is also necessary to minimize the vanadium content.
An analysis of NOx conversion as a function of the vanadium content of a catalyst showed that at vanadium contents of 0.5 and 1 wt%, the conversion efficiency at 200 °C was 19.57% and 51%, respectively. As the vanadium content increased from 2 to 5 wt%, the catalyst exhibited increased efficiency and an extended operating temperature range from 200 to 350 °C. At temperatures greater than 400 °C, the efficiency decreased rapidly with vanadium content and was closely related to the amount of N2O generated. Figure 5 shows that as the vanadium content increases, the amount of N2O generated by one of the side reactions of the SCR reaction increases rapidly, and N2 selectivity decreases. The main SCR side reactions over vanadium catalysts include N2O generation by NH3 oxidation and SO3 generation by SO2 oxidation. Extensive studies are required on vanadium content conditions and dispersibility improvements that inhibit these side reactions and yield high efficiency. Figure 7a shows the NOx conversion efficiency according to the presence of SO2 (off: solid line, on: dotted line); without SO2, the efficiency is high at low temperatures (below 200 °C). There is no significant difference in the range of ~250-350 °C; however, the efficiency is increased when SO2 is added at high temperature.
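One commonly used simplified definition of N2 selectivity treats each N2O molecule as diverting two nitrogen atoms from N2: S_N2 (%) = [(NOx_in − NOx_out) − 2 × N2O_out] / (NOx_in − NOx_out) × 100. Definitions in the literature vary (some also include the NH3 balance), so the sketch below, with hypothetical concentrations, is illustrative only.

def n2_selectivity(nox_in_ppm, nox_out_ppm, n2o_out_ppm):
    # Simplified N2 selectivity; each N2O removes two N atoms from N2.
    converted = nox_in_ppm - nox_out_ppm
    return (converted - 2.0 * n2o_out_ppm) / converted * 100.0

# Hypothetical high-vanadium catalyst at 450 C: more N2O, lower selectivity.
print(f"N2 selectivity: {n2_selectivity(500.0, 60.0, 35.0):.1f}%")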
When SO 2 is present at low temperature, it is more readily adsorbed on the catalyst acid sites than ammonia. It blocks the acid sites, which may cause a decrease in activity at low temperatures (Fig. 6) [43]. In addition, in the presence of SO 2 , the amount of N 2 O generated may be reduced because the oxidation of NH 3 is suppressed: SO 2 adsorbs first on the Brønsted and Lewis acid sites, inhibiting the adsorption of NO and leading to the preferential adsorption of NH 3 . The adsorbed NH 3 promotes the SCR reaction and suppresses side reactions.
In commercial catalysts, tungsten and molybdenum are mainly used as co-catalysts for vanadium-based catalysts. Tungsten and molybdenum maintain the structural and thermal stability of the catalyst and have the advantage of resistance to sulfur. Although the choice between tungsten and molybdenum depends on the intended use, molybdenum has the limitation of generating N 2 O at high temperatures.
Analyzing the catalyst properties according to tungsten content (Fig. 7) at a constant vanadium content shows that as the tungsten content increases, the NOx removal efficiency is improved at low temperatures. When 10 wt% of tungsten is added, the catalysts show the highest efficiency across the entire temperature range, but this decreases sharply at 13 wt% tungsten. The amount of N 2 O emitted showed almost no change with tungsten content, and the N 2 selectivity calculated from the amount of N 2 O generated remained within ~ 80-87% at 450 °C. As the content of the co-catalyst WO 3 increased, the activity improved at low temperatures. However, at 13 wt%, the dispersibility is lowered, and catalytic activity is reduced due to aggregation and crystallization. Therefore, it is necessary to improve the dispersion properties of the main and co-catalysts. Table 3 lists the various compositions of vanadium-based SCR catalysts, including the main catalyst and support contents. The activity temperature window is defined as the region where more than 90% of NOx removal activity occurs for each synthesized catalyst, and specific results are summarized. Various components such as W, Mo, Ba, Ce, Mn, and Sb are added as co-catalysts to the vanadium-based SCR catalyst [44-52]. W and Mo are the most widely used commercial co-catalysts and provide thermal and structural stability and resistance to sulfur. In particular, W extends the working temperature of vanadium-based catalysts from the low- to the high-temperature range [53]; molybdenum has arsenic poisoning resistance but exhibits the drawback of generating N 2 O at high temperatures. Nevertheless, as low-temperature catalysts have recently been in the spotlight, the use of Mo as a co-catalyst is increasing [54]. This interest has also led to many studies on Mn and Ce, particularly the effect of Mn oxidation states (+1 to +4) on NOx removal activity. Mn shows high NOx removal characteristics even at low temperatures (~ 150-200 °C); however, in the presence of SO 2 , MnSO 4 is formed, and the catalyst acid sites are poisoned [55]. Furthermore, Ce is widely used as a co-catalyst because of its high oxygen storage capacity and easy oxygen storage and release via the Ce 4+ ↔ Ce 3+ redox shift [56-58]. Owing to the oxygen vacancies present on the surface, active oxygen chemically adsorbed on the surface moves rapidly in Ce-based catalysts and promotes the fast SCR reaction [5]. Zr expands the catalyst operating temperature range by inhibiting particle aggregation and improving acid site dispersibility, and it has high thermal stability and SO 2 resistance; however, its excessive addition promotes ammonia oxidation [59,60]. When Sb and Nd are added in small amounts, they prevent catalyst poisoning by SO 2 and water and promote the decomposition of ammonium bisulfate (ABS) [61-64]. As a NOx adsorbent, Ba is used as a co-catalyst to adsorb NOx at low temperatures [26]. In addition, much research has been conducted in which transition metals and rare earth metals are used as co-catalysts for SCR catalysts [65-71].
Effect of Vanadium oxide states
As a representative transition metal, vanadium adopts various valence states, such as +2, +3, +4, and +5, appearing as oxides such as V 2 O 3 , VO 2 , and V 2 O 5 [72]. Vanadium is mainly present in the +4 or +5 oxide state, but the rate of side reactions such as SO 2 oxidation and N 2 O generation, as well as the low-temperature catalytic efficiency, depends on the ratio of the oxidation states present [73]. Therefore, studies on pH control of the aqueous vanadium precursor solution, on changing the calcination temperature and holding time, and on using various precursors are of great interest [54,74,75].
Previous studies have shown that the vanadium-oxalate complex undergoes thermal decomposition to V 2 O 5 via +4 and +5 compounds at approximately 270 °C. Therefore, the amount of V 4+ can be controlled by changing the calcination time at 270 °C [76]. Inomata et al. found that the shorter the calcination time (at 270 °C), the higher the fraction of +4 oxides, yielding an amorphous catalyst with a greenish color. As the calcination time increases, most vanadium converts to the +5 form, resulting in a highly crystalline yellow catalyst. In addition, it was suggested that the NH 3 -SCR reaction rate was faster for bulk vanadium oxide, owing to its high redox cycling and Lewis acidity, than for V 2 O 5 dispersed on the support. Youn et al. explored a V 2 O 3 catalyst containing 5 wt% V and reported that when vanadium existed in the +3 oxidation state rather than the +4 or +5 states, the catalyst had high SCR efficiency across a wide temperature range and formed the least N 2 O, and therefore the N 2 selectivity was high. Additionally, defective V 2 O 5−x formed on the surface had a lower NH 3 -SCR reaction energy barrier than crystalline V 2 O 5 , showing that highly active vanadium catalysts can exist even at low temperatures [77].
Effect of Vanadium surface density
Inomata et al. reported that VOx exists as monomeric and oligomeric VOx units on the surface of the support, while crystallized vanadium shows the orthorhombic V 2 O 5 crystal form [76]. At a low surface density (< 2 V atoms/nm 2 ), vanadium exists as monomeric vanadyl without V-O-V bonds. As the vanadium content increases (2-8 V atoms/nm 2 ), it adopts V-O-V bridges and exists as oligomeric vanadyl. At high surface densities greater than 8 V atoms/nm 2 , it exists as crystallized V 2 O 5 nanoparticles (Fig. 8) [78]. The binding forms of vanadium can be confirmed through Raman spectroscopy (Fig. 9); the binding form changes depending on the vanadium content and the resulting surface density [78], and the catalytic efficiency therefore also changes [76].
He et al. suggested that when the vanadium content is less than 1.3 wt%, 80% or more of the vanadium is monomeric [79]. When the vanadium content is 1.3-3 wt%, the monomeric fraction decreases rapidly, and the polymeric fraction increases to 33%. According to their results, polymeric vanadyl species show higher NH 3 -SCR efficiency than monomeric species, owing to a shorter pathway for regenerating the catalytic redox couple and a lower energy barrier for the catalytic reaction cycle [79].
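To make these surface-density regimes concrete, the short sketch below estimates the vanadium surface density from the loading and the BET surface area and classifies the expected VOx speciation. The conversion formula is standard; the classification thresholds are the values quoted above, and the catalyst parameters in the example (the loadings and a 90 m2/g TiO2 support) are hypothetical illustrations rather than data from the cited studies.

```python
# Sketch: estimate vanadium surface density and classify likely VOx speciation.
# Thresholds follow the regimes quoted above (<2, 2-8, >8 V atoms/nm^2);
# catalyst parameters in the example are hypothetical.

N_A = 6.022e23      # Avogadro's number (1/mol)
M_V = 50.94         # molar mass of vanadium (g/mol)

def v_surface_density(v_wt_percent: float, bet_area_m2_per_g: float) -> float:
    """V atoms per nm^2 of support surface for a given loading and BET area."""
    v_atoms_per_g = (v_wt_percent / 100.0) / M_V * N_A   # V atoms per g catalyst
    area_nm2_per_g = bet_area_m2_per_g * 1e18            # m^2 -> nm^2
    return v_atoms_per_g / area_nm2_per_g

def classify_vox(density: float) -> str:
    if density < 2.0:
        return "monomeric vanadyl (no V-O-V bonds)"
    if density <= 8.0:
        return "oligomeric vanadyl (V-O-V bridges)"
    return "crystalline V2O5 nanoparticles"

for wt in (1.0, 3.0, 8.0):                 # hypothetical loadings
    d = v_surface_density(wt, 90.0)        # assume a 90 m^2/g TiO2 support
    print(f"{wt:.1f} wt% V -> {d:.2f} V/nm^2: {classify_vox(d)}")
```

With these assumed inputs, 1 wt% V lands at roughly 1.3 V/nm 2 (monomeric), which is consistent with the 1.3 wt% boundary reported by He et al. above.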
Dispersion of active catalytic materials
Because catalysis is a surface reaction, the active material must be dispersed on a porous support. The choice of support is determined by the need for high dispersibility to increase catalyst efficiency, together with high thermal stability, a high specific surface area, and mechanical properties suitable for the intended application [80,81]. Therefore, many researchers have used various dispersants to improve dispersibility [82-86]. It is important to finely disperse the catalytically active materials without aggregation in order to enhance the catalyst acid sites, achieve a high specific surface area, and prevent sintering at high temperatures [87-89]. In addition, significant research has been conducted on improving catalyst dispersibility using supports with abundant functional groups [90] and on achieving high efficiency and durability by dispersing single-atom catalysts on a support [91].
Nanocomposite selection
During synthesis, precursor materials are preferentially reduced to form small particle clusters, which can aggregate into larger stable particles. Ye et al. therefore obtained nano-dispersed catalyst materials on the surface by synthesizing the support at the particle-formation stage, inhibiting the growth of large particles and thus achieving high NOx reduction efficiency with a smaller catalyst content [92,93]. Using commercial reduced graphene oxide (rGO) as a support, an MnCe/rGO composite was prepared in which Mn and Ce were nano-dispersed. The catalytic efficiency was increased owing to the high specific surface area achieved without aggregation of the active material, and the composite was sufficiently moldable to be synthesized even as a 1-inch SCR catalyst [94]. In addition, to retain both the high specific surface area of rGO and the abundant oxygen functional groups of GO, a catalyst was synthesized using a GO-r support, in which the GO was thermally reduced after the active material was supported on its surface [8]. Next, surface-treated graphene was synthesized using N-doped graphene with nitrogen functional groups, which have higher thermal stability than oxygen groups [95]. The N-rGO support exhibited the highest dispersion and the strongest inhibition of particle aggregation. Because N-rGO has an appropriate amount of oxygen and nitrogen functional groups on its surface, it demonstrates excellent thermal stability at high temperatures and efficient NOx removal at low temperatures. Surface oxygen functional groups on graphene act as anchoring sites, which play an important role in preventing the aggregation of catalytically active materials and in dispersing nanosized particles on the support.
In addition to graphene, catalysts have been studied in which vanadium and tungsten active materials are dispersed on an oxygenated carbon nanotube (O-CNT) support, whose surface oxygen groups are formed through acid treatment, combining good dispersibility with the thermal stability of a TiO 2 support bearing nitrogen functional groups [88,96]. Furthermore, hexagonal boron nitride (h-BN), which has a high melting point of 3000 °C and excellent thermal stability, was used as a support after a porous surface structure was formed by catalytic etching with a transition metal. When the active material is dispersed in this porous structure, the phase is thermally stabilized with high dispersibility, and phase changes and agglomeration are suppressed [97]. As a result, the catalytically active material could be nano-dispersed down to 10 nm by utilizing the surface-defect anchoring sites.
Effect of surface modification on catalytic activity (structure and morphology)
Catalytic performance is determined by the composition of the active catalytic materials, and varying this composition can improve catalytic efficiency. However, because NH 3 slip must be minimized and SCO catalysts remain underdeveloped, it is difficult to use more than a certain amount of active catalytic material, which inherently limits the efficiency attainable through composition alone. To enhance catalytic performance despite relatively low contents of active materials, various surface-modification studies, such as functionalization and structural deformation, have been conducted. Phase transitions in catalyst structures, such as hollow and Keggin structures [98], are also under consideration. In the previous literature, active materials were readily deposited on modified and/or functionalized supports, nanotubes, APT (hydroxyapatite), controlled pores/defects, and exchanged sulfated species [99]. Deliberate surface sulfation can increase the active oxygen sites, leading to enhanced NH 3 chemisorption [100,101]. Zhang et al. [102] reported on the influence of sulfation in a heterogeneous system with an iron-based catalyst. The enhancement of acid sites (Brønsted acidity and Lewis acid strength) and NH 3 adsorption shows positive effects, not only increasing efficiency at relatively high temperature but also improving the resistance to SO 2 .
In addition, the tolerance to H 2 O and SO 2 is improved because the SO 4 2− species formed by sulfation act as acid sites that increase the amount of adsorbed NH 3 . Similarly, the catalyst surface shows the aforementioned effects after modification by acidification using acids such as HCl, HNO 3 , H 3 PO 4 , and H 2 SO 4 [103,104]. Chenglong et al. [103] reported an improvement in catalytic activity, NOx conversion, and N 2 selectivity by acidification in the following order: H 2 SO 4 > H 3 PO 4 > HNO 3 > HCl. In particular, the surface acidity was significantly improved as the Brønsted acid sites (M-O-NH 4 + ) increased, which naturally improved reducibility and surface chemical adsorption. Other studies have used H 4 P 2 O 7 to synthesize a nano-hollow structure, increasing the surface oxygen storage capacity and acid sites [104]. In other work [105], the structure, morphology, and size of the pores influenced the amount of NH 4 + adsorbed and hence the N 2 selectivity. In addition, a study was conducted on the role of pore diffusion in determining the active site in the NH 3 -SCR reaction, and enhanced activity was investigated by controlling textural properties such as specific surface area, pore size, and structure; this work also explained how particles form the active sites, along with the correlation between pore diffusion and actual activation energies [106]. The pore size in TiO 2 has a significant impact on the active catalytic species [107], and the influence of meso- and micropores has also been studied.
Consequently, it has been reported that the catalytic activity and physicochemical properties change as the vanadium bonding is altered by pore size and structure, and mesopores are suggested to be the most suitable for catalytic properties. Hence, modification with silicon (Si) has been investigated to enhance H 2 O resistance and improve catalytic activity by controlling the pore textural properties [108]. Suitable pore size and structure positively affect the dispersibility and acidity of catalytically active materials such as vanadium and tungsten by inhibiting the loss of specific surface area and improving hydrothermal stability. A larger mesopore volume can host more surface oxygen groups, naturally leading to improved catalytic performance. Accordingly, research is being conducted to control pores through the use of carbon (C)-based porous materials [109,110]. Various researchers [111] have studied carbon-based materials, and these studies have mainly focused on enhancing catalytic performance through chemical surface modification. However, the formation of N 2 O is also promoted, raising the possibility of re-oxidation to NOx [112].
Activated carbon is used to modify the chemical surface to enhance catalytic performance. Incorporating nitrogen species into carbon enhances the catalytic activity and promotes NO 2 formation by facilitating chemisorption of surface nitrogen species, which improves catalytic performance [113-119]. The de-NOx efficiency of catalysts with carbon is expected to increase owing to the surface porosity. Most studies of surface functionalization have focused on adding oxygen or nitrogen via chemical or physical methods. Nitrogen functionalities can act as adsorption sites for NOx, introduced through viscose-based activated carbon fibers (VACF), oxygen plasma, or nitric acid modification [120]. Among these, oxygen plasma treatment can increase the oxygen functional groups on the surface and appears to improve the pore distribution, the dispersion of the active catalytic material, and the catalytic properties. Doping is also widely used to improve catalytic activity. S-doping of the catalyst improved its redox properties, which was beneficial to the catalytic performance: Zhang et al. [121] reported that S-doping provides more NH 3 adsorption species and increases the oxygen vacancies, leading to excellent activity. Royer et al. [122] reported the importance of structural and textural properties, explaining that different crystal morphologies and structures show significant differences in surface area and pore volume; particles were most highly dispersed on hollow-sphere and monoclinic structures among the diverse support types studied, including hollow-sphere, star, rod, mesoporous, and crystalline structures. Additionally, the TiO 2 crystalline phase can be controlled using the cationic surfactant cetyltrimethylammonium bromide (CTAB), while the structure and valence state of the active phase are controlled by changing the calcination temperature [123]. In conclusion, the anatase TiO 2 phase is more conducive to electron transfer.
Commercial SCR catalyst monolith forms
As shown in Fig. 10, SCR catalysts are divided into three types [124]: honeycomb monolith, plate, and corrugated. The honeycomb type is an extruded ceramic structure that is easily regenerated, but it takes a long time to manufacture, and the catalyst is heavy. The plate type is based on a metallic substrate; it has high thermal and mechanical durability, making it suitable for fly-ash-laden gases (resisting erosion with a low pressure drop), but it has a low specific surface area. The corrugated type is based on a glass-fiber substrate; it has a large specific surface area and a short manufacturing period but poor durability. Catalysts normally use titanium oxide (TiO 2 ), aluminum oxide (Al 2 O 3 ), zeolites, and carbon as supports. In general, TiO 2 is used as the commercial catalyst support, and zeolites are used depending on the exhaust gas conditions in mobile sources.
Reducing SO 2 -SO 3 conversion and NH 3 slip (NH 3 oxidation)
The main fuels for coal-fired electrical power plants are bituminous coal or biomass-blended coal. For sulfur removal, sodium-, calcium-, and magnesium-based absorbents are used. Ammonia is used as the reducing agent in SCR, but it is oxidized at high temperatures (> 380 °C). Ammonia slip in stationary sources, i.e., coal-fired and chemical power plants, causes equipment corrosion and negative side reactions. Therefore, the SCR system of a stationary source is adjusted to an NH 3 /NOx ratio that alleviates ammonia slip; however, this leads to the serious problem of reduced denitration efficiency. The commercial V-W(Mo)/Ti catalyst is therefore used with SO 2 -SO 3 conversion in mind. SO 3 generated in the SCR system of a coal-fired boiler and in the wet electrostatic precipitator reacts with ammonia, corroding the SCR system and deactivating the catalyst. Equation (2) shows that the SCR catalyst converts the SO 2 from coal-fired sulfur to SO 3 , which damages equipment when it forms sulfuric acid on reaction with the water vapor present in the combustion gas and forms gypsum with the CaO in fly-ash materials. Ammonium sulfate (AS) and ammonium bisulfate (ABS) are generated by the reaction of SO 3 with excess ammonia, i.e., slipped NH 3 . AS is deposited in catalyst pores and plant equipment, causing corrosion, clogging, and performance degradation. Catalyst-application engineers and manufacturers generally require a low SO 2 -SO 3 oxidation rate (less than 2%) [125-131]. Vanadium provides the active site of the SCR reaction, but it also causes oxidation of SO 2 to SO 3 . Therefore, many researchers have tried to control the SO 2 -SO 3 conversion, for example through the use of low-sulfur coal and absorbents (magnesium, calcium) and by improving the desulfurization system. In addition, attempts have been made to suppress this reaction by changing the composition of commercial SCR catalysts and modifying the surface.
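For reference, the standard SCR reaction and the sulfur side reactions discussed here can be written with textbook stoichiometries as follows; the SO 2 oxidation step is presumably what Equation (2) of the original denotes:

$$ 4\,\mathrm{NH_3} + 4\,\mathrm{NO} + \mathrm{O_2} \rightarrow 4\,\mathrm{N_2} + 6\,\mathrm{H_2O} $$

$$ \mathrm{SO_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{SO_3} $$

$$ \mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4} $$

$$ \mathrm{SO_3} + 2\,\mathrm{NH_3} + \mathrm{H_2O} \rightarrow (\mathrm{NH_4})_2\mathrm{SO_4}\ \ \text{(AS)} $$

$$ \mathrm{SO_3} + \mathrm{NH_3} + \mathrm{H_2O} \rightarrow \mathrm{NH_4HSO_4}\ \ \text{(ABS)} $$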
The SCR reaction occurs on the surface, whereas SO 2 -SO 3 conversion is a diffusion-based reaction (see Fig. 11). The variables that affect SO 2 -SO 3 conversion are the catalyst composition, temperature [132], catalyst geometry (pitch, open area, wall thickness) (see Fig. 12) [124,133-136], gas composition, and operating conditions (see Fig. 13). Improved durability can therefore be obtained by inhibiting oxidation on the surface through the prevention of SO 2 adsorption. The results indicated a linear dependence on catalyst wall thickness and channel geometry [124,128,135]. For this reason, corrugated-type catalysts show lower SO 2 -SO 3 conversion and are better in this respect than honeycomb- and plate-type catalysts. The SO 2 -SO 3 conversion rate also varied with the velocity of the gas passing through the catalyst: as mentioned above, easier diffusion into the catalyst wall at a lower area velocity (AV) yields a higher conversion [134].
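As a reference for the geometric variables above, the area velocity (AV) and space velocity (SV) of a monolith are conventionally defined as follows; these are standard SCR design quantities rather than expressions taken from the cited works:

$$ \mathrm{AV} = \frac{Q}{A_{\mathrm{geo}}}\ \ [\mathrm{m\,h^{-1}}], \qquad \mathrm{SV} = \frac{Q}{V_{\mathrm{cat}}}\ \ [\mathrm{h^{-1}}] $$

where $Q$ is the volumetric flue-gas flow rate, $A_{\mathrm{geo}}$ is the geometric (wetted) surface area of the monolith channels, and $V_{\mathrm{cat}}$ is the catalyst volume. A lower AV corresponds to a longer effective contact time per unit of catalyst surface.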
However, as the rationale for catalyst choice cannot be limited to a simple SO 2 -SO 3 conversion property, the optimal catalyst monolith is determined by the exhaust gas conditions for each site. Consequently, further research is required for reducing the SO 2 -SO 3 conversion rate.
Studies on the SO 2 -SO 3 conversion by Saltsburg et al. [137] and Morikawa et al. [138] have been in progress since 1979. Scheffknecht et al. [126] and Beretta et al. [139] studied the influence of each factor mentioned above. Scheffknecht et al. [126] reported the effects on SO 2 -SO 3 conversion of the catalyst components (Cu, V, W), the concentrations of SO 2 and H 2 O, and the flue gas velocity. The SO 2 -SO 3 conversion increased with increasing SO 2 concentration [140]; however, a non-linear trend was observed beyond the saturation point of the SO 2 -SO 3 conversion [140]. The SO 2 -SO 3 conversion can also be described by the Arrhenius and Eyring equations, which show that the conversion rate increases with increasing temperature. Within the range studied, H 2 O did not affect the SO 2 -SO 3 conversion reaction, although this finding was contradicted by other reports, which found that the addition of H 2 O influenced the generation of SO 3 by inhibiting SO 3 adsorption [128,129]. Along with these factors in the SCR process, Yang et al. [133] evaluated various flue gas compositions, such as O 2 , NH 3 , NOx, SOx, H 2 O, and CO 2 [128,129], as well as the catalyst components TiO 2 , V 2 O 5 , WO 3 , Al 2 O 3 , BaO, and SiO 2 . According to their report, the gas composition directly influences the SO 2 -SO 3 conversion. NH 3 especially affects the SO 2 -SO 3 conversion rate: when the concentration of NH 3 exceeds the NO content, the SO 2 -SO 3 conversion rate continues to decrease. Wang et al. explained that the presence of NH 3 inhibits the formation of SO 3 , as the SO 2 -SO 3 oxidation can be controlled by surface oxygen reacting readily with NH 3 [141]. The concentration of O 2 in the exhaust gas does not affect the SO 2 -SO 3 conversion [130]. Furthermore, the vanadium content directly influences the SO 2 -SO 3 conversion, and the vanadium surface coverage and density are related to SO 3 oxidation [130,142]; that is, SO 2 -SO 3 conversion involves only a single surface vanadate site. Commercial catalysts use V-W/Ti components, i.e., a vanadium-based catalyst with a co-catalyst, to improve the SO 2 -SO 3 conversion behavior. Thus, many studies have been conducted on SO 2 -SO 3 conversion, with the main approaches being the suppression of SO 3 through the use of a co-catalyst and an optimized support. In general, commercial vanadium-based catalysts use tungsten oxide (WO 3 ) and molybdenum oxide (MoO 3 ) as promoters. Many studies have shown that WO 3 inhibits SO 2 -SO 3 oxidation, including the report by Ismagilov et al. [143], and the rate is affected by the tungsten content [93,143]. However, based on other reports, a consensus has not yet been reached on the influence of tungsten content on the SO 2 -SO 3 conversion rate [133,142]. Although molybdenum has the desirable property of resistance to SO 2 , naturally restricting the SO 2 -SO 3 conversion rate, other catalysts have been used more widely in industry than molybdenum [144]. Even with this composition, however, because the SO 2 -SO 3 conversion rate remains a limitation, studies are being conducted on the effects of metallic bonds and on diversifying the catalytic composition by incorporating rare earth and transition metals (Cu, Ba, Mg, Fe, Ge, Zn, Ta, and Y). Wachs et al. [130,131] investigated these effects, as did studies of iron oxide loading [145]: with an increase in the Fe 2 O 3 loading, the SO 2 -SO 3 conversion also increased.
However, studies have reported that excessive iron loading causes a nonuniform distribution of active components on the catalytic surface and an increased SO 2 -SO 3 conversion [145]. Similarly to other reports, the addition of barium (Ba) as a co-catalyst showed low SO 2 -SO 3 conversion [143]; Ba species are an important factor in the SO 2 -SO 3 conversion rate, affecting the surface acidity. Cerium (Ce) has sulfur resistance and inhibits SO 2 -SO 3 oxidation; it is therefore generally used in Mn-based catalysts that are vulnerable to sulfur [146]. Niobium (Nb) showed the greatest resistance to SO 2 and inhibited the surface reaction with SO 2 ; accordingly, a decrease in SO 2 oxidation at temperatures below 350 °C was demonstrated using Nb [93,130,147]. Germanium (Ge) and zinc (Zn) are effective promoters for retarding the SO 2 -SO 3 conversion [142]. In addition, research on Cu-containing catalysts has shown that they have the disadvantage of increasing the SO 2 -SO 3 conversion, although they have the advantage of improving the mercury oxidation rate [122]. Xiang et al. [129] studied SO 3 generation, and Yang et al. [133] focused on the factors governing SO 2 -SO 3 conversion. Furthermore, Hitachi Zosen reported that SO 3 reacted with NH 3 to generate ABS and AS, as shown in Fig. 14.
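The Arrhenius and Eyring equations invoked above for the temperature dependence of the SO 2 -SO 3 conversion take their usual forms, shown here only for reference (standard rate expressions, not specific to the cited works):

$$ k = A\,e^{-E_a/RT} \qquad \text{(Arrhenius)} $$

$$ k = \frac{k_B T}{h}\,e^{-\Delta G^{\ddagger}/RT} \qquad \text{(Eyring)} $$

Both rate constants increase monotonically with temperature $T$, consistent with the observed increase in the conversion rate.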
NH 3 oxidation generates NO and N 2 O and must therefore be suppressed. NH 3 slip is generally limited to 2 ppm or less [148]. The basic method for inhibiting NH 3 slip is to reduce the NH 3 /NO ratio; however, this directly reduces the de-NOx efficiency, and therefore its application is difficult. For this reason, research has focused on selective catalytic oxidation (SCO) technology and on the removal of residual NH 3 after the reaction. The use of co-catalysts to inhibit NH 3 oxidation is also related to slip-prediction models. However, NH 3 can also be oxidized by the major substances in the catalyst, such as vanadium, tungsten, and molybdenum; therefore, this factor should also be considered [148,149]. As the NH 3 oxidation reaction is connected to a decrease in N 2 selectivity and inhibition of the NH 3 -SCR reaction, it causes a decrease in catalyst activity [150]. The NH 3 oxidation reaction proceeds via the following reaction pathways.
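The reaction equations referenced below as Eqs. 7-9 are not reproduced in this text; the standard ammonia oxidation pathways to which they presumably correspond are:

$$ 4\,\mathrm{NH_3} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{N_2} + 6\,\mathrm{H_2O} \quad (7) $$

$$ 2\,\mathrm{NH_3} + 2\,\mathrm{O_2} \rightarrow \mathrm{N_2O} + 3\,\mathrm{H_2O} \quad (8) $$

$$ 4\,\mathrm{NH_3} + 5\,\mathrm{O_2} \rightarrow 4\,\mathrm{NO} + 6\,\mathrm{H_2O} \quad (9) $$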
In the oxidation of NH 3 to N 2 and H 2 O via the indirect routes shown in Eqs. 7-9, by-products such as NO and N 2 O are generated. Such reactions must be suppressed to prevent activity degradation, as discussed above [151]. Epling et al. [152] and Blanco [153] investigated whether NH 3 oxidation proceeds indirectly, via the formation of NO [152] or N 2 O (see Fig. 15) [153], or directly to N 2 . Indeed, to inhibit NH 3 oxidation, many studies have examined the mechanisms of oxidation and adsorption, including the doping effects of W [121,126,133], Cu [113,114], Ce [113,120], Fe [113,120], and Ru [122,127]. These studies confirmed the kinetic scheme with respect to the NH 3 /NOx ratio, temperature, vanadium loading (see Fig. 16), and NH 3 slip. The direct factors are the intrinsic rate of NH 3 oxidation and the catalyst volume; the catalyst monolith thickness and the diffusion of NH 3 were also studied. Other works reported that NH 3 oxidation governs the N 2 selectivity: ammonia oxidation increases with increasing reaction temperature and vanadium content, and the N 2 selectivity decreases rapidly. Grange et al. [154] examined the relationship between NH 3 oxidation and the SCR catalyst through a DRIFT study and confirmed the connection with the V=O overtone band in DRIFTS, in particular with the SCR reaction temperature, the de-NOx efficiency, and oxidation over the vanadium-based catalyst (see Fig. 17). Furthermore, Wang et al. [155] reported the relations between ammonia storage and slip and the gas hourly space velocity (GHSV) and temperature, among other factors; GHSV affects NH 3 slip by influencing the transient NH 3 reaction and the heating rate of the catalyst. Epling et al. [152] reported that Cu allows easier control of NH 3 oxidation than Fe, as its storage capacity for NH 3 and NOx is higher than that of the Fe catalyst. In a V-Mo-based catalyst using Ru [156,157], Ru inhibits NH 3 slip at 350 °C by increasing the NH 3 decomposition efficiency and improves the N 2 selectivity to 97%; additionally, depending on the presence of SO 2 , NH 3 oxidation was partially inhibited. Moreover, a study on NH 3 oxidation inhibition was conducted by doping with CaO, a known catalyst poison. The addition of CaO promotes NO formation by NH 3 oxidation while inhibiting N 2 O formation, thereby changing the NH 3 reaction pathway; as a result, CaO doping improves the de-NOx efficiency and N 2 selectivity [150]. However, as CaO is generally considered a poison for catalysts, more complementary research is necessary. In summary, it is essential to control the vanadium content and the amount of NH 3 adsorbed [154,158], and it is important to consider the influence of the exhaust gas and flow rate [155], such as the NH 3 /NOx ratio, to inhibit NH 3 oxidation. However, ongoing research will need to focus on minimizing NH 3 slip and developing SCO catalysts, given the current lack of related work.
Conclusions and outlook
Vanadium-based catalysts with W or Mo co-catalysts and TiO 2 supports have been used as commercial NH 3 -SCR catalysts without any major changes to the composition for many years, despite research into new compositions. However, air quality standards are being strengthened worldwide, which has renewed interest in the development of SCR catalysts with high NO x reduction efficiencies and excellent poisoning resistance at low operating temperatures. This review discusses the current status of research on commercial vanadium-based SCR catalysts used to reduce NO x emissions from stationary sources worldwide.
The diverse operating conditions of SCR catalysts in actual industrial sites necessitate the tailoring of catalyst properties for a given application. For example, catalysts with physicochemical resistance to different poisoning substances (mercury, alkali metals, particulate matter, sulfur, etc.) over a wide range of operating temperatures are required; therefore, several methods have been developed to design suitable catalysts based on the intended application area. In addition, there are several methods for increasing the efficiency of SCR catalysts, including the use of different co-catalysts (W, Mo, Ba, Ce, Sb, Fe, Mn, etc.), supports (TiO 2 , Al 2 O 3 , CeO 2 , activated carbon, etc.), and vanadium species (polymeric, monomeric, and crystalline V 2 O 5 ); tailoring the surface density, support application (carbon-based materials, two-dimensional materials), and particle structure; and surface modification. In addition, although much research has been conducted on powder-type catalysts, this review discussed several other monolith catalysts (honeycomb-type, plate, corrugated, etc.) that may help to meet the commercial catalyst requirements for a given application. Catalyst monolith forms can be optimized for the specific operating and exhaust conditions of a given application; therefore, different catalyst morphologies are used depending on the place of use. We discussed the correlation among the reaction, geometry, and morphology of monolith catalysts to improve their performance and properties under variable operating conditions. Environmental regulations are set to become continually stricter in the future. Therefore, improved methods of controlling the amount of N 2 O and unburned NH 3 emitted by SCR side reactions will be required. Currently, as fossil fuels are used in industrial plants, NH 3 , an unreacted reducing agent, is mainly emitted along with NOx, CO, VOCs, UHC, and particulate matter. Therefore, there is a need to develop oxidation-reduction catalysts or module systems that are capable of reducing these pollutants simultaneously in the SCR facility. In addition, as more governments and enterprises seek to achieve carbon neutrality, new combustion technologies based on low-carbon or non-carbon fuels such as ammonia or hydrogen with conventional LNG are being applied to replace fossil fuels. These new fuels change the composition of the flue gas; however, thermal NOx is still emitted in large quantities from combustion reactions. Consequently, continuous research is needed to ensure that SCR catalyst technologies keep up with the changing field of fuel combustion. Importantly, under co-firing conditions, pollutants such as CO and unreacted NH 3 may be generated due to incomplete combustion. Therefore, in the future, it will be necessary not only to improve the performance of SCR catalysts but also to keep pace with new developments by designing functional reduction catalysts that can simultaneously reduce various pollutants together with NOx.
Nutrition and the Risk of Alzheimer's Disease
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that accounts for the major cause of dementia, and the increasing worldwide prevalence of AD is a major public health concern. A growing body of epidemiological evidence suggests that diet and nutrition might be important modifiable risk factors for AD. Dietary supplementation of antioxidants, B vitamins, polyphenols, and polyunsaturated fatty acids is beneficial to AD, and consumption of fish, fruits, vegetables, coffee, and light-to-moderate alcohol reduces the risk of AD. However, many of the results from randomized controlled trials contradict those of epidemiological studies. Dietary patterns summarizing an overall diet have gained momentum in recent years. Adherence to a healthy diet, the Japanese diet, and the Mediterranean diet is associated with a lower risk of AD. This paper focuses on the evidence linking nutrients, foods, and dietary patterns to AD.
Introduction
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that accounts for the major cause of dementia in the world [1,2]. The number of AD cases is projected to reach 106.8 million worldwide by the year 2050; the disease is therefore a growing public health concern with a major socioeconomic burden [3].
Much attention has been paid to disease-modifying factors and risk factors for AD [4]. Cognitive engagement and physical activities have been associated with a decreased risk of AD, while diabetes, the epsilon 4 allele of the apolipoprotein E gene (APOE 4), smoking, and depression have been associated with an increased risk of AD [5]. In recent years, there has been increasing evidence supporting the role of nutrition in AD [6-8]. A number of dietary factors such as antioxidants, vitamins, polyphenols, and fish have been reported to decrease the risk of AD, while saturated fatty acids, high-calorie intake, and excess alcohol consumption have been identified as risk factors [9]. Dietary patterns, which better reflect the complexity of diet, have emerged in recent years as a way to examine the relationship between diet and AD [10]. In this paper, we investigate the evidence linking nutrients, foods, beverages, and dietary patterns to the risk of AD.
AD Is Associated with Both Obesity and Malnutrition
Obesity and overweight seem to be associated with AD [11-13]. However, the evidence relating obesity, measured with the body mass index (BMI), to AD is conflicting. Obesity (BMI > 30) in midlife has been found to increase the risk of AD, while late-life obesity was found to reduce the risk of AD [11]. Therefore, manipulation of adiposity may provide a means to prevent AD [14]. Malnutrition and weight loss are frequent complications of AD; the mean prevalence of malnutrition in AD patients living at home is 5%, as reported by Guigoz et al. [15]. Patients with AD had a worse nutritional status compared to that of controls [16], and a lower baseline nutritional status was reported to indicate the progression of AD [17]. In addition, weight loss was reported to predict rapid cognitive decline in AD patients [18], and treatment of weight loss and malnutrition may also be important in AD patients.
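For reference, the BMI cutoff quoted above follows the standard definition (the WHO convention, not specific to the cited studies):

$$ \mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2}, \qquad \mathrm{BMI} > 30\ \mathrm{kg/m^2}\ \Rightarrow\ \text{obesity} $$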
The Effects of Nutrients on the Risk of AD
Many nutrients, such as antioxidants, vitamins, fat, and carbohydrates, can affect the risk of AD. Although the mechanisms of these nutrients in AD are not clear, reducing oxidative stress and amyloid beta-peptide (Aβ) accumulation is considered to play a role in the process of AD [19,20].
3.1. Antioxidants. Oxidative stress, the undue oxidation of biomolecules leading to cellular damage, has prompted many studies of antioxidants in the prevention of AD [21].
Vitamin A and β-Carotene.
Vitamin A and β-carotene could be key molecules for the prevention and therapy of AD, owing to their ability to inhibit the formation of both Aβ oligomers and fibrils [20]. It has been shown in vitro that vitamin A and β-carotene have anti-oligomerization effects on Aβ [22]. Low serum and plasma concentrations of vitamin A and β-carotene have been seen in AD patients [23,24], and a higher β-carotene plasma level was associated with better memory performance [25]. Data on the supplementation of vitamin A alone in AD were not available.
Vitamin C.
Vitamin C has been proven to reduce Aβ oligomer formation and oxidative stress in vitro and in vivo [26,27]. Data from cohort studies on the supplementation effect of vitamin C on AD are conflicting. A prospective study (n = 980) evaluating the relationship between 4 years of vitamin C and vitamin E intake and the incidence of AD showed no difference in the incidence of AD during the 4-year follow-up [28], and the same relationship was found in another prospective study (n = 5395) showing that vitamin C intake was not associated with AD risk [29]. However, results from mostly published prospective observational studies (n = 4740) suggested that the combined use of vitamin C and vitamin E for at least 3 years was associated with a reduction in AD prevalence and incidence [30]. Overall, there is a large body of evidence that maintaining healthy vitamin C levels can have a protective function against AD, but avoiding vitamin C deficiency is likely to be more beneficial than taking supplements on top of a normal, healthy diet [31].
Vitamin E.
Vitamin E is a lipid-soluble antioxidant that has been found to confer neuroprotection by inhibiting oxidative stress [32-34] and scavenging Aβ-associated free radicals [35]. Compared to cognitively normal subjects, AD and mild cognitive impairment (MCI) patients had lower levels of total tocopherols, total tocotrienols, and total vitamin E [36]. When each vitamin E form was considered alone, intake of α-tocopherols and γ-tocopherols was associated with a slower rate of cognitive decline [37]. However, in a double-blind, randomized controlled study with 769 subjects, there were no significant differences in the probability of progression to AD in the vitamin E group compared to the placebo group [38]. There were limitations to the study; for example, the forms of vitamin E were not clarified. In addition, the composition of the vitamin E supplement might not reflect the actual composition in the diet. At present, there is no reliable evidence of the efficacy of vitamin E in the prevention or treatment of people with AD; thus, more research is needed [39].
3.1.4. Selenium. Current knowledge provides no evidence of a role of selenium (Se) in the treatment of AD but allows speculation on a potential preventive relevance [40], and selenium has been reported to play an important role in the antioxidative defense [41,42]. AD patients showed a significant lower Se level in plasma, erythrocytes, and nails when compared to controls [43]. Several interventional trials demonstrated that supplementations of selenium-containing mixtures improved cognition [44][45][46]; however, as the authors did not specify the form of Se, the validity of the interventional trials was limited. In addition, the window of selenium's biological efficacy is a narrow one. The relationship between Se supplementation and AD requires confirmation by randomized trials.
Polyphenols.
Polyphenols are natural antioxidants that provide protective effects against AD through a variety of biological actions, such as interaction with transition metals, inactivation of free radicals, inhibition of the inflammatory response, modulation of the activity of different enzymes, and effects on intracellular signaling pathways and gene expression [47-49]. Several animal studies have demonstrated that polyphenols inhibited Aβ formation and attenuated cognitive deterioration [50-54]. Data from a randomized, double-blind controlled clinical trial of polyphenol supplementation in 100 subjects showed that polyphenols contained in antioxidant beverages might benefit AD patients by decreasing their homocysteine concentrations [55].
B Vitamins (Folic Acid, Vitamin B6, and Vitamin B12).
B vitamins might protect against AD by inhibiting oxidative stress and lowering the concentration of homocysteine [56,57]. Vitamin B6 has been reported to inhibit oxidative stress in AD [56]. High concentrations of homocysteine have been linked to an increased risk of AD [58-60], and homocysteine was significantly elevated in AD patients [61]; high-dose supplementation of vitamin B6, B12, and folate lowers plasma homocysteine concentrations in AD patients [62], and homocysteine-lowering treatment might be a therapeutic target for AD.
In a cohort of 816 subjects, low serum folate concentrations were reported to increase the risk of AD [59], and increased dietary intake of folate decreased the risk of AD [72,73]. A meta-analysis of 9 trials of folic acid supplements versus placebo in 2,835 participants suggested that folic acid, with or without other B vitamins, had no effect on cognitive function within 3 years of the start of treatment [74]. In a randomized controlled trial of homocysteine-lowering treatment with B vitamins, 140 subjects with mild-to-moderate AD or vascular dementia were assigned to take 1 mg methylcobalamin and 5 mg folic acid, or placebo, once daily for 24 months; there was no significant group difference in changes in any of the neuropsychological scores [75]. However, in another randomized controlled trial, 266 participants with MCI were randomly assigned to receive a daily dose of 0.8 mg folic acid, 0.5 mg vitamin B12, and 20 mg vitamin B6, or placebo, for 2 years; the mean plasma homocysteine concentration was 30% lower in those treated with B vitamins relative to placebo, and there was a significant benefit of B vitamin treatment in global cognition, episodic memory, and semantic memory among participants with baseline homocysteine above the median (11.3 µmol/L) [76]. The reasons for the discrepant findings on the effect of B vitamins on AD and cognition might be the nonuniform choice of regimen, the different pathological courses of the patients, and the diverse outcome measures. Thus, the available evidence is, so far, insufficient to draw a definitive conclusion on the association of B vitamins with cognitive decline [77].
Vitamin D. Vitamin D might have little association with Aβ mechanisms, and its potential association with AD might involve other pathways, such as antioxidative, vascular, anti-inflammatory, or metabolic pathways [78]. Genetic studies have provided the opportunity to determine that vitamin D is associated with AD risk [79,80].
A meta-analysis of 10 studies showed that AD cases had lower serum vitamin D concentrations than matched controls [81]. In a study with 1,604 men, little evidence of an association between a lower 25-hydroxyvitamin D level and cognitive function was found [82]; however, data from another large population-based study with 5,596 community-dwelling women showed that women with inadequate vitamin D intakes had a lower mean Pfeiffer Short Portable Mental State Questionnaire (SPMSQ) score compared to women with recommended weekly vitamin D dietary intakes [83]. The cross-sectional association between vitamin D and cognition strengthens the hypothesis that correcting hypovitaminosis D among older adults could prevent cognitive decline, but it does not permit the establishment of a cause-and-effect link. Randomized controlled trials testing vitamin D supplements versus placebo should be the next step [84].
Metals.
Dysfunctional homeostasis of transition metals is believed to play a role in the pathogenesis of AD by forming reactive species through metal-amyloid complexes [85,86]. Modulating metals has been proposed as a therapeutic strategy for AD [87]; bivalent cation chelators such as clioquinol and its later derivatives are being developed as novel AD drugs [88].
Copper.
Copper is essential for life, but in excess can be toxic. High dietary intake of copper in conjunction with a diet high in saturated and transfats was reported to be associated with cognitive decline [89,90].
A meta-analysis of 17 studies with 1425 subjects showed that AD patients have higher levels of serum copper than controls [91]. Copper dysfunction is thought to play a role in AD pathology [92]; however, data from a prospective, randomized, placebo-controlled trial with 68 subjects showed that oral copper supplementation had neither a detrimental nor a promoting effect on the progression of AD [93].
Iron.
Iron mediates oxidative stress in AD, and an imbalance in iron homeostasis is thought to be a precursor to AD [94,95]. Diets excessive in Fe together with a high intake of saturated fatty acids have been recommended to be avoided in the elderly [89]. However, iron supplementation has been reported to improve attention and concentration irrespective of baseline iron status in older children and adults [96].
Zinc.
Zinc supplementation was found to reduce both Aβ and tau pathologies in the hippocampus and to delay hippocampus-dependent memory deficits in an AD mouse model [97]. Zinc deficiency was reported to be associated with cognitive loss in AD patients [98].
3.5. Fats. Different consumption levels of the major specific fat types, rather than total fat intake itself, appear to influence cognitive aging. Higher monounsaturated fatty acid intake was related to better cognitive function, while higher saturated fatty acid intake was associated with worse cognitive function. Total fat, polyunsaturated fatty acid, and transfat intakes were not associated with cognition changes [99].

Monounsaturated Fatty Acids (MUFA). Monounsaturated fatty acids have been reported to be beneficial to cognitive function [100,101], and derivatives of MUFA, including low-molecular-weight phenols, were reported to have antioxidant effects [102]. Data from a prospective study suggested that higher intake of monounsaturated fatty acids is associated with less cognitive decline [103].
Polyunsaturated Fatty Acids (Omega-3 Polyunsaturated Fatty Acids). Current evidence suggests that an elevated intake of polyunsaturated fatty acids might be beneficial to AD [104-106]. Dietary supplementation of omega-3 polyunsaturated fatty acids was reported to affect the expression of genes that might influence inflammatory processes [107]; however, the protective effects might be limited to APOE epsilon4 non-carriers [108]. Docosahexaenoic acid (DHA), the main form of omega-3 fatty acids, has been demonstrated to reduce Aβ production and pathological changes in AD animal models [109-112]. A meta-analysis of 11 observational studies and 4 clinical trials showed that omega-3 fatty acids slowed cognitive decline in elderly individuals without dementia [113]. However, data from randomized controlled trials showed that supplementation with DHA and eicosapentaenoic acid (EPA), compared with placebo, did not slow the rate of cognitive and functional decline [114-116]. The contradictory results between observational studies and randomized controlled trials might reflect that the duration of the randomized controlled trials was often not long enough.
Saturated Fatty Acids. An elevated intake of saturated fatty acids could have negative effects on cognitive functions [117]. In a study of 1,449 participants with an average follow-up of 21 years, moderate intake of saturated fatty acids was associated with an increased risk of AD and dementia, especially among APOE epsilon4 carriers, whereas a higher intake did not affect the risk [118], suggesting that there may be a threshold association.
Transfatty Acids.
Transfatty acids might potentially increase AD risk or cause an earlier onset of the disease by increasing the production of Aβ through an increase in amyloidogenic and a decrease in nonamyloidogenic processing of amyloid precursor protein [119]. However, in a prospective study of 482 women over a follow-up of 3 years, in which a validated food-frequency questionnaire was administered twice to assess dietary intake before cognitive assessment, greater intake of transfat was not associated with cognitive decline [103]. No reliable data from randomized trials on the association of transfatty acids with AD were available.
3.6. Carbohydrates. It has been suggested that patients with T2DM (type 2 diabetes mellitus) are at an increased risk of developing AD [5]. A deficient brain insulin signaling pathway has been proposed as the common mechanism in the two disorders [120]. In AD patient brains, reduced insulin levels, reduced insulin receptor expression, and insulin resistance have been reported [121-123]. With increased exposure to glucose, multiple proteins in neurons are susceptible to glycation, which is viewed as an important contributor to AD [124]. Therefore, a diet high in carbohydrates may be detrimental in AD [125,126]. However, in a prospective study with 939 participants over 6.3 years of follow-up, glycemic load, reflecting the carbohydrate content of food, was not associated with a higher risk of AD [127]. No reliable data from randomized trials on a high-carbohydrate diet and AD were available.
The Effects of Foods and Beverages on the Risk of AD
Single nutrients are not consumed in isolation but as part of a diet; examining the role of single nutrients is complicated due to the interactions between nutrients. Therefore, examining foods rather than single nutrients might be more useful, and many foods and beverages have been reported to affect the risk of AD (Figure 1).
Fish.
Epidemiological studies suggest that fish consumption can reduce the risk of dementia and AD, especially among APOE epsilon4 non-carriers [128-132]. The positive link is thought to be associated with the marine long-chain omega-3 fatty acids EPA and DHA, and the belief that consumption of fish as a whole is beneficial is gaining popularity. In a prospective study with 815 participants aged 65 to 94 years, those who consumed fish more than once a week had a 60% lower risk of AD compared with those who rarely or never ate fish [131]. Data from randomized trials of the effect of whole-fish consumption on the risk of AD were not available.
Figure 1: Fish, vegetables, fruits, coffee, and light-to-moderate alcohol intake are reported to reduce AD incidence. Milk and tea are reported to influence cognition, but their influence on AD is not clear.

Fruits and Vegetables. Frequent consumption of fruits and vegetables might decrease the risk of AD and dementia [128]. A medium or great proportion of fruits and vegetables in the diet, compared with no or a small proportion, was associated with a decreased risk of AD and dementia [133].
If the association between fruit and vegetable intake and AD is validated, the mechanism might be that fruits and vegetables are rich sources of antioxidants and bioactive compounds (e.g., vitamin E, vitamin C, carotenoids, and flavonoids) and also low in saturated fats [134]. Higher vegetable, but not fruit, consumption was reported to be associated with a slower rate of cognitive decline in a cohort of 3,718 participants aged 65 years and older; among types of vegetables, green leafy vegetables had the strongest association [134]. The paradoxical results might be because vegetables, especially green leafy vegetables, contain more vitamin E than fruits, or because some unknown dietary component offsets the protective effects of antioxidants in fruits. In another cohort of 2,613 participants aged 43-70 years, total intakes of fruits, legumes, and juices were not associated with change in cognition, while higher intakes of some subgroups (e.g., nuts, cabbage, and root vegetables) may diminish age-related cognitive decline in middle-aged individuals [135]. Data from randomized controlled trials were not available.
Dairy.
A lower consumption of milk or dairy products was found to be associated with poorer cognitive function [136,137]. Dairy, rich in vitamin D, phosphorus, and magnesium, may reduce the risk of cognitive impairment by decreasing the vascular alterations and structural brain changes that occur with cognitive decline [136]. However, the consumption of whole-fat dairy products may be associated with cognitive decline in the elderly [136]. Moderate intake of unsaturated fats from milk products and spreads at midlife decreased the risk of AD, while saturated fat intake from milk products and spreads at midlife was associated with an increased risk of AD [118,138]. Unfortunately, the observational studies examined dairy as a component of dietary intake rather than as their primary focus, and there was no evidence available from randomized controlled trials.
Coffee.
Coffee drinking may be associated with a decreased risk of AD [139], and a trend towards a protective effect of caffeine on AD has been reported [140,141]. Coffee may be the best source of caffeine to protect against AD due to a component in coffee that synergizes with caffeine to selectively enhance plasma cytokines [142]. A quantitative review of four studies (two case-control studies and two cohorts) showed that coffee consumption is inversely associated with the risk of AD; compared to nonconsumers, the risk estimate of AD in coffee consumers is 0.70, with a 95% confidence interval of 0.55-0.90 [143]. However, the four studies had heterogeneous methodologies and results, so further prospective studies evaluating the consumption of coffee and AD are strongly needed.
Tea.
Observational studies suggest that tea drinking was associated with lower risks of cognitive impairment and decline [144,145], and the protective effect was not limited to a particular type of tea [144]. Black tea was shown to significantly enhance auditory and visual attention compared to placebo [146]. Green tea polyphenols may inhibit cognitive impairment via modulating oxidative stress [147-149], and green tea epigallocatechin-3-gallate (EGCG) has been shown to reduce β-amyloid generation and sarkosyl-soluble phosphorylated tau isoforms in AD mouse models [150,151]. The neuroprotective effects of tea consumption could be due to catechins, L-theanine, polyphenols, and other compounds in tea leaves [152]. Therefore, tea might be a relevant contributor to AD prevention.
Alcohol.
Epidemiological studies suggest that light-to-moderate alcohol intake was associated with a reduced risk of AD, particularly among APOE epsilon4 non-carriers [153][154][155]. However, heavy drinking (>2 drinks), along with heavy smoking and APOE epsilon4, was associated with an earlier onset of AD [156]. The mechanisms by which light-to-moderate intake could be protective against AD while heavy intake was detrimental were unclear.
Different types of alcohol (wine, beer, and mixed alcohol beverages) may have different effects on AD. Resveratrol and other polyphenols in red wine have been found to diminish plaque formation and protect against Aβ-induced neurotoxicity [157][158][159], and moderate beer consumption was thought to be a protective factor against AD due to its content of bioavailable silicon [160]. Therefore, alcohol intake might provide benefits against AD, but the appropriate quantity and type of alcohol remain unclear.
The Effects of Dietary Patterns on the Risk of AD (Figure 2)
Dietary pattern, a combination of food components that summarizes an overall diet for a study population, can have various effects on cognitive function and AD (Table 1). A dietary pattern, characterized by a high intake of meat, butter, high-fat dairy products, eggs, and refined sugar, has been found in AD patients [161].
Western Diet.
A Western diet is characterized by higher intake of red and processed meats, refined grains, sweets, and desserts [162]. A high-fat Western diet may contribute to the development of AD by affecting Aβ deposition and oxidative stress [163,164]. Data from epidemiological studies exploring the Western diet and the risk of AD were not available.
Japanese Diet.
The traditional Japanese diet is characterized by increased intake of fish and plant foods (soybean products, seaweeds, vegetables, and fruits) and decreased intake of refined carbohydrates and animal fats (meat) [165]. In a population-based study with a total of 1006 Japanese subjects followed for 15 years, a dietary pattern characterized by a high intake of soybeans and soybean products, vegetables, algae, and milk and dairy products and a low intake of rice was associated with a reduced risk of AD [63].
Healthy Diets.
In a cohort of 3054 participants, a healthy diet was defined as one positively correlated with consumption of fruit, whole grains, fresh dairy products, vegetables, breakfast cereal, tea, vegetable fat, nuts, and fish and negatively correlated with meat, poultry, refined grains, animal fat, and processed meat. Participants with the highest compared with the lowest adherence to the healthy diet had better cognitive function [64]. A healthy diet, characterized by higher consumption of fish by men and of fruits and vegetables by women, was also reported to be associated with better cognitive performance [65]. In another study with 525 subjects, a healthy diet index was constructed to assess healthy and unhealthy diet components; persons with a healthy diet (healthy-diet index >8 points) had a decreased risk of AD [66].
DASH-Style Diets.
The Dietary Approaches to Stop Hypertension (DASH) diet contains a high intake of plant foods, fruits, vegetables, fish, poultry, whole grains, low-fat dairy products, and nuts, while minimizing intake of red meat, sodium, sweets, and sugar-sweetened beverages [165]. In a randomized clinical trial of 124 participants with elevated blood pressure, subjects on the DASH diet exhibited greater neurocognitive improvements when compared to normal subjects [67]. As hypertension is associated with increased risk for AD [166], it is biologically plausible that DASH could reduce the risk of AD.
[Table 1 rows displaced here: Ozawa et al. [63], cohort of 1006 Japanese community subjects — a dietary pattern with a high intake of soybeans and soybean products, vegetables, algae, and milk and dairy products and a low intake of rice was associated with reduced risk of dementia in the general Japanese population; Kesse-Guyot et al. [64], cohort of 3054 participants — the healthy pattern was associated with better global cognitive function (50.1 ± 0.7 versus 48.9 ± 0.7; P trend = 0.001) and verbal memory.]
Mediterranean Diets.
The Mediterranean diet, a typical diet of the Mediterranean region, is characterized by a high consumption of fruits, vegetables, cereals, bread, potatoes, poultry, beans, nuts, olive oil, and fish; a moderate consumption of alcohol; and a lower consumption of red meat and dairy products.
Adherence to the Mediterranean diet may affect not only the risk of AD but also mortality in AD [167]. A meta-analysis of eighteen cohort studies with 2,190,627 subjects showed that adherence to the Mediterranean diet was associated with a significant reduction of overall mortality and neurodegenerative diseases [168]. Several studies supported a beneficial association between adherence to a Mediterranean diet and AD [68,69]. As fruits, vegetables, fish, and moderate alcohol intake reduced the risk of AD [128,130,131,[153][154][155], the Mediterranean diet can be considered potentially beneficial against AD, despite the lack of data from randomized controlled trials.
Conclusions and Future Directions
In this paper, we searched PubMed articles published from 2000 to 2013, using the search terms "Alzheimer's disease," "nutrition," "nutrients," "food," "diet," "dietary patterns," "overweight," "obesity," "prospective cohort studies," "randomized controlled trials," "systematic review," and "meta-analysis." Articles were also identified through searches of reference lists. Studies were selected for inclusion on the basis of a judgment about the quality of the evidence according to four key elements: study design, study quality, consistency, and directness, as proposed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group. For each nutrient, food, or dietary pattern, only the studies with the highest level of evidence were included. If randomized trials had not been undertaken and only observational data were available, studies were included if they were prospective, population-based, and large, with standardized diagnostic criteria for AD. Studies were excluded if serious limitations to study quality or major uncertainty about directness existed. Only articles published in English were included.
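For readers who wish to reproduce a comparable literature query, the following is a minimal sketch of how such a PubMed search could be scripted with Biopython's Entrez interface; the combined query string, contact address, and retmax value are illustrative assumptions, not the authors' actual search protocol:

from Bio import Entrez  # Biopython's wrapper around NCBI E-utilities

Entrez.email = "[email protected]"  # NCBI requires a contact address

# Combine the review's search terms; publication dates restricted to 2000-2013.
term = ('"Alzheimer\'s disease" AND (nutrition OR nutrients OR food OR diet '
        'OR "dietary patterns" OR overweight OR obesity)')

handle = Entrez.esearch(db="pubmed", term=term,
                        mindate="2000", maxdate="2013", datetype="pdat",
                        retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} matching PubMed records")
print(record["IdList"][:10])  # first ten PMIDs for manual screening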
Epidemiological studies suggest that antioxidants, vitamins, polyphenols, polyunsaturated fatty acids, fish, fruits, vegetables, tea, and light-to-moderate consumption of alcohol are beneficial against AD, while trans-fatty acids, saturated fatty acids, carbohydrates, and whole-fat dairy are detrimental. However, epidemiological studies cannot eliminate bias and confounding in any association between a risk factor and AD [169]; the results of such studies should be interpreted with caution. In addition, it is difficult to examine the individual effects of nutrients and foods because they are correlated with each other; therefore, the idea of focusing on the diet as a whole is gaining momentum.
Randomization is the best method to minimize bias and confounding and to establish causality. However, randomized trials are not always feasible, and the few randomized controlled trials that have been undertaken suggest that dietary supplementation with vitamin E, B vitamins, and polyunsaturated fatty acids does not reduce cognitive decline or the risk of AD. Several reasons might be responsible for the discrepancy between observational studies and randomized controlled trials. First, nutrients might be useful only for primary prevention of AD and not protective once the pathological process has started. Second, the doses of nutrients might not be equivalent to the levels seen in the epidemiological studies. Third, the duration of most trials has been suggested to be inadequate to show benefits. Besides, the associations between nutrients and AD in epidemiological studies are confounded by social and behavioural factors acting across the life course [170].
Dietary pattern analysis, which better reflects the complexity of the diet, has emerged in recent years to examine the relationship between diet and AD. Adherence to the Mediterranean diet, the Japanese diet, and the healthy diet has been reported to be associated with a decreased risk of AD. Given that the studies relating dietary patterns to AD are very few, further studies are needed. In general, the very few studies that have been done suggest that higher intake of fruits, vegetables, fish, nuts, legumes, and cereals, and lower intake of meats, high-fat dairy, sodium, sweets, and refined grains, seem to be associated with a reduced risk of AD.
Further research is needed to improve the quality of the evidence relating the association of many nutrients, foods, and dietary patterns with AD. To establish a causative role for specific nutrients, foods, and dietary patterns in the pathogenesis of AD, adequately powered, large randomized trials are needed in which the patient population and intervention are carefully described. | 2017-10-18T14:39:37.518Z | 2013-06-20T00:00:00.000 | {
"year": 2013,
"sha1": "d82aad46bd389dd833dd8e8e3d9b5a7dd1171245",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2013/524820",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d82aad46bd389dd833dd8e8e3d9b5a7dd1171245",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
86836887 | pes2o/s2orc | v3-fos-license | Diagnostic efficacy of PET/CT in bone tumors
Clinical value of PET/CT (positron emission tomography/computed tomography) in the diagnosis of malignant bone tumors (BT) was investigated. Fifty-four patients with BT were first diagnosed by ordinary CT and then by PET/CT. The diagnostic efficacy of the two methods and their diagnosis of malignant BT by clinical stage were observed and recorded, and the diagnostic value of PET/CT in the diagnosis of BT was evaluated. There were 14 cases of benign BT and, among malignant BT patients, 15 cases of stage I, 10 cases of stage II and 15 cases of stage III. The diagnostic coincidence rate of PET/CT was 92.59% and that of CT was 72.22%, showing that the diagnostic coincidence rate of PET/CT was significantly higher than that of CT (P<0.05). The sensitivity, negative predictive value and positive predictive value of PET/CT were 95.00, 85.71 and 95.00%, respectively, which were higher than those of CT (P<0.05). CT and PET/CT were used for the clinical staging and pathological diagnosis of malignant BT; the results showed that the diagnostic accuracy of PET/CT in the clinical stages of malignant BT was also significantly higher than that of CT (P<0.05). The diagnostic efficacy of PET/CT in BT is better than that of CT. PET/CT can diagnose the pathological properties of BT more accurately, can effectively diagnose the clinical stage of malignant BT, and provides a clinical diagnostic basis for follow-up procedures.
Introduction
Bone tumor (BT) (1) is a tumor that occurs in the bone or its associated tissues. BTs are classified as benign or malignant. Benign BT is easy to cure and has a good prognosis, while malignant BT develops rapidly, with poor prognosis and high mortality. The worldwide incidence of BT (2) is low, but the absence of obvious symptoms, or the neglect of minor symptoms in the early stage, leads to misdiagnosis and missed diagnosis; sometimes the tumor has already progressed to malignant BT by the time of the first visit. PET/CT examination (3) is a common imaging method for the diagnosis of tumors and is widely used in the differential diagnosis of various diseases. The diagnosis of BT has become a difficult clinical problem because of its diverse causes and complex components. The clinical value of PET/CT in the differential diagnosis of bone tumors and tumor-like lesions has therefore become a hot research topic (4).
PET/CT, a scanner combining positron emission tomography and X-ray computed tomography, merges the two imaging techniques so that they complement each other (5). PET (positron emission tomography) provides functional and metabolic information (6), and CT (computed tomography) (7) provides detailed anatomical and pathological information. The pathophysiological and morphological changes of the disease can be obtained by the fusion of these two techniques. PET/CT is an advanced examination method. Its application in the diagnosis of tumors, especially BT, and its clinical value in differential diagnosis cannot be ignored. In addition, it is non-invasive (8). Because false positives and false negatives exist, the results should be interpreted comprehensively. In this study, CT was used as a control to evaluate the diagnostic efficacy of PET/CT in different stages of bone tumors.
Methods
PET/CT examination. Patients were weighed, and the injected dose of imaging agent was adjusted according to the patient's weight. Patients with BT fasted for at least 6 h before the examination. After 6 h, the venous blood glucose concentration of BT patients was measured to ensure that it was <7.8 mmol/l; abnormally high or low blood glucose concentrations were managed by the hospital in time. The 18F-FDG imaging agent was injected into the patient's elbow vein once the blood glucose concentration was within the normal range (radiochemical purity >95%). Patients emptied their bladder and then drank 600 ml of purified water before the PET/CT examination. A CT scan of the lesions of BT patients was performed first; PET was then used to scan the full extent of the BT lesions, and the CT data were used for attenuation correction of the PET data. Fused CT, PET and PET/CT images in all planes were then generated.
CT examination. All subjects were examined with 64-slice spiral CT with a slice thickness of 5-10 mm and a matrix of 512 × 512. The soft tissue window and bone window parameters were set to a window width of 1,500-3,000 HU and a window level of 300-700 HU.
Judgement criterion. The diagnostic criteria for BT are as follows: i) The history of BT patients is completely clear. ii) PET/CT showed that the concentration of bone nuclide was abnormal, the distribution was irregular, and the distribution range was enlarged with time. Manifestations of BT in PET/CT are shown in Table II and Fig. 1. iii) CT or X-ray showed osteogenic destruction or osteolytic lesions in some bone tissues. Manifestations of BT in CT are shown in Table III. A patient who meets the first criterion or the last two criteria can be diagnosed as a BT patient. All images were evaluated by two or more relevant chief physicians.
[Table II fragment displaced here — malignant BT: periosteal edges unclear; soft tissue mass obviously enhanced with a clear margin; possible cortical destruction, pathological fractures, bone lesions or radioactive concentration on bone scan. BT, bone tumor; PET/CT, positron emission tomography/computed tomography.]
Figure 1. Malignant BT. In the anterior part of the right femur, a large mass-like abnormal density was observed. The density of the mass was uneven, and some of it was located in the suprapatellar capsule area. The cortical bone of the lower femur was destroyed, and a pericarp-like periosteal reaction was observed. The soft tissue of the patellar bursa and the lower end of the femur was significantly swollen, and the right knee was in place.
Statistical methods. SPSS 17.0 (Beijing Bizinsight Information Technology Co., Ltd., Beijing, China) software was used for statistical analysis. Enumeration data were represented by [n (%)]. A χ2 test was used for comparison of the diagnostic accuracy of BT in different phases, and Student's t-test was used for the diagnostic accordance rates of CT and PET/CT. Differences were considered statistically significant at P<0.05.
Results
The sensitivity, specificity, diagnostic accordance rate, negative predictive value and positive predictive value of CT screening for BT were 75.00, 64.29, 72.22, 47.37 and 85.71%, respectively, and those of PET/CT screening were 95.00, 85.71, 92.59, 85.71 and 95.00%, respectively. There were significant differences in sensitivity, negative predictive value, positive predictive value and diagnostic accordance rate between PET/CT and CT screening (P<0.05). There was no significant difference in specificity between the two groups (P>0.05) (Table VI).
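As a minimal sketch of the χ2 comparison of diagnostic accordance rates, the correct/incorrect counts below are inferred from the reported percentages applied to the 54 enrolled patients (92.59% ≈ 50/54 for PET/CT, 72.22% ≈ 39/54 for CT); they are back-derived values, not taken from the paper's raw tables:

from scipy.stats import chi2_contingency

# Correct vs. incorrect diagnoses, inferred from the reported
# accordance rates applied to the 54 enrolled patients.
table = [[50, 4],   # PET/CT: correct, incorrect
         [39, 15]]  # CT:     correct, incorrect

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")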
Comparison of the diagnostic efficacy between CT and PET/CT in different stages of BT.
i) The diagnostic accordance rates of CT in benign BT and malignant BT were 64.29 and 75.00%, respectively. The diagnostic accordance rates of PET/CT in benign BT and malignant BT were 85.71 and 95.00%, respectively. The results showed that the diagnostic accordance rates of PET/CT in benign and malignant BT were higher than those of CT; the diagnostic rate of PET/CT in malignant BT was significantly higher than that of CT (P<0.05) (Table VII). ii) In the comparison of the positive diagnostic rate between CT and PET/CT in different stages of malignant BT, the positive diagnostic rates of CT in stages I-III were 46.67, 90.00 and 93.33%, respectively, and those of PET/CT in the same stages were 86.67, 100.00 and 100.00%, respectively. Comparing the two groups, the positive diagnostic rate of PET/CT in stages I-III of malignant BT was higher than that of CT, and the difference in stage I was statistically significant (P<0.05) (Table VIII).
Discussion
The location of bone tumor (BT) is often in bone tissue or its associated tissue. Since BT at different stages has similar clinical and imaging manifestations, it is difficult to diagnose BT clinically. Relevant BT pathology results (9) show that some benign BTs present clinically in a seemingly malignant state, and the X-ray (10) appearance of some benign BT-like lesions is particularly similar to that of malignant BT, which makes it even more difficult to diagnose BT and BT-like lesions (11). At present, X-ray examination and CT scanning are often used for early diagnosis or tumor staging of BT. Clinical application data (12) show that although X-ray examination can clearly reflect the location and size of BT in staging diagnosis, it cannot accurately determine whether the BT is benign or malignant; thus, the limitation of X-ray in the specific staging diagnosis of malignant BT is obvious. CT can better display the fine anatomical structure of the BT location compared with X-ray examination. However, for improving the early diagnosis rate and the specific staging of BT patients, CT still lacks the precise functional, metabolic and other molecular information (13) needed to assist BT staging diagnosis. PET/CT technology, which integrates information on body function, metabolism and other molecular processes with accurate anatomical and pathological information, has been put into clinical application in recent years. A study (14) has confirmed that PET/CT is particularly sensitive for benign tumors, malignant tumors and early tumor diagnosis. The diagnostic efficacy of PET/CT and CT in BT was measured, and the results were compared and analyzed in the present study. It was found that the positive predictive value of CT in BT patients was 85.71%, while that of PET/CT was 95.00%; in comparison, the detection rate of PET/CT in BT was significantly higher than that of CT. Janssen et al (15) found that CT imaging of adjacent disc tissue (16,17) was not very clear, which easily caused misdiagnosis and missed diagnosis. The comparison of the diagnostic efficacy of PET/CT and CT in BT showed that the sensitivity, specificity, diagnostic accordance rate, negative predictive value and positive predictive value of CT screening for BT were 75.00, 64.29, 72.22, 47.37 and 85.71%, respectively, while those of PET/CT screening were 95.00, 85.71, 92.59, 85.71 and 95.00%, respectively. There were significant differences in sensitivity, negative predictive value, positive predictive value and diagnostic accordance rate between PET/CT and CT screening (P<0.05); there was no significant difference in specificity between the two groups (P>0.05). The diagnostic efficacy of PET/CT in BT is better than that of CT. The results of Guimaraes et al (18) were consistent with ours. They also applied the PET/CT technique to a comparative study of diagnostic efficacy in BT and compared it with other scanning techniques. By analyzing the scanning results of BT patients, they found that the sensitivity, specificity, diagnostic accordance rate, negative predictive value and positive predictive value of PET/CT were significantly higher than those of other scanning techniques, which complements our findings well. Finally, the diagnostic efficacy of PET/CT in different stages of BT was analyzed concretely.
The results showed that the diagnostic accordance rates of PET/CT in benign and malignant BT were higher than those of CT; the diagnostic rate of PET/CT in malignant BT was significantly higher than that of CT (P<0.05). Particularly in stages I-III of malignant BT, the positive diagnostic rate of PET/CT was higher than that of CT, and the difference in stage I was statistically significant (P<0.05). The accurate diagnostic efficacy of PET/CT in BT staging (19) has been confirmed in clinical studies of BT. The advantages of PET/CT in the diagnosis of BT and other tumors were summarized by El-Galaly et al (20) through extensive collation of clinical data and comparison with other detection methods (21,22). They considered that the advantages of PET/CT in the diagnosis of different tumor stages were that it could locate lesions more accurately, detect smaller lesions, and distinguish benign from malignant lesions and the different stages of BT, abdominal neuroendocrine tumors, ovarian cancer and hepatocellular carcinoma by PET/CT imaging. In this study, the selection of research subjects was strictly in accordance with the inclusion and exclusion criteria to ensure the reliability of the results. However, due to the small number of subjects included, some missed diagnoses and misdiagnoses still occurred when evaluating the diagnostic efficacy of PET/CT.
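The reported metrics can all be reproduced from a 2×2 confusion matrix; back-deriving the counts from the abstract's cohort composition (40 malignant, 14 benign of 54 patients) gives TP/FP/TN/FN of 30/5/9/10 for CT and 38/2/12/2 for PET/CT. These counts are inferred, not published, and the sketch below uses them only to make the metric definitions concrete:

def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "npv":         tn / (tn + fn),
        "ppv":         tp / (tp + fp),
    }

# Counts back-derived from the reported percentages for the 54-patient
# cohort (40 malignant, 14 benign); they reproduce every published value.
for name, counts in {"CT": (30, 5, 9, 10), "PET/CT": (38, 2, 12, 2)}.items():
    m = diagnostic_metrics(*counts)
    print(name, {k: f"{100 * v:.2f}%" for k, v in m.items()})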
In conclusion, the diagnostic efficacy of PET/CT scan screening in different stages of BT is significantly better than that of CT. When CT scan screening is not accurate enough to judge BT staging, PET/CT can provide more precise evidence of tissue physiological metabolism and of the anatomical structure of the BT lesion. That is, PET/CT can accurately diagnose the pathological nature of BT, effectively diagnose the clinical stage of malignant BT, and provide a more accurate clinical diagnostic basis for BT treatment. | 2019-03-28T13:34:04.982Z | 2019-03-04T00:00:00.000 | {
"year": 2019,
"sha1": "332ee89902d9e208c7897d6a6114bdd71d5e7351",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ol.2019.10101/download",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "332ee89902d9e208c7897d6a6114bdd71d5e7351",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
224723992 | pes2o/s2orc | v3-fos-license | First identification and molecular phylogeny of Sparganum proliferum from endangered felid (Panthera onca) and other wild definitive hosts in one of the regions with highest worldwide biodiversity
After decades of being neglected, broad tapeworms now attract growing attention thanks to the increasing number of reports from humans, and also thanks to the many advances achieved by the application of molecular methods in diagnosis and epidemiological studies. Regarding sparganosis, unfortunately, the general uniformity of most species, their high intraspecific variability and a lack of agreement among researchers have led to confusion about the classification of Spirometra/Sparganum species. For the first time, we characterized the adult, egg and plerocercoid life cycle stages and the molecular phylogeny of Sparganum proliferum obtained from endangered wild felids (Panthera onca, Leopardus pardalis, Leopardus guttulus and Herpailurus yagouaroundi) in one of the largest continuous remnants of worldwide biodiversity, the Atlantic Forest of South America. Our results showed that at least 57% of the wild felid species in this natural area could act as definitive hosts of Sparganum proliferum. We conclude that more morphological characteristics are needed in order to secure reliable characterization and diagnosis of sparganosis. The integration of these data with molecular analysis of mitochondrial DNA sequences will be useful for species discrimination.
Introduction
Sparganosis is an emerging parasitic zoonotic disease mainly caused by the second larval stage (plerocercoid) of diphyllobothriid cestodes such as Spirometra spp. and Sparganum proliferum (Noya et al., 1992; Kokaze et al., 1997; Miyadera et al., 2001; Brabec et al., 2006; Kuchta et al., 2008; Schauer et al., 2014; Oda et al., 2016; Hong et al., 2020). Sparganum proliferum is a cryptic parasite whose phylogeny and life cycle are poorly understood. The adult stage of S. proliferum has not been observed, and the precise taxonomic relationships of S. proliferum with other tapeworms remain unclear because few genes have been sequenced (Noya et al., 1992; Miyadera et al., 2001; Okamoto et al., 2007). Recently, genome and transcriptomic analysis of the plerocercoid of S. proliferum was reported and confirmed that S. proliferum and Spirometra erinaceieuropaei are closely related but different species. In addition to taxonomic considerations, the pathogenicity of S. proliferum (proliferative sparganosis) and of the plerocercoids of Diphyllobothriidae tapeworms, including those of S. erinaceieuropaei (non-proliferative sparganosis), is different. Sparganosis cases are reported worldwide, but the disease has been predominantly diagnosed in Southeast Asia, mainly in China (Dorny et al., 2009; Qiu and Qiu, 2009; Liu et al., 2015). Human sparganosis frequently occurs through consuming raw or undercooked meat of infected reptiles or amphibians, drinking water contaminated with copepods, or direct contact with the skin of infected frogs or snakes (Li et al., 2011; Liu et al., 2015; Oda et al., 2016; Okino et al., 2017; Zhang et al., 2020). A case of human infection by an adult of S. erinaceieuropaei has also been reported in Vietnam (Le et al., 2017). In Argentina, three cases of sparganosis have been reported in individuals from bordering countries: two with cerebral location (Boero et al., 1991; Jones et al., 2012) and one with cutaneous location (De Roodt et al., 1993). Moreover, in Argentina there are few reports of Spirometra spp. in animals. Spirometra mansonoides has been found in cats (Santa Cruz and Lombardero, 1987), and S. erinaceieuropaei in cats (Venturini, 1980, 1989) and dogs (Denegri, 1993). In wildlife, Martínez et al. (2010) identified eggs of S. mansonoides in the felids F. pardalis, F. yagouaroundi, Panthera onca and Puma concolor. Spirometra has been reported in the Pampas fox (Lycalopex gymnocercus) (Reigada et al., 2012; Petrigh et al., 2015). Despite numerous attempts to clarify its taxonomy, host specificity and geographic distribution (Faust et al., 1929; Wardle et al., 1974), the genus remains one of the most complicated groups of tapeworms, and several lines of evidence indicate that it is very difficult, and almost impossible, to distinguish some of the 50 nominal species of Spirometra based solely on morphological characteristics (Iwata, 1934, 1972; Mueller, 1974; Daly, 1981; Odening, 1985; Kuchta and Scholz, 2017). In Asia, there are several studies of S. erinaceieuropaei lineages (Zhang et al., 2016); in wild frogs in China, in particular, the prevalence is above 10% in some regions (Zhang et al., 2017, 2020). Regarding Africa, there are molecular reports of Spirometra sp. in human infections in South Sudan and Ethiopia (Eberhard, 2015). In Europe, there are also molecular records of S. erinaceieuropaei plerocercoid larvae in wild fauna from Poland (Kołodziej-Sobocińska et al., 2019). In Brazil, there have been reports of Spirometra spp.
larval stages in cold-blooded animals (Rego and Schäffer, 1992) and humans (Liu et al., 2015), and of adult stages in wild felids (Vieira et al., 2008). The occurrence of a particular Spirometra lineage in South America has been reported (Almeida et al., 2016), and the molecular sequences obtained clustered phylogenetically in a separate node, distant from the Asian S. erinaceieuropaei lineage.
The objective of this work is to identify and characterize diphyllobothriid infections in wild animals of the Atlantic Forest of Misiones, Argentina, through an integrative approach that links morphological, genetic and ecological aspects. Here we report, for the first time, the presence of adults and eggs of S. proliferum in wild animals in South America, confirmed by molecular analysis. Our results could be useful for understanding some of the underlying aspects of the life cycle of S. proliferum and for evaluating its zoonotic importance in interface areas, in order to guide prevention measures for human and animal welfare.
Study area
The study area contains one of the largest continuous remnants of Atlantic Forest (AF) in the world. It is located in northern Misiones province, Argentina (54°15′30.60″W, 25°55′52.32″S). The area lies at 220 m altitude and has a subtropical climate with annual precipitation between 1700 and 2100 mm (Ligier, 2000).
Animal samples
Road-killed animals were actively searched for on national routes 101 and 12, which cross the Iguazú National Park, between 2015 and 2016. Animal necropsies were carried out under protocols approved by the National Parks Administration technical office (NEA 423 Rnv ex DCM 483 Dispo 23/2015). Only 1-2 day old animal carcasses were selected for sampling. Four animals were collected and analyzed in this work; they are summarized in Table 1. Each animal was individually packed and labelled with relevant information, including place of origin, sampling date, age category, and sex of the animal.
Parasite samples
The intestinal tracts of the analyzed carnivores were carefully removed from each carcass and subsequently isolated by ligatures (pylorus and rectum). All samples were kept at −20 °C for at least 1 month prior to processing in order to inactivate possible parasite eggs of other species (Scioscia et al., 2013). Examination of the intestinal content was performed as previously described (Arrabal et al., 2017), using the modification of the technique originally described by Eckert (2001). Briefly, the small intestine was separated from the large intestine, and each section was placed in a different tray and cut lengthwise. Coarse material and large parasites of the small intestine were removed. Then, each section was immersed in 0.9% saline solution at 37 °C for 30 min. Intestinal walls were scraped with a microscope slide, and all the content of each section was poured into individual glass bottles and left to stand for 20 min. The supernatant was discarded and physiological saline solution was added to dilute the sediments. This procedure was repeated several times until the supernatant was almost translucent. The obtained sediments were examined in small portions in 5-10 ml round Petri dishes under a magnifier lens at ×65 to identify small helminths. The helminths found were cleaned with saline solution and placed in recipients with either 4% formalin or 70% ethanol for further taxonomic and molecular examination, respectively.
Morphology studies
Strobilae of adult tapeworms, larvae and eggs were analyzed under an optical Primo Star (Carl Zeiss GmbH, Göttingen, Germany) microscope using an Axiocam ERc 5s camera (Carl Zeiss GmbH, Göttingen, Germany). Each sample was whole-mounted and registered at 4×, 10× and 40× using Carl Zeiss Vision software for image analysis. Moreover, strobila and larval tissue sections were embedded in paraffin, cut into serial sections of 4-5 μm, mounted on glass slides, and stained with hematoxylin-eosin (HE). The slides were analyzed under an optical microscope and pictures were taken at 4×, 10× and 40×. The main features analyzed in the larvae were pleomorphism, color, symmetry and presence or absence of a scolex (Noya et al., 1992). The main features analyzed in eggs were size, shape and presence or absence of a cap and pointed ends (Mueller, 1936).
[Table 1 caption displaced here: Percentage divergence between mitochondrial sequences from samples of wildlife Sparganum proliferum and reference genes of Spirometra erinaceieuropaei (KJ599680) and Sparganum proliferum (AB015753).]
Molecular identification and phylogenetic analysis
Total parasite genomic DNA was obtained using the DNeasy Blood & Tissue Kit (Qiagen GmbH, Hilden, Germany). Three molecular markers from the mitochondrial genome were used to determine species. The cytochrome c oxidase subunit I (cox1), NADH dehydrogenase subunit 1 (nad1) and ATP synthase subunit 6 (atp6) genes were selected because we had used them in previous reports on cestodes (Kamenetzky et al., 2002; Arrabal et al., 2017) and they were demonstrated to be useful for classifying isolates of Spirometra in previous reports (Almeida et al., 2016). The genes were amplified by polymerase chain reaction according to Arrabal et al. (2017) (Supplementary Table 1). The PCR products obtained were sequenced and first aligned with ClustalX (v2.0.12) against Spirometra and Sparganum sequences extracted from complete mitochondrial genomes available in GenBank and considered as reference genomes (International Helminth Genomes Consortium, 2019). To obtain an accurate phylogenetic analysis of our parasite lineage, we downloaded cox1 sequences from Asia, Africa, Europe and South America, totalling 275 cox1 sequences (Lavikainen et al., 2013; Zhang et al., 2017, 2020; Jeon and Eom, 2019; Kołodziej-Sobocińska et al., 2019; Hong et al., 2020). After redundancy removal, 42 cox1 sequences were retained. This data set includes Spirometra cox1 sequences from wild frogs that were described to have a high pairwise genetic distance with the reference mitochondrial genome (Zhang et al., 2017). Multiple alignments were edited with BioEdit (v7.1.3). Maximum likelihood phylogeny was performed using MEGA7. A discrete gamma distribution was used to model evolutionary rate differences among sites. Branch lengths were measured as the number of substitutions per site. All positions with less than 80% site coverage were eliminated, leaving a total of 296 positions in the final dataset. Additionally, Bayesian phylogeny was implemented using BEAST. The substitution model HKY + G + X with gamma distribution was selected with PartitionFinder. Changes in the evolutionary rates among branches were modelled using a random local clock model (Drummond and Suchard, 2010). For the tree prior, a basic coalescent model was selected. The MCMC run was performed with tree parameter values sampled every 1000 steps over a total of 100,000,000 steps (Zhang et al., 2017).
Morphological identification of Sparganum proliferum in wild carnivores
The analysis of the intestinal tracts of wild carnivores allowed us to isolate tapeworms morphologically compatible with Spirometra in wild carnivores, this being the first report of these parasites in the Atlantic Forest eco-region: one Leopardus pardalis (ocelot), one Panthera onca (jaguar), one Leopardus guttulus (tirica) and one Herpailurus yagouaroundi (yaguarundi) (Fig. 1). Parasites were identified according to morphological features; the individual selected for further analysis resembled Spirometra in general appearance and size (Fig. 2). The larva showed the following major macroscopic features: pleomorphism, white color, length <5 mm, lack of bilateral symmetry and absence of a scolex (Fig. 2D), in accordance with the Sparganum proliferum larval features described so far (Noya et al., 1992). Although numerous worms were found, it was not feasible to identify all specimens based on morphological features because most of them were fragmented and not suitable for morphological examination. Regarding adult tapeworms, the major differentiating features of the eggs found were their elongated shape and the evident cap and pointed ends attributable to the genus (Mueller, 1936) (Fig. 3). The eggs measured on average 67.02 μm by 34.95 μm (n = 50). Histological sections of strobilae were prepared. The main characteristics (based on mature and gravid proglottids) were: i) presence of anterior and posterior uterine coils in the longitudinal median line of the proglottids; ii) ventral middle uterine pore in the third of the gravid proglottid; iii) uterus opened by a pore well separated from, and posterior to, the vagina, with a varying number of loops in the terminal heavy-walled portion in an "S" shape; iv) uterus consisting of 5-7 loops and the dumbbell-shaped ovary connected to the uterus and situated near the posterior margin; v) vagina traversing from its vestibule in an approximately straight path in the median line, thrown into lateral undulations of different amplitude; vi) cirrus surrounded by the seminal receptacle and opening out separately from the vagina and near the uterine pore (Fig. 4). In this section, the ratio of width to length of the gravid proglottids and the uterine morphology were consistent with Spirometra spp. (Iwata, 1972; Mueller, 1974).
Molecular characterization of Sparganum proliferum in wild felids
First, we analyzed by PCR and sequencing the adult obtained from the ocelot (sample LPMiSP) with three molecular markers. The sequences obtained from the cox1 (295 nt), nad1 (343 nt) and atp6 (594 nt) mitochondrial genes were concatenated, resulting in a dataset of 1322 nucleotides to analyze the complete information in an integrated phylogeny. [Displaced caption fragment: Table 1; reference species are marked with a black dot.] Multiple sequence alignment comparisons with all mitochondrial reference genomes were performed in order to identify the ocelot mitochondrial sequence. Redundant reference sequences were removed and a total of 10 orthologous sequences from complete mitochondrial genomes were finally included (Supplementary Table 2). The phylogenetic tree constructed from the multiple alignment showed that LPMiSP belongs to the Spirometra lineage near S. erinaceieuropaei (KJ599680) isolated from a human in Korea (Supplementary Fig. 1). The genetic divergence between LPMiSP and S. erinaceieuropaei was 14.4% for cox1, 12.5% for nad1 and 26.9% for atp6 (Table 1). Since the non-redundant S. erinaceieuropaei cox1 sequences available in GenBank have an average genetic divergence of 8.8%, and the genetic distance obtained between LPMiSP and Spirometra spp. was relatively higher (14.4%), we could not classify it as belonging to the same species. To gain insight into the presence of Spirometra in wild felids, we attempted to amplify and sequence the same three molecular markers from more samples. The cox1 sequences from jaguar (samples POMiSP1 and POMiSP2), tirica (sample LTMiSP) and yaguarundi (sample HYMiSP), and an additional nad1 sequence from sample LTMiSP, were obtained. The nad1 sequence obtained from the tirica host was 100% identical to the previously obtained LPMiSP nad1 sequence. Additionally, all cox1 sequences were 100% identical to each other. Since atp6 could not be amplified, we hypothesized that several SNPs are present between the mitochondrial genomes of parasites from Argentinean wild felids and the reported Spirometra spp. mitochondrial genomes, and that they may be different species. To test this hypothesis, we retrieved a broader set of cox1 sequences available for Spirometra/Sparganum in GenBank and performed multiple alignments. The number of SNPs between cox1 sequences from parasites of Argentinean wild hosts is shown in Supplementary Fig. 2. One interesting finding was that the cox1 sequences from wild felids from Argentina have a genetic divergence of 4.2% from the Sparganum proliferum cox1 sequence (AB015753) (Table 1). This finding was consistent with the phylogeny obtained from the multiple sequence alignment. Even though the tree topology indicates that a taxonomic revision of some samples is needed (some Spirometra decipiens sequences clustered with S. erinaceieuropaei sequences), the parasite samples obtained in this work shared a common ancestor with Sparganum proliferum (Fig. 5). Besides this, the sequences that were characterized as Sparganum, including those obtained in this study, fall within the same clade as a Spirometra lineage registered in South America (KF572950 and KT375456). These sequences have 6.4% and 6.7% genetic divergence from LPMiSP, respectively. The Spirometra sequences from the next nearest node (e.g. KF988137) have 12.0% genetic divergence from LPMiSP. Taking into account the tree topology and the genetic distance between the Sparganum and Spirometra cox1 sequences, we suggest that accessions KF572950 and KT375456 also belong to the species S. proliferum. We confirmed our results by Bayesian phylogenetic analysis (Supplementary Fig. 3). In this phylogeny, the numbers along branches indicate the posterior probabilities that support the groups mentioned above. Moreover, the effective sample size (ESS) values for all parameters were above 200, giving confidence to the analysis.
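As a minimal sketch of how the pairwise divergence percentages quoted above are computed as uncorrected p-distances over an alignment, the following Python snippet counts differing sites while skipping gap positions; the two sequence fragments are made up for illustration and are not the actual cox1 data:

def p_distance(seq_a: str, seq_b: str) -> float:
    """Uncorrected pairwise distance: differing sites / compared sites.
    Positions where either sequence has a gap ('-') are skipped."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    pairs = [(a, b) for a, b in zip(seq_a.upper(), seq_b.upper())
             if a != "-" and b != "-"]
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

# Toy aligned fragments (hypothetical, for illustration only).
ref   = "ATGGCTTACCTTAGGGATAACAGCGCAAT"
query = "ATGGCATACCTTAGAGATAACAGTGCAAT"
print(f"divergence = {100 * p_distance(ref, query):.1f}%")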
Discussion
After decades of being neglected, broad tapeworms now attract growing attention thanks to the increasing number of human cases, and also thanks to the considerable advances achieved by the application of molecular methods in diagnosis and epidemiological studies (Scholz et al., 2019). Regarding sparganosis, the general uniformity of most species, their high intraspecific variability and a lack of agreement among investigators have led to confusion about the classification of Spirometra/Sparganum (Mueller, 1974; Daly, 1981; Kuchta and Scholz, 2017). Moreover, most of the available material was obtained from hosts examined a long time post mortem or even from decomposed carcasses, which may have caused significant morphological changes (Hernández-Orts et al., 2015).
As a result, morphological and biometrical data in some species descriptions may be misleading. Similarly, most clinical samples of larval stages were not characterized molecularly and were described under different names. This work showed that S. proliferum and S. erinaceieuropaei have dissimilar cox1, nad1 and atp6 sequences. The molecular results of this work are in agreement with previous analyses in which both species were clearly distinguished by cox1, nuclear sdhB and 18S rDNA V2 region gene sequencing (Miyadera et al., 2001). Discrepancies in Spirometra phylogeny and a possible new species were also reported by Zhang et al. (2017, 2020) studying parasites from wild fauna. Moreover, the low identity and high genetic distance between the atp6 sequence obtained from the ocelot (LPMiSP) and the reference atp6 sequence from S. erinaceieuropaei support these findings. Mitochondrial atp6 sequences from S. proliferum are not available in public databases to make the necessary comparisons with the results obtained in the present work. The isolates analyzed in this work are not closely related to S. erinaceieuropaei or other Asian Spirometra lineages but, instead, display close affinities to one of the lineages described as Spirometra from South America and to the S. proliferum (AB015753) mitochondrial reference genome. Our source of parasites was road-killed animals; it should be noted that the high temperatures of the region under study favour the decomposition of carcasses. For this reason, helminth specimens unsuitable for ideal morphological identification are the most common outcome. We overcame these problems by integrating morphological and molecular data analysis. Morphologically similar species presenting a spiralled uterus (S. decipiens, S. gracilis, S. longicollis, and S. mansoni) were reported in wild felids from Brazil, as well as in proglottids of Spirometra spp. (Almeida et al., 2016); however, the vagina of S. erinaceieuropaei is considered to lie next to the midline and descend in waves of different amplitude (Palmer et al., 2008). Also, the shape of the uterus lacks uniformity in the number of turns (between three and seven loops), having an irregular arrangement and size (Iwata, 1932; Mueller, 1974; Okino et al., 2017). Our results showed that the proglottids of S. proliferum, and also the eggs found, presented the same morphological characteristics as S. erinaceieuropaei, which is why these species need to be evaluated using molecular markers. Recently, Kuchta et al. (in press) suggested that Sparganum proliferum belongs to a lineage of S. decipiens described in South America. However, since their results are based only on genetic data, and we analyzed not only such data but also novel adult morphological features, that classification may need to be revised. More isolates analyzed with other molecular markers, such as nuclear genes or complete genome sequences, are needed to confirm the presence of S. proliferum in wild hosts; meanwhile, the three mitochondrial genes employed in this work can be used as molecular markers for epidemiological studies. The sequence comparison between Spirometra from Brazil and Sparganum from Argentina indicates that they are different lineages. Species of Sparganum occur at warmer latitudes similar to the region analyzed here (Mueller, 1974; Daly, 1981).
Fatal proliferative sparganosis has been reported in domestic cats in North America (Buergelt et al., 1984; Woldemeskel, 2014) and dogs in Europe (Stief and Enge, 2011). However, the impact of diphyllobothriid cestodes on wild animals is not yet clear. Our findings showed for the first time S. proliferum adults and larvae in the intestinal tract of wild felids. In Spirometra species it has already been described that, once in the secondary vertebrate host, the procercoid can develop into a plerocercoid larva in different tissues, which can survive predation and reach a wide variety of vertebrates (Mueller, 1974; Opuni and Muller, 1974; Liu et al., 2015). We found S. proliferum plerocercoid larvae in the tirica intestine, indicating that the prey it feeds on harbour plerocercoids that survive gastric digestion. Regarding the still unknown complete life cycle of S. proliferum, human activities in the region under study, such as the conversion of natural landscapes to urban areas, may increase predation by domestic dogs and cats on wild amphibian and reptile populations, thus potentially enhancing the incidence of proliferative sparganosis (Borteiro et al., 2015). The possible role amphibians and reptiles may have in the occurrence of human cases in South America is an issue not yet well investigated. In Argentina, there are few reports of Spirometra spp., mostly in domestic definitive hosts (Venturini, 1980, 1989; Santa Cruz and Lombardero, 1987; Denegri, 1993). Regarding wildlife, S. mansonoides eggs have been reported in felids (Martínez et al., 2010) and Spirometra spp. in the Pampas fox (Lycalopex gymnocercus) (Reigada et al., 2012; Scioscia et al., 2014; Petrigh et al., 2015). The present work showed that at least four different species of wild felids, out of the six existing in the natural area under study, are involved in the sylvatic life cycle of S. proliferum and could act as definitive hosts in the Atlantic Forest. This region is shared with other groups of carnivores (canids, mustelids and procyonids) that could also be participating in the cycle. Sparganum proliferum is a good model for ecological interaction studies, which allows us to understand and define the trophic levels of the intermediate and definitive hosts, and then to establish the distribution of parasites within a host population (Denegri, 2008). For these reasons, knowledge of the prevalence of Sparganum in wild animals from Argentina is necessary, given the ongoing environmental changes that affect parasite ecology and its transmission dynamics. In Argentina, few cases of human sparganosis have been reported, but the real prevalence in the country is unknown. In the meantime, a reliable taxonomic criterion based on morphological characteristics integrated with molecular analysis of mitochondrial DNA sequences will be useful for species discrimination.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-10-19T18:12:59.627Z | 2020-09-13T00:00:00.000 | {
"year": 2020,
"sha1": "cce0db18e81e35fe84a16ac7d1d6e1ba6dc8f353",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ijppaw.2020.09.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dba335d80542f0aedb5c21baf1518498878380f9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
237961780 | pes2o/s2orc | v3-fos-license | The Effectiveness of Biological Products and Micronutrient Fertilizers use in Row Crops Cultivation
The results of field studies on the use of biological products and microfertilizers in the cultivation of corn for grain and sunflower under the production conditions of the Krasnodar Region are presented. Indicators of a comparative assessment of the farm-standard and new schemes for the application of fertilizers and plant protection products are given from the point of view of agrotechnical and economic efficiency.
Introduction
At present, protecting plants from diseases caused by various pathogens is an economically and socially significant problem. The main method of plant protection is the use of chemicals, which have a number of serious disadvantages, so the use of biological plant protection products is becoming more and more relevant [1][2][3]. To date, only 0.3% of agricultural land in Russia is treated with biological products [4].
The use of fertilizers has the greatest impact on the formation and development of plants in the cultivation of row crops. Optimization of plant nutrition, namely the use of modern biological products and microelements during the growing season, contributes to an increase in the yield of corn and sunflower grain [5][6][7].
Biological products are used for preliminary inoculation of seeds, spraying of plants during the growing season in order to improve nutrition, increase resistance to pathogens, increase yields and improve the quality of agricultural products [8][9][10].
Recent studies indicate that the most effective form of trace elements for plants is complex compounds in the form of chelates (organic intracomplex compounds of cyclic structure containing a metal ion in their molecule). An important feature of chelates is their biochemical activity, due to the amino acid fragments in their composition. This allows them to be considered compounds that give plants better access to trace elements and thus contribute to greater productivity. Trace elements in chelated form have a number of valuable properties: they are highly soluble in water, highly stable over a wide range of acidity (pH values), well adsorbed on the surface of leaves and in the soil, not destroyed by microorganisms for a long time, and combine well with various pesticides. Chelating agents, when introduced into the soil, contribute to the conversion of inaccessible trace elements into a biologically active form. Such microelements are water-soluble organic salts and are practically not fixed in the soil absorbing complex, remaining available to plants for a long time [11][12][13].
Currently, a large number of new types of biological preparations and fertilizers have been developed, with different doses of microelements and various methods of application for row crops cultivated using intensive technologies. The choice of the most effective preparations for particular crops and specific farm conditions is therefore an urgent and important task.
In recent years, Russian scientists have created biological preparations, the use of which ensures an increase in the yield of agricultural crops. The main mechanisms of action of microorganisms on plants are as follows: improving nitrogen nutrition, optimizing phosphorus nutrition, stimulating growth and development, suppressing phytopathogens (controlling the development of diseases and reducing plant damage), increasing the utilization of nutrients from fertilizers and soil, increasing resistance to stress conditions (lack of precipitation, unfavorable temperatures, high acidity, salinization or soil pollution with substances of various natures) [14].
The relevance of the studies carried out by specialists of the Novokuban branch of the Federal State Budgetary Scientific Institution "Rosinformagrotech" (KubNIITiM) lies in substantiating the effectiveness of biological preparations and fertilizers with microelements produced by Biotechagro LLC applied as foliar treatments in the cultivation of row crops.
The purpose of this work is to study the effect of various schemes for the introduction of biological preparations and fertilizers with microelements on the yield of row crops (corn for grain, sunflower).
Materials and methods
Materials for the industrial research (Table 1) were provided by Biotehagro LLC (Timashevsk), which produces biological products based on living beneficial microorganisms and develops schemes for the effective use of these preparations in agriculture.
[Table 1 fragment displaced here: Humate +7 — fertilizer based on humic acids (liquid concentrate); Helios Silicon — liquid mineral fertilizer with a maximum concentration of silicon in the form of specially processed silicon dioxide.]
The research work was carried out in accordance with the methodology of field experiments on corn for grain and sunflower, developed jointly with representatives of Biotehagro LLC and taking into account their recommendations on the timing and doses of the preparations used in the experiment (Table 2), as well as in compliance with standard methodological requirements (typicality, principle of single difference, etc.) according to the field experiment guidelines of B.A. Dospekhov [15]. For the purity of the experiment, experimental plots for each crop were laid out within a separate field after a winter wheat predecessor; all technological operations were identical and corresponded to the generally accepted cultivation schemes for these crops.
A mid-season hybrid of corn for grain, "Pioneer P9241" (FAO 340), from the American company Pioneer, included in the State Register for the North Caucasus region, was used. The plant is medium-sized; its strong stem increases lodging resistance, and it shows excellent resistance to stress and drought. It is a corn hybrid with an innovative system of resistance to blister smut, fusarium of the cob and helminthosporiosis.
A mid-early sunflower hybrid, "Pioneer P64LL125", of the linoleic type with a high oil content, from Pioneer, was used. It has high drought resistance and a well-developed root system. A large number of seeds form in the plant's basket. It differs in high productivity and excellent drought tolerance, and adapts well to any soil and climatic conditions. It is a sunflower hybrid with the innovative "System-2" broomrape-resistance system and resistance to root lodging, and shows good tolerance to leaf and basket diseases. The growing season is 105-115 days, the yield potential is 49 c/ha, plant height is 180 cm, the basket is convex, the oil content is 50%, and the weight of 1000 seeds is 60 g.
Pre-harvest monitoring options
According to the developed methodology for a comparative assessment of the experiment variants, the crops were monitored by experiment variant before harvesting (Tables 3, 4). To do this, on counting plots 10 m long and two rows wide each, a complete analysis, counting and measurement of plants (in triplicate) were carried out. As a result of comparison with the control variant, differences in the biometric parameters of plants and in the general state of the crops were revealed.
For corn for grain for option 2k (Biotechagro) in comparison with option No. 1k (control): • plant height is 9.9 cm or 3.4% more; • the average thickness of the stem at the base of the plants is 0.6 mm or 3.7% higher; • the average height of the lower ear is less by 1.0 cm or 0.8%; • the length of the cob increased by an average of 1.0 cm or 6.3%; • the ear diameter has increased by an average of 0.3 cm or 0.7%. For sunflower for option 2p (Biotehagro) in comparison with option 1p (control): • plant height is 6.3 cm or 2.9% more; • the average height of the arrangement of baskets is lower by 3.9 cm or 1.9%; • the average diameter of a sunflower basket is 1.2 cm or 7.1% higher; • the average diameter of the plant stem is 0.6 mm or 3.7% more.
Yield assessment
Yield was evaluated for the variants of the experiment during harvesting by direct combining, each crop in one day: corn for grain on 09/08/2020 at an average grain moisture of 11.4%, and sunflower on August 26, 2020 at an average grain moisture of 7.0%. The actual yield (Table 5) was determined by the amount of grain harvested from each accounting plot by the same combine. The actual grain corn yield in variant 2k (Biotehagro) was 85.7 c/ha, which is 1.5 c/ha (1.8%) higher than in the control variant. Variant 2k (Biotehagro) also had the advantage in the weight of 1000 grains (292.58 g), which is 9.18 g (3.3%) higher than in the control variant.
The sunflower yield in variant 2p (Biotehagro) was 34.0 c/ha, which is 1.6 c/ha (5.0%) higher than in the control variant. Variant 2p (Biotehagro) also had the advantage in the weight of 1000 grains (59.69 g), which is 4.85 g (8.9%) higher than in the control variant.
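As a quick check, the relative increases reported above follow directly from the absolute values. A minimal sketch in plain Python, using only the yields stated in the text, reproduces them:

```python
def relative_increase(treated, control):
    """Return the absolute and percentage increase of `treated` over `control`."""
    delta = treated - control
    return delta, 100.0 * delta / control

# Yields in centners per hectare (c/ha), as reported in the text.
corn = relative_increase(treated=85.7, control=85.7 - 1.5)       # (1.5, ~1.8%)
sunflower = relative_increase(treated=34.0, control=34.0 - 1.6)  # (1.6, ~4.9%)
print(corn, sunflower)
```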
Economic evaluation of technology options
Let us analyze the indicators of economic efficiency (Table 6) of introducing the preparations produced by "Biotehagro" LLC (variant 2) in comparison with variant 1 (control). The additional profit received from the use of the preparations is significantly higher than the additional costs.
Discussion
Analyzing the results obtained with the preparations produced by "Biotehagro" LLC (variant 2) in the production technologies for cultivating corn for grain and sunflower, we can conclude that there is a positive trend in such indicators as the length and diameter of the corn cob and the diameter of the sunflower basket. The increase in these values contributed to a yield increase of 1.5 c/ha (1.8%) for grain corn and 1.6 c/ha (5.0%) for sunflower.
The economic assessment showed that for corn for grain, profit per hectare increases by 2.1% compared to the conventional (control) scheme, and the additional profit received from the yield increase due to the use of the preparations is 839.5 rub/ha, which is 2.3 times higher than the additional cost of the preparations. For sunflower, profit per hectare increases by 4.7% compared to the conventional (control) scheme, and the additional profit received from the yield increase due to the use of the preparations is 2286.3 rub/ha, which is 2.1 times higher than the additional cost of the preparations.
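From these figures, the implied additional cost of the preparations per hectare can be recovered by simple division; a minimal check in plain Python, using only the numbers stated above:

```python
# Additional profit (rub/ha) and the reported profit-to-cost ratios.
corn_extra_profit, corn_ratio = 839.5, 2.3
sun_extra_profit, sun_ratio = 2286.3, 2.1

# Implied additional cost of the preparations per hectare.
print(corn_extra_profit / corn_ratio)  # ~365 rub/ha
print(sun_extra_profit / sun_ratio)    # ~1089 rub/ha
```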
Conclusion
Based on the results of experimental studies of various schemes for using the biological preparations and micronutrient fertilizers of "Biotehagro" LLC in the production technologies for cultivating the grain corn hybrid "Pioneer P9241" and the sunflower hybrid "Pioneer P64LL125" (Krasnodar Territory), the following was established:
• The use of the preparations provides an increase in grain corn yield of 1.8%. The variant with the Biotehagro preparations differs from the control as follows: in the "foliar dressing" operation, the zinc sulfate fertilizer is replaced by the preparation BFTiM (2 l/ha) and the micronutrient fertilizer CMS (1 l/ha), and the organomineral fertilizer Potassium Humate is replaced by Humate +7 (1 l/ha), a fertilizer based on humic acids. The additional profit amounted to 840 rub/ha, which is 2.3 times higher than the additional costs.
• The use of the preparations provides an increase in sunflower yield of 5%. In the variant with the Biotehagro preparations, foliar treatment (in the 4-6 leaf phase) is carried out with a tank mixture consisting of the biofungicide BFTiM (3 l/ha) and the liquid mineral fertilizer Helios Silicon (0.5 l/ha) with a maximum concentration of silicon in the form of silicon dioxide of a special form of processing. The additional profit amounted to 2,286 rub/ha, which is 2.1 times higher than the additional costs.
Thus, the results of the analysis provide prerequisites for including the schemes for applying the biological preparations and microfertilizers produced by "Biotehagro" LLC in the cultivation technologies for corn for grain and sunflower, and in recommendations to agricultural producers of the Krasnodar Territory, in order to improve production and reduce its cost. | 2021-08-27T17:14:38.007Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a4b106e3672339d15df4fe528d1c9131d79a049f",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/49/e3sconf_interagromash2021_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ba6d24bf6bdfcdc5775ce6660b204e03956045ff",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
173990021 | pes2o/s2orc | v3-fos-license | Testing a variable-length Cognitive Processing Therapy intervention for posttraumatic stress disorder in active duty military: Design and methodology of a clinical trial
Combat-related trauma exposures have been associated with increased risk for posttraumatic stress disorder (PTSD) and comorbid mental health conditions. Cognitive Processing Therapy (CPT) is a 12-session manualized cognitive-behavioral therapy that has emerged as one of the leading evidence-based treatments for combat-related PTSD among military personnel and veterans. However, rates of remission have been lower in both veterans and active duty military personnel compared to civilians, suggesting that studies are needed to identify strategies to improve upon outcomes in veterans of military combat. There is existing evidence that varying the number of sessions in the CPT protocol based on patient response to treatment improves outcomes in civilians. This paper describes the rationale, design, and methodology of a clinical trial examining a variable-length CPT intervention in a treatment-seeking active duty sample with PTSD to determine if some service members would benefit from a longer or shorter dose of treatment, and to identify predictors of the length of treatment needed to reach good end-state functioning. In addition to individual demographic and trauma-related variables, the trial is designed to evaluate factors related to internalizing/externalizing personality traits, neuropsychological measures of cognitive functioning, and biological markers as predictors of treatment response. This study attempts to develop a personalized approach to achieving positive treatment outcomes for service members suffering from PTSD. Determining predictors of treatment response can help to develop an adaptable treatment regimen that returns the greatest number of service members to full functioning in the shortest amount of time.
Introduction
Combat-related trauma exposures have been associated with increased risk for posttraumatic stress disorder (PTSD) and comorbid mental health conditions (e.g. Ref. [1]). Current estimates of PTSD prevalence in military personnel and returning veterans range from 7 to 20% [2], suggesting a significant need for mental health treatment of this population. Cognitive Processing Therapy (CPT) has emerged as one of the leading evidence-based treatments for combat-related PTSD among military personnel and veterans. Well-controlled clinical trials support CPT as an effective treatment for PTSD and other comorbid conditions in a variety of trauma populations, including sexual abuse survivors [3,4], veterans (e.g. Refs. [5,6]), and active duty military personnel [7,8]. However, rates of remission have been lower in veterans and active duty military personnel compared to civilians [9]. In the only published studies of CPT with active duty military, over half of service members retained their PTSD diagnosis after completion of the prescribed 12-session CPT protocol [7,8]. These findings suggest that studies are needed to identify strategies to improve upon outcomes in military populations.
There is existing evidence that varying the number of sessions in the CPT protocol based on the individual's response to treatment improves outcomes. Galovski and colleagues [10] examined a variable-length CPT protocol in a civilian population. They found that more than half of the 100 participants (58%) reached good end state with fewer sessions than the standard 12-session protocol, while 34% took longer to complete the treatment and reach good end state. Only 8% of the participants reached good end state at exactly 12 sessions. Nearly all participants (98%) lost their PTSD diagnosis following treatment.
The current paper describes the design and methodology of a clinical trial testing a variable-length CPT protocol in a treatment-seeking active duty military sample with PTSD to determine if some would benefit from a longer or shorter dose of treatment, and to identify which individuals require more, less, or the standard number of treatment sessions to reach good end state. The primary goal of the study is to improve the efficacy of CPT in this population through a variable-length treatment, specifically targeting "refractory" patients and increasing the focus on overall end-state functioning in addition to loss of PTSD diagnosis. Given that studies using veteran samples have not demonstrated the same levels of treatment success observed in civilian samples, it is conceivable that veterans and military personnel will need a longer dose (more sessions) to reach a good end state. However, there may also be some military personnel who could benefit from treatment more quickly. The study was designed to explore a range of variables that may predict length of therapy and treatment outcome. In addition to demographic and trauma-related factors, other variables include (1) factors related to internalizing/externalizing personality traits, (2) neuropsychological measures including cognitive flexibility and ability to inhibit dysfunctional cognitions, and (3) salivary cortisol and brain-derived neurotrophic factor (BDNF) as predictors of treatment response.
Research objectives and hypotheses
The primary objective of the study is to evaluate the effectiveness of CPT delivered individually until the service member reaches good end state (up to 24 sessions) and to characterize the distribution of response based on specific predictors of treatment outcome. Good end state is defined as low scores on measures of PTSD and agreement by the patient and therapist that the patient is ready to stop treatment. Hypothesis 1 states that participants will reach good end state at varying lengths of CPT treatment, and they may be characterized into early, standard, late, or nonresponders based on the length of time needed to reach good end state. Because allowing service members to receive a longer course of treatment, if needed, may improve outcomes, Hypothesis 2 states that, given additional sessions, the total percentage of participants who successfully remit from their PTSD will be greater than that found in current studies using the standard 12-session CPT protocol.
A second aim of this study is to identify pretreatment factors that account for individual differences in response to treatment for PTSD and to examine predictors of length of treatment needed to achieve treatment success. These predictors include personal factors such as demographics, trauma history, level of combat exposure and other military factors, symptom severity, comorbidities, and personality factors such as internalizing/externalizing traits. Additionally, neuropsychological variables such as cognitive flexibility and inhibition and biological markers such as salivary cortisol and brain-derived neurotrophic factor (BDNF) will be included as potential predictors of treatment response. Hypothesis 3 states that these individual characteristics will predict length of treatment needed to achieve good PTSD end state and ultimately treatment success. Specifically, those who have more problems with externalizing or internalizing symptoms, less cognitive flexibility, more problems with cognitive inhibition, and abnormal regulation of peripheral BDNF and cortisol levels will need a longer course of therapy. Additionally, because successful treatment of PTSD involves more than remission of PTSD symptoms, a final research objective is to examine secondary outcomes (e.g., work functioning, social/family functioning, aggression, health-risk behaviors) of a varied-length CPT treatment. Hypothesis 4 states that variables such as military factors, trauma history, personality characteristics, and cognitive flexibility/ inhibition will predict length of treatment and treatment success on secondary outcomes including work adjustment, aggression, social/family functioning, and health-risk behaviors.
Participants
Participants are 130 active duty U.S. military personnel age 18 or older seeking treatment for PTSD at the Fort Hood military base after deployments to or near Iraq or Afghanistan. All participants are required to have experienced a Criterion A traumatic event as defined by the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [11] that occurred during military deployment. However, the diagnosis of PTSD may be based on another, worse Criterion A event at any time in their lives. Additional inclusion criteria include diagnosis of PTSD as determined by the Clinician-Administered Posttraumatic Stress Scale for the DSM-5 (CAPS-5) and the ability to speak and read English.
Exclusion criteria are as minimal as possible in order to increase generalizability of the results. They include: current suicide or homicide risk meriting immediate crisis intervention; active psychosis; and moderate to severe brain damage (as determined by the inability to comprehend the baseline screening questionnaires). Other comorbid conditions (e.g., substance abuse, personality disorders) are not reasons for exclusion. To increase the likelihood that participants will remain in the area long enough to complete the treatment and to improve our ability to track them for follow-up assessments, additional exclusion criteria include the following: local availability of fewer than 5 months; pending Medical Board decision to separate from service; and undergoing an Army chapter, as these may affect the individual's motivation for successful treatment, may result in restricted access to the military installation to attend appointments, and may increase the likelihood of relocation.
Study design
This is a prospective within-subjects clinical trial designed to determine how much CPT treatment is needed in an active duty military sample with PTSD to reach good end state and to determine predictors of treatment response. All consented participants who meet the inclusion criteria are offered CPT immediately. Following a baseline assessment, participants meet with their therapist for an initial session to establish the index trauma on which to focus in treatment. Participants then receive individual, 50-min CPT sessions twice a week. Participants continue treatment until good end state is established (process described below) or 24 sessions are completed. Participants are required to complete treatment within 18 weeks. Once the participant completes the final therapy session, he/she will then return for follow-up assessments at 1 month and 3 months posttreatment.
Study procedures
The study is affiliated with the South Texas Research Organizational Network Guiding Studies on Trauma and Resilience (STRONG STAR), a multi-institutional and multi-disciplinary research consortium of investigators based at The University of Texas Health Science Center at San Antonio and focused on the diagnosis, treatment, and epidemiology of combat-related PTSD and co-morbid conditions. This study leverages the STRONG STAR infrastructure in several ways. First, the study is using the common data elements of STRONG STAR and its partnering network, the Consortium to Alleviate PTSD [12]. This will not only help answer the research questions posed in this study, but it also will allow the characteristics of our study sample to be compared to the characteristics of other STRONG STAR studies to ascertain the generalizability of our findings to the larger population of service members with PTSD. This study is employing therapists and independent evaluators trained as part of the consortium and is using the monitoring and quality assurance procedures established for the consortium studies to ensure the fidelity of therapy and independent evaluations in the proposed study. Additionally, the study is overseen by the Data Safety and Monitoring Board (DSMB) established to ensure the appropriate clinical safety monitoring of study subjects participating in the clinical trials conducted by the consortium. In summary, this study is taking full advantage of the structures and processes in place through the STRONG STAR Consortium for a fraction of the expense of setting up such infrastructure de novo.
Recruitment and Screening. The study was reviewed and approved by Institutional Review Boards at The University of Texas Health Science Center at San Antonio (UTHSCSA), VA Boston Healthcare System, and Duke University Medical School. The Carl R. Darnall Army Medical Center (CRDAMC) at Fort Hood deferred its review to the UTHSCSA IRB. The U.S. Army Medical Research and Materiel Command Human Research Protection Office also reviewed and approved the study. Participants are recruited through direct referrals from health care providers at Fort Hood using the CRDAMC electronic medical system. Participants also may self-refer in response to recruitment flyers and pamphlets distributed to health care providers and posted in locations on Fort Hood frequented by service members. Research staff field incoming phone calls and walk-ins and discuss the study treatment and eligibility requirements with the interested person. If a potential participant meets basic eligibility criteria, he or she is consented into the study and completes the baseline assessment.
Assessment Procedures and Measures. All diagnostic clinical interviews are conducted by a masters- or doctoral-level independent evaluator (IE). IEs participate in four stages of training: relevant readings, didactic instruction with an expert in the field, mock interviews, and co-rating exercises with previously taped assessments. After the completion of training, IEs engage in weekly calibration exercises throughout the study to ensure that they continue to meet the high-quality standards of the consortium.
The assessment measures consist of a core battery that has been used in STRONG STAR Consortium studies to fully assess and appreciate complex symptomatology in this patient population [12] plus a number of study-specific measures.
Clinician Administered PTSD Scale for the DSM-5 (CAPS-5). The primary outcome is PTSD symptomatology as assessed by the CAPS-5 [13], a semi-structured interview that assesses PTSD symptom severity and diagnosis. Symptoms are rated on a scale from 0 (absent) to 4 (extreme/incapacitating). A total symptom severity score is calculated as the sum of the 20 symptom items, with a range of 0-80. The CAPS-5 is administered at baseline and at 1 month and 3 months following treatment.
PTSD Checklist for DSM-5 (PCL-5). Self-reported PTSD symptoms are assessed using the PCL-5 [14] at baseline, at the follow-up assessments, and every two sessions during treatment. The PCL-5 is a 20-item self-report measure designed to assess PTSD symptoms as defined by the DSM-5, based on the original PCL created for DSM-IV [15]. Scoring is based on how much the patient has been bothered by the symptoms in the past month (or weekly during treatment) on a scale from 0 (not at all) to 4 (extremely). Several studies have reported the psychometric characteristics of the PCL-5 in veteran [16] and active duty military [17] samples.
Additional study outcomes and potential predictors of interest are organized into the following areas: (1) history and personality, (2) deployment stress, adversity, and trauma, (3) psychological symptoms, (4) cognitive flexibility and inhibition, (5) functional impairment (alcohol use, aggression, health, sleep, etc.), (6) other key mediators and moderators, and (7) biomarkers. The full assessment battery, including diagnostic assessments and secondary measures, is administered prior to treatment, and 1 month and 3 months after the final therapy session. Participants also complete several self-report measures weekly during treatment. The measures and schedule of assessments are listed in Table 1.
Neurocognitive Assessments. The participants complete a neuropsychological test battery at baseline and 1 month posttreatment. The battery is estimated to take 1.5 h, depending on individual participant performance. All cognitive testing appointments are scheduled at the same time, at 9 A.M., to control for any circadian rhythm effects on the biological specimens drawn in coordination with the neurocognitive assessments.
Cambridge Neuropsychological Test Automated Battery (CANTAB). The CANTAB is a standardized, computer-based battery of neuropsychological tests administered to subjects using a touch-screen computer. The CANTAB consists of a series of interrelated, computerized, nonverbal tests of memory and learning, working memory and executive function, visual memory, attention and reaction time, semantic/verbal memory, decision making and response control [18]. It has established validity in neurodegenerative diseases, neurosurgical cases, psychiatric disorders, and acquired pathology [18][19][20]. The battery contains measures of attention, information processing speed, executive function, memory, decision making, cognitive flexibility, and impulsivity. A detailed description is available at http://camcog.com/cantabtests.asp.
(Residue of Table 1: under "Other Key Mediators and Moderators," the schedule of assessments lists the Trauma-Related Guilt Inventory (TRGI) [73], Inventory of Complicated Grief (ICG) [74], Childhood Trauma Questionnaire (CTQ) [75], DRRI-2 Support from Family and Friends subscale [55], Post-Deployment Support and Unit Social Support [55], University of Rhode Island Change Assessment Scale, Trauma Version (URICA-T) [76], Readiness Ruler [77], and Working Alliance Inventory (WAI); the administration timepoints are given in the table.)
Delis-Kaplan Executive Function System (D-KEFS). The D-KEFS [21] is a set of standardized tests for comprehensively assessing higher-level cognitive functions, referred to as executive functions, in both children and adults (aged 8 to 89). In this study, three of the D-KEFS tests are being administered. The D-KEFS Sorting Test is similar to the Wisconsin Card Sorting Test; it measures a number of executive function processes, including problem-solving, inhibition, and flexibility of thinking and behavior. Alternate forms allow for repeated measures to assess changes in these constructs following treatment. The D-KEFS Color-Word Interference Test is similar to the Stroop Interference Test; it measures the ability to inhibit an overlearned verbal response in order to generate a conflicting response. Additionally, an inhibition/switching condition evaluates both inhibition and cognitive flexibility and is used to evaluate the ability to inhibit perseverative verbal responses. This test is repeatable to assess changes following treatment.
Attentional Network Task (ANT). The ANT [22] is a neurocognitive computerized measure designed to evaluate alerting, orienting, and executive functioning in visual processing. Efficiency of the three attentional networks is assessed by measuring how response times are influenced by alerting cues, spatial cues, and flankers. Research using the ANT has shown specific deficits in executive functioning among participants meeting criteria for PTSD relative to participants with similar trauma histories but no PTSD and those with a minimal history of trauma [23]. The two other attentional variables have not been shown to be sensitive to PTSD; therefore, they are serving as control conditions to demonstrate that there is a relationship between specific cognitive processes and PTSD treatment/outcome as opposed to a more general effect on all aspects of attention.
Biological Specimens (salivary cortisol and plasma BDNF). A series of six saliva samples are obtained prior to and following cognitive testing conducted at baseline and 1 month following treatment. Participants are provided with saliva cryovials prior to the appointment and instructed to collect saliva samples by passive drool at three time points prior to their appointment: (1) before bedtime the night before, (2) at wake-up, and (3) 30 min after waking on the day of the appointment. Upon arrival and following a 10-min check-in procedure, saliva samples are collected followed by blood collection (10 ml) by venipuncture. Saliva and blood samples are also collected immediately and 30 min after cognitive testing.
Treatment Description. CPT [24,25] is a highly structured, manualized protocol in which clients learn the skills of recognizing and challenging dysfunctional cognitions, first about their worst traumatic event and then about the meaning of the traumatic event in shaping current beliefs about self and others. In CPT, there are practice assignments, typically progressive worksheets to complete each day between sessions. The following content is discussed in the sessions:
Session 1: The initial session of CPT is psycho-educational; symptoms of PTSD are explained within a cognitive and information-processing theory framework. At the conclusion of this session, patients are asked to write an impact statement about the meaning of the traumatic event, as well as beliefs about why the event happened and the impact of the event on beliefs related to areas that are often impacted by trauma (i.e., safety, power/control, trust, esteem, and intimacy).
Session 2: In Session 2, the impact statement is read and discussed with a focus on identifying problematic beliefs and cognitions (called "stuck points"), which are noted on a Stuck Point Log. The therapist introduces the Activating Event - Belief - Consequence (A-B-C) Worksheet with an explanation of the relationship between events, thoughts, and subsequent emotions. Patients are then taught to identify the connection between events, thoughts, and feelings and asked to practice this skill for homework.
Sessions 3-4: Sessions 3 and 4 include a review of the self-monitoring homework and a discussion of stuck points. Socratic questioning is first used to identify dysfunctional thoughts about the worst traumatic event (index event), such as erroneous self-blame. Participants are then reassigned daily A-B-C Worksheets about the index event. In Session 4, participants are given the Challenging Questions Worksheet, which examines single beliefs related to the trauma through a series of questions.
Sessions 5-6: In Session 5, the Challenging Questions Worksheet is reviewed and the Patterns of Problematic Thinking Worksheet is introduced. Session 6 focuses on the identification of patterns of problematic thinking through both homework review and the introduction of the Challenging Beliefs Worksheet, which incorporates all of the other worksheets and adds the generation of a more balanced, factual alternative thought to practice. Participants are asked to use the worksheets daily with everyday events and to challenge trauma-related self-blame cognitions.
Sessions 7-12: In Sessions 7-12, over-generalized beliefs are challenged in the five areas of safety, trust, control, esteem, and intimacy as they relate to self and others. Treatment gains are consolidated in the final sessions.
Sessions 13-24 (if needed): Participants continue to challenge remaining stuck points using additional Challenging Beliefs Worksheets for the remainder of the treatment. In the event that significant participant crises occur during the course of treatment, up to two sessions are permitted to focus on the crisis situation. Following procedures used in other studies of CPT [8,10,24,26], when a participant experiences a significant stressor that may interfere with treatment, the participant and therapist may discuss postponing CPT content and instead focus the session on addressing the current stressor. CPT is then resumed in the following session.
Determination of Good End State. Participants complete the PCL-5 every two sessions to assess for good end state, which is defined as a score of ≤19 on the PCL-5 and agreement by the patient and therapist that the patient is finished with therapy. Good end-state score on the PCL-5 (≤19) was suggested by Weathers and Schnurr (personal communication, November 26, 2014) and has been used to define stable remission in a large multi-site ongoing trial of CPT in the Department of Veterans Affairs (VA) [26]. If the participant has met the PCL-5 cutoff, the therapist and patient discuss whether the patient is finished with therapy or if he/she should continue. If the patient and therapist decide to stop treatment at that time, the patient returns in 1 week for a final session. If the patient still meets good end state (PCL ≤ 19) at this session, it is the final session. He or she will review the final impact statement and receive and review any CPT materials not covered in the treatment received. If the participant's PCL score no longer meets good end state at this session, he/she continues with treatment until the PCL score returns to good end state and the therapist and participant agree to end treatment. The assessment of progress continues every two sessions. Participants can receive up to 24 sessions, which is twice the length of the usual protocol. In these additional sessions, the patients will continue practicing skills using protocol handouts, the stuck point log, and Socratic dialogue learned in the 12-session protocol until good end state is achieved. No new worksheets or therapy techniques will be added.
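To make the decision logic concrete, the following is a minimal sketch of the good-end-state stopping rule described above (Python; the session and week bookkeeping and the `patient_and_therapist_agree` flag are hypothetical stand-ins for the clinical procedures, not study software):

```python
GOOD_END_STATE_PCL5 = 19   # PCL-5 cutoff suggested by Weathers and Schnurr
MAX_SESSIONS = 24          # twice the standard 12-session protocol
MAX_WEEKS = 18             # treatment window

def can_be_final_session(session, week, pcl5_score, patient_and_therapist_agree):
    """Decide whether the current session may be the final one.

    Good end state requires a PCL-5 score <= 19 AND agreement between
    patient and therapist; per protocol the score is re-checked at a
    confirmation session one week later. Treatment is capped at 24
    sessions or 18 weeks, whichever comes first.
    """
    if session >= MAX_SESSIONS or week >= MAX_WEEKS:
        return True  # protocol limit reached (non-responder)
    return pcl5_score <= GOOD_END_STATE_PCL5 and patient_and_therapist_agree
```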
Training and Supervision of Therapists. The therapists are trained to conduct the therapy by the last author or another qualified CPT trainer following established procedures used by the STRONG STAR Consortium and in other studies [8,24]. Video recordings of treatment sessions are reviewed by designated CPT supervisors/consultants, and all therapists are required to meet therapy fidelity requirements prior to seeing consented study cases. During the data-collection phase, therapists continue to receive weekly supervision or consultation on their cases by project staff. They have local back-up as well as case consultation with the overall study principal investigators on an ongoing basis as needed. All therapists participate in a weekly CPT therapist teleconference with the CPT consultants to review all new and ongoing treatment cases.
Fidelity Monitoring. Treatment adherence and competence is determined by independent raters who are not otherwise involved in the project. The raters have served on prior CPT studies as fidelity raters. The raters determine adherence to CPT and competence in delivering the therapy by reviewing videotapes of treatment sessions and completing standardized rating forms developed and used in prior studies of CPT [8,24]. Ten percent of sessions are randomly selected for rating by a computer program implemented by a staff member not otherwise associated with treatment, and 20% of the rated sessions are rated by both raters for determination of inter-rater reliabilities (kappas).
Data analytic strategy
Statistical Analysis Plan. All participants will be included in analyses (intention to treat) regardless of the amount of treatment they receive. All participants are asked to complete assessments 1 month and 3 months after treatment ends regardless of the outcome. Our primary analysis plan is largely based on the approach used by Galovski [10,27]. Participants will be classified into one of four outcome groups on the basis of end-state PCL-5 scores and the length of treatment: (1) Early Responders: those who reach good end state with fewer than 12 sessions; (2) Standard Responders: those who reach good end state in exactly 12 sessions; (3) Late Responders: those who reach good end state with 13 sessions or more; (4) Non-Responders: those who complete 24 sessions or reach the 18-week treatment window without reaching good end-state scores. Participants who leave treatment prior to session 24 without reaching good end state will be considered dropouts. However, another consideration is that the nature of active duty military service may require some participants to leave treatment for reasons out of their own control (e.g., deployment or permanent change of duty location). Participants who leave the study under such circumstances will be defined as "pull-outs" rather than "drop-outs," and may be excluded from some analyses because their true end state is not known.
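A compact sketch of the outcome-group classification just described (Python; the argument names are illustrative, not the study's actual data dictionary, and the 18-week cap is folded into the session count for brevity):

```python
def classify_outcome(sessions_completed, reached_good_end_state,
                     left_for_military_reasons=False):
    """Assign a participant to an outcome group based on treatment length
    and whether good end state (PCL-5 <= 19 plus agreement) was reached."""
    if reached_good_end_state:
        if sessions_completed < 12:
            return "early responder"
        if sessions_completed == 12:
            return "standard responder"
        return "late responder"
    if sessions_completed >= 24:  # or the 18-week window elapsed
        return "non-responder"
    # Left before session 24 without reaching good end state:
    return "pull-out" if left_for_military_reasons else "dropout"
```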
We will report the proportions classified into each of these outcome categories and descriptive statistics such as the average, range and distributions of the number of sessions completed in each subgroup with 95% confidence limits. The outcome groups will then be compared on baseline descriptive, service history, and clinical characteristics to identify prognostic variables using conventional methods such as analysis of variance and chi-square. The trajectories of symptoms and secondary outcome measures over time in each of these subgroups will be described and compared with linear and generalized mixed effects regression models with repeated measures, and the differences between subgroups characterized in terms of standardized effect sizes. As Galovski and colleagues [10] noted, these outcome subgroups are defined by the degree and timing of symptom improvement, so inferential statistical tests of those measures are tautological. However, such comparisons are valid for the secondary outcome measures, which may not be highly correlated with primary symptom outcomes and are not used to make the outcome subgroup classifications.
Predictors of time to recovery will be analyzed using survival analysis methods. The Kaplan-Meier (product limit) survival curve estimates the proportions of participants reaching good end state at each session and gives cumulative estimates of proportions responding over time. Nobler et al. [28] compared four alternative data analysis methods for the study of time to recovery and concluded that survival analysis was most powerful. Proportional hazard survival regression will be used to examine baseline characteristics as predictors of speed of recovery [29]. We will also explore the value of analyzing dropout and recovery as competing risks, treating dropout and recovery as two clinically meaningful events [30].
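As a sketch of the Kaplan-Meier step, the widely used `lifelines` package can estimate the cumulative proportion reaching good end state by session; the variable names below are illustrative:

```python
from lifelines import KaplanMeierFitter

# sessions_to_event: sessions until good end state (or until censoring);
# reached: 1 if good end state was reached, 0 if censored (dropout,
# pull-out, or the 24-session/18-week cap).
kmf = KaplanMeierFitter()
kmf.fit(durations=sessions_to_event, event_observed=reached)

# The cumulative proportion who have reached good end state by each
# session is one minus the estimated survival function.
recovery_curve = 1.0 - kmf.survival_function_["KM_estimate"]
print(recovery_curve)
```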
We will also use growth mixture modeling, or Latent Transition Analysis (LTA) to define outcome groups. LTA does not depend on arbitrary cutoffs. LTA is a data-driven type of cluster analysis that assumes that the participants are drawn from a heterogeneous population comprised of underlying unmeasured groups of individuals who share similar trajectories over time. Subgroups are defined using growth model parameters such as intercepts and linear or nonlinear trends. As implemented in SAS PROC TRAJ [31], for example, the user specifies the hypothesized number of latent groups, the growth function that describes the trajectory (up to a third degree polynomial), and the error distribution (e.g., normal, Poisson, ZIP, logit). Fitting multiple models with different specifications permits selection of a "best" model based on the Bayesian Information Criterion. LTA makes no a priori assumptions about the patterns of symptoms over time. Information criteria are used to guide the decision as to how many subgroups exist. Individual participants are assigned to the most likely subgroup on the basis of estimates of probabilities of subgroup membership, which then can be compared using conventional analysis methods.
Finally, we will examine differences between outcome subgroups on baseline characteristics using recursive partitioning or classification and regression trees (CART) [32]. CART is an empirical, data-driven, computer-intensive methodology that exhaustively searches a set of variables to build (or "grow") a decision tree for classification of patients. CART yields a simple prediction algorithm that is intuitive and easily understood and can be applied without complex computation. It can discover interactions among predictors in a way that is not possible with conventional parametric regression models. Accuracy of prediction in CART is evaluated with cross-validation, holding out some participants in the tree-building process and then using the omitted cases to evaluate the accuracy of prediction.
Biomarker Analysis. BDNF levels in plasma will be analyzed using the Milliplex® Map Human Magnetic Bead Panel 3 immunoassay for BDNF in saliva and blood, in duplicate, as per the manufacturer's protocol (EMD Millipore, MI). The samples will be analyzed using a Luminex Flex Map 3D (Luminex Corporation, Austin, TX). BDNF concentrations (in pg/ml) will be calculated from the median fluorescence intensity (MFI) on the basis of the standard curve. Free cortisol levels in saliva will be determined in triplicate using the Salimetrics® Cortisol Enzyme Immunoassay Kit, a competitive immunoassay specifically designed and validated for the quantitative measurement of salivary cortisol (Salimetrics LLC, State College, PA, USA).
Genomic DNA will be isolated from whole blood using a Tiangen DNA isolation kit. Genotyping of the BDNF Val66Met single nucleotide polymorphism (SNP) will be carried out using the TaqMan SNP Genotyping Assay on an ABI PRISM 7900 sequence detection instrument with SDS 2.0 software. For quality control, all genotypes will be determined blindly during the genotyping process, and the genotyping assays will be repeated to make sure that the results are concordant. The Val66Met SNP will be analyzed using Hardy-Weinberg equilibrium testing and allele and genotype frequency analysis. The association of Val66Met genotypes with the selected cognitive domains will be compared between patients with the Met allele (Val/Met + Met/Met) and those without the Met allele (Val/Val).
Sample size. For an analysis of variance comparing four groups of roughly equal size, power with a total N = 130 is 0.80 if Cohen's f = 0.30 [33], slightly larger than his convention for a "medium" effect. The survival regression analysis has power = 0.80 to detect a hazard ratio of 1.75, representing a difference in recovery rates of about 15-20%.
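The ANOVA power figure can be reproduced with standard software; for instance, a quick check with `statsmodels` (the inputs below are the values reported above):

```python
from statsmodels.stats.power import FTestAnovaPower

# Power for a one-way ANOVA with 4 groups, total N = 130, Cohen's f = 0.30.
power = FTestAnovaPower().power(effect_size=0.30, nobs=130,
                                alpha=0.05, k_groups=4)
print(round(power, 2))  # approximately 0.80
```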
The bulk of potentially consequential missing data occurs when participants drop out and data are missing from that point on. Typically, more complex analyses of missing data patterns are not needed. Individual forms or entire assessments are occasionally missed, but that is generally reasonably attributed to extraneous factors (e.g., illness, child care). Dropout is one of the outcome categories into which participants may be classified, and examination of characteristics associated with dropout, including baseline descriptive and clinical characteristics and early outcome trajectories prior to dropout, is a primary aim of the study. With respect to missing data, likelihood-based methods of statistical analysis, such as those implemented in the mixed effects regression modules of the popular statistical libraries, yield valid estimates under the commonly made assumption that data are missing at random.
Discussion
Given the unique nature of combat-related PTSD in active duty military and returning veterans, there is an urgent need to identify effective treatments for this condition in this population. Current and previous versions of the VA/DoD Clinical Practice Guideline for the Management of Posttraumatic Stress Disorder and Acute Stress Disorder [34] have included CPT as one of the first-line recommended treatments for PTSD, and numerous clinical trials have demonstrated its efficacy in a range of populations. Although CPT has been shown to be efficacious in active duty personnel and veterans [5,8,24,35], the effects have been smaller than in civilian samples, with 49%-67% experiencing a meaningful symptom reduction and one third losing the diagnosis of PTSD [9]. However, a recent meta-analysis indicates that CPT has the highest effect size (mean ES = 1.33) of existing evidence-based treatments for PTSD in military personnel and veterans [36]. This suggests that there is benefit to treating military personnel with PTSD using CPT, while it remains important to continue seeking methods to improve its efficacy. Tailoring treatment using a variable-length protocol could allow patients previously deemed "refractory" to reach good end state by allowing more sessions to be added to the standard treatment protocol. Conversely, patients who are rapid responders may end treatment prior to receiving 12 sessions and return to their duties and lives more quickly.
This study is the first to examine a variable-length treatment for PTSD in an active duty military sample. We seek to establish a method for modifying standard CPT in this population, specifically by formalizing the assessment of good end state and determining the appropriate criteria for stopping treatment early or continuing treatment longer. Importantly, we also will examine predictors of treatment outcome. Establishing factors that indicate whether a service member is likely to respond to CPT-and whether he or she might be an early responder or may require additional sessions to achieve full benefit-could assist in treatment planning for this population.
Previous studies that have examined predictors of treatment response in PTSD have typically examined patient demographic variables [37,38]. This study is unique in that it includes an examination of potentially important but understudied factors that may contribute to treatment outcome and length of time to achieve good PTSD end state. Recent studies suggest that the internalizing/externalizing model of personality and comorbidity may be relevant to understanding patterns of posttraumatic psychopathology and their links to treatment outcome [39][40][41][42]. Based on this research and theory, we hypothesize that individual differences in internalizing and externalizing psychopathology will be associated with poorer treatment outcome, but for different reasons. Internalizing (which is also associated with the tendency to ruminate about past traumas, failure experiences, etc.) is expected to predict deficits in cognitive flexibility and inability to inhibit, which in turn will be associated with longer or poorer response to treatments that require the ability to change one's thinking about a past traumatic experience. In contrast, higher levels of externalizing will be associated with deficits in cognitive flexibility and inhibition that may lead to impulsivity, aggression, or substance abuse.
Additionally, this study is the first to include neuropsychological predictors of response to CPT. The inclusion of these assessment batteries will allow for the examination of potential mechanisms for change in CPT and can inform which patients could most benefit or may have difficulty with the cognitive focus of the therapy. Over the past two decades, neuropsychological studies of patients diagnosed with PTSD have reported impairments in executive functions, including cognitive flexibility (e.g. Ref. [43]). Cognitive flexibility is the capacity to shift one's train of thought and action according to changing demands of the environment and new information and is a facet of frontally mediated cognitive control/executive functioning abilities [44]. It may also include the ability to inhibit information that is incorrect. In adults, it is a trait-like ability indexed by performance on well-established neuropsychology tests. Changing the meaning of the traumatic experience and abandoning maladaptive trauma-related cognitions in favor of new more adaptive ones are fundamental components of the change process for CPT. The ability to incorporate new information about the traumatic experience and inhibit over-learned cognitive and behavioral responses associated with it may be, at least in part, dependent on the patient's capacity for cognitive flexibility and ability to inhibit dysfunctional thoughts. Specifically, patients who are cognitively inflexible will often perseverate in unproductive lines of reasoning that make them difficult to treat. Accordingly, we hypothesize that pretreatment individual differences in this capacity will predict treatment length and outcome. Evaluating these variables as pretreatment predictors of outcome and length of response to treatment may be an essential first step toward the future development of treatment matching and treatment combination algorithms that would better address the unique needs of individual patients, including those who are refractory in response to treatment-as-usual.
The inclusion of biological measures as predictors of treatment response is also a novel aspect of the study. Although BDNF and salivary cortisol have been implicated in the traumatic stress response and the response to treatment [45][46][47], to date there are no published trials examining these biomarkers in the context of CPT. PTSD risk is associated with the BDNF Val66Met SNP and BDNF overexpression [47]. Previous studies have also shown that the Met-66 allele of BDNF, which results in lower activity-dependent secretion, predicts poor response to exposure therapy in PTSD and impaired fear extinction in healthy controls [45]. The BDNF Val66Met polymorphism moderated the relationship between PTSD and fear extinction learning, such that poorer fear extinction learning was associated with greater PTSD symptom severity (and PTSD diagnostic status) in individuals with the low-expression Met allele [46]. On the other hand, Rauch and colleagues [48] reported that increased cortisol response to a personal trauma script prior to PTSD therapy and reductions in cognitive symptoms of PTSD were significantly and uniquely related to reductions in the core symptoms of PTSD in Prolonged Exposure (PE) therapy. In another study, they reported that low treatment responders showed greater increases in salivary cortisol output over the course of PE treatment, indicating that increases in hypothalamic-pituitary-adrenal axis reactivity over the course of psychotherapy may be associated with worse treatment response [49].
In addition to examining changes in PTSD and other mental health symptomatology following CPT, this study also seeks to assess improvements in functioning domains beyond psychopathology. Few published studies to date have examined the full range of psychosocial functioning outcomes following PTSD treatment. Several secondary analyses of previous CPT trials have shown improvements in symptoms such as physical health and sleep, and psychosocial functioning in the areas of work, family, and social/leisure activities [50,51]. However, the samples in these studies were comprised of female civilian assault victims. It remains to be seen how psychosocial functioning and physical health outcomes may be affected by CPT in an active duty military sample. Furthermore, health risk behaviors such as substance abuse and aggression also may be important outcomes to examine in this active duty population. No clinical trials to date have reported changes in these behaviors following CPT treatment. The current study includes an extensive battery of secondary measures to assess the effect of variablelength CPT on a wide range of functioning outcomes.
This study challenges the "one-size-fits-all" approach to trauma treatment and attempts to develop a more tailored and personalized approach to achieving positive treatment outcomes for service members suffering from PTSD. Testing a variable-length version of CPT will determine if increasing the length of the treatment results in a greater number of service members achieving good end state. It will also explore whether some service members may reach good end state in a shorter amount of time, allowing them to return to duty and resume their lives more quickly. This study seeks to further the knowledge of precision medicine in the field of PTSD treatment by examining potential predictors of CPT treatment response, including neuropsychological and biological factors. Results of this study may help guide treatment matching, first through a greater understanding of which characteristics make someone most likely to benefit from CPT, and second, by illuminating how the course of treatment might be shortened or lengthened to optimize outcomes for particular patients.
Role of the funding source
The funding sources had no involvement in the study design, the collection, analysis and interpretation of data, the writing of this manuscript, or the decision to submit this manuscript for publication.
Conflicts of interest
None to report.
Disclaimer
The views expressed in this presentation are solely those of the authors and do not reflect an endorsement by or the official policy of the U.S. Army, the Department of Defense, the Department of Veterans Affairs, or the U.S. Government. | 2019-06-07T21:13:35.176Z | 2019-05-23T00:00:00.000 | {
"year": 2019,
"sha1": "d959ba93f4067360356cce2c614a26d0c67cac1e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.conctc.2019.100381",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb6cc41561caa9aff63f4aef9c9799216c4722db",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226479365 | pes2o/s2orc | v3-fos-license | Density and Time based Traffic Control System using Video Processing
Traffic is a serious issue that every country faces due to the growth in the number of vehicles. One strategy to address the problem is to develop a smart traffic control framework that computes traffic density using real-time video and image processing techniques. The idea is to control the traffic by determining the traffic density on each road and adjusting the traffic signals intelligently using this density information. In this paper, an automated system based on the processing of real-time video is proposed for detecting vehicles and counting them. The system consists of several stages, including vehicle detection and signal variation based on density. The captured video is converted into frames, which are pre-processed for object detection using a Haar cascade classifier; the count of detected objects is then used to obtain the density and adjust the signal accordingly. The density counting algorithm works by comparing the real-time frames of the live video with a reference image and by searching for vehicles only in the region of interest (i.e., the road area). The computed vehicle density can be compared across the directions of traffic in order to control the traffic signals in a smarter and more efficient way.
Introduction
Traffic congestion has become a difficult problem in urban areas. The main reason is the increase in the urban population, which leads to heavy vehicular travel and, in turn, to congestion. Congestion raises the cost of transportation through wasted time and additional fuel use [3]. Consider, for example, an ambulance carrying a critical patient: if the ambulance gets stuck in heavy traffic, there is a high chance that the patient cannot reach the hospital on time. It is therefore important to design a smart traffic system that controls traffic intelligently to avoid accidents, collisions, and traffic jams [7][8]. A common cause of congestion in developing countries is fixed traffic signal control, which disrupts the traffic flow. For example, if one lane has less traffic and another lane has more traffic, yet the green-light duration is the same for both, problems arise; giving the lane with higher traffic density a longer green signal than the lane with lower density would solve this. Another proposed technique controls the traffic signal using image processing: a reference picture containing no (or few) vehicles is selected, real-time pictures are matched against that reference picture, and the traffic lights are controlled based on the degree of matching, with the image matching performed using edge detection [3]. In this paper we propose density-based counting of vehicles, which provides accurate information for signal decision making. The paper is organized as follows: Section II reviews the literature, Section III presents the proposed system, Section IV covers the implementation, Section V presents the results, and Section VI concludes the paper, followed by the key references used for the work.
Review of literature
The proposed system aims to provide a comprehensive application that dynamically manages traffic light timing based on the current vehicle count. A Gaussian probabilistic model is used for the classification of vehicles, and a Kalman filter is used to detect the vehicles at the junction; the traffic light timer is then adjusted accordingly. This system improves traffic management significantly by calculating accurate vehicle density on the roads. It takes videos of all the traffic lanes at a junction as input from the local storage of the computer and compares the number of cars in each lane; the lane with the highest density is given priority. The scenario of starvation of a particular lane is also taken into consideration in the design of the system [1].
There are plenty of proposed techniques for building an intelligent traffic framework, such as fuzzy-based controllers and morphological edge detection. One such technique is based on traffic density measurement, relating the live traffic to a reference image. Another technique proposes an intelligent traffic system operating on four lanes, which also addresses the identification of emergency vehicles within a restricted scenario. A further technique, based on neural networks, analyzes traffic videos and classifies the vehicles and their density. The technique proposed here is based on calculating the traffic load by comparing two pictures [2].
This work discusses camera-based video surveillance capabilities for tracking vehicles across diverse and changing road environments, including vehicle detection. The system is intended to monitor road and motorway safety and can detect illegal vehicle turns. It is built using a range of software tools, with road intersections as the main focus of development. The microcontroller used is an Arduino Uno, and the algorithm used is Canny edge detection. The system successfully reduces vehicle crowding and waiting time at the traffic signals. Actual traffic images captured with the camera are coupled with the microcontroller. The work estimates the number of vehicles present, and the density is determined from the total count. The aim is a system that uses a camera to act on vehicle density, i.e., vehicle counting, and it shows that video processing is a good option for calculating traffic density using OpenCV. Density-based signal control is implemented with OpenCV video handling, and blob measurement is used to track the density of vehicles from the live video. Traffic signal manipulation is achieved through the count of vehicles detected by the system, reducing traffic congestion at the intersections [3] [9].
Problem statement
Signals are allocated a fixed time and operate according to that time; the problem is that even if there are no vehicles in a lane, its signal will turn green for the statically allocated duration without any reason. We propose a system in which the signal lights are manipulated according to the density of vehicles in each lane: the lane with the larger number of vehicles is given preference. Such systems have previously been implemented using sensors, but we substitute video processing for the whole sensing stage. This substitution lowers cost, as there is no need to buy any sensors.
Proposed system
The proposed system is shown in Figure 1, and its explanation is given below.
Image Acquisition
The system starts with an image acquisition process in which live video is captured and processed by a camera mounted on the signal stand. The video is captured lane-wise; the camera shifts from one lane to another after a specific time interval.
Image cropping
The frames extracted from the video are processed further. The second step is image cropping, which focuses on the region where the vehicles are present, surrounded by noise and other data. Cropping yields the region of interest (ROI) for the system, which helps to achieve higher accuracy.
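As a rough illustration of this step, the sketch below crops a fixed ROI from one frame using plain NumPy slicing on an OpenCV image; the file name and ROI coordinates are placeholders, not values taken from the paper.

```python
import cv2

frame = cv2.imread("frame.jpg")      # one frame extracted from the lane video
x, y, w, h = 100, 200, 640, 360      # hypothetical ROI covering the lane area
roi = frame[y:y + h, x:x + w]        # NumPy slicing crops the region of interest
cv2.imwrite("roi.jpg", roi)          # cropped frame passed to the next stage
```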
RGB to grayscale transformation
RGB images contain a lot of data and take time to process. To minimize this processing time, the RGB color images are converted to grayscale before being passed to the next stages. The equation for RGB-to-grayscale conversion is: Gray = 0.2989 * R + 0.5870 * G + 0.1140 * B
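A minimal sketch of this conversion, assuming frames are loaded with OpenCV (which stores channels in BGR order): cv2.cvtColor applies essentially the same weighted sum as the equation above, and the manual computation is shown for comparison.

```python
import cv2
import numpy as np

roi = cv2.imread("roi.jpg")                    # cropped frame from the previous step
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)   # built-in weighted-sum conversion

# Manual version of the same equation (note BGR channel order in OpenCV):
b, g, r = cv2.split(roi.astype(np.float32))
gray_manual = (0.2989 * r + 0.5870 * g + 0.1140 * b).astype(np.uint8)
```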
Threshold
Thresholding classifies the pixel values of a grayscale image, whose pixels range from 0 to 255. When you threshold an image, you classify these pixels into groups by setting an upper and a lower bound for each group. Thresholding can be performed by local as well as global methods, and it is one of the methods used to suppress the background and obtain a clear foreground. In this paper, Otsu's thresholding is applied; it converts the grayscale image to binary form based on the automatically selected threshold value.
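A hedged sketch of Otsu's thresholding as described above: with the THRESH_OTSU flag set, OpenCV ignores the supplied threshold of 0 and selects the optimal value from the image histogram, returning it as `t`.

```python
import cv2

gray = cv2.imread("gray.jpg", cv2.IMREAD_GRAYSCALE)
t, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"Otsu threshold selected: {t}")   # binary image goes to the contour step
```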
Contour
The binarized image obtained from the thresholding stage is passed to the contour step to define the contours of the detected objects. A contour can be described simply as a curve joining all continuous points that share the same shade or intensity. Contours are helpful for object detection and recognition.
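A small sketch of the contour step; the retrieval mode and approximation method below are common OpenCV defaults, not choices specified by the paper. (In OpenCV 4.x, findContours returns two values; OpenCV 3.x, which the paper uses, returns three.)

```python
import cv2

binary = cv2.imread("binary.jpg", cv2.IMREAD_GRAYSCALE)   # thresholded image
frame = cv2.imread("roi.jpg")                             # original cropped frame

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(frame, contours, -1, (0, 255, 0), 2)     # green outlines
print(f"Detected {len(contours)} contour(s)")             # rough object count
```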
Calculate traffic density
In this step we calculate the density, i.e., the number of vehicles present in the lanes viewed by the camera, using the Haar cascade algorithm; this count decides the changing of the signal color and thus manages the traffic. The input video provided to the system is checked and the frames are extracted from it. Noise and other confounding objects, such as shadows, are filtered out. The extracted frames are converted from RGB to grayscale. After this, the contours on the frames are detected and the objects are successfully detected using the Haar cascade algorithm. In more detail (a minimal code sketch of this pipeline follows the list):
1. The frames are extracted from the input video.
2. The surviving noise in the video, which includes everything other than the vehicles, is removed to prevent spurious detections and to simplify the video and the identification of the vehicles.
3. Once captured, the video is in colorful RGB format; for processing, it needs to be in grayscale format, as required by the algorithm and for better processing. Hence, in this step the RGB video is converted to grayscale.
4. Contours, essentially boxes formed around a detected vehicle, are drawn to show that the vehicle (or any other detected object) has been found, and they give the system a count of the detected objects.
5. Finally, once the objects have been contoured in the previous step, they are detected with minimum error; in our case the system detects the vehicles as objects using the Haar cascade algorithm.
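The following is a minimal sketch of the whole density pipeline, assuming a pre-trained Haar cascade stored in a file named cars.xml (a hypothetical file name; the paper does not say which cascade it uses), a lane video on local storage, and illustrative detector parameters.

```python
import cv2

cascade = cv2.CascadeClassifier("cars.xml")       # hypothetical pre-trained car cascade
video = cv2.VideoCapture("lane1.mp4")             # lane video from local storage

max_count = 0
while True:
    ok, frame = video.read()                      # step 1: extract frames
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # step 3: RGB -> grayscale
    gray = cv2.GaussianBlur(gray, (5, 5), 0)         # step 2: suppress noise/shadows
    cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    max_count = max(max_count, len(cars))            # steps 4-5: count detections

video.release()
print(f"Lane density: {max_count} vehicles")
```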
The flowchart below shows the system architecture. When the system is turned on, the video processing module opens and detects the number of vehicles in the particular lane the camera is facing. If vehicles are detected, the module returns the count of vehicles detected using the Haar cascade algorithm. Once the count is available, the signal turns green for that particular lane for a time proportional to the detected vehicle count; in our system we allot 2 seconds per vehicle plus one extra second. If no vehicles are detected, the camera moves to the next lane to check for vehicles, and the procedure repeats.
Conventional Traffic Control System
Manual Controlling: Manual controlling refers to the manual monitoring and control of traffic at the signals. Depending on the country and state, traffic police are allotted to a required region or city to control the traffic. They are instructed to wear specific uniforms so as to be recognizable while controlling the traffic.
Automatic Controlling:
The traffic lights are automated based on sensor information and use clock timers for display. In a traffic light, each phase has a constant numerical value loaded into the timer, and the lights automatically switch ON and OFF as the timer value changes. Automated traffic signals make use of sensors to detect vehicle presence and raise a flag at each step; based on this mechanism, the lights turn ON and OFF automatically.
Hardware and software requirements
• Ubuntu 19.04
The flowchart proceeds through the following steps:
1. The system is turned on.
2. The camera captures the video of the lane it is currently facing.
3. The video processing module, which contains the methods and the algorithm used to detect the vehicles, is invoked.
4. A condition is applied: the module checks whether it detects vehicles, with a yes or no answer, before the next step proceeds.
5. If the system does detect vehicles, it returns the total count of vehicles in that lane using the video processing module.
6. Once the count is given, the signal turns green for the time specified by the algorithm, based on the density (count) of vehicles in that particular lane.
7. If the camera detects no vehicle in the lane it is facing, the camera changes direction to face the next lane, counts the vehicle density in that lane, and repeats from step 5.
Implementation
The given system uses OpenCV v3.4.9 as its software and applies image processing principles. The programming language used is Python. In a past implementation, since a desktop monitor is hard to work with, Arduino Uno microcontrollers were used, which make it easy to convert digital output into a binary signal. The techniques used are blob detection and thresholding, and the algorithm used is the Haar cascade. The implementation in this project is in Python using OpenCV, through which the project's entire cost is reduced.
Figure 4. Image Capturing
The above figure shows the image capture performed by our system.
Figure 5. RGB to gray scale transformation
The above image shows the transformation of the captured image from RGB to grayscale.
Figure 6. Threshold Image
The above image is the thresholded picture of the vehicles detected by our system using Otsu thresholding.
Figure 7. Car Detection by Contours
The above image shows the contours detected by our system.
Figure 8. Density Count
Figure 8 shows the count of contours, i.e., the density of the vehicles detected by the proposed system.
Result
Lane 1 actually has 5 vehicles; our system detects 6 vehicles, i.e., an accuracy rate of 83.33%. The time allotted per vehicle to cross is 2 seconds, plus one extra second for the detection cycle; hence lane 1 is allotted 13 seconds by the system. Lane 2 actually has 3 cars, and our system detects all 3 cars, an accuracy rate of 100%, so the green signal stays on for 7 seconds. At peak hours, the camera detects the number of vehicles n in its range and turns the green signal ON for 2n+1 seconds; the camera then rotates to another lane.
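A quick check of the timing rule reported above (2 seconds per detected vehicle plus one extra second, i.e., green time = 2n + 1):

```python
# Green-signal duration: 2 seconds per detected vehicle plus one extra second.
def green_time(detected_vehicles: int) -> int:
    return 2 * detected_vehicles + 1

assert green_time(6) == 13   # lane 1: 6 detections (5 actual, ~83.33% accuracy) -> 13 s
assert green_time(3) == 7    # lane 2: 3 detections (all 3 actual) -> 7 s
```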
CONCLUSION
The study showed that video processing is a good technique for controlling road congestion. It is also more consistent in detecting vehicle presence, since it utilizes genuine traffic frames. Because it observes the actual scene, it works far better than systems that depend on sensor-based detection of vehicles. This work can be upgraded further by proposing a framework for controlling traffic density, which would reduce a serious everyday problem: traffic jams. Using video processing and an object detection mechanism, the proposed system achieves good accuracy in identifying objects and estimating lane density. Based on lane density, the traffic issue can be resolved to a greater extent. | 2020-07-30T02:02:40.699Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "aecf95197f1cdf0915cabe5c693aa62458487b57",
"oa_license": "CCBY",
"oa_url": "https://www.itm-conferences.org/articles/itmconf/pdf/2020/02/itmconf_icacc2020_03028.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "39b91f86eacb0f0439892180549424de03687d0e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
6311047 | pes2o/s2orc | v3-fos-license | Intermedin 1-53 Inhibits Myocardial Fibrosis in Rats by Down-Regulating Transforming Growth Factor-β
Background Myocardial fibrosis is the result of persistent anoxia of ischemic myocardial fibers caused by coronary atherosclerotic stenosis, and it can lead to heart failure, threatening the patient's life. This study aimed to explore the regulatory role of intermedin 1-53 (IMD1-53) in cardiac fibrosis using neonatal rat cardiac fibroblasts and a myocardial infarction (MI) rat model both in vitro and in vivo. Material/Methods The Western blot method was used to detect the protein expression of collagen I and collagen III in myocardial fibroblasts. The SYBR Green I real-time quantitative polymerase chain reaction (PCR) assay was used to detect the mRNA expression of collagen types I and III, the IMD1-53 calcitonin receptor-like receptor (CRLR), transforming growth factor-β (TGF-β), and matrix metalloproteinase-2 (MMP-2). Masson staining was used to detect changes in the area of myocardial fibrosis in MI rats. Results The in vivo results showed that IMD1-53 reduced the scar area on the heart of MI rats and inhibited the expression of collagen types I and III at both the mRNA and protein levels. The in vitro results showed that IMD1-53 inhibited the transformation of cardiac fibroblasts into myofibroblasts caused by angiotensin II (Ang II). A further mechanistic study showed that IMD1-53 inhibited the expression of TGF-β and the phosphorylation of smad3, which in turn up-regulated the expression of MMP-2. Conclusions IMD1-53 is an effective anti-fibrosis hormone that inhibits cardiac fibrosis formation after MI by down-regulating the expression of TGF-β and the phosphorylation of smad3, blocking fibrotic signaling pathways, and up-regulating the expression of MMP-2, thereby demonstrating its role in the regression of myocardial fibrosis.
Background
Myocardial fibrosis refers to the excess accumulation of collagen fibers, significantly higher concentrations of myocardial collagen, and changes in collagen elements in the normal structure of cardiac muscle. Such pathological changes exist in a variety of cardiovascular diseases. It has been considered that they are closely related to arrhythmia, cardiac dysfunction, or even sudden cardiac death [1]. Myocardial fibrosis is one of the main pathological features of ventricular remodeling. The improvement of cardiac fibrosis can effectively improve cardiac function and inhibit ventricular tissue hypertrophy, and reduce the risk of cardiovascular events [2]. Therefore, it is of great clinical significance to search for ways to effectively inhibit and regress myocardial fibrosis.
Intermedin (IMD) is a new member of the calcitonin gene-related peptide (CGRP) superfamily, identified by Roh et al. [3] and others [4], who used phylogenetic analysis to retrieve GenBank data using the primary and secondary structures specific to this superfamily.
IMD is widely distributed in the tissues. It is expressed in tissues and organs such as human cardiac cells, coronary arterial smooth muscle cells, and hypothalamic supraoptic nucleus [3,[5][6][7][8][9], which indicates that IMD may be involved in regulating the environmental homeostasis of the body.
This study's in vitro experiment detected the collagen synthesis effects of IMD1-53 on rat cardiac fibroblasts induced by angiotensin II (Ang II) and the function of transforming cardiac fibroblasts into cardiac myofibroblasts. This study's in vivo experiment detected the effects of IMD1-53 on cardiac fibrosis using a myocardial infarction rat model and explored its possible mechanism, so as to provide new laboratory data for the prevention and treatment of myocardial fibrosis.
Culture and identification of cardiac fibroblasts
The hearts of 1- to 3-day-old SD rats were taken, and their membrane envelopes were cut. Each heart was cut into pieces of 0.5-1.0 mm³ and digested with 0.1% trypsin. The tissue was then cultured at 5% CO2 and 37°C in an incubator for 60 min, and cardiac fibroblasts were obtained by differential adhesion. Morphological observations (Figure 1A) showed that the purity of the cardiac fibroblasts was 98%. The second to fourth generations of cardiac fibroblasts were used in the experiments. The components of the fibroblast medium were Dulbecco's Modified Eagle Medium (DMEM), 10% fetal bovine serum (FBS), and 1% PS (Gibco, USA). The fibroblasts were treated with IMD1-53 at 1×10⁻⁷ mmol/L and 100 nM Ang II in serum-free RPMI for 24 hours.
mRNA extraction and Q-PCR analysis
The cultured second generation of cardiac fibroblasts was seeded in a 24-well culture plate. Total RNA was extracted with Trizol, and cDNA was reverse transcribed with the TOYOBO reverse transcription kit (FSQ-101). SYBR Green I real-time Q-PCR was used to detect gene expression. The target genes were normalized, with GAPDH as an internal reference. Using the relative quantification method, the relative expression of genes in each sample was calculated by 2^(−ΔΔCt). Each sample was measured at least 3 times. Primer sequences are shown in Table 1. PCR conditions were as follows: 94°C for 4 min; then 40 cycles of 92°C for 40 s, 59.5°C for 45 s, and 72°C for 3 min; and finally 72°C for 15 min.
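A minimal sketch of the 2^(−ΔΔCt) calculation described above; the Ct values are made up purely for illustration.

```python
# Relative quantification by the 2^(-ΔΔCt) method, normalized to GAPDH.
def fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    delta_ct_sample = ct_target - ct_gapdh              # ΔCt of the treated sample
    delta_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # ΔCt of the control
    return 2 ** -(delta_ct_sample - delta_ct_control)   # 2^(-ΔΔCt)

# Made-up Ct values: ΔΔCt = (24.1 - 18.0) - (26.1 - 18.0) = -2.0 -> 4.0-fold change
print(fold_change(24.1, 18.0, 26.1, 18.0))
```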
Protein extraction and Western blot
We lysed the cells, centrifuged the lysate, and collected the supernatant. The bicinchoninic acid (BCA) method was then used to quantify proteins. After sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), gel transfer and blot hybridization were performed for type I and III collagen proteins. We performed SDS-PAGE with a loading amount of 100 μg of protein per well. We then transferred the proteins to a nitrocellulose (NC) membrane, washed them in Tris-buffered saline (TBS) blocking solution consisting of 5% skimmed milk powder, 20 mmol/L Tris-HCl (pH 7.5), 0.05% Tween20, and 0.6% NaCl at room temperature, and blocked the membrane for 1 h. Primary antibodies were goat anti-rat type I and type III collagen monoclonal antibodies (abcam, ab32145, ab43212), rabbit anti-rat smad3 monoclonal antibody (cell signaling, #72255), rat anti-mouse phosphorylated smad3 antibody (cell signaling, #64214), and goat anti-rabbit antibody MMP-2 (abcam, ab54322), respectively. They were incubated at 4°C overnight and washed 3 times with TBS before adding secondary antibody (horseradish peroxidase-labeled goat anti-mouse polyclonal antibody or mouse anti-rabbit polyclonal antibody) at room temperature for 1 h. The membranes were washed 3 times with phosphate-buffered saline (PBS). Chemiluminescence was utilized for the color reaction. The results were analyzed with the National Institutes of Health (NIH) ImageJ software to quantify gray values.
Rat myocardial infarction model
Twenty 8- to 10-week-old male SD rats (250±20 g), raised in a specific-pathogen-free (SPF) room, were used to construct the myocardial infarction model. The model of left ventricular myocardial ischemia was constructed under anesthesia with 10% chloral hydrate at 0.3 mL/100 g of body weight (bw) by intraperitoneal injection. When the rats showed no reaction to limb stimulation, we fixed them in the supine position on the operating table, inserted an endotracheal tube, connected it to a respirator, removed the chest hair, sterilized the skin with iodine, and cut the skin of the rats' left chest between the fourth and sixth ribs. Then, the pectoralis major and pectoralis minor were separated by blunt dissection along the lower edge of the pectoralis major, and the intercostal space was revealed. Ophthalmic scissors were used to cut along the intercostal space, expanding in the front and back without hurting the thoracic artery. The beating heart could then be seen. A retractor was placed and the pericardium was torn until the heart was completely exposed. The left anterior descending coronary artery was ligated, after which the chest and skin were sutured. Rats were divided into three groups: control group (n=6), ischemia group (n=7), and ischemia + IMD1-53 group (n=7). The experimental protocols were approved by the Animal Care and Protection Committee of China Medical University.
Masson staining of heart tissue
The SD rats were killed at 1 week by vertebral dislocation. With PBS perfusion, their hearts were taken. Optimal cutting temperature (OCT) embedding and freezing at -80°C were performed. A freezing microtome was used for cutting into frozen sections of 5 µm/sheet. The Masson staining kit (BOGOO, PT003) was used for staining according to the instruction manual, and ImageJ software was used for analysis of fibrosis area.
Statistical analysis
One-way analysis of variance (ANOVA) tests were completed on all quantitative data using the Dunnett post-test to compare the experimental groups with the saline control. The level of significance was set as P < 0.05. All statistical calculations were computed using GraphPad Prism 4 software.
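A hedged sketch of this analysis in Python rather than GraphPad Prism, assuming SciPy ≥ 1.11 (which provides scipy.stats.dunnett for the Dunnett post-test); the data arrays are placeholders, not study data.

```python
import numpy as np
from scipy import stats

control = np.array([1.00, 1.10, 0.90, 1.05, 0.95, 1.00])   # saline control group
ang_ii = np.array([2.10, 2.40, 1.90, 2.20, 2.00, 2.30])    # placeholder measurements
ang_imd = np.array([1.30, 1.20, 1.40, 1.10, 1.35, 1.25])

f_stat, p_anova = stats.f_oneway(control, ang_ii, ang_imd)  # one-way ANOVA
dunnett = stats.dunnett(ang_ii, ang_imd, control=control)   # each group vs. control
print(p_anova, dunnett.pvalue)                              # significant if p < 0.05
```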
Intermedin inhibits area of cardiac fibrosis in rats
In vivo, Masson staining showed that, compared with the control group, the area of cardiac fibrosis in rats significantly increased after myocardial infarction, whereas in the IMD1-53 injection group the fibrotic area of the heart after myocardial infarction was significantly reduced (Figure 2).
Inhibition of collagen synthesis of cardiac tissue after treatment with intermedin
In vivo, real-time PCR analysis of collagen synthesis in the cardiac tissue of rats with myocardial infarction showed that the expression levels of type I and III collagen in the infarcted heart were clearly higher than in the control group (P<0.01), while those in the IMD1-53 injection group were significantly reduced (P<0.05) (Figure 3A, 3B). At the protein level, the amounts of type I and III collagen in the IMD1-53 group were also significantly lower than those in the infarcted cardiac tissue (Figure 3C).
Intermedin affects the collagen synthesis of cardiac fibroblasts and the cell transformation to myofibroblasts
In vitro, real-time PCR analysis of collagen synthesis in rat myocardial cells showed that, compared with the control group, the gene expression of type I and III collagen in cardiac fibroblasts treated with Ang II was significantly higher (P<0.01), while in the IMD1-53 group, compared with the Ang II group, it was significantly lower (P<0.01) (Figure 1B, 1C). The expression of the myofibroblast marker α-SMA was clearly inhibited by IMD1-53 at 1×10⁻⁷ mmol/L (Figure 4).
The influence on IMD1-53 calcitonin receptor-like receptor (CRLR) mRNA expression
In vitro, further analysis of gene expression in fibroblasts from the control group, the Ang II group, and the Ang II + IMD1-53 group showed that Ang II inhibited CRLR mRNA expression (P<0.05), and this inhibition could not be reversed by exogenous IMD1-53 (Figure 5A).
IMD1-53 inhibits the expression of transforming growth factor-β (TGF-β) in fibroblasts and cardiac tissue
In vitro, quantitative PCR showed that Ang II significantly stimulated the expression of TGF-β mRNA (P<0.01), while IMD1-53 significantly inhibited the Ang II-induced expression of TGF-β (Figure 5B).
IMD1-53 inhibits smad3 phosphorylation
In vitro, Ang II induced the expression of TGF-β, thus contributing to smad3 phosphorylation and the expression of downstream genes. In the Ang II + IMD1-53 group, IMD1-53 significantly inhibited the Ang II-induced expression of TGF-β, thereby reducing the level of smad3 phosphorylation and inhibiting cardiac fibrosis (Figure 6A). Q-PCR data showed that in the Ang II + IMD1-53 group, down-regulation of TGF-β was accompanied by up-regulation of MMP-2 expression in vivo, indicating another mechanism by which IMD1-53 can act against cardiac fibrosis (Figure 6B).
Discussion
IMD plays many roles in the body. Its role in the cardiovascular system is particularly evident, mainly in relaxing blood vessels and lowering blood pressure [13]. IMD can attenuate ischemia/reperfusion injury caused by myocardial apoptosis [14]. At the same time, it can increase myocardial contractility and coronary flow [15,16] and regulate neonatal rat cardiomyocyte hypertrophy [17][18][19][20]. Through the experiment, we found that IMD1-53 reduced collagen synthesis of cardiac fibroblasts in the rat and inhibited the transformation of cardiac fibroblasts into myofibroblasts. The studies also showed that IMD1-53 can reduce the area of cardiac fibrosis in rats after myocardial infarction. The results of fluorescence Q-PCR and Western blot showed reduced collagen synthesis in myocardial tissue, while IMD1-53 CRLR did not change significantly between the myocardial infarction group and the IMD1-53 group. The TGF-β expression levels in the IMD1-53 group were significantly lower than those in the myocardial infarction group, and the expression levels of MMP-2 were significantly increased. We further found that this protective effect could be achieved by inhibiting the expression of TGF-β, thereby blocking the TGF-β-smads signaling pathway.
Cardiac fibrosis is an extremely complex process, involving multiple interacting pathways, but TGF-β plays a central role in cardiac fibrosis. Domestic and international studies have reported that during the formation of scar tissue, TGF-β promotes scar generation [1,8]. It also has the physiological function of promoting fibroblast growth and the expression of extracellular matrix (ECM), and it inhibits the degradation of ECM. In addition, it can promote the transformation of fibroblasts into myofibroblasts [1]. After cardiac muscle has been affected by pathological factors, substances such as endocrine TGF-β, Ang II, and endothelin directly stimulate the proliferation of fibroblasts and the production of collagen [18,21]. Ang II can inhibit the synthesis and secretion of IMD1-53 by myocardial cells, and exogenous IMD1-53 significantly inhibits the Ang II-induced proliferation of cardiac fibroblasts [12,22]. This study found that the gene expression of TGF-β was significantly decreased by IMD1-53, and smad3 phosphorylation levels were significantly decreased, suggesting that the anti-fibrosis effect of IMD1-53 is realized by inhibiting TGF-β expression so as to block smad3 phosphorylation. Since the IMD CRLR did not change significantly, the inhibition of myocardial fibrosis by IMD1-53 may not be related to the number of receptors.
ECM is a dynamic network structure composed of macromolecules such as collagens, proteoglycans, and glycoproteins [8], whose synthesis and degradation are influenced by many factors. The basic characteristics of tissue fibrosis are alterations of tissue structure and excessive deposition of ECM [23]. Matrix metalloproteinases (MMPs), a group of highly homologous zinc-dependent peptidases, can degrade various ECM components. Their activity is closely related to the expression of the specific tissue inhibitors of MMPs (TIMPs) in the tissue [24]. Studies have shown that the key to maintaining normal ECM metabolism is the secreted balance between MMPs and TIMPs, and their imbalance is one of the important factors in a variety of types of tissue fibrosis [2,25]. TGF-β inhibits the activity of MMPs by inducing specific TIMPs, thereby inhibiting the degradation of ECM [26]. This study found that down-regulation of TGF-β leads to higher expression of MMP-2, through which reduced TGF-β permits the degradation of the extracellular matrix.
This study defines the regulation of cardiac fibrosis by IMD1-53, but more research is needed on its mechanism. The direct targets of IMD1-53 in regulating the TGF-β/smad3 pathway need to be identified, and the exact mechanism of how it influences the expression of MMPs is unclear and requires further study.
Conclusions
In brief, IMD1-53 can inhibit the collagen synthesis of cardiac fibroblasts, and down-regulation of TGF-β expression is one of the mechanisms by which it inhibits myocardial fibrosis. IMD is very likely a new endogenous substance that plays a role in the regression of myocardial fibrosis, with potential clinical application.
Statement
The authors received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. | 2018-04-03T06:10:37.251Z | 2017-01-09T00:00:00.000 | {
"year": 2017,
"sha1": "d3f639e5bd1722cfaf3771cc96911a01c3146313",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc5242205?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3f639e5bd1722cfaf3771cc96911a01c3146313",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
240077029 | pes2o/s2orc | v3-fos-license | Characteristics, hospital referrals and 60-day mortality of older patients living in nursing homes with COVID-19 assessed by a liaison geriatric team during the first wave: a research article
Background The infection by SARS-CoV-2 (COVID-19) has been especially serious in older patients. The aim of this study is to describe the baseline and clinical characteristics, hospital referrals, 60-day mortality, and the factors associated with hospital referrals and mortality in older patients living in nursing homes (NH) with suspected COVID-19. Methods A retrospective observational study was performed during March and April 2020 of institutionalized patients assessed by a liaison geriatric hospital-based team. All older patients living in the 31 nursing homes of a public hospital catchment area who were assessed by a liaison geriatric team for suspected COVID-19 during the first wave, when the hospital system was overwhelmed, were included. Sociodemographic variables, comprehensive geriatric assessment, clinical characteristics, treatment received including care setting, and 60-day mortality were recorded from electronic medical records. A logistic regression analysis was performed to analyze the factors associated with mortality. Results 419 patients were included in the study (median age 89 years old, 71.6% women, 63.7% with moderate-severe dependence, and 43.8% with advanced dementia). 31.1% were referred to the emergency department at the first assessment, with a higher rate of hospital referral in those with better functional and mental status. Atypical COVID-19 symptoms such as functional decline, delirium, or eating disorders were frequent. 36.9% had died in the 60 days following the first call. According to multivariate logistic regression, age (p 0.010), Barthel index <60 (p 0.002), presence of tachypnea (p 0.021), fever (p 0.006), and the use of ceftriaxone (p 0.004) were associated with mortality. No mortality differences were found between those referred to the hospital and those cared for at the nursing home. Conclusions and implications 31% of the nursing home patients assessed by a liaison geriatric hospital-based team for COVID-19 were referred to the hospital, and those with a better functional and cognitive situation were referred more frequently. The 60-day mortality rate due to COVID-19 was 36.8% and was associated with older age, functional dependence, the presence of tachypnea and fever, and the use of ceftriaxone. Comprehensive geriatric assessment and coordination between NHs and the hospital geriatric department teams were crucial.
Background
The coronavirus disease 2019 (COVID-19) was declared a pandemic on March 11, 2020, by the World Health Organization. The severity of this disease increases with age, with a mortality rate in those over 80 years of age between 20% and 30% [1,2], and deaths in nursing homes (NH) represent up to 50% of total mortality, with huge variability between countries [3,4]. However, the 30-day mortality in hospitalized patients over 65 years old seems to be similar to that observed in nursing homes [5][6][7], and a relationship has been described between the mortality of institutionalized patients due to COVID-19 and male sex, cognitive and physical impairment, frailty, and the presence of dyspnea or severe forms of the disease [8].
During the first wave of SARS-CoV-2, a liaison geriatric hospital-based team was created in different hospitals of the Madrid region for the assessment of older patients in NHs, supporting on-site care and treatment or recommending referral to emergency departments for acute cases when needed [7,9]. At that time (the first wave), the hospital system in Madrid was nearly collapsed, with 105% of its beds occupied by COVID-19 patients [10].
In Spain, it has been calculated that 47-50% of COVID-19 deaths in the first wave occurred in this population [4], which has generated debate on the adequacy of healthcare in NHs during that period. Recently, a study reported that the 30-day survival rate of institutionalized older patients with COVID-19 did not depend on where they were treated [7].
There is some published information on the clinical differences between institutionalized older persons with COVID-19 treated in their NH and those referred to the hospital. However, in most of the available studies, either there is no follow-up or the follow-up is restricted to the first 30 days, although it is known that the consequences of this disease often extend over a longer period.
Therefore, our study aims to describe the clinical characteristics, the hospital referrals, the 60-day mortality, and the related factors, including care setting (NH vs. hospital), in institutionalized patients with COVID-19.
Methods
A retrospective observational study was performed of institutionalized patients with COVID-19 (clinically suspected or confirmed) who required telephone assessment by a liaison geriatric hospital-based team between 30 March and 30 April 2020. This team assessed older patients from 31 NHs between 8 a.m. and 10 p.m., Monday to Sunday. During this period, the hospital had to increase its number of beds from some 750 to a peak of 983, most of them filled by COVID-19 patients, and to increase the number of ICU beds fourfold [11].
We included all patients assessed for clinical suspicion of COVID-19 (presence of respiratory symptoms, fever, delirium, functional impairment or decline, or eating disorders, with no alternative diagnosis) or with a diagnostic test confirming active infection (SARS-CoV-2 PCR availability was very limited in the first weeks of March and increased exponentially thereafter). We excluded older patients assessed for other reasons and patients living in centers for disabled persons.
We collected the following variables from clinical records based on the information provided by the NH healthcare professionals: age, sex, nursing home type, Barthel index, Functional Assessment Classification (FAC) for gait, Reisberg's Global Deterioration Scale for dementia staging (GDS), Charlson index, and malnutrition (BMI <22 in the last year, albumin <3.5 g / dl in the 6 months prior or active prescription of nutritional supplements). Also, we recorded COVID-19 related symptoms and date of onset (cough, dyspnea, falls, delirium, functional impairment related to asthenia, general malaise or deterioration of physical capacity, eating disorder defined as hyporexia, refusal to eat, or inability to use the oral route), as well as signs of COVID-19 severity (respiratory failure defined as oxygen saturation less than 92 %, dyspnea, tachypnea, referred to as more than 30 breaths per minute and a measured fever ≥38ºC), and COVID-19 diagnostic test, prescribed treatments (hydroxychloroquine, lopinavir/ritonavir, antibiotic therapy, corticosteroid therapy, low molecular weight heparin), emergency department referrals, hospital admissions, and 60-day mortality. Hydroxychloroquine was used with the prior verbal consent of the patient or relatives, who were informed of its use outside the technical data specifications.
Keywords: COVID-19, Mortality, Nursing homes, Liaison geriatric team
In patients referred to the hospital after the first telephone assessment, we reviewed the result of the PCR for COVID-19, whether they required hospital admission, and the length of hospitalization.
A descriptive analysis was performed with measures of frequency, mean, and standard deviation, using the Chi-square test for qualitative variables and the Mann-Whitney U test for quantitative variables with non-normal distribution. In addition, univariate and multivariate logistic regression models were used for the 60-day mortality analysis. Significance was established at p<0.05. Data analysis was performed using IBM SPSS Statistics version 20.
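A hedged sketch of the multivariate 60-day mortality model in Python rather than SPSS; the file and column names are hypothetical, not the study's actual variable names.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")                  # hypothetical dataset
predictors = ["age", "barthel_lt_60", "tachypnea", "fever", "ceftriaxone"]
X = sm.add_constant(df[predictors])             # add intercept term
model = sm.Logit(df["death_60d"], X).fit()      # 60-day mortality as outcome
print(model.summary())                          # exponentiate coefficients for ORs
```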
The study was approved by the Ethics Committee for Healthcare (Comité de Ética para la Asistencia Sanitaria, CEAS) of the Hospital (approved June 8, 2020, act 393). Informed consent was waived for this retrospective study. All the ethical principles for medical investigation in human beings recorded in the Helsinki declaration of the World Medical Association were followed.
Participant's characteristics
Patient characteristics are shown in Table 1 and correspond to a usual NH population of extreme old age and severe physical and mental disability. COVID-19 symptoms were mostly respiratory but included geriatric syndromes such as falls, delirium, or refusal to eat. The median time from symptom onset to telephone consultation was 3 days. The most used treatments were hydroxychloroquine (64.2%), azithromycin (53.2%), and ceftriaxone (29.4%). 31% of patients were referred to the emergency department; 79% of them were hospitalized, with an average length of stay of 9.1 days and an average time from symptom onset to emergency department referral of 9.3 days.
Hospital mortality was 27.7 % and 60-day mortality was 36.8 %. 70.1 % of deaths happened in the NH, 28 % in the hospital, and 1.9 % in a palliative care center. Those patients who were referred on the first call and returned to the residence were more frequently referred back to the hospital than those who had not been referred initially (13.2 % vs. 4.5 %, p 0.002) ( Table 1).
No differences in hospital referrals were found in terms of age, comorbidity, type of nursing home, time of evolution from the onset of symptoms, or mortality at 60 days.
Factors related to 60-day mortality (Table 2)
The mortality at 60 days was 36.8%.
Discussion
The referred population was of advanced age, with high rates of dependency, dementia, comorbidity, and malnutrition, conditions similar to those described by other authors in similar populations [12][13][14]. Furthermore, we found a high prevalence of atypical COVID-19 symptoms such as delirium, functional impairment, or eating disorders, with percentages around 20%. These percentages were similar to those of other studies in institutionalized patients with COVID-19 [6,15].
Almost a third of the assessed patients were referred to the hospital. These referrals were associated with a better functional and cognitive situation, the presence of severe symptoms, and the prescription of a specific treatment. A higher hospital referral rate was also observed in male patients, in contrast to the data from Bielza [7]. Therefore, we analyzed characteristics by gender and observed that institutionalized men with COVID-19 were younger and suffered less functional or cognitive impairment, which could explain this finding.
Additionally, patients referred to the hospital presented serious signs and symptoms of SARS-CoV-2 infection such as respiratory failure, fever, or tachypnea, as also described in other studies [7,8,12]. The in-hospital mortality rate was 27.7%, similar to other studies of hospitalized older patients, with percentages between 27% and 49% [16,17]. Few studies assess mortality at two months in institutionalized patients with COVID-19. In a study carried out in Italy, excess mortality was observed in March and April 2020 in a nursing home, with a mortality of 43% among patients with COVID-19 [18]. The mortality observed in our population at 60 days was 36.8%, a percentage similar to that described in other studies that evaluated mortality at 30 days, which found rates between 20% and 48% in institutionalized older patients [8,15,19,20].
The mortality at 60 days in our study was related to age, functional dependence, the presence of tachypnea or fever, and the use of ceftriaxone. In other studies, age, dementia, frailty, fever, and respiratory symptoms have been associated with higher mortality [7,8,12,[21][22][23][24]. The relationship between mortality and the use of ceftriaxone, not described in previous studies, could be explained by the fact that it is a drug for intravenous or intramuscular administration, used when oral intake was impossible, and therefore a marker of greater severity.
In our study, we found no differences in 60-day mortality according to the place of care, in line with the data reported by Bielza [7] and by España [22], nor any association of mortality with male gender or with the time of onset of COVID-19 symptoms. The absence of mortality differences depending on the place of care underscores the importance of the liaison geriatric hospital-based team as the key unit in charge of the comprehensive and individualized assessment of each patient and of choosing the most appropriate management, thus avoiding unnecessary admissions and therapeutic overtreatment.
The coordination between NHs, primary care, and hospital care has been key during the pandemic. A study carried out in France in three nursing homes highlighted that those nursing homes that received support from the hospital had lower mortality [25]. Furthermore, after the first wave, this new form of work and collaboration has served to implement new strategies worldwide to improve the care of older persons living in nursing homes, such as the formation of liaison geriatric hospital-based team units in our region [9].
The strengths of this study are the large sample size, as well as the multiple variables collected, which have made it possible to obtain results according to previous publications. Moreover, a follow-up of 60 days has been performed, higher than the studies published so far since we consider the impact of the disease on mortality could be better evaluated in a longer period.
The main limitation of our study is that it is retrospective, based on the information recorded in the electronic medical record. Furthermore, not all COVID-19 cases were confirmed by a diagnostic test due to limited access, and only the cases in which the nursing home health professionals requested the assessment of the liaison geriatrician were included. This could mean that we did not detect some older patients who could have been included in the study. Also, we could not collect data on symptomatic treatment or palliative sedation due to the lack of systematic inclusion in the patients' medical histories. Institutionalized patients did not have advance care directives, so this information could not be used; had it existed, it would have facilitated decision-making.
Conclusions and implications
The institutionalized patients evaluated by the liaison geriatric hospital-based team for suspected COVID-19 who were referred to the hospital presented a better functional and cognitive situation and more frequently received specific treatment for the infection. However, no differences in mortality were found compared with patients treated in the nursing home. More than a third of the patients had died at 60 days, with higher mortality in the oldest, the most dependent, and those who had presented symptoms such as fever and tachypnea. These findings suggest that the figure of the liaison geriatrician has been important in individually determining the most appropriate management for each patient. | 2021-10-29T13:52:21.945Z | 2021-10-29T00:00:00.000 | {
"year": 2021,
"sha1": "35fe0f69456f4b11da2182e027ee5dbafdb41b25",
"oa_license": "CCBY",
"oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-021-02565-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "35fe0f69456f4b11da2182e027ee5dbafdb41b25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254624705 | pes2o/s2orc | v3-fos-license | The Association of CD8+ Cytotoxic T Cells and Granzyme B+ Lymphocytes with Immunosuppressive Factors, Tumor Stage and Prognosis in Cutaneous Melanoma
The immunosuppressive tumor microenvironment (TME) consists of suppressive cells producing a variety of immunomodulatory proteins, such as programmed death ligand 1 (PD-L1) and indoleamine-2,3-dioxygenase (IDO). Although granzyme B (GrB) is known to convey the cytolytic activities of CD8+ cytotoxic lymphocytes, it is also expressed by other cells, such as regulatory T and B cells, for immunosuppressive purposes. The role of GrB+ lymphocytes in melanoma has not been examined extensively. In this study, benign, premalignant, and malignant melanocytic tumors were stained immunohistochemically for CD8 and GrB. PD-L1 was also stained from malignant samples that had accompanying clinicopathological data. The association of CD8+ and GrB+ lymphocytes with PD-L1 expression, tumor stage, prognosis, and previously analyzed immunosuppressive factors were evaluated. Our aim was to obtain a more comprehensive perception of the immunosuppressive TME in melanoma. The results show that both CD8+ and GrB+ lymphocytes were more abundant in pT4 compared to pT1 melanomas, and in lymph node metastases compared to primary melanomas. Surprisingly, a low GrB/CD8 ratio was associated with better recurrence-free survival in primary melanomas, which indicates that GrB+ lymphocytes might represent activated immunosuppressive lymphocytes rather than cytotoxic T cells. In the present study, CD8+ lymphocytes associated positively with both tumor and stromal immune cell PD-L1 and IDO expression. In addition, PD-L1+ tumor and stromal immune cells associated positively with IDO+ stromal immune and melanoma cells. The data suggest that IDO and PD-L1 seem to be key immunosuppressive factors in CD8+ lymphocyte-predominant tumors in CM.
Introduction
During the last few years, the prognosis of advanced cutaneous melanoma (CM) has considerably improved due to the development of novel immune checkpoint inhibitor (ICI) therapies, which act to enhance the anti-tumor immune responses of CD8+ lymphocytes [1]. As microenvironment-related factors may interfere with treatment and thus enable melanoma cells to resist and escape immunotherapy, it is essential to better understand how an immunosuppressive tumor microenvironment (TME) is formed and maintained in CM [2].
Immunosuppressive features enable melanoma cells to evade the host immune system by suppressing their attack by CD8+ cytotoxic T lymphocytes (CTLs) [3]. The immunosuppressive TME in CM is composed of different immune cells, such as FoxP3+ Regulatory T cells (Tregs) and tumor-associated macrophages (TAMs), by the expression of immunomodulatory proteins, of which programmed death ligand 1 (PD-L1) and indoleamine 2,3dioxygenase (IDO) are notable examples, as well as different immunomodulatory cytokines and fibroblasts [4].
Histological Specimens
This study consists of benign, pre-malignant and malignant melanocytic lesions collected at the Kuopio University Hospital (Finland) during the years 1980-2010. The malignant melanocytic lesions consisted of in situ melanomas restricted to the epidermis, superficial melanomas with less than 1 mm invasion and deep melanomas with more than 4 mm invasion. From a total of 252 samples, 250 were stained for CD8 and 249 for GrB. PD-L1 was stained from 128 malignant tumors for which clinicopathological patient data were available. Samples with large necrotic areas, abundant pigmentation or insufficient tumor area were omitted, yielding 203 CD8, 172 GrB and 68 PD-L1 samples for analysis (Table 1). Analyzed CD8 and GrB stained samples including patient data are shown in Table 2. This study was approved by the research ethics committee of the Northern Savo Hospital District and by the Finnish National Supervisory Authority for Welfare and Health (VALVIRA, 6187/05.01.00.06/2010). Table 1. Sample sizes of analyzed CD8, GrB and PD-L1 stainings. pT = pathological tumor stage, LNM = lymph node metastasis.
Immunohistochemistry
Tissue samples, 5 µm thick, were formalin fixed and paraffin embedded. For CD8- and GrB-specific immunohistochemical staining of samples, after deparaffinization, antigen retrieval was performed in 10 mM citrate buffer (pH 6.0) in a pressure cooker for 15 min, and the samples were washed with 0.1 M phosphate buffer (PB; pH 7.0). Next, the endogenous peroxidase activity was blocked with 1% H2O2 for 5 min. Finally, to block the non-specific antibody binding, the sections were washed and incubated with 1% milk powder dissolved in PB for 30 min at 37 °C.
Thereafter, the sections were incubated with rabbit monoclonal antibody for CD8 (1:200, Thermo Fisher Scientific, Rockford, IL, USA) and rabbit polyclonal antibody for granzyme B (1:100, Atlas Antibodies, Bromma, Sweden) at 4 °C overnight. The next day, the sections were rinsed with PB and incubated with biotinylated anti-rabbit antibody (1:200, Vector Laboratories) diluted in 1% milk powder in PB. The bound specific antibody was visualized with the Vectastain Elite ABC kit using DAB as the chromogenic substrate to show positive staining in brown, and Mayer's hematoxylin was used to counterstain the nuclei in blue. Thereafter, the sections were washed, dehydrated, and mounted in DePex.
For anti-PD-L1 stainings, before deparaffinization, the sections were heated at 58 °C for 30 min. After that, the sections were treated similarly to the CD8 and GrB stained material. Rabbit monoclonal anti-PD-L1 antibody (1:80 dilution, Cell Signalling, Danvers, MA, USA) was used as the primary antibody, and the same biotinylated anti-rabbit antibody was used as the secondary antibody. For negative control samples, the primary antibodies were omitted from the procedure. In addition, tonsil-derived sample sections were used as a positive control for CD8 and GrB staining and, for PD-L1 immunohistochemical staining, sample sections derived from the placenta were used as the positive control.
Stainings for CD68 and CD163 have been described in [24]. Stainings for IDO and FoxP3 have been described in [25].
GrB + FoxP3 Immunofluorescence Double Stainings
Antigen retrieval was performed on deparaffinized sections in 10 mM citrate buffer (pH 6.0) in a pressure cooker for 15 min, followed by a wash in 0.1 M phosphate buffer (PB; pH 7.0). Subsequently, to quench any autofluorescence, the sections were treated with 50 mM glycine for 40 min at room temperature. Thereafter, non-specific staining was blocked with 1% bovine serum albumin for 30 min, followed by an overnight incubation at 4 °C with the primary antibodies against FoxP3 (Abcam, Cambridge, UK) and GrB (both antibodies at 1:90 dilution). Next, the sections were washed and incubated for 1 h with the secondary antibodies (1:300, Alexa Fluor 594 anti-rabbit IgG, Cell Signaling, and 1:300, Alexa Fluor 488 anti-mouse IgG, Cell Signaling). Nuclei were counterstained with DAPI (1 µg/mL, Sigma-Aldrich, St. Louis, MO, USA). Finally, the sections were mounted in Vectashield (Vector H-1000, Vector). The samples were imaged with a Zeiss Axio Observer inverted microscope (×20 or ×40 NA 1.3 oil objectives) equipped with a Zeiss LSM 700 confocal module (Carl Zeiss Microimaging GmbH, Jena, Germany).
Analysis of CD8 and GrB Stained Samples
CD8 and GrB stainings were analyzed using the hotspot method described earlier [24,25]. The slides were scanned with a whole-slide digital scanner (Hamamatsu NanoZoomer S360). The areas with the highest densities of positive cells were chosen from the scanned slides by the Visiopharm-software and the pictures of these hotspots were captured within the software. If the accuracy of the digitalized slide was not high enough, the sample was imaged with a Zeiss AxioCam ERc 5S microscope-mounted camera (Carl Zeiss, Germany) with identical picture sizes. Three or five hotspots were chosen depending on the lesion size.
An automated computer vision (CV) software reported earlier was used to analyze CD8+ lymphocytes from the images [25]. The software was extended with an option to manually correct the CV selections. First, all cell selections performed by the CV were checked from the images, and significant outliers were corrected semiautomatically (327 pictures, 36% of all pictures). Validation datasets were created separately for the remaining 390 scanned and 186 microscopic images by semiautomatically analyzing 35% of both microscopic and scanned images (S.Sa.). The Pearson correlation coefficients between automated and semiautomated analyses were r = 0.990 and r = 0.970 for microscopic and scanned images, respectively (p-values < 0.001). GrB+ cells were counted manually from the images, since the CV was not sufficient due to variable non-specific background staining. GrB+ lymphocytes were distinguished from other GrB-expressing cells by their shape and size.
In total, 25% of both CD8 and GrB stained sections (51 and 43 samples, respectively), containing samples from all histopathological groups, were analyzed independently by another investigator (K.H.) by imaging the hotspots with a microscope-mounted camera and counting positive cells from the pictures semiautomatically/manually. The Pearson correlation coefficients between the independently analyzed cell counts were r = 0.964 for CD8 and r = 0.920 for GrB stained sections (p-values < 0.001).
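As a sketch of this inter-observer validation, the snippet below computes the Pearson correlation between two analysts' counts on the same hotspots; the paired counts are placeholders, not study data.

```python
from scipy.stats import pearsonr

counts_primary = [530, 612, 88, 140, 975]     # placeholder hotspot counts, analyst 1
counts_reanalysis = [541, 598, 92, 150, 960]  # same hotspots, independent analyst 2

r, p = pearsonr(counts_primary, counts_reanalysis)
print(f"r = {r:.3f}, p = {p:.3g}")            # agreement is good when r is close to 1
```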
Analysis of PD-L1 Stainings
PD-L1 was analyzed from 68 malignant tumors, for which patient data were available. The expression of PD-L1 was evaluated semiquantitatively and separately from stromal immune and tumor cells. A four-level scoring system was used for the assessment. Score 0 was given if <1% of cells were PD-L1 positive. Scores 1 to 3 were given if 1-5%, 6-20% or >20% of cells expressed PD-L1, respectively. PD-L1 expression was evaluated independently by two investigators (S.Sa., H.S.), and samples with differing results were re-evaluated together.
Statistical Analysis
The different statistical analyses were performed using the SPSS Statistics 27 package (IBM Corporation, Armonk, New York, NY, USA). For the comparisons between the different histological groups, a non-parametric Kruskal-Wallis test with pairwise comparisons was used. A Pearson χ2-test was employed to analyze the associations with semiquantitatively assessed immunosuppressive factors and clinicopathological parameters. Univariate and multivariate survival analyses were conducted using the Kaplan-Meier method with the log-rank test and Cox regression, respectively. A Mann-Whitney U-test was used to interrogate the association of PD-L1 expression with quantitatively assessed immunosuppressive factors.
For the χ2-test, Mann-Whitney U-test, and survival evaluations, the CD8+ and GrB+ lymphocyte counts were divided into two groups (low or high) according to the median value. Cell counts less than or equal to the median corresponded to low, and cell counts higher than the median to high cell counts (median 567.60 for CD8 and 79.60 for GrB). PD-L1 expression of tumor and stromal immune cells was also divided into two groups: PD-L1 expression was considered low if ≤5% and high if >5% of cells expressed PD-L1.
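A hedged sketch of the median-split survival comparison using the lifelines package rather than SPSS; the file and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("melanoma_cohort.csv")               # hypothetical dataset
high = df["cd8_count"] > df["cd8_count"].median()     # median split: low vs. high

kmf = KaplanMeierFitter()
for label, grp in df.groupby(high):
    kmf.fit(grp["rfs_years"], grp["recurred"], label=f"CD8 high = {label}")

res = logrank_test(df.loc[high, "rfs_years"], df.loc[~high, "rfs_years"],
                   df.loc[high, "recurred"], df.loc[~high, "recurred"])
print(res.p_value)                                    # univariate log-rank p-value
```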
Patient Characteristics
The clinicopathological characteristics of the cohort are shown in Table 3. The mean follow-up was 10.2 ± 9.3 years (median 8.0 years). Patients were diagnosed and followed up in the era that preceded the use of current immunotherapies.
CD8+ and GrB+ Lymphocytes Associate Positively with Breslow's Depth
To evaluate the density and localization of CD8+ and GrB+ lymphocytes, melanocytic tumors were stained for CD8 and GrB. The staining pattern of CD8 was cytoplasmic and membranous. GrB showed granular, cytoplasmic staining pattern and was also localized to the plasma membrane (Figures 1 and 2). Altogether, deep melanomas showed more Biomedicines 2022, 10, 3209 6 of 15 CD8+ and GrB+ lymphocytes than superficial melanomas or nevi. Both CD8+ and GrB+ lymphocytes localized mainly in the stromal compartment. In deep melanomas and lymph node metastasis (LNMs) samples, there were also some CD8-positive lymphocytes inside the tumor cell nests ( Figure 1E,F).
Figure 1 (partial caption): ... and deep melanomas (Breslow's depth > 4 mm, (E)) and lymph node metastasis (F). The dashed line in A and B defines the epidermis in benign and dysplastic nevi, respectively, and the dashed line in C marks the tumor-stroma borderline in in situ melanoma. The asterisks in (D-F) indicate the stromal tumor compartment, and black arrows in D point to tumor cells. Scale bar is 50 µm in A (applying across (A-C), all at ×200 magnification) and 100 µm in D (applying across (D-F), all at ×100 magnification), respectively.
The number of CD8+ and GrB+ lymphocytes was significantly higher in pT4 compared to pT1 melanomas (p-values 0.023 and 0.014, respectively). In addition, both CD8+ and GrB+ lymphocytes were more abundant in LNMs compared to pT4 melanomas (p-values 0.011 and 0.013, respectively), as well as in LNMs compared to pT1 melanomas (p-values < 0.001) (Figure 3A,B). In melanocytic tumors, there was a positive correlation between CD8+ and GrB+ lymphocytes (Spearman's rho rs = 0.829, p < 0.001) (Table 4).
The Association of CD8+ and GrB+ Lymphocytes with Clinicopathological Parameters
CD8+ and GrB+ lymphocytes associated positively with disease recurrence (p-values < 0.001) as well as poor prognostic factors, such as ulceration (p-values < 0.001 and 0.046, respectively) and nodular growth phase (p-values < 0.001 and 0.023, respectively) ( Table 5).
A Low GrB/CD8 Ratio Is an Independent Positive Prognostic Factor for Non-Immunotherapy-Related RFS in Primary Melanomas
In the group of all malignant lesions, high CD8+ and GrB+ lymphocyte counts associated with poor recurrence-free survival (RFS) in a univariate survival analysis (p-values < 0.001 and 0.002, respectively); however, the result was not retained in a multivariate analysis, which took tumor stage as a covariate. In the group of primary melanomas only, high CD8 count associated with poor RFS (p = 0.015) and poor disease-specific survival (DSS) (p = 0.037) but was not an independent prognostic factor when Breslow's depth was used as a covariate. In the group of primary melanomas, GrB cell number alone did not associate with survival.
In the group of all malignant cases and primary melanomas only, a GrB/CD8 cell number ratio of <0.1 was associated with a better RFS (p-values < 0.001 and 0.019, respectively) and DSS (p-values 0.024 and 0.030, respectively), and was an independent positive prognostic factor for RFS in primary melanomas when Breslow's depth was taken as a covariate in a multivariate analysis of survival (p = 0.012, HR: 0.195, 95% CI: 0.054-0.699) (Figure 3C).
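A multivariate survival analysis with a covariate, as described above, is commonly fitted as a Cox proportional hazards model. The sketch below uses the lifelines library on a hypothetical data frame; the column names and values are illustrative assumptions, not the study's data:

```python
# Sketch of a multivariate Cox proportional hazards model with a
# dichotomized GrB/CD8 ratio (<0.1) and Breslow's depth as covariates.
# All column names and values are hypothetical, not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "rfs_months": [12, 30, 45, 60, 18, 72, 24, 50],   # follow-up time
    "recurrence": [1, 0, 1, 0, 1, 0, 1, 0],           # event indicator
    "breslow_mm": [0.8, 1.5, 4.2, 0.5, 5.0, 2.1, 3.3, 1.0],
    "grb_cd8_ratio_low": [1, 1, 0, 1, 0, 0, 0, 1],    # 1 if GrB/CD8 < 0.1
})

cph = CoxPHFitter()
cph.fit(df, duration_col="rfs_months", event_col="recurrence")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```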
The Association of PD-L1 Expression with Tumor Stage and Clinicopathological Parameters
Malignant tumors with available patient data were stained immunohistochemically with anti-PD-L1 antibodies. Tumors were divided into low (≤5% of cells) and high (>5% of cells) PD-L1-expressing tumors. PD-L1+ tumor and stromal immune cells were significantly more abundant in pT4 compared to pT1 tumors (p-values 0.037 and 0.034, respectively) (Table 6). No statistically significant differences in PD-L1 expression between tumor stages were observed when PD-L1 was evaluated with a four-level scoring system (Figure 4A,B). In general, the staining pattern of PD-L1 was cytoplasmic and membranous. Most often, PD-L1+ cells (tumor cells/stromal cells) localized at the tumor-stroma interface (Figure 4C-H).
PD-L1+ stromal immune cells associated positively with ulceration (p = 0.018) and both tumor and stromal immune cell PD-L1 expression associated positively with nodular growth phase (p-values 0.009 and 0.001, respectively) ( Table 6). PD-L1 expression was not found to be associated with survival.
In conclusion, both CD8+ and GrB+ lymphocytes associated positively with PD-L1+ and IDO+ melanoma and stromal immune cells. Only GrB+ lymphocytes associated with FoxP3+ Tregs and tumor nest CD163+ macrophages.
The Association of Tumor and Stromal Immune Cell PD-L1 Expression with Other Immunosuppressive Factors
PD-L1+ tumor and stromal immune cells associated positively with IDO+ melanoma cells (p-values 0.040 and 0.044, respectively) and IDO+ stromal immune cells (p-values 0.017 and 0.002, respectively) (data not shown). PD-L1 expression associated neither with FoxP3+ Tregs nor TAMs.
Discussion
The aim of this study was to investigate the association of CD8+ and GrB+ lymphocytes with tumor stage, survival, and immunosuppressive factors, in order to obtain a more comprehensive perception of the immunosuppressive TME in CM. Currently, there are only a few immunohistochemical studies of GrB+ lymphocytes in CM, and none of these have evaluated the association of GrB+ cells with the tumor stage. Nor have most of the previous melanoma studies of CD8+ CTLs focused on their association with the tumor stage but rather on survival. According to our findings, CD8+ and GrB+ lymphocytes are more abundant in pT4 compared to pT1 melanomas and in LNMs compared to primary melanomas. In the present work, we found that CD8+ CTL count is not an independent prognostic factor for non-immunotherapy-related survival, which is in line with results from Wong and colleagues who found that CD8+ CTLs associate with a favorable prognosis in patients treated with PD-1 inhibition but not in the absence of immunotherapy [26]. Moreover, the present work demonstrates that CD8+ and GrB+ lymphocytes associate positively with PD-L1+ and IDO+ tumor and stromal immune cells in the TME.
Surprisingly, a low GrB/CD8 cell count ratio (<0.1) associated with better non-immunotherapy-related RFS in primary melanomas. In colorectal cancer, GrB+ cells have been shown to associate with better prognosis and this has been used as a cytolytic marker for anti-tumor immunity [16,17]. The present results, indicating that a low amount of GrB+ lymphocytes with respect to tumor-infiltrating CTLs associates with better RFS, may be an indication of an immunosuppressive role for GrB in CM. Indeed, GrB is also expressed by a variety of immunosuppressive cells, such as Bregs and Tregs [18]. For example, GrB+ B cells have been shown to inhibit the proliferation of T cells by degrading the T cell receptor zeta chain, which is a substrate for GrB, in a GrB-dependent manner [27].
Sabbatino and co-workers analyzed immune cells in thin melanomas and found that GrB+ cells did not colocalize with CD8+ cells in double immunohistological staining experiments [28]. In our study, GrB and FoxP3 did not colocalize either; however, further analysis, in order to specify GrB+ lymphocytes in CM, is needed.
Furthermore, there are no previous immunohistochemical reports of the association of GrB+ lymphocytes with survival in CM. In one study, Wu and colleagues used a single-sample gene set enrichment analysis to assess the role of granzymes in CM and found that GrB associates positively with immunotherapy-related prognosis [29].
In this study, we found that PD-L1 expression was higher in deep compared to thin melanomas, which is in line with previous results [30]. CD8+ CTLs associated positively with PD-L1+ tumor and stromal immune cells, which also corresponds to previous findings [31,32]. Furthermore, CD8+ CTLs associated positively with IDO+ stromal immune and melanoma cells and tumor nest CD68+ macrophages, but only weakly with FoxP3+ Tregs and total macrophage counts. In contrast, in their study, Spranger and co-workers found a strong positive correlation between CD8+ CTLs and FoxP3+ Tregs in melanoma metastases [33]. However, another study found that Tregs in melanoma metastases had a significantly lower immunosuppressive phenotype compared with Tregs in ovarian tumors [34]. Thus, the role of Tregs in CM seems to be conflicting, and presumably their immunosuppressive role is influenced by other components of the TME. Furthermore, according to our results, PD-L1+ stromal immune and melanoma cells associated positively with IDO+ stromal immune and tumor cells, but not with TAMs or FoxP3 Tregs. Similarly, Spranger and colleagues have reported a strong positive correlation between the expression of IDO and PD-L1 in CD8+ lymphocyte-predominant metastases [33]. The present results suggest that PD-L1-and-IDO-mediated immunosuppression might be especially important in CD8+ CTL inactivation, as they clearly accumulate in CD8+ lymphocyte-rich tumors.
We also found that GrB+ lymphocytes associate positively with both IDO+ melanoma cells and tumor nest macrophages and moderately with FoxP3 Tregs and IDO+ stromal immune cells. To our knowledge, similar associations have not been studied before in CM. Our results suggest that GrB+ lymphocytes might represent mainly immunosuppressive lymphocytes and thus the positive association of GrB+ lymphocytes with different immunosuppressive factors may indicate that the activation of immunosuppressive lymphocytes is associated with a concurrent accumulation of other immunosuppressive factors into the tumor. However, further studies are needed to examine the role of GrB+ lymphocytes in CM.
In the present study, both CD8+ and GrB+ lymphocyte counts associated positively with tumor stage. Higher numbers of these cells also correlated with poor clinicopathological factors, such as recurrence and ulceration. It is likely that although the number of CD8+ CTLs is higher in more advanced tumors, their activation status is decreased in deep melanomas and LNMs compared with thin melanomas. Indeed, negative immunoregulation, for example, gene expression signatures associated with exhausted T cells, has been shown to progressively increase from benign nevi, dysplastic nevi through to malignant melanoma [35]. In addition, if the role of GrB+ lymphocytes in CM would be mainly immunosuppressive, as we have hypothesized based on our findings, it would be logical that these activated cells associate positively with tumor stage and poor prognostic factors.
In conclusion, our results suggest that the amounts of tumor-infiltrating CD8+ and GrB+ lymphocytes increase with the tumor malignancy. IDO and PD-L1 seem to be key immunosuppressive factors in CD8+ lymphocyte-predominant tumors. However, GrB+ lymphocytes seem to represent the cytolytic activity of immunosuppressive lymphocytes rather than CTLs, and their amount appears to associate with the accumulation of several other immunosuppressive cells into the tumor. | 2022-12-14T16:19:19.289Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "54f348d548376efd9ba95f1fb053570e2dd7d975",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/10/12/3209/pdf?version=1670668042",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa014e975cf7188bc2ed10a8e0faf378fe2f111a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244924011 | pes2o/s2orc | v3-fos-license | Current and Emerging Classes of Pharmacological Agents for the Management of Hypertension
Cardiovascular disease accounts for more than 17 million deaths globally every year, of which complications of hypertension account for 9.4 million deaths worldwide. Early detection and management of hypertension can prevent costly interventions, including dialysis and cardiac surgery. Non-pharmacological approaches for managing hypertension commonly involve lifestyle modification, including exercise and dietary regulations such as reducing salt and fluid intake; however, a majority of patients will eventually require antihypertensive medications. In 2020, the International Society of Hypertension published worldwide guidelines in its efforts to reduce the global prevalence of raised blood pressure (BP) in adults aged 18 years or over. Currently, several classes of medications are used to control hypertension, either as mono- or combination therapy depending on the disease severity. These drug classes include those that target the renin-angiotensin-aldosterone system (RAAS) and adrenergic receptors, calcium channel blockers, diuretics and vasodilators. While some of these classes of medications have shown significant benefits in controlling BP and reducing cardiovascular mortality, the prevalence of hypertension remains high. Significant efforts have been made in developing new classes of drugs that lower BP; these medications exert their therapeutic benefits through different pathways and mechanisms of action. With several of these emerging classes in phase III clinical trials, it is hoped that the discovery of these novel therapeutic avenues will aid in reducing the global burden of hypertension.
Introduction
Hypertension is a multifactorial, complex disorder estimated to affect one in three adults in the United States (US) [1]. Globally, the prevalence of hypertension is expected to reach 1.6 billion by 2025 [2]. Several factors influencing elevated blood pressure (BP) have been identified. Modifiable risk factors affecting BP include diet, obesity, smoking and physical exercise, while non-modifiable factors include increasing age, sex and genetics. Adequately controlling high BP reduces the risk of stroke, coronary artery disease, peripheral vascular disease, congestive heart failure and end-stage renal failure. The risk of developing these complications can start at a BP level as low as 115/75 mmHg [3]. In 2017, the American College of Cardiology (ACC) and American Heart Association (AHA) released updated guidelines in which they lowered the traditional BP cut-off value for diagnosing hypertension (Table 1) [4]. Accurately diagnosing hypertension can still pose significant challenges for physicians, as BP readings can fluctuate unexpectedly, a phenomenon referred to as labile hypertension. In 2020, the International Society of Hypertension produced global guidelines to enable physicians to diagnose and manage hypertension as accurately as possible [5]. These guidelines classified hypertension based on systolic and diastolic cutoffs, taking into account the manner and environment in which the BP measurements were taken.
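As a worked illustration of the 2017 ACC/AHA cutoffs referenced above, the function below encodes the published categories. It is a simplified sketch: diagnosis in practice relies on repeated, properly obtained measurements rather than a single reading.

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Simplified classifier for the 2017 ACC/AHA blood pressure
    categories; a single reading is not sufficient for diagnosis."""
    if systolic >= 140 or diastolic >= 90:
        return "Stage 2 hypertension"
    if systolic >= 130 or diastolic >= 80:
        return "Stage 1 hypertension"
    if systolic >= 120:
        return "Elevated"
    return "Normal"

print(classify_bp(115, 75))   # Normal
print(classify_bp(125, 78))   # Elevated
print(classify_bp(135, 85))   # Stage 1 hypertension
print(classify_bp(150, 95))   # Stage 2 hypertension
```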
The exact pathophysiology of hypertension remains largely unknown. It is thought that up to 5% of patients with hypertension have either renal or adrenal system impairment. In the remainder of patients, the cause is not entirely clear and these patients are labelled as having 'essential hypertension'. The regulation of BP relies on cardiac output and systemic vascular resistance. Most patients with essential hypertension have normal cardiac output but raised peripheral resistance. Peripheral resistance is determined by the small arterioles, which contain smooth muscle. A rise in intracellular calcium leads to contraction of these smooth muscles, which in turn raises peripheral resistance. The autonomic nervous system is also thought to play an important role in increasing BP. Sympathetic nervous system (SNS) stimulation can lead to an increase in cardiac output and both arteriolar constriction and dilation. The SNS has an important function in short-term changes in BP that occur in response to physical exercise; however, there is a lack of evidence suggesting its involvement in the development of chronic hypertension. It has also been suggested that endothelial dysfunction can lead to essential hypertension. Currently available antihypertensive agents exert their therapeutic effect by altering these underlying pathologic processes. These drug classes include those that target the renin-angiotensin-aldosterone system (RAAS) and adrenergic receptors, calcium channel blockers (CCBs), diuretics and vasodilators. The increasing global incidence and burden of hypertension and its associated complications have necessitated the development of novel drugs with greater safety and efficacy than existing treatments.
Angiotensin-Converting Enzyme Inhibitors (ACEi)
The angiotensin-converting enzyme (ACE) is an enzyme produced primarily by the pulmonary vasculature. Upon conversion of angiotensinogen to angiotensin (AT) I by renin, ACE catalyses the conversion of AT I to AT II (Fig. 1). AT II causes vasoconstriction and stimulates the release of aldosterone from the zona glomerulosa of the adrenal gland, which in turn leads to Na+ and water reabsorption and a rise in blood volume. The observation that snake bites caused people to collapse due to a rapid decrease in BP led to the hypothesis that snake venom may exert its antihypertensive properties by inhibiting this pathway. The relationship between ACE, AT I, AT II and snake venom eventually led to the development of the first ACEi, captopril [6]. ACEi are one of the recommended first-line agents for those with uncontrolled stage 1 hypertension and non-Black patients [7]. Commonly used ACEi include benazepril, enalapril, fosinopril, lisinopril, moexipril, perindopril, quinapril, ramipril and trandolapril. The dosage of ACEi varies by medication and population subgroup. Common adverse effects of ACEi include hypotension, therefore these drugs should not routinely be used concurrently with other drugs that suppress the RAAS, such as angiotensin receptor blockers (ARBs) or renin inhibitors. Because they decrease bradykinin breakdown, ACEi are contraindicated in patients with a history of angioedema. ACEi are known teratogens and are therefore to be avoided in pregnant women. Moreover, caution is required when using ACEi in patients with bilateral renal artery stenosis due to a risk of acute renal failure. Further common adverse effects include dry cough and hyperkalemia, and there have also been reports of gynecomastia, agranulocytosis and eosinophilic pneumonitis [8].
Angiotensin Receptor Blockers (ARBs)
AT II binds to the AT1 subtype of AT II receptors, which are present predominantly in the heart, liver, kidneys, adrenal glands and brain. Activation of these receptors increases cardiac contractility, sodium reabsorption and vasoconstriction, which in turn leads to an increased blood volume and BP. Attempts have been made to develop AT II receptor antagonists to block the receptor and lower BP [9]. These studies resulted in the development of the first ARB, losartan. Other examples of commonly used ARBs include azilsartan, candesartan, eprosartan, irbesartan, olmesartan, telmisartan and valsartan. Because of their excellent tolerability profile, ARBs are one of the first-line drugs for treating hypertension. The JNC-8 guidelines suggest using ARBs as monotherapy or in combination with thiazide diuretics and dihydropyridine CCBs in non-Black populations. These guidelines also advise the use of ACEi or ARBs as a first-line option, regardless of ethnicity, for patients with chronic kidney disease (CKD) [7]. There is an estimated reduction of 8 mmHg for systolic BP (SBP) and 5 mmHg for diastolic BP (DBP), similar to ACEi [10]. The most frequent adverse effects with ARBs include dizziness, headache, and hyperkalemia, while uncommon adverse effects include first-dose hypotension, rash, diarrhoea, dyspepsia, abnormal liver function, muscle cramps, myalgias, back pain, insomnia, decreased hemoglobin, renal impairment, pharyngitis, and nasal congestion [11]. The occurrence of dry cough with ARBs is significantly lower compared with ACEi [12]. In general, withdrawal due to adverse effects is significantly less common in patients taking ARBs compared with ACEi [11]. ARBs are contraindicated in patients with bilateral renal artery stenosis because they can precipitate renal failure. Elevated AT II in bilateral renal artery stenosis helps maintain the glomerular filtration rate (GFR), and administration of ARBs can neutralize this beneficial effect [13]. Although there is no conclusive evidence to demonstrate the cross-reactivity of angioedema between ACEi and ARBs in patients with ACEi-induced angioedema, caution must be exercised [14].
Fig. 1
Mechanism of action of angiotensin-converting enzyme inhibitors and angiotensin receptor blockers on the renin-angiotensin-aldosterone system (RAAS). The liver secretes angiotensinogen, which is then converted to angiotensin 1 via the action of renin (1), which is released by the kidneys. Angiotensin 1 is subsequently converted into its active component (2), angiotensin 2, via the action of angiotensin-converting enzyme, which is produced by the pulmonary vasculature. Angiotensin 2 increases blood pressure via two mechanisms: angiotensin 2 by itself is a potent vasoconstrictor (3), which increases systemic blood pressure. Furthermore, angiotensin 2 acts on the adrenal cortex to release aldosterone from the zona glomerulosa (4). Aldosterone acts on the collecting ducts and distal convoluted tubules (DCT) to facilitate sodium ion reabsorption by acting on the sodium-potassium pump (5); water is also reabsorbed in conjunction with sodium ions during this process (not shown). The reabsorption of sodium ions and water increases intravascular volume, thereby increasing blood pressure.
Renin Inhibitors
Renin inhibitors bind to the active site of the renin molecule and prevent the binding of renin to angiotensinogen, which is the rate-limiting step of the RAAS cascade. This action prevents the formation of AT I and II. The first and only drug from this class to be approved for use is aliskiren, which has been approved for the treatment of mild to moderate hypertension. In experimental and clinical studies, aliskiren has been shown to reduce BP in a dose-dependent manner [15,16]. A Cochrane review of randomized controlled trials (RCTs) conducted on aliskiren concluded that the mean reduction in BP with 300 mg was similar to that of other classes of antihypertensives [17]. Non-serious adverse effects commonly seen with this drug are headache, diarrhoea, dizziness and fatigue; hypotensive episodes were seen in volume-depleted patients. Further contraindications include aliskiren's concurrent use with ACEi or ARBs in diabetic patients. Despite its potent antihypertensive properties, aliskiren is not considered a mainstay option for pharmacological management of hypertension. Research into developing agents in this class with better bioavailability and tissue availability is ongoing.
Calcium Channel Blockers (CCBs)
CCBs are a versatile class of drugs, which can be broadly categorized into two subtypes: dihydropyridines and non-dihydropyridines. Both dihydropyridines and non-dihydropyridines act by inhibiting the voltage-gated L-type calcium channels; the former class acts more on vascular smooth muscle and the latter is more cardioselective. By impeding the entry of extracellular Ca2+ into vascular smooth muscle, CCBs decrease excitation-contraction coupling and thus decrease peripheral vascular resistance and BP. In cardiac myocytes, these agents decrease sinoatrial nodal activity and atrioventricular conduction. Current examples of dihydropyridines that are commonly used for hypertension include amlodipine, nifedipine, felodipine, nicardipine and cilnidipine, while examples of commonly used non-dihydropyridines include verapamil and diltiazem (Fig. 2). The ALLHAT study reported that compared with lisinopril, amlodipine yields a 1.2 mmHg lower SBP [18]. The VALUE trial found a greater decrease in SBP and DBP, by 4.0 and 2.1 mmHg, respectively, with amlodipine than with valsartan after 1 month, and by 1.5 and 1.3 mmHg, respectively, after 1 year [19]. According to the JNC-8 criteria, along with ACEi or ARBs and thiazide diuretics, CCBs are indicated as a first-line management of high BP [7]. The ALLHAT study further found that CCBs are preferred over ACEi for the management of hypertension in patients of Afro-Caribbean origin because of the higher rates of stroke, peripheral artery disease and hospitalization due to angina among these patients in the ACEi group [18]. Common adverse effects of dihydropyridine CCBs include peripheral oedema, flushing, tachycardia, and dizziness. Non-dihydropyridine CCBs have an inhibitory effect on cardiac tissue and thus cause cardiac depression and atrioventricular block; verapamil has been known to cause constipation [20]. Rarely, both dihydropyridine CCBs and non-dihydropyridine CCBs can cause gingival hyperplasia, oesophageal dysfunction, and mild elevation of hepatic transaminases [21]. CCBs are contraindicated in patients allergic to any component of the medication and are also contraindicated in patients with severe hypotension, sick sinus syndrome, and second- or third-degree heart block.
Diuretics
Diuretics are a vast group of drugs that can be further subdivided into carbonic anhydrase (CA) inhibitors, loop diuretics, thiazides, potassium-sparing diuretics, and osmotic diuretics. Loop diuretics, thiazides and potassium-sparing diuretics are all indicated for the management of high BP; however, thiazides and thiazide-like diuretics are more commonly used. Although all diuretics induce a natriuretic action and block Na+ reabsorption, they do so at different parts of the nephron by inhibiting various transporters (Fig. 3). Loop diuretics, such as furosemide and torasemide, work at the thick ascending limb of the loop of Henle and block the Na+-K+-2Cl− transport protein. Thiazides such as hydrochlorothiazide and chlortalidone act at the distal convoluted tubule and inhibit the Na+-Cl− cotransporter. Potassium-sparing diuretics, which work at the collecting duct, can be divided into two categories: pteridine analogues, such as amiloride and triamterene, inhibit reabsorption of Na+ via the epithelial Na+ channel (ENaC), while the aldosterone receptor antagonists spironolactone and eplerenone inhibit aldosterone at the mineralocorticoid receptor. In recent studies, it was found that thiazides reduced SBP and DBP by 9 mmHg and 6 mmHg, respectively, compared with placebo. They have further been shown to reduce pulse pressure by 4-6 mmHg, which is greater than the reduction caused by ACEi, ARBs, renin inhibitors, or β-blockers (BBs) [23]. Common adverse effects of loop diuretics include ototoxicity, hypokalemia, hypomagnesemia and metabolic alkalosis, whereas adverse effects of thiazides include hyponatremia, hyperglycemia, hyperlipidemia, hyperuricemia, and hypercalcemia. While all potassium-sparing diuretics can lead to hyperkalemia, aldosterone receptor antagonists can lead to gynecomastia and decreased libido. Diuretics are contraindicated in patients allergic to any component of the medication, as well as those with hepatic and renal failure. Loop diuretics are contraindicated in gout and pregnancy, thiazides are contraindicated in gout, and potassium-sparing diuretics are contraindicated in patients with hyperkalemia, in those taking ACEi or ARBs, and in pregnancy [24].
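The class/site/transporter taxonomy above can be captured as a small lookup structure. The sketch below simply mirrors the text and is purely illustrative:

```python
# Illustrative mapping of diuretic classes to their nephron site of
# action and target transporter, as described in the text above.
DIURETIC_TARGETS = {
    "loop diuretics (furosemide, torasemide)": {
        "site": "thick ascending limb of the loop of Henle",
        "target": "Na+-K+-2Cl- transport protein",
    },
    "thiazides (hydrochlorothiazide, chlortalidone)": {
        "site": "distal convoluted tubule",
        "target": "Na+-Cl- cotransporter",
    },
    "pteridine analogues (amiloride, triamterene)": {
        "site": "collecting duct",
        "target": "epithelial Na+ channel (ENaC)",
    },
    "aldosterone antagonists (spironolactone, eplerenone)": {
        "site": "collecting duct",
        "target": "mineralocorticoid receptor",
    },
}

for drug_class, info in DIURETIC_TARGETS.items():
    print(f"{drug_class}: {info['site']} -> {info['target']}")
```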
β-Blockers
BBs are a class of drugs consisting of many subtypes, each with different pharmacokinetic and pharmacodynamic properties. Some of the mechanisms include a reduction in cardiac output by competitively blocking β1 receptors in the cardiac myocytes, which inhibits the signal transduction pathway of the Gs protein, resulting in negative inotropic and chronotropic effects (Fig. 2). BP-lowering properties are also seen when BBs act on β1 receptors in the kidneys, resulting in inhibition of renin release from the juxtaglomerular apparatus and a subsequent decrease in AT II and aldosterone production, enhancing renal loss of Na+ and water and further diminishing arterial pressure. Several trials in the 1990s showed the efficacy of BBs compared with placebo in managing hypertension, with a 4-27 mmHg drop in SBP [25,26]. However, the efficacy of BB use in uncomplicated hypertension was first questioned by two large hypertension trials: the Losartan Intervention for End Point Reduction in Hypertension (LIFE) study and the Anglo-Scandinavian Cardiac Outcomes Trial-Blood Pressure Lowering Arm (ASCOT-BPLA), which found superiority of losartan and amlodipine, respectively, compared with atenolol [27,28].
Many clinical trials and meta-analyses have demonstrated that BBs are inferior to other antihypertensive drugs on cardiovascular protection for several reasons, including their suboptimal BP-lowering efficacy [27][28][29], their inability to adequately lower central aortic pressure [30], their reduced effect on left ventricular hypertrophy regression [31], and their unfavourable metabolic effects [32]. A meta-analysis including all studies on cardiovascular mortality and morbidity revealed that first-line BB therapy reduced the risk of stroke, but to a lesser extent when compared with other agents, including low-and high-dose thiazide, ACEi, or CCBs [33]. Some examples of β1-selective BBs include atenolol, bisoprolol and metoprolol. The fact that more than 75% of the data in previous meta-analyses were from studies where atenolol was the main agent of interest increases the risk of selection biases and raises questions whether the lack of therapeutic benefit with BB is a class effect or is limited to atenolol [34,35]. Due to the diversity in the location of the β receptors, BBs result in many adverse effects, including drowsiness, lethargy, sleep disturbance, visual hallucinations, depression, blurring of vision, nightmares, bronchospasm in asthmatic patients, peripheral vascular effects such as cold extremities, Raynaud's phenomenon, erectile and orgasmic dysfunction and masking of hypoglycemic symptoms such as tachycardia [36]. Although BBs are contraindicated in patients with asthma, recommendations have supported the use of β1-selective BBs in this group of patients [4]. Other contraindications include patients with cocaine-induced coronary vasospasm, acute or chronic bradycardia and/or hypotension, long QT-syndrome, and previous Torsades de Pointes.
α-Blockers
Although the use of α-blockers in the management of primary hypertension is limited, they play a significant role in the management of secondary hypertension and benign prostatic hyperplasia (BPH). The combined treatment of hypertension and BPH has made α-blockers an attractive option. α receptors are categorized into two subtypes: α1 and α2. α1 receptors primarily mediate smooth muscle contraction, causing vasoconstriction (Fig. 3) [37]. The primary goal of the ALLHAT trial was to find the rate of fatal coronary heart disease (CHD) and non-fatal myocardial infarction (MI), and the secondary goal was to study the rate of cardiovascular disease events. Whereas patients receiving doxazosin and chlortalidone had similar rates of fatal CHD and non-fatal MI, it was found that those taking doxazosin had a higher incidence of congestive heart failure compared with those taking chlortalidone (8.13% vs. 4.45% at 4 years; p < 0.001). Therefore, the doxazosin arm of the trial was terminated [38]. The efficacy of α-blockers on BPH was documented in the MTOPS trial [39] and the COMBAT study [40]. In the MTOPS trial, complications such as acute urinary retention, urinary incontinence, renal insufficiency, or recurrent urinary tract infections were reduced, compared with placebo, by 39% with doxazosin, 34% with finasteride, and 66% with combination therapy [39]. Phentolamine has been administered during medical management of pheochromocytoma acute hypertensive attacks [41] and phenoxybenzamine is commonly administered prior to surgery [42].
A common adverse effect of α-blockers is the 'first dose' effect. This phenomenon describes the severe decrease in BP, orthostatic hypotension, and syncope in patients after receiving the first dose of an α-blocker. Other common adverse effects of α-blockers are due to the drop in BP, such as dizziness, light-headedness, fatigue and headaches. Intraoperative 'floppy iris syndrome' (IFIS) [43] and priapism [44] are also documented adverse effects of α-blockers and therefore patients are advised to stop the medication for up to 2 weeks prior to surgery.
Combined α/β-Blockers
A number of α/β-blockers have been developed since the first, labetalol, came to clinical use in 1977; however, carvedilol is the one most commonly used in clinical practice. Although the drugs are referred to as combined α/β-blockers, their predominant mode of action is non-selective blockade of β-adrenergic receptors, with α-adrenergic receptor blockade being a supplementary property [45]. The effects of β-blockade activate the baroreceptor reflex, resulting in vasoconstriction via α-adrenergic receptor stimulation and a consequent increase in total peripheral resistance. The additional α-blocking properties of these classes of agents theoretically provide a greater BP-lowering effect [46]; however, studies comparing labetalol with the specific β-receptor blocker propranolol have failed to support this theory. Other studies have shown labetalol is able to lower BP without lowering cardiac output and heart rate, as opposed to propranolol, which causes significant reductions in both parameters [47,48]. A systematic review of eight randomized, double-blind, placebo-controlled trials revealed that α/β-blockers lowered the trough SBP and DBP by an average of 6 mmHg and 4 mmHg, respectively [45]. Labetalol is currently indicated for use in the management of hypertensive urgencies or emergencies [49], as well as the treatment of intraoperative and postoperative hypertension [50], and is also considered first-line in the management of hypertension in pregnancy [51]. The CAPRICORN [52] and COPERNICUS [53] trials demonstrated that carvedilol reduces mortality and morbidity in patients with chronic heart failure related to both ischemic and non-ischemic causes. It has been demonstrated that treatment with carvedilol is not only associated with more stable glycemic control and improved insulin resistance compared with metoprolol but also results in fewer cases of new-onset diabetes and diabetic events in patients with heart failure [54,55]. Analyses of studies comparing carvedilol with captopril, nifedipine or hydrochlorothiazide found that carvedilol achieved a reduction in BP similar to that achieved by its comparators [56]. α/β-Blockers are generally well tolerated. Adverse effects of these agents can occur as a result of antagonism of either α or β receptors. β-blocking adverse effects include dyspnoea, bronchospasm, bradycardia, malaise, and asthenia, while adverse effects due to α-blockade include dizziness, light-headedness, fatigue and headaches. Labetalol use has also been associated with an adverse effect of a tingling scalp. Long-term use of α/β-blockers can cause increased sensitivity to catecholamines due to upregulation of adrenergic receptors, and thus their abrupt withdrawal may precipitate tachyarrhythmias, acute hypertensive crises, and palpitations.
α2-Adrenergic Receptor Agonists
α-Adrenergic receptors play a vital role in BP regulation. There are two main types of α-adrenergic receptors: α1 and α2. α1 receptors are found on vascular smooth muscle, and activation leads to smooth muscle contraction, vasoconstriction and elevated BP. α2 receptors are found on the presynaptic neuron, and activation causes negative feedback inhibition of norepinephrine release and thus a decrease in BP. Examples of commonly used α-agonists include clonidine, guanfacine and methyldopa. The efficacy of clonidine as an antihypertensive was demonstrated in a study of 28 patients with DBP above 110 mmHg who had either never received pharmacological treatment or for whom previous drug therapy had not yielded a meaningful BP reduction. Following treatment with clonidine, a significant decrease in both SBP and DBP in almost all patients was observed, with an average SBP drop of 54 mmHg and DBP drop of 22 mmHg [57]. A Cochrane systematic review of 12 trials was conducted in 2009 comparing the effects of methyldopa with placebo. The study showed a decrease in BP of 13/8 mmHg in the methyldopa group compared with the placebo group [58]. While clonidine, methyldopa and guanfacine are all approved by the US FDA for the management of hypertension, they are not recommended as a first-line therapy. In addition, methyldopa is also indicated in the management of hypertension in pregnancy. Common adverse effects of α2-agonists include hypotension, sedation and fatigue; α2 receptor agonists are contraindicated in patients with orthostatic hypotension or with any condition causing autonomic instability. Their use is also contraindicated in patients taking phosphodiesterase inhibitors.
Vasodilators
Vasodilators such as nitroglycerin and hydralazine are no longer mainstay treatments in hypertension but are used as an adjunct to control high BP. Vasodilators exert their action at the level of the vessels by producing nitric oxide (NO), which increases cyclic guanosine monophosphate (cGMP), thus causing relaxation of vascular smooth muscle. Nitroglycerin is primarily a venodilator and at higher doses also leads to arterial dilatation. The relaxant effect of nitroglycerin on the veins and arteries causes reduced preload and cardiac output, respectively. The main indication of nitroglycerin is angina pectoris. However, nitroglycerin is also used in special circumstances, including hypertensive emergencies associated with acute coronary syndromes or acute pulmonary oedema. Hydralazine is a direct-acting arterial vasodilator. In a Cochrane meta-analysis, it was found that after 3-6 weeks of treatment, hydralazine reduced SBP by approximately 5-20 mmHg and DBP by 5-15 mmHg [59]. Hydralazine has been replaced by other agents due to its unwanted adverse effects, including the lupus-like syndrome, which can occur even in low-dose treatment. In a longitudinal study, after 3 years of treatment, hydralazine 100 mg/day resulted in 5.4% of patients developing lupus-like syndrome. Moreover, 10.4% developed lupus-like syndrome with hydralazine 200 mg/day [60]. Other adverse effects include reflex tachycardia, immune-mediated hemolytic anemia, glomerulonephritis and vasculitis. However, hydralazine is still indicated in special circumstances such as pregnancy and heart failure.
Emerging Antihypertensive Agents
Currently available antihypertensive agents have certainly proven effective in controlling high BP; however, there is still a large proportion of patients with inadequate BP control or 'resistant hypertension'. In addition, many current treatments have some intolerable adverse effects.
Our expanding knowledge of the RAAS has led to the development of novel therapeutics, currently in both preclinical and clinical studies. Unfortunately, the coronavirus disease 2019 (COVID-19) pandemic has placed incredible strain on the conception, development and delivery of these novel drugs. Around 80% of non-COVID-19 clinical trials have been stopped or interrupted, with patient enrolments halted, laboratories closed and supply chains lost [61]. Funding and human resources have been diverted away from antihypertensive research, which will have lasting effects for patients, academic scholars and institutions.
Despite this hindrance, several drugs have shown promising safety and efficacy profiles in preclinical trials and are now in phase II/III testing (Table 2). These have a variety of mechanisms, targeting newly elucidated pathways implicated in the pathophysiology of hypertension. There are also other promising candidates that are still in the preclinical stages, including insulin-resistant aminopeptidase (IRAP), endothelin receptor antagonists (ERAs), dual-acting bispecific peptides and soluble guanylate cyclase (sGC) stimulators.
Non-Steroidal Mineralocorticoid Receptor Antagonists
A frequent adverse effect of steroidal mineralocorticoid receptor (MR) antagonists, such as spironolactone, is hyperkalemia. One interesting feature of the MRs is that they have essentially equivalent affinity for a range of other steroids, including progesterone, cortisol and corticosterone, at a level comparable to that for aldosterone [62]. Novel non-steroidal selective MR blockers have been developed, and esaxerenone was licensed for the treatment of hypertension in Japan in 2019 [63]. In one in vitro study using pituitary GH3 cells, esaxerenone inhibited the transient, late and persistent components of the voltage-gated Na+ current in a concentration-, time-, state- and hysteresis-dependent manner [64]. This was shown to be independent and upstream of its action on the MR, and further in vivo studies are in progress to assess its benefit in related conditions, such as primary hyperaldosteronism, CKD and refractory hypertension [65,66].
Aminopeptidase Inhibitors
Aminopeptidase A (APA) is an enzyme that cleaves the N-terminal aspartate residue from AT II, converting it to AT III (and AT IV) in the brain. Here, AT III contributes to the regulation of BP through three AT1 receptor-phospholipase C (PLC)-dependent pathways: sympathetic nerve activation and subsequent noradrenaline release, inhibition of the baroreflex in the nucleus of the tractus solitarius, and stimulation of vasopressin release [67]. As such, there is interest in selective inhibitors of brain APA, such as the prodrug firibastat, which crosses the blood-brain barrier, and whose active products inhibit brain APA activity [68].
Natriuretic Peptide
Not only does the heart pump blood but it also secretes cardiac natriuretic peptides (NPs), which play a crucial role in maintaining cardiovascular homeostasis and regulating BP and glucose and lipid metabolism. These NPs are released in response to atrial or ventricular muscular wall stress from increased intravascular pressure and/or transmural pressure (e.g. due to heart failure, MI or cardiomyopathies) [69]. One of these NPs is B-type NP (BNP), which interacts with membrane-bound guanylyl cyclase A to activate protein kinase G via cGMP. The downstream effects include vasodilation and natriuretic and diuretic effects, which all contribute to lowering BP [70]. Although atrial NP (ANP) deficiency causes hypertension (and its overexpression hypotension), ANP levels are elevated in essential hypertension. However, this represents a protective and compensatory response to cardiac wall stress, and the increased ANP may also attenuate the upregulation of renin synthesis, therefore buffering renin-dependent hypertension [71,72]. The serine protease corin was identified as the enzyme responsible for converting pro-ANP to active ANP. Its importance in regulating BP is currently being evaluated, especially in managing pregnancy-induced hypertension [73]. Neprilysin is a membrane-bound zinc endopeptidase present in various organs that breaks down BNP. As such, there is interest in using neprilysin inhibitors (NEPi) to prevent its degradation and extend its beneficial effects on lowering BP [74]. Dual inhibition of the AT II receptor and neprilysin (ARNI) led to a greater BP reduction compared with blocking the AT II receptor alone [75]. NEPi such as sacubitril in combination with valsartan have been approved for the use of both heart failure with reduced ejection fraction (HFrEF) and heart failure with preserved ejection fraction (HFpEF) [76]. [Displaced fragment of Table 2 (nesiritide/BNP) [74]: leads to vasodilation, natriuretic and diuretic effects; phase I and II (TENSE1) trial [106], in which 15 patients will receive gradually increasing doses of either nesiritide or placebo bid; estimated completion in 2030; degraded quickly by tissue proteases and therefore needs to be administered by continuous IV infusion.] Recently, Chua et al. [77] conducted a systematic review of RCTs comprising 5931 patients. The group reported that when compared with placebo, sacubitril/valsartan provided a mean reduction of SBP and DBP of 6.52 mmHg and 3.32 mmHg, respectively. Despite sacubitril/valsartan being effective and well tolerated in patients with hypertension, the drug is not currently approved for use in this indication.
Sodium Glucose Co-Transporter 2 Inhibitors
Sodium glucose co-transporter 2 (SGLT2) inhibitors block glucose reabsorption in the renal proximal tubule, increasing urinary glucose excretion, and are primarily prescribed for the management of type 2 diabetes mellitus (T2DM); however, several studies have demonstrated the cardiovascular benefits of SGLT2 inhibitors [78]. The glycosuria-accompanied osmotic diuresis increases urine output and may therefore decrease intravascular volume [79]. In addition, SGLT2 inhibitors may inhibit the Na+/H+ exchanger isoform 3, increasing Na+ excretion and reducing preload. Their mechanism of action has been shown to be independent of GFR, suggesting that factors independent of volume depletion also contribute, including upregulation of AT (1-7), decreased arterial stiffness and reductions in serum uric acid [80]. The SGLT2 inhibitor dapagliflozin has also been shown to significantly improve peripheral microvascular endothelial function, leading to reduced BP. This mechanism is postulated to be through a reduction in oxidative stress, improving the functional recovery of impaired microcirculation and abnormal vasomotion of arterioles with increased vascular tone [81]. With a considerable overlap of patient populations with T2DM and hypertension, interest is growing in the antihypertensive effects of SGLT2 inhibitors.
Soluble Guanylate Cyclase Stimulators
Soluble guanylate cyclase (sGC) is the primary target of NO in the cardiovascular system, eliciting many of the biological actions of NO via cGMP [82]. The NO/sGC/cGMP pathway is involved in the regulation of vasodilation, and our understanding of its role in hypertension is evolving. Stimulators of sGC replace the oxidised haem cofactor, the loss of which causes sGC to become NO-insensitive [83]. sGC stimulators have been shown to ameliorate pulmonary and portal hypertension by stopping transforming growth factor (TGF)-β/connective tissue growth factor (CTGF) upregulation and subsequent hepatic stellate cell activation [84]. Other sGC stimulators effectively inhibited leukocyte recruitment and the adhesive properties of neutrophils to fibronectin in sickle cell mice. These agents significantly reduced the frequency of vaso-occlusive episodes, complementing their established dose-dependent blood pressure-lowering effects [85]. Olinciguat, another sGC stimulator that is currently in phase II clinical development for use in patients with sickle cell anemia, not only reduces BP in humans and in hypertensive and normotensive rats but also successfully reduces inflammatory mechanisms in TNF-stimulated mice [86].
Endothelin Receptor Antagonists
The endothelin axis is involved in the physiological regulation of vascular tone through the G protein-coupled receptors ETA and ETB, and overproduction of endothelin has been implicated in the pathology of pulmonary arterial hypertension (PAH) [87]. Trials with ERAs have shown BP reduction in models of salt-sensitive hypertension and patients with resistant hypertension [88,89]. The SONAR trial also supported a potential role of ERAs in protecting renal function in patients with T2DM at high risk of developing end-stage kidney disease [90]. Furthermore, in vitro placental studies have shown direct transfer of ERAs across human placenta at term, and that the ETA receptor mediates endothelin-induced constriction in the fetoplacental vasculature, suggesting a role for ERAs in preventing pre-eclampsia [89]. Several large clinical trials have been completed to explore their potential use in treatment-resistant hypertension. The first major investigation was the DORADO trial, in which 379 patients with resistant hypertension treated with darusentan had an absolute reduction in BP compared with placebo, with fluid accumulation as the main adverse effect [89]. However, the larger DORADO-AC trial, which included an active control agent (guanfacine), showed no significant BP difference between darusentan and placebo after 14 weeks of treatment [88]. Consequently, the manufacturer put further development of this agent in treatment-resistant hypertension on hold. More recently, aprocitentan is under investigation in the PRECISION phase III trial in 1971 patients with resistant hypertension [91,92]. Previous studies have shown promising results; aprocitentan significantly reduced BP compared with placebo and lisinopril [93] and was shown to potentiate the antihypertensive effect of RAS blockers, suggesting a role in combination therapy [94]. The results of the PRECISION trial are estimated to be published in mid-2022. Bosentan is approved for the treatment of PAH and is currently in clinical trials studying its pharmacodynamics, and a further phase II trial in patients with hypertension is expected to be complete in November 2022 [95,96].
Management of Hypertension in Specific Population Subgroups
Hypertension is a complex, multifactorial disorder that often exists with other comorbidities. Consequently, managing hypertension in specific groups of patients requires different strategies and combinations of pharmacological agents. Some of these populations yet to be discussed include patients with CKD, diabetes mellitus or obesity. The majority (67-92%) of patients with CKD have comorbid hypertension [112]. The JNC-8 recommends a BP goal of < 140/90 mmHg [7], while the Kidney Disease Improving Global Outcomes (KDIGO) BP work group, which analysed systematic reviews and meta-analyses, suggested considering a lower target of < 130/80 mmHg in patients with CKD and albuminuria > 300 mg/day or a urine albumin-to-creatinine ratio of > 30 mg/g [113]. The ACC/AHA guidelines advocate in favour of the Systolic Blood Pressure Intervention Trial (SPRINT) results and therefore recommend a BP goal of < 130/80 mmHg for all patients with CKD [4]. ACEi (or ARBs if ACEi are not tolerated) are recommended as the first-line agents for the management of hypertension in patients with CKD stage 3 or higher, or stage 1 or 2 with albuminuria > 300 mg/day. As ACEi/ARBs reduce albuminuria through reduction of intra-glomerular pressure, leading to a decrease in GFR, serum creatinine may rise up to 30% from baseline; however, a higher rise should warrant further investigation for acute renal failure [114]. Due to the risk of hyperkalemia and acute kidney injury, combining an ACEi with an ARB, or either agent with aliskiren, should be avoided. If BP still remains poorly controlled, then diuretics may be a rational option, especially in settings of volume overload, and non-dihydropyridine CCBs could be considered for patients with persistent proteinuria [115,116]. A number of epidemiologic studies have demonstrated a 1.5- to 2-fold greater prevalence of hypertension in patients with diabetes mellitus when compared with non-diabetic patients [117]. When pharmacologic therapy becomes necessary for the management of high BP, clinical practice guidelines published jointly by the AHA, ACC and a number of other organizations suggest diuretics, CCBs, ACEi and ARBs [4]. The ALLHAT study found that ACEi provided a greater reduction in mean fasting glucose levels (the changes in mean fasting glucose level for the diuretic, CCB and ACEi groups were +2.8 mg/dL, +0.6 mg/dL and −1.4 mg/dL, respectively) [18,118]. Consequently, ACEi have become the initial mainstay choice for managing hypertension in diabetic patients struggling with glycemic control. Moreover, a meta-analysis of controlled trials comparing antihypertensives in diabetic patients with kidney disease revealed a lower incidence of end-stage renal disease with the use of ACEi and ARBs, and therefore recommended these agents for managing hypertension in patients with diabetes and albuminuria [119]. In both obese and morbidly obese patients, pharmacological treatment requires a flexible approach given the metabolic and hemodynamic abnormalities present in this group. Causes of hypertension are multifactorial but include chronic insulin resistance, impaired renal pressure natriuresis and extracellular fluid volume expansion [120]. RAAS blockers, such as ACEi or ARBs, are considered first-line, although monotherapy is seldom sufficient to control BP. Often, thiazide diuretics and BBs are combined with RAAS inhibitors; however, given their metabolic adverse effects [121], dihydropyridine CCBs may be more beneficial as they are metabolically neutral [122].
The excess adipose tissue in these patients also secretes aldosterone-releasing factors, stimulating aldosterone secretion and causing MR activation [123]. This hyperaldosteronism causes salt-sensitive hypertension independent of the systemic RAAS. Findings by Buglioni et al. [124] and Morales et al. [125] indicate a significant decrease in plasma aldosterone levels and mean BP in obese patients taking mineralocorticoid receptor antagonists (MRA) versus those taking RAAS blockers, suggesting that combination therapy with MRA may improve BP control.
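As a worked illustration of the ACEi/ARB monitoring rule mentioned earlier in this section (a serum creatinine rise of up to 30% from baseline is expected, while a higher rise warrants investigation for acute renal failure), the sketch below performs the arithmetic check. Variable names and values are illustrative assumptions, and this does not replace clinical judgement:

```python
def creatinine_rise_flag(baseline: float, current: float) -> str:
    """Flag serum creatinine rise after starting an ACEi/ARB.
    A rise of up to 30% from baseline is expected; above that,
    further investigation for acute renal failure is warranted."""
    rise_pct = (current - baseline) / baseline * 100
    if rise_pct > 30:
        return f"rise of {rise_pct:.0f}% exceeds 30%: investigate"
    return f"rise of {rise_pct:.0f}%: within expected range"

print(creatinine_rise_flag(1.0, 1.25))  # within expected range
print(creatinine_rise_flag(1.0, 1.40))  # investigate
```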
Conclusion
Over the last four decades, there has been significant progress in developing pharmacological agents to control hypertension via various mechanisms and pathways. Despite these advancements, heart disease remains the number one cause of mortality worldwide, a large proportion of which can be attributed to poorly controlled BP. Several novel pathways have been targeted in order to lower BP. However, given the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, progress in developing these agents has been slow. Nevertheless, with many of these agents in phase III clinical trials, the data emerging from recent studies have been encouraging and could hold promise in adequately controlling BP. Indeed, further trials must establish which combinations of current and novel agents provide not only the greatest therapeutic value but also the lowest risk of adverse effects. | 2021-12-08T14:28:45.665Z | 2021-12-08T00:00:00.000 | {
"year": 2021,
"sha1": "7fafaf193b3fd3984b18668b314475a78cd9ec94",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40256-021-00510-9.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f0ebb6e0f6ff3b304d0f4e85e25e306828fb757e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234154496 | pes2o/s2orc | v3-fos-license | Consumer perception on selecting marketplace for livestock products food
The purpose of this paper is to analyse the relationship between the underlying factors of product and market attributes and the socio-demographic profiles of consumers. The aim of the analysis is to assess whether consumer responses vary across product and market attributes. Simple statistical analyses, such as descriptive statistics, frequency distribution, cross tabulation, analysis of variance, and factor analysis, were carried out to assess consumers' preferences for livestock food products and market attributes. The results of this study indicate that the socio-demographic profile of consumers (gender, age, education and income) significantly influences purchase decisions. Higher income, gender and educational level of consumers influence their decisions on product and market attributes, while age seems to have no significant impact. Consumers express significantly different views on various product attributes. Packaging and convenience are important for approximately 60 per cent of respondents. Various market attributes clearly indicate that consumers prefer a convenient marketplace with additional service facilities. Marketing strategies regarding the products that can be offered at a marketplace can be developed based on consumer preferences and behavior.
Introduction
Consumer behavior in purchasing products is driven not only by meeting needs but also by fulfilling desires and self-concepts. Consumers mostly purchase food from nearby marketplaces [1,2].
Decisions of sellers, such as increasing prices, limiting supply, and limiting purchase quotas, can affect consumer behavior. Although these practices usually help to suppress demand, they can also increase consumer anxiety about supply shortages [3]. Increasing prices during supply disruptions is one way to increase seller profits and the quantity of goods purchased [4]. Changes in the price of an item can shift consumers' food expenditure from one store to another [5].
Consumers' decisions on which products to purchase and where to buy them are influenced by various factors. Psychological factors, such as perceptions of both the product and the marketplace, affect consumer purchasing behavior. Individual perceptions influence consumers in product purchase decisions. The development of information media is also reported to be a consideration for consumers in purchasing goods [6]. This will certainly influence decisions made by companies, producers, and also governments [7].
Manufacturers and retailers are factors that influence consumers. When producers and sellers limit supply, it generates a stimulus that makes consumers tend to buy more [8]. In this study, we concentrate our attention on studying consumer behavior and examining the influence and impact of various factors affecting consumers when purchasing products and selecting a marketplace, and of market attributes (related to products, market infrastructure and additional services) for purchasing.
Methods
This study used primary data from a survey of respondents. The data for this study were collected through an online quantitative survey. Questionnaires were used to obtain respondent data through closed and open-ended questions. An open questionnaire gives respondents the freedom to answer, with no choice of answers provided, while a closed questionnaire does not give such freedom in answering. Open questionnaire data included the characteristics of the respondent's household, household expenditure and the number of purchases of livestock products. Statistical software tools were used to perform the analyses in this study. Three hundred and sixty-four respondents gave their responses. Table 1 shows the demographic distribution of all respondents.
Consumer choice of marketplace for the selected food products
Consumers have many options when choosing products and services. They can get products of their choice, with good quality and convenience, at a marketplace. Consumer preferences for various livestock food products and marketplaces were assessed through this questionnaire survey.
In choosing a marketplace, price often shapes how products appear in consumers' minds [9]. Table 2 shows that fresh and fragile food products such as eggs are predominantly bought without delivery services, with more than 59 per cent of consumer respondents indicating frequent purchasing directly from sellers. Sellers of other fresh products, such as chicken, beef, and dairy products, have developed their selling methods to include ordering from home. This development gives consumers the alternative of buying with delivery services.
Purchasing decisions of the surveyed consumers
A comparative study of consumer responses on food purchase behavior against the demographic profile of the respondents was done using analysis of variance (ANOVA) to assess whether there are any significant differences in the individual responses for market attributes. The consumers gave significantly different responses on various market attributes (Table 3).
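A one-way ANOVA of this kind can be run with standard statistical tools. The following is a minimal sketch comparing a single market-attribute rating across demographic groups; the group labels and ratings are hypothetical placeholders, not the survey data:

```python
# Minimal one-way ANOVA sketch: does a market-attribute rating differ
# across income groups? Ratings below are illustrative, not survey data.
from scipy.stats import f_oneway

low_income = [3, 2, 4, 3, 3, 2]     # e.g. ratings of delivery service
mid_income = [4, 4, 3, 5, 4, 4]
high_income = [5, 4, 5, 5, 4, 5]

f_stat, p_value = f_oneway(low_income, mid_income, high_income)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> groups differ
```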
The survey results indicate that, among the market preference aspects, the responses of males and females differ significantly on marketplace service. Females generally prefer to purchase with delivery services, while male respondents may be more mobile when purchasing food. The results also indicate that consumers' income, educational level, and gender influence their decisions on product and market attributes, while age seems to have no significant impact.
Consumers perceive livestock products as a bundle of attributes such as variety and choice, product price, packaging, and freshness. The mix of these attributes informs consumers' considerations in choosing products [10]. Consumers look for a positive experience, with ease of purchase and a quality product, while remaining sensitive to price.
Conclusion
Consumers prefer an appropriate marketplace with additional services, and the market attributes of the marketplace are also considered important by consumers. The availability and affordability attributes of a marketplace, together with the consumer perspective, can be used by food retailers to build an effective market. | 2021-05-11T00:04:26.217Z | 2021-01-09T00:00:00.000 | {
"year": 2021,
"sha1": "287ec0d9e5ee0b6a675cceb11597ca99b98f4467",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/637/1/012054",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "82c545ac4728801e4730a9144c9ad9d8c066adf0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
212937724 | pes2o/s2orc | v3-fos-license | Factors Affecting the Accuracy of Estimation of BEV Cruising Range
As the number of electric vehicles increases year by year, the accuracy of cruising range estimation in high- and low-temperature environments has become one of the performance characteristics that consumers care about most. In this paper, the accuracy of cruising range estimation is evaluated using the coefficient of determination, and the influence of ambient temperature on estimation accuracy is studied in depth. Eleven electric vehicles were selected to carry out cruising range tests at normal, high, and low temperature, respectively, and the influence of ambient temperature on the accuracy of cruising range estimation was obtained. At the same time, the influence of the self-learning function on the accuracy of cruising range estimation was also studied. Finally, the accuracy of the end-of-range mileage estimate at different ambient temperatures is analyzed.
Introduction
China's new energy vehicle market has developed rapidly. By the end of June 2018, the number of new energy vehicles reached 1.99 million, of which 1.62 million were pure electric vehicles, accounting for 81.4% of the total. Compared with fuel vehicles, electric vehicles offer fast speed response, no pollutant emissions, and low noise, but their cruising range is limited by battery capacity and power density. The cruising range, especially at low temperature, is one of the performance indicators that electric vehicle consumers pay most attention to [1].
Compared with traditional fuel vehicles, the range of electric vehicles is greatly affected by ambient temperature. Studies have shown that temperature affects battery performance, mainly because ambient temperature affects battery capacity: the capacity retention of a lithium battery is 60% to 70% at 0°C, 40% to 55% at -10°C, and only 20% to 40% at -20°C [2][3]. China has a vast territory, about 5,500 kilometers from north to south and about 5,000 kilometers from east to west, encompassing both warm tropical climates and plateaus with snow all year round. Since the temperature difference between regions can be large even in the same season, it is important to study the influence of ambient temperature on cruising range. During the actual use of electric vehicles, the remaining cruising range displayed on the meter is another very important parameter, and accurate range estimation can greatly reduce the driver's range anxiety [4]. However, there is no unified evaluation standard, at home or abroad, for whether the range displayed on the dashboard is accurate. It is therefore necessary to carry out systematic research and establish an objective and feasible evaluation standard.
Experimental method
In this test, according to the "EV-TEST Management Rules" [5], the range test is divided into normal temperature (25°C), high temperature (35°C), and low temperature (-7°C) conditions. Before the corresponding range test, the vehicle must be soaked in the corresponding environment for 12 h to 36 h, after which the NEDC driving cycle test is run repeatedly. In the high- and low-temperature range tests, the air conditioner is turned on: in the high-temperature range test, the cabin temperature should be maintained between 23°C and 25°C, and in the low-temperature range test, between 20°C and 22°C. The test of estimated cruising range accuracy and the range test are performed simultaneously.
Vehicle parameters
In this test, 8 vehicles with a body length of 4 m or more (conventional vehicles) and 3 vehicles with a body length of less than 4 m (mini vehicles) were selected. The vehicle parameters are shown in Table 1. The test process is shown in Figure 1. On the drum test rig, the NEDC cycle test is stopped when the test cut-off condition is reached [6]. During testing, the remaining range displayed by the meter is recorded before the start of the test (the data point of the 0th cycle) and at the end of each complete NEDC cycle, so that one data point is recorded per completed cycle. After the test is completed, the actual remaining range corresponding to each meter-displayed remaining range is calculated from the actual cruising range.
Coefficient of determination
In order to assess the accuracy of the displayed remaining range, the practical problem is converted into a mathematical model: the actual remaining range measured on the drum rig is taken as the regression line, and the remaining range displayed on the dashboard is taken as the scatter points. In this way, the cruising range estimation accuracy problem is converted into the goodness of fit between the regression line and the scatter points. In this paper, the coefficient of determination R² from statistics is used to evaluate the range estimation accuracy.
The coefficient of determination is a numerical measure of the relationship between a random variable and one or more other random variables; it is a statistical indicator used to reflect the reliability of a regression model with respect to its dependent variable [7].
The physical meaning of the coefficient of determination is shown in Figure 2: $\hat{y}_i$ denotes the actual remaining range, which forms the regression line, and $y_i$ denotes the remaining range displayed by the meter, which is the observed value. The total sum of squares is

$$SST = \sum_i (y_i - \bar{y})^2 \quad (1)$$

the sum of squared residuals is

$$SSE = \sum_i (y_i - \hat{y}_i)^2 \quad (2)$$

and the regression sum of squares is

$$SSR = \sum_i (\hat{y}_i - \bar{y})^2 \quad (3)$$

The relationship between the three sums of squares is

$$SST = SSR + SSE \quad (4)$$

from which the formula for the coefficient of determination $R^2$ follows:

$$R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST} \quad (5)$$

The closer $R^2$ is to 1, the larger the ratio of the regression sum of squares to the total sum of squares, the closer the regression line is to each observation point, and the better the fit of the regression line; conversely, the closer $R^2$ is to 0, the worse the fit. When $R^2 \leq 0$, the regression line is considered to deviate seriously from the observed values, i.e., the fit is extremely poor.
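To make Eq. (5) concrete, the short sketch below computes R² for a hypothetical series of meter-displayed remaining ranges against the actual remaining ranges measured on the drum rig; the kilometer values are placeholders, not data from this test.

```python
# Minimal sketch of the R^2 evaluation in Eq. (5): the actual remaining range
# plays the role of the regression line (y_hat) and the meter-displayed
# remaining range the observed values (y). The values below are hypothetical.
import numpy as np

displayed = np.array([300.0, 288, 277, 265, 254, 242])  # km shown on the meter
actual    = np.array([300.0, 289, 278, 267, 256, 245])  # km measured on the rig

sse = np.sum((displayed - actual) ** 2)               # sum of squared residuals
sst = np.sum((displayed - displayed.mean()) ** 2)     # total sum of squares
r2 = 1.0 - sse / sst                                  # R^2 = 1 - SSE/SST
print(f"R^2 = {r2:.3f}")  # close to 1 -> accurate range estimation
```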
Range estimation accuracy
The remaining range displayed by the meter and the actual remaining range of each electric vehicle were recorded in the experiment. The range displayed by the meter in the normal temperature environment is denoted CB and the actual remaining range CS; in the high temperature environment, GB and GS; and in the low temperature environment, DB and DS. Figure 3 shows the range test data of vehicle 10. It can be seen from these figures that the difference between the range displayed by the meter and the actual range is very small in the different temperature environments, with the best performance occurring at high temperature: R² is highest at high temperature and lowest at low temperature. Figure 4 shows the results of the range experiment for vehicle 7. The remaining range displayed by the meter of vehicle 7 is not much different from the actual remaining range, but in the low temperature environment the difference is large; R² is highest at normal temperature and lowest at low temperature. Figure 5 shows the R² values of the eleven vehicles at different temperatures. The average R² of the eleven vehicles is 0.88; among them, eight vehicles have R² greater than 0.9, and the overall performance is good. The coefficient of determination of vehicle 11 at high temperature is less than 0, so the cruising range estimation accuracy of vehicle 11 in the high temperature environment is the worst. Figure 6 shows the coefficients of determination of the remaining ten vehicles at high temperature (Figure 6: comparison of R² for ten vehicles under high temperature). The average R² of these 10 vehicles is 0.81; only 5 vehicles have R² greater than 0.9, and the vehicles begin to show a performance gap. The coefficients of determination of vehicle 3, vehicle 5, and vehicle 9 in the low temperature environment are less than 0, indicating that the cruising range estimation accuracy of these three vehicles at low temperature is very poor. Figure 7 shows the coefficients of determination of the remaining eight vehicles in the low temperature environment. The average R² of these 8 vehicles is 0.46, only 2 vehicles have R² greater than 0.9, and the overall performance is poor.
The effect of self-learning on the coefficient of determination
In order to improve the accuracy of cruising range estimation, some manufacturers have added a self-learning function to the control strategy, which corrects the current remaining range according to historical energy consumption and thereby improves the accuracy of the remaining range displayed by the meter [8][9]. To study the effect of the self-learning function on the accuracy of cruising range estimation, this paper removes the data of the first two cycles and collects data from the third cycle onward. The equivalent effect is to reserve a self-learning distance of approximately 22 km (approximately 11 km per NEDC cycle) for the vehicle. Figure 8 shows the ratio of R² after removing the first two cycles to R² over all cycles for the eleven models in the normal temperature environment. After removing the first two cycles, R² becomes larger for some vehicles and smaller for others, decreasing by 5% on average. The accuracy of vehicle 10 and vehicle 11 varies greatly, with R² falling by more than 24% for both. Figure 9 shows the corresponding ratio for the ten vehicles in the high temperature environment (the R² of vehicle 11 is less than zero there). The R² of these ten vehicles dropped by an average of 3.9%; among them, the R² of vehicle 1 and vehicle 3 changed considerably, with accuracy dropping by more than 15%, while the others changed within 4% (Figure 9: change for ten vehicles after removing the first two cycles under high temperature). Figure 10 plots R² with the first two cycles removed against R² for all cycles in the low temperature environment. Among the eleven models, the R² of vehicle 2, vehicle 3, vehicle 4, and vehicle 11 is less than 0. The coefficients of determination of the remaining seven vehicles dropped by an average of 18.7%; the variation of vehicles 6 and 9 is larger, with R² reduced by more than 40%, while the others varied within 6% (Figure 10: change for seven vehicles after removing the first two cycles under low temperature). These data show that removing the first two cycles has little effect on R² and does not change the overall results; therefore, whether or not a vehicle is equipped with a self-learning function has little influence on the result.
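The self-learning comparison amounts to recomputing R² over a subset of cycles. The sketch below is a hypothetical illustration of that step (it repeats the R² helper so that it runs on its own); slicing the last four cycles instead of dropping the first two gives the end-range analysis of the next section.

```python
# Sketch of the self-learning check: compare R^2 over all NEDC cycles with
# R^2 after dropping the first two cycles (the assumed self-learning phase).
# The range values are illustrative placeholders, not measured test data.
import numpy as np

def range_r2(displayed: np.ndarray, actual: np.ndarray) -> float:
    sse = np.sum((displayed - actual) ** 2)            # SSE, Eq. (2)
    sst = np.sum((displayed - displayed.mean()) ** 2)  # SST, Eq. (1)
    return 1.0 - sse / sst                             # R^2, Eq. (5)

displayed = np.array([300.0, 288, 277, 265, 254, 242, 231, 219])  # meter (km)
actual    = np.array([296.0, 287, 278, 268, 257, 246, 234, 221])  # rig (km)

r2_all = range_r2(displayed, actual)
r2_trimmed = range_r2(displayed[2:], actual[2:])  # drop the first two cycles
print(f"all cycles: {r2_all:.3f}; first two removed: {r2_trimmed:.3f}")
print(f"relative change: {100 * (r2_trimmed - r2_all) / r2_all:+.1f}%")
```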
Estimation accuracy of the end range
When the battery charge is low, the range anxiety of electric vehicle drivers is particularly prominent, so this paper further studies the cruising range accuracy at low battery charge. Figure 11 shows the coefficient of determination R² of the last four cycles at normal temperature for the seven vehicles whose end-range R² remains positive; the end-range R² of vehicle 5, vehicle 8, vehicle 9, and vehicle 10 is less than zero. The coefficients of determination of these seven vehicles dropped by an average of 36.7%; among them, the coefficient of determination of vehicle 2 dropped the least, down 20%, and that of vehicle 7 dropped the most, down by 67.9%. Figure 12 shows the end-range R² of five vehicles in the high temperature environment; the end-range R² of the other vehicles is less than zero. The R² of these 5 vehicles decreased by an average of 29.7%, with vehicle 10 changing the most (R² decreased by 72.0%) and vehicle 8 changing the least (R² increased by 0.37%). Figure 13 shows the end-range R² of the two vehicles in the low temperature environment whose values remain positive; the end-range R² of the others is less than zero, and the R² of these two vehicles dropped by an average of 62.9%. These data show that the end-range R² of an electric vehicle is lower than the R² over the whole cycle, and the decrease is larger in the low temperature environment. Under low battery conditions, vehicles generally do not provide accurate range estimates, which increases the range anxiety of electric vehicle drivers.
Conclusion
The range estimation accuracy of electric vehicles differs with ambient temperature: normal temperature is best, high temperature second, and low temperature worst. At normal temperature, the coefficient of determination of nine vehicles reached 0.9; at high temperature, only five vehicles reached 0.9; and at low temperature, only two vehicles reached 0.9.
Eliminating the influence of the self-learning function has little impact on the overall result, so all data points can be collected directly when evaluating the accuracy of cruising range estimation.
The end-range estimate R² is lower than the R² over all cycles, and some models even have R² less than 0, which leaves considerable room for the automotive industry to improve the accuracy of end-range estimation. | 2020-01-09T09:13:09.347Z | 2020-01-07T00:00:00.000 | {
"year": 2020,
"sha1": "3006404844989c414c8f75abadf89c6c94ff9e24",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/727/1/012019",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "676f6458cc765f631ecd3dbfd6cc3031dd0d48c1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
8427097 | pes2o/s2orc | v3-fos-license | P2Y2R Deficiency Attenuates Experimental Autoimmune Uveitis Development
We aimed to study the role of the nucleotide receptor P2Y2R in the development of experimental autoimmune uveitis (EAU). EAU was induced in P2Y2+/+ and P2Y2-/- mice by immunization with IRBP peptide or by adoptive transfer of in vitro restimulated, semi-purified IRBP-specific enriched T lymphocytes from the spleens and lymph nodes of naïve C57Bl/6 or P2Y2+/+ and P2Y2-/- immunized mice. Clinical and histological scores were used to grade disease severity. Splenocyte and lymph node cell phenotypes were analyzed using flow cytometry. Semi-purified lymphocytes and MACS-purified CD4+ T lymphocytes from P2Y2+/+ and P2Y2-/- immunized mice were tested for proliferation and cytokine secretion. Our data show that clinical and histological scores were significantly decreased in IRBP-immunized P2Y2-/- mice, as well as in P2Y2-/- mice adoptively transferred with enriched T lymphocytes from C57Bl/6 IRBP-immunized mice. In parallel, naïve C57Bl/6 mice adoptively transferred with T lymphocytes from P2Y2-/- IRBP-immunized mice also showed significantly less disease. No differences in terms of spleen and lymph node cell recruitment or phenotype appeared between P2Y2-/- and P2Y2+/+ immunized mice. However, once restimulated in vitro with IRBP, P2Y2-/- T cells proliferated less and secreted less cytokines than their P2Y2+/+ counterparts. We further found that antigen-presenting cells from P2Y2-/- immunized mice were responsible for this proliferation defect. Together, our data show that P2Y2-/- mice are less susceptible to mounting an autoimmune response against IRBP. These results are in accordance with the danger model, which links autoreactive lymphocyte activation, cell migration, and the release of danger signals such as extracellular nucleotides.
Introduction
Several factors are required to trigger an autoimmune organ-specific disease. Genetic and environmental components will collaborate to cause a breakdown of the peripheral immune tolerance (activation of autoreactive clones) and activation of resident cells of affected tissues [1].
Autoimmune uveitis (AIU) illustrates these autoimmunity paradigms. Genetic susceptibility of individuals with AIU has been widely studied and the association of several polymorphisms at different loci of the HLA system is well known [2]. Similarly, different groups have demonstrated the presence of autoreactive T cells and the role of cytokines released by these cells in the activation of resident cells of the eye during development of AIU [3].
Several lines of evidence also attest to the importance of danger signals during these two key phases of pathological activation [4]. While a central place was initially given to exogenous, in particular microbial, danger signals, the importance of endogenous danger signals has begun to emerge [5]. In this context, nucleotides are an important family of potential endogenous danger signals. Normally, they are present almost exclusively within cells. However, during cellular destruction or stress, such as inflammation, nucleotides can be released in various amounts into the extracellular space and activate nucleotide receptors belonging to the P2X and P2Y families [6]. In many cases, the activation of P2 receptors results in an amplification of the inflammatory process [7]. In agreement with the danger signal theory, we have shown that different nucleotides, ATPγS, UTP and UDP, are involved in the activation of the retinal pigment epithelium and increase the basal as well as the TNFα-induced release of IL-8 [8]. Similarly, several studies have demonstrated that nucleotides also profoundly influence antigen presentation and lymphocyte activation [9]. Accordingly, Granstein et al have shown, in vivo, that extracellular nucleotides strongly increase lymphocyte activation after systemic immunization [10].
In this work, we have thus hypothesized that nucleotides can act as danger signals during AIU and interfere with both the activation of autoreactive lymphocytes and the stimulation of blood-retinal barrier cells. Using the native and adoptive transfer models of experimental autoimmune uveitis (EAU), we have compared the induction of uveitis in P2Y2R wild-type (P2Y2+/+) and KO (P2Y2-/-) mice. In parallel, we have also compared in vitro the cell recruitment, phenotype, proliferation and cytokine secretion between P2Y2+/+ and P2Y2-/- IRBP (interphotoreceptor retinoid-binding protein)-immunized mice.
Reagents and animals
IRBP peptide 1-20 (GPTHLFQPSLVLDMAKVLLD), representing residues 1-20 of human IRBP, was synthesized by New England Peptide (Gardner, MA, USA). Pertussis toxin (PTX) and complete Freund's adjuvant (CFA) were purchased from Sigma-Aldrich (Bornem, Belgium). Pathogen-free C57BL/6 male mice were purchased from Janvier (Genest St Isle, France). P2Y2R wild-type (P2Y2+/+) and KO (P2Y2-/-) mice on a C57Bl/6 background were generated by one of the authors (B.R.) as previously described [11]. All mice were housed and maintained in the animal facilities in accordance with European guidelines. Animal treatment conformed to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. Euthanasia of mice was conducted in CO2 chambers. This research was approved by the CEBEA (Commission d'Ethique du Bien-Etre Animal); the agreement number of the institute is LA: 1230332. All cells were cultured in RPMI 1640 medium supplemented with 25 mM HEPES, 10% fetal bovine serum, 1% L-glutamine, 1% sodium pyruvate, 100 IU/ml penicillin, 100 µg/ml streptomycin and 5×10⁻⁵ M β-mercaptoethanol in a 5% CO2, 95% humidity incubator.
EAU induction
EAU was induced by injecting subcutaneously 100 μl of a mixture of 500 μg IRBP peptide 1-20 emulsified in CFA supplemented with 2.5 mg/ml killed Mycobacterium tuberculosis. At the same time, all animals received an intraperitoneal injection of a single dose of 1.5 μg/100 μl PTX.
Adoptive transfer model of EAU
EAU was also induced by adoptive transfer of autoreactive, semi-purified IRBP 1-20 peptide-specific enriched T lymphocytes following a protocol adapted from Shao H et al [12]. Briefly, naïve C57BL/6 or P2Y2+/+ and P2Y2-/- mice were immunized as for EAU induction. Twelve days later, mice were sacrificed and their spleens and draining lymph nodes collected. Splenic T cells were semi-purified by passage through nylon wool fiber columns, pooled with total lymph node cells, and cultured with 10 μg/ml IRBP 1-20 peptide for 2 days before being injected i.p. into naïve C57BL/6 or P2Y2+/+ and P2Y2-/- mice (3×10⁶ cells/mouse).
Clinical grading
Clinical grading was performed at days 7, 14 and 21 after EAU induction. For this purpose, animals were anesthetized by a 50 μl intramuscular injection in the leg of a Rompun (0.2%) and Ketalar (20 mg/ml) mixture. Eyes were dilated with tropicamide (5 mg/ml) and phenylephrine (1.5 g/ml) and examined under the slit lamp of a surgical microscope (Zeiss, Göttingen, Germany) using a cover slip coated with a viscoelastic gel (Vidisic, Tramedico, Belgium) positioned on the cornea. The clinical grading system used was adapted from Xu H. et al. [13]. Briefly, vitritis, optic neuropathy, vasculitis and retinitis were scored separately in each eye, from 0 (no disease) to 4 (highly severe disease) in half-point increments. The clinical score attributed to one mouse corresponds to the mean over the four parameters of the average of the two eyes.
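As a minimal sketch of this scoring arithmetic, the helper below computes a per-mouse clinical score from per-eye grades of the four parameters; the data structure and example values are hypothetical, chosen only to mirror the description above.

```python
# Sketch of the per-mouse clinical score described above: each of the four
# parameters (vitritis, optic neuropathy, vasculitis, retinitis) is graded
# 0-4 in each eye; per-parameter grades are averaged over both eyes, then
# the four per-parameter averages are averaged. Example values are made up.
PARAMETERS = ("vitritis", "optic_neuropathy", "vasculitis", "retinitis")

def clinical_score(left_eye: dict[str, float], right_eye: dict[str, float]) -> float:
    per_param = [(left_eye[p] + right_eye[p]) / 2 for p in PARAMETERS]
    return sum(per_param) / len(per_param)

left = {"vitritis": 2.0, "optic_neuropathy": 1.5, "vasculitis": 1.0, "retinitis": 2.5}
right = {"vitritis": 1.5, "optic_neuropathy": 1.0, "vasculitis": 0.5, "retinitis": 2.0}
print(f"clinical score = {clinical_score(left, right):.2f}")  # 0 (none) .. 4 (severe)
```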
Histological grading
For histological grading, the mice were sacrificed 21 days after EAU induction. The eyes were collected, prefixed for 6 h at 4°C in 4% PFA (paraformaldehyde) with 3% sucrose, and then placed in three successive baths of 5%, 10% and 20% sucrose in PBS for 24 h each. Entire eyes were embedded in OCT (Sakura) and cut into 10 μm frozen sections using a cryostat (CM3050S, Leica), and a classical hematoxylin-eosin staining was then performed. The severity of EAU was evaluated on six sections, cut at different levels in each eye, and scored on a scale from 0 (no disease) to 4 (maximum disease) in half-point increments, according to lesion type, size and number, using R. Caspi's histological grading system [14]. In brief, the minimal criterion to score an eye as positive by histopathology is inflammatory cell infiltration of the ciliary body, choroid or retina (EAU grade 0.5). Progressively higher grades are assigned for the presence of discrete lesions in the tissue, such as vasculitis, granuloma formation, retinal folding or detachment, and photoreceptor damage. The histological score attributed to one mouse corresponds to the mean over the two eyes of the average of the inflammatory lesions.
In vitro characterization of cell recruitment, phenotype, proliferation and cytokine secretion
Twelve days after immunization with IRBP 1-20 peptide, mice were sacrificed and their spleens and draining lymph nodes collected. Splenic T lymphocytes were semi-purified by passage through nylon wool fiber columns and total lymph node (LN) cells were recovered by cell dissociation. Each cell type was characterized by flow cytometry using FITC- or PE-conjugated anti-mouse antibodies directed against CD3, CD11b, CD11c, MHCII molecules, OX40, OX40L and CCR6 (BD Pharmingen). Semi-purified splenic T cells and LN cells were also pooled and recultured in vitro for 48 h with IRBP 1-20 peptide before being tested for cell proliferation and cytokine secretion. For cell proliferation, 5×10⁵ cells/200 μl/well were seeded in 96-well plates in complete RPMI medium alone or supplemented with IRBP 1-20 peptide (10 ng/ml) or with Dynabeads Mouse T-Activator CD3/CD28 (5 μl/well, Invitrogen). T cell proliferation was measured on day 3 by thymidine incorporation, after an 18-hour pulse with 1 μCi/well (³H) thymidine (Perkin Elmer, Zaventem, Belgium). To measure cytokine production, 10⁶ cells/well in 1 ml were seeded in 24-well plates in complete RPMI medium alone or supplemented with IRBP 1-20 peptide (10 μg/ml). Supernatants were collected after 48 h and cytokine secretion was quantified by specific ELISA following the manufacturer's instructions (IFNγ and TNFα: Biosource; IL-17α: R&D Systems). Within each experiment, cell proliferation and cytokine secretion were measured after pooling cells from 4 animals in each group. Due to inter-experiment variations in absolute values, repeat experiments could not be combined; patterns of response were, however, highly reproducible, and the figures depict representative experiments.
Intracytoplasmic cytokine secretion
The same semi-purified splenic T cells pooled with LN cells were also processed for CD4+ T cell-dependent detection of intracytoplasmic secretion of IFNγ and IL-17. Briefly, after the 48 h culture in the presence of IRBP 1-20 peptide (10 μg/ml), the cells were first restimulated for 5 h with PMA/ionomycin (at 50 ng/ml and 1 μg/ml respectively, Sigma-Aldrich, Bornem, Belgium) in the presence of GolgiStop (1 μl/ml, BD Biosciences), a protein transport inhibitor. Cells were then washed, fixed and permeabilized (Cytofix/Cytoperm, BD Biosciences) before being incubated with the Mouse Th1/Th17 Phenotyping Cocktail (BD Biosciences) following the manufacturer's instructions. Samples were analyzed by flow cytometry for intracytoplasmic IFN-γ and IL-17 production by CD4+ T cells.
Real-time PCR
Twelve days after immunization with IRBP 1-20 peptide, mice (n = 5/group) were sacrificed and their spleens collected and dissociated. Splenic cells were processed for qRT-PCR analysis of CD3 and Foxp3 gene expression, with β-actin used as a non-modulated reference gene. mRNA extraction and isolation were carried out using the automated MagNA Pure LC Instrument system and the MagNA Pure LC mRNA Isolation Kit II, following the manufacturer's instructions (Roche Applied Science, Vilvoorde, Belgium). A one-step real-time quantitative PCR technique using the RNA Master Hybridization Probes Kit (Roche Applied Science) was used to quantify mRNA expression with specific primers and fluorescent probes for mouse CD3, Foxp3 and β-actin (Applied Biosystems). Data are presented as relative expression of Foxp3 versus CD3.
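The text reports relative expression of Foxp3 versus CD3 without giving the exact quantification formula; the sketch below assumes a standard 2^-ΔCt normalization, so both the formula and the Ct values are illustrative assumptions, not the study's method or data.

```python
# Hedged sketch of relative qPCR quantification: the 2^-dCt form and the Ct
# values are assumptions, since the exact formula is not stated in the text.
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Return 2^-(Ct_target - Ct_reference); higher = more target mRNA."""
    return 2.0 ** -(ct_target - ct_reference)

ct_foxp3, ct_cd3 = 27.4, 22.1  # hypothetical Ct values for one spleen sample
print(f"Foxp3 relative to CD3: {relative_expression(ct_foxp3, ct_cd3):.4f}")
```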
Mixed lymphocyte reaction (MLR)
MLRs were performed between T lymphocytes and antigen-presenting cells (APC) of P2Y2+/+ and P2Y2-/- IRBP-immunized mice, in autologous and cross-culture conditions, in order to assess their responder and stimulatory capacities, respectively. Briefly, CD4+ T cells were isolated from the draining lymph nodes of day-12 IRBP-immunized P2Y2+/+ and P2Y2-/- mice using the CD4+ T Cell Isolation Kit II and MACS negative selection, according to the manufacturer's protocol (Miltenyi Biotec). CD4+ T cell purity was confirmed by flow cytometry and was over 94%. Irradiated (30 Gy) splenocytes isolated from the same mice were used as stimulator APC. T lymphocytes (2×10⁵ cells/100 μl/well) and APC (10⁶ cells/100 μl/well) were co-cultured in 96-well plates in complete medium alone or supplemented with IRBP 1-20 peptide (10 ng/ml). T-cell proliferation was measured on day 3 by thymidine incorporation, after an 18-hour pulse with 1 μCi/well (³H) thymidine (Perkin Elmer).
Statistics
EAU clinical and histological scores were expressed as medians and compared using the Mann-Whitney test. Data from cell proliferation and cytokine secretion assays were statistically analyzed with the unpaired t-test, using a Welch correction where needed.
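For illustration, the sketch below applies the two tests named above to hypothetical data: a Mann-Whitney comparison of per-mouse clinical scores between two groups, and a Welch-corrected unpaired t-test on proliferation counts; the numbers are placeholders, not the study's measurements.

```python
# Sketch of the statistics described above, using SciPy. The score and
# count values are hypothetical placeholders, not data from the study.
from scipy import stats

# Mann-Whitney U test on per-mouse clinical scores (medians compared)
scores_wt = [1.5, 2.0, 1.75, 2.25, 1.5]   # e.g., P2Y2+/+ group
scores_ko = [0.5, 1.0, 0.75, 0.5, 1.25]   # e.g., P2Y2-/- group
u_stat, p_mw = stats.mannwhitneyu(scores_wt, scores_ko, alternative="two-sided")
print(f"Mann-Whitney: U = {u_stat}, p = {p_mw:.3f}")

# Unpaired t-test with Welch correction (equal_var=False) on proliferation
cpm_wt = [12000, 15000, 13500, 14200]     # thymidine incorporation (cpm)
cpm_ko = [6000, 7500, 6800, 7100]
t_stat, p_t = stats.ttest_ind(cpm_wt, cpm_ko, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
```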
P2Y2 receptor deficiency attenuates EAU development
To investigate the possible role of P2Y2 receptors (P2Y2R) in the development of intraocular inflammation, we first immunized P2Y2+/+ and P2Y2-/- mice with IRBP 1-20 peptide plus CFA and PTX. We then performed fundus examinations by a masked observer at days 7, 14 and 21 to monitor disease severity, and a clinical score was established for each eye of every animal in the two groups. Fig. 1 shows that P2Y2-/- mice were less severely affected, with a statistically significant difference at day 21.
P2Y2 receptor deficiency attenuates EAU development induced by the adoptive transfer of semi-purified IRBP-specific TL (efferent phase study)
We further investigated the effect of P2Y2R deficiency on EAU development by performing an adoptive transfer of semi-purified IRBP 1-20 peptide-specific enriched T lymphocytes (TL), isolated from a group of IRBP-immunized C57Bl/6 mice, into P2Y2+/+ and P2Y2-/- mice. A masked observer performed fundus examinations at days 7, 14 and 21 to monitor the severity of the disease, and a clinical score was established for each eye of every animal in the two groups. As shown in Fig. 2A, P2Y2-/- mice developed, at each time point, significantly less severe disease after TL adoptive transfer compared with P2Y2+/+ mice. At day 21, after the last fundus examination, all animals were sacrificed, eyes were prepared for histological analysis, and histological grading was performed by two masked observers. As shown in Fig. 2B, the histological score obtained at day 21 was also significantly different, the eyes from P2Y2-/- mice being less severely affected than those from P2Y2+/+ mice.
P2Y2 deficiency decreases the capability of semi-purified IRBP-specific enriched TL to induce uveitis (afferent phase study)
We next analyzed the role of P2Y2R expression in the activation of autoreactive lymphocytes by performing an adoptive transfer of semi-purified IRBP-specific enriched TL from P2Y2-/- or P2Y2+/+ immunized mice into naïve C57Bl/6 mice. A masked observer performed fundus examinations at days 7, 14 and 21 to monitor the severity of the disease, and a clinical score was established for each eye of every animal in the two groups. As shown in Fig. 3A, the induced disease was significantly less severe in mice that received an adoptive transfer of TL from P2Y2-/- immunized mice. At day 21, after the last fundus examination, all animals were sacrificed, eyes were prepared for histological analysis, and histological grading was performed by two masked observers. Data from Fig. 3B show that the histological score was also lower in the eyes of mice that received semi-purified IRBP-specific enriched TL from P2Y2-/- immunized animals.
We further purified the autoreactive CD4+ TL from these P2Y2-/- or P2Y2+/+ immunized mice and tested their capability to induce uveitis by adoptive transfer into naïve C57Bl/6 mice. Our results showed a similarly decreased efficacy of purified TL from P2Y2-/- immunized mice in inducing EAU compared with their P2Y2+/+ counterparts, since the medians of the clinical scores were 0.625 versus 1.375, respectively, with a significant p value of 0.034. In order to investigate the mechanisms responsible for the decreased susceptibility of P2Y2-/- mice or TL to develop or induce uveitis, respectively, we first compared, 12 days after immunization with IRBP 1-20 peptide in CFA and Mycobacterium plus PTX, the cell composition of the spleens and draining lymph nodes of P2Y2-/- and P2Y2+/+ IRBP-immunized mice. As shown in Fig. 4, we did not find differences between P2Y2-/- and P2Y2+/+ immunized mice in total cell count (Fig. 4A) or in the expression of CD3, CD11c, CD11b and MHCII surface molecules (Fig. 4B). The same proportions of TL, DC and total APC (DC + macrophages) were thus induced in both P2Y2-/- and P2Y2+/+ mice by IRBP 1-20 peptide immunization.
Semi-purified lymphocytes from P2Y2-/- immunized mice proliferate less and secrete less cytokines in response to IRBP 1-20 peptide in vitro restimulation
Since the effect of P2Y2 deficiency could not be explained by a difference in the recruitment of immunocompetent cells to secondary lymphoid organs, we hypothesized that the absence of P2Y2R could functionally impact either TL proliferation and cytokine secretion or antigen-presenting cell capabilities. Semi-purified TL from P2Y2-/- and P2Y2+/+ immunized mice were thus cultured in vitro for 48 h, either in medium alone or supplemented with the IRBP 1-20 peptide, and their proliferation was measured by thymidine incorporation. Fig. 5A clearly shows that, in response to IRBP 1-20 peptide restimulation, TL from P2Y2-/- immunized mice proliferated less than TL from P2Y2+/+ mice. Moreover, as shown in Fig. 5B, they also secreted less IFN-γ, IL-17 and TNF-α. Using intracellular flow cytometry, further experiments were done to assess more precisely the CD4+ T cell-dependent secretion of IFN-γ and IL-17. Our results showed that, in response to IRBP 1-20 peptide restimulation, 3.6% of CD4+ TL from P2Y2+/+ mice versus only 1.8% of CD4+ TL from P2Y2-/- mice secreted IFN-γ (p = 0.05). For the detection of CD4+ TL secreting IL-17, the assays were, however, not conclusive, as the cell cultures were not polarized to amplify the Th17 subset.
P2Y2 deficiency affects antigen-presenting cell capabilities
As shown in Fig. 6A, the decreased proliferation of P2Y2-/- TL in response to IRBP 1-20 peptide restimulation was not related to a defect in the TCR signaling pathway. Indeed, when semi-purified IRBP-specific enriched TL isolated from P2Y2+/+ or P2Y2-/- immunized mice were restimulated in vitro with anti-CD3/CD28-coated beads, they showed equal proliferation in response to TCR engagement. On the other hand, we also investigated by qPCR the expression of Foxp3 mRNA in the spleens of P2Y2+/+ and P2Y2-/- IRBP-immunized mice and did not find a different proportion of Foxp3+ Treg among CD3+ TL between P2Y2+/+ and P2Y2-/- immunized mice (Fig. 6B). The next step was to analyze whether P2Y2R deficiency could affect the capabilities of the antigen-presenting cells. For that purpose, P2Y2-/- and P2Y2+/+ mice were immunized with IRBP 1-20 peptide in CFA and Mycobacterium plus PTX. Twelve days after immunization, the lymph nodes were recovered and the CD4+ TL purified by depletion of magnetically labeled non-target cells. These CD4+ lymphocytes from P2Y2+/+ or P2Y2-/- immunized mice were co-cultured with irradiated splenocytes (used as APC) from the same P2Y2+/+ or P2Y2-/- immunized mice, in medium alone or supplemented with IRBP 1-20 peptide. Fig. 6C illustrates proliferation data from all the different combinations of responder TL versus stimulator APC tested in co-culture in the presence of IRBP 1-20 peptide. (Figure legend: cytokine secretion was quantified by specific ELISA in culture supernatants after 48 h of IRBP restimulation; data are from a representative experiment of 5, 6 or 3, respectively, with 4 mice per group; ns: not significant; **p < 0.01; ***p < 0.001.) Results show that, by reconstituting autologous culture conditions with TL and APC from either P2Y2+/+ or P2Y2-/- immunized mice, we observed the same reduced P2Y2-/- TL proliferation as illustrated in Fig. 5A. Moreover, when P2Y2+/+ CD4+ TL were cultured with P2Y2-/- irradiated splenocytes, their proliferation was significantly decreased compared with the autologous P2Y2+/+ culture condition. Conversely, when P2Y2-/- CD4+ TL were cultured with P2Y2+/+ irradiated splenocytes, their proliferation was restored to that of P2Y2+/+ CD4+ TL. Altogether, these data suggest that P2Y2R deficiency affects the antigen-presenting capability of splenocytes; moreover, they argue against a role of Treg in the observed effect of P2Y2R deficiency.
Discussion
The development of autoimmune uveitis (AIU) requires the activation of retina-specific autoreactive lymphocyte clones, their migration to the eye, and the breakdown of the blood-retinal barrier (BRB) [1]. In this work, we found that P2Y2 deficiency attenuates EAU development and strongly affects the activation of IRBP-specific autoreactive lymphocytes after systemic immunization.
Our results are in accordance with the danger model, which links autoreactive lymphocyte activation, immune cell migration and the release of endogenous danger signals (DAMPs, damage-associated molecular patterns) such as extracellular nucleotides. Among them, extracellular ATP (eATP) is recognized as an important modulator of immune responses through its binding to plasma membrane P2 purinergic receptors, which are expressed by a wide range of cells [15]. In physiological conditions, the concentration of eATP is quite low, but it can be rapidly released in various amounts after cell stress, damage or death [16]. eATP then exerts immunostimulatory or immunosuppressive effects depending on its extracellular concentration (high or low, respectively), on which P2 receptors are engaged on specific immune cells, and on the extent of the stimulation [15]. Broadly, murine models of inflammatory and autoimmune diseases have shown that eATP can act as a proinflammatory molecule not only by stimulating innate immune responses but also by favoring effector T-cell activation, mainly through P2X7 signaling. In our work, by using a specific gene knockout approach, we focused on the effect of eATP on P2Y2R instead of using receptor-specific antagonists. However, it is likely that other P2 receptors, including P2X receptors, also play a role in EAU development. In this context, two studies reported contradictory effects of P2X7 deficiency on experimental autoimmune encephalomyelitis development, either protective [17] or deleterious [18]. Yet, it has also been shown that the nucleotide (especially ATP) affinity for P2Y2 receptors is significantly higher than for P2X7 receptors (EC50 0.1 μM vs 100 μM) [19]. In humans, several in vitro studies have pointed to a more complex role of eATP, which is also able to inhibit the cytokine secretion, proliferation or cytotoxic activity of immune cells such as DC, macrophages, NK cells and T lymphocytes through the activation of P2Y11 receptors [16]. These P2Y11 receptors are, however, not expressed on murine cells. Another major point is that nucleotides are unstable, short-lived molecules acting in an autocrine or paracrine manner. In particular, eATP is hydrolyzed into ADP/AMP and adenosine by plasma membrane-bound ectoenzymes, i.e., CD39 and CD73, whose expression varies considerably between immune cell types and even within a given cell subset depending on its location. Therefore, many studies evaluating the effect of eATP were done with ATPγS, a non-hydrolysable ATP analogue, rendering those data not (easily) comparable to ours.
Regarding our experimental model of EAU, we first induced uveitis in P2Y2+/+ and P2Y2-/- mice and observed reduced clinical scores in P2Y2-/- animals. To our knowledge, there is no other publication on the role of P2Y2 deficiency during the development of an experimental autoimmune disease. We next demonstrated that P2Y2-/- mice developed significantly less severe disease after the adoptive transfer of semi-purified IRBP 1-20 peptide-specific enriched T lymphocytes (TL) isolated from C57Bl/6 immunized mice, as compared to P2Y2+/+ mice (efferent phase study). In order to evaluate a potential role of P2Y2R deficiency in lymphocyte migration, we similarly transferred indium-radiolabeled TL and followed their in vivo migration with a SPECT camera. Unfortunately, this methodology proved not sensitive enough to detect the presence of a small number of autoreactive TL in the eye. We did observe, however, in the few animals examined, that TL migration from the peritoneal cavity to peripheral organs was more important in P2Y2+/+ than in P2Y2-/- mice (Lia J. M. Relvas et al, unpublished data). Besides, our findings are in line with several studies showing the importance of P2Y2R expression on epithelial and endothelial cells for VCAM1 expression and the secondary recruitment of inflammatory cells [11,20,21].
Our data also showed that naïve C57Bl/6 mice that received an adoptive transfer of enriched TL from P2Y2-/- IRBP-immunized mice displayed significantly less disease (afferent phase study). Yet, no differences in terms of spleen and lymph node cell recruitment or phenotype appeared between P2Y2-/- and P2Y2+/+ immunized mice. Nevertheless, once restimulated in vitro with IRBP, P2Y2-/- semi-purified T cells proliferated less and secreted less cytokines (IFNγ, IL-17 and TNFα) than their P2Y2+/+ counterparts. The decreased proliferation of P2Y2-/- TL in response to IRBP 1-20 peptide restimulation was not related to a defect in the TCR signaling pathway. Moreover, we did not observe a different proportion of Foxp3+ Treg among CD3+ TL between P2Y2+/+ and P2Y2-/- immunized mice. Together, our data on TL proliferation and Foxp3 expression strongly argue against a role of Treg in the observed effect of P2Y2R deficiency.
Lastly, our data showed that P2Y2R deficiency negatively affected, more precisely, the stimulatory capacities of the antigen-presenting cells and, subsequently, lymphocyte activation. In addressing the potential mechanisms explaining the APC defects, we detected within the IRBP-immunized P2Y2-/- mice a trend toward decreased expression of OX-40L, CCR6 and CXCL9, molecules implicated in lymphocyte activation and migration [22][23][24] (Lia Judice Relvas, personal communication). These results contrast with the findings of Müller T et al, who described no significant differences in the priming capabilities of dendritic cells from P2Y2-/- mice [25]. This discrepancy can be explained by important differences in experimental design. Altogether, our results are in agreement with the present literature showing the DAMP properties of nucleotides. Hence, Idzko M et al have shown that extracellular ATP triggers and maintains asthmatic airway Th2 inflammation [26], and Granstein RD et al have demonstrated that ATPγS enhances the cutaneous Th1 immune response [10]. However, our data showed that DC migration toward secondary lymphoid organs was not influenced by P2Y2R deficiency. This somewhat contrasts with the descriptions by Müller T et al and Communi D et al of P2Y2 receptors mediating DC lung chemotaxis during allergic inflammation [25] or pneumonia virus infection [27], respectively. Again, this discrepancy can be explained by profound differences in experimental protocols. First, we used two adjuvants for the induction of EAU: in addition to an i.p. injection of PTX, which induces DC maturation [28] and is identified as crucial for the emergence of the autoimmune pathology, the IRBP 1-20 peptide was injected s.c. emulsified in CFA enriched in heat-inactivated Mycobacterium tuberculosis. Increased extracellular ATP concentrations could probably amplify adjuvant-mediated TLR stimulation of innate immune cells, so innate immune activation must be largely different in our model compared with both lung models. Second, in the cutaneous immunization model, ATPγS emulsified in CFA acted by activating/maturing the resident antigen-presenting Langerhans cells [10], which is quite different from the recruitment of immature DC into bronchoalveolar fluids. Third, an exogenous ATPγS stimulation was performed and required in the lung model to observe the difference in dendritic cell migration, highlighting the unstable nature of locally released ATP [26]. It is well known that ATP exerts immunostimulant effects on DC/Langerhans cells [10]. Some studies argued for the implication of P2X7 receptors [29]; others suggested the enrolment of other P2 receptors [30]. Our work highlights such a role for P2Y2 receptors.
In conclusion, our data show that P2Y2-/- mice are less susceptible to mounting an autoimmune response against IRBP peptide 1-20, influencing the development of EAU. These results are in accordance with the danger model, which links autoreactive lymphocyte activation, immune cell migration and the release of danger signals such as extracellular nucleotides. Compared with the existing literature, however, our results demonstrate for the first time an extension of the role of P2Y2R to Th1- and Th17-mediated autoimmune responses. | 2017-04-03T05:55:09.213Z | 2015-02-18T00:00:00.000 | {
"year": 2015,
"sha1": "53a900eddc3e33b82dbdcb31162dc50ce1fde530",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0116518",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53a900eddc3e33b82dbdcb31162dc50ce1fde530",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
215790559 | pes2o/s2orc | v3-fos-license | Human Immune Responses to Adeno-Associated Virus (AAV) Vectors
Recombinant adeno-associated virus (rAAV) vectors are one of the most promising in vivo gene delivery tools. Several features make rAAV vectors an ideal platform for gene transfer. However, the high homology with the parental wild-type virus, which often infects humans, poses limitations in terms of the immune responses associated with this vector platform. Both humoral and cell-mediated immunity to wild-type AAV have been documented in healthy donors and, at least in the case of anti-AAV antibodies, have been shown to have a potentially high impact on the outcome of gene transfer. While several factors can contribute to the overall immunogenicity of rAAV vectors, vector design and the total vector dose appear to be responsible for immune-mediated toxicities. While preclinical models have been less than ideal in predicting the outcome of gene transfer in humans, the current preclinical body of evidence clearly demonstrates that rAAV vectors can trigger both innate and adaptive immune responses. Data gathered from clinical trials offer key learnings on the immunogenicity of AAV vectors, highlighting challenges as well as the potential strategies that could help unlock the full therapeutic potential of in vivo gene transfer.
INTRODUCTION
Adeno-associated virus (AAV) is a small (25 nm), non-enveloped virus composed of an icosahedral capsid that contains a single-stranded, 4.7-kb DNA genome. AAVs are dependoviruses, as they replicate only in the presence of helper viruses such as adenovirus, herpes virus, human papillomavirus and vaccinia virus (1)(2)(3)(4). The AAV genome is composed of two genes, rep and cap, flanked by two palindromic inverted terminal repeats (ITR). Rep encodes proteins involved in replication of the viral DNA, packaging of AAV genomes, and viral genome integration into the host DNA (5). Cap encodes the three proteins that form the capsid, VP1, VP2 and VP3, as well as the assembly-activating protein (AAP) and the newly identified MAAP (5,6). Wild-type AAVs naturally infect humans around 1 to 3 years of age (7)(8)(9) and are not associated with any known disease or illness (10). After infection, AAV remains latent in integrated or non-integrated forms until a helper virus provides the functions necessary for its replication (5). In recombinant AAV (rAAV) vectors, the parental virus rep and cap genes are replaced with the DNA of choice flanked by the two ITRs, referred to as the transgene expression cassette when used for gene therapy purposes. rAAV vectors are produced efficiently by several approaches: transient double or triple transfection of mammalian cells (11,12); infection of mammalian cell lines with adenovirus (13) or herpes simplex virus (14,15); and infection of insect cells with baculovirus (16). During packaging, the rep and cap genes are provided in trans, together with the adenoviral helper proteins required for AAV genome replication and packaging (17,18). Triple transfection of HEK293 cells is one of the most commonly used methods for rAAV production. It is based on the co-transfection of three plasmids: one containing the transgene expression cassette flanked by the viral ITRs, a second packaging plasmid expressing the rep and cap genes, and a third plasmid encoding the adenoviral helper genes (17,19). Historically, the purification of rAAV vectors was performed by ultracentrifugation in two successive density gradients (17). Nowadays, purification of AAV capsids by affinity chromatography is more frequently used, as the column process is more scalable and yields high-purity preparations amenable to clinical use (20). Depending on the purification method, rAAV preparations differ in terms of contaminants and the ratio of empty to full capsids. An important focus in the field is the continuous improvement of rAAV manufacturing processes to increase vector yields and purity while reducing costs (17,18,21,22). A significant concern related to the methods of production and purification is the impact of rAAV purity on the overall vector immunogenicity profile. One obvious example of a contaminant is the presence of empty capsids in rAAV preparations (23).
The protein capsid of rAAV determines the affinity of the vector for a given tissue. Transduction of a cell by rAAV vectors requires the interaction of the viral capsid with surface receptors, followed by internalization and intracellular trafficking through the endocytic/proteasomal compartment. Capsid proteins then mediate endosomal escape and nuclear import and, after uncoating, the single-stranded genome carried by rAAV is converted into double-stranded DNA. This conversion step may represent a limiting factor for gene transfer that self-complementary (sc) AAV vectors can bypass, at the cost of reduced packaging capacity (24). Unlike wild-type AAV, the genome of rAAV vectors integrates inefficiently into the host DNA and remains mostly episomal (10,25,26). Transgene expression finally results from the transcription of the mRNA and the successive translation of the transgene coding sequence (Figure 1) (27).
To date, 13 different AAV serotypes and 108 isolates (serovars) have been identified and classified (5,28). The relatively low complexity of AAV biology facilitates the production of rAAV vectors composed of a transgene expression cassette flanked by the ITRs from serotype 2 (29), pseudotyped into any of the available AAV capsid variants (5). This process allows comparison of the properties conferred on rAAVs by the capsid proteins, e.g., tissue targeting, potency or immunogenicity. In particular, the capsid composition influences the first steps of transduction, i.e., interactions with receptors on target cells, post-entry trafficking and endosomal escape. Therefore, rAAV vectors bearing different capsids have different transduction potential, but also potentially different immunological properties (5,30). In recent years, several approaches have enabled the generation of engineered rAAV vectors and a significant expansion of the rAAV vector toolkit (31)(32)(33)(34). While the increasing knowledge of rAAV capsid structure-function (35) has led to the generation of capsid variants by direct mutagenesis of specific amino acid residues, the development of rAAV capsid libraries and high-throughput screening methods has resulted in the generation of a variety of novel capsids by directed evolution (31). Recently, two different approaches were reported to overcome the limitations of "conventional" methods of vector evolution. Both methods take advantage of the latest advances in DNA synthesis and sequencing, and are aimed at reducing the complexity of the initial library to be screened, either by artificial intelligence (6) or by grafting peptides derived from a subset of proteins involved in specific cellular functions (36). Regardless of the method used, the isolation of new rAAV vector capsids responds to a precise need for increased transduction efficacy with optimized biodistribution and reduced immunogenicity, at least in terms of cross-reactivity with preexisting antibodies.
Despite the efforts dedicated to enhancing transduction efficacy, several challenges for the clinical use of rAAV vectors remain. Among them, vector immunogenicity, which reflects the interactions of rAAVs with the host immune system, is perhaps of the utmost relevance, given its impact on treatment outcomes in terms of transgene expression durability and on the ability to eventually re-administer the vector in case of loss of efficacy over time (Figure 2). Here, we will outline general concepts on rAAV vector immunogenicity and comprehensively discuss the clinical experience in the context of systemic vector administration for liver-targeted gene therapy trials for hemophilia and for the treatment of neuromuscular diseases.
HUMAN IMMUNE RESPONSES TO WILD-TYPE AAV
Humoral Immunity to AAV
Although AAV seroprevalence varies geographically, neutralizing antibodies (NAbs) that recognize virtually all AAV serotypes can be found in a large proportion of the human population (ranging from 30 to 60%). Successive infections may explain the high prevalence of anti-AAV NAbs in humans (37)(38)(39), resulting in broad cross-reactivity across AAV serotypes of different origins, including human, mammalian or engineered. Regardless of the geographic region, the most prevalent NAbs are directed against AAV2, followed by AAV1 (38). NAbs appear early in childhood with a peak at 3 years of age, leaving a short time window, potentially convenient for gene transfer, between 7 and 11 months of age, after the infant loses the humoral protection conferred by passive transfer of maternal antibodies (38).
While antibodies from all four IgG subclasses have been observed, IgG1 is the predominant subclass found in seropositive individuals, and, in general, titers of anti-AAV IgG antibodies correlate with those of anti-AAV neutralizing antibodies (40,41). Similarly, subjects undergoing gene transfer with AAV vectors develop anti-AAV antibodies of all four IgG subclasses (mainly IgG1 but also IgG2 and IgG3) as well as IgM, contributing to the resulting high neutralizing titers (42). Interestingly, some individuals carry non-neutralizing IgG that binds the AAV capsid (43). While even low-titer NAbs are associated with efficient vector neutralization in vivo (44)(45)(46), the presence of binding non-NAbs appears to enhance rAAV transduction efficacy, at least in some tissues (43). Pre-existing anti-AAV antibodies in individuals receiving rAAV vectors are being investigated as a potential source of toxicity related to complement activation (47), although a direct interaction of rAAV vectors with complement proteins has also been reported (48,49).
T Cell Responses to AAV
Early clinical trials of gene transfer with rAAV demonstrated the potential negative impact of T cell-mediated immunity on the outcome of gene transfer (45,50,51). Since then, research efforts have focused on the study of pre-existing anti-AAV cellular responses and on the development of methods to detect and monitor cell-mediated immunity in AAV-based gene transfer (52)(53)(54)(55).
FIGURE 2 | Factors influencing AAV capsid immunogenicity. The proteins of the capsid, the genome, and the transgene product are the main potentially immunogenic components of AAV vectors. Production of dsRNA driven by the promoter activity of the ITRs can also trigger innate immunity. Additional host-dependent and vector-dependent factors can modulate the overall vector immunogenicity. These factors are mostly poorly understood, although innate immunity activators such as CpG motifs and vector dose appear to be important determinants of AAV vector immunogenicity.

Different assays have been developed to estimate the frequency of T cells specific for AAV in humans, including the IFN-γ ELISPOT (56-58) and flow cytometry-based assays (52,56,58). In general, AAV-specific cellular responses are less frequently observed than humoral responses, probably owing to lower assay sensitivity and to the fact that capsid-reactive lymphocytes are present at low frequency in peripheral blood. This may explain why several independent studies reported a lack of correlation between pre-existing cellular and humoral immune responses (50,56,58). Indeed, efficient detection of AAV-specific T cells in peripheral blood and spleen requires several rounds of in vitro expansion with peptide libraries derived from the capsid protein VP1 (50,56). Alternatively, FACS staining after AAV-specific tetramer-mediated magnetic enrichment can be used to increase the detection of AAV-specific T cells (59). Recently, we showed a correlation between anti-AAV antibodies and circulating AAV2-specific memory CD8+ T cells secreting TNF-α (52), suggesting that IFN-γ, which is currently broadly used as a marker of capsid-specific T cell activation, may not be the only cytokine that needs to be tracked for immunomonitoring in gene transfer trials.
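To make the assay readout concrete, the sketch below shows how ELISPOT counts are typically normalized and called positive. The 3x-over-background and 50 SFU/10^6 PBMC thresholds are one commonly used convention assumed here, not necessarily the criteria applied in the cited studies, and all values are invented.

```python
# Hypothetical ELISPOT positivity sketch. The 3x-over-background and
# >=50 SFU/1e6 PBMC thresholds are a common convention, assumed here,
# not necessarily the criteria used in the cited trials.

def sfu_per_million(spots: float, cells_per_well: float) -> float:
    """Normalize raw spot counts to spot-forming units per 1e6 PBMCs."""
    return spots / cells_per_well * 1_000_000

def is_positive(stimulated: float, background: float,
                fold: float = 3.0, floor: float = 50.0) -> bool:
    """Call a response positive if it clears both the fold-over-background
    criterion and the absolute SFU floor (assumed convention)."""
    return stimulated >= fold * max(background, 1.0) and stimulated >= floor

# Invented example: capsid peptide pool well vs. unstimulated control,
# each seeded with 2e5 PBMCs.
stim = sfu_per_million(120, 2e5)   # -> 600.0 SFU/1e6 PBMCs
bkg = sfu_per_million(8, 2e5)      # -> 40.0 SFU/1e6 PBMCs
print(is_positive(stim, bkg))      # True (600 >= 3*40 and 600 >= 50)
```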
As observed with humoral responses, capsid-specific T cell responses are less frequent in young children (<5 years) than in older healthy donors (56,60), suggesting that anti-AAV immune responses arise during infancy after AAV infection and persist throughout life as a pool of memory T cells in secondary lymphoid organs. Consistently, differentiation markers measured at the single-cell level by flow cytometry indicated that the majority of AAV-specific T cells found in humans display a memory phenotype (50,52,53,58). After exposure to the capsid antigen, AAV-specific memory T cells produce IFN-γ, IL-2, and TNF-α, and acquire a cytotoxic phenotype measured by the expression of granzyme B and the CD107a degranulation marker (52,53,56,58). In addition, two patterns of cellular responses to AAV that depend on the serological status of patients were recently identified by our laboratory (61). Exposure of human peripheral blood mononuclear cells (PBMCs) obtained from AAV-seropositive donors to capsid epitopes induced an effector memory phenotype in activated CD8+ T cells, with secretion of TNF-α and expression of granzyme B and CD107a (52). In seronegative subjects, transient activation of Natural Killer (NK) cells, but not of naive CD8+ T cells, was observed. The role of these activated NK cells, which secrete both IFN-γ and TNF-α without acquiring a cytotoxic phenotype, in the context of gene transfer remains unknown.
Innate Immunity to rAAV
Vectors derived from AAV consist of a protein capsid, which is highly similar, if not identical, to that of wild-type AAV; a single- or double-stranded DNA genome that does not express any viral proteins; and the inverted terminal repeats (ITRs), GC-rich regions of the single-stranded genome with a complex secondary structure. Both the capsid and the DNA components of rAAV may contribute to the activation of innate immunity, along with other host-specific factors (Figures 2, 3). In addition, production and purification of the vectors lead to the presence of DNA-depleted AAV capsids (empty capsids) and of both DNA and protein contaminants. In comparison to other biological drugs such as monoclonal antibodies, rAAV vectors are quite complex, and the prediction of immune-mediated toxicities after vector administration remains elusive, partially because of the lack of fully predictive animal models (62-64).
In recent years, significant research efforts have focused on establishing a causative role for innate immunity in the immune-mediated toxicities observed in humans. However, the intrinsic characteristics of the innate immune system, together with the lack of clinical evidence of innate immune system activation, still make it challenging to appraise the role of innate immunity in the immune-mediated toxicities observed in gene therapy trials.
Innate immunity is the first barrier against pathogens, as it mounts rapidly and does not require specific adaptation to the pathogen. The innate immune response depends on the recognition of pathogen-associated molecular patterns by the pattern recognition receptors (PRRs) expressed by immune cells. The molecular recognition of viral nucleic acids, membrane glycoproteins, or even chemical messengers by PRRs leads to the nuclear translocation of Nuclear Factor κB (NF-κB) and Interferon-Regulatory Factors (IRFs), transcription factors with a central role in the expression of pro-inflammatory cytokines and type I interferons (IFNs), respectively (65).
In the context of rAAV-mediated gene transfer, preclinical studies have supported an important role for type I IFNs in the induction of CD8+ T cell responses. In particular, blocking the activation of innate immune responses prevented both cytotoxic (66,67) and humoral (52) anti-capsid responses in vivo.
Most of the data on the role of type I IFNs in rAAV vector immunogenicity were obtained in the context of liver-targeted gene transfer. The liver represents a unique immunological environment, characterized by the presence of resident immune cells together with both specialized and non-specialized antigen-presenting cells (APCs). In nonparenchymal liver cells, including Kupffer cells and liver sinusoidal endothelial cells (LSECs), innate immunity activation after liver gene transfer with rAAV mainly occurs through binding to TLR2 expressed on the cell surface (68). The rAAV double-stranded DNA genome, and in particular its unmethylated CpG motifs, may be recognized by the endosomal TLR9 in Kupffer cells (69), peripheral plasmacytoid DCs (pDCs) (66,70), or monocyte-derived DCs (71). TLR9 engagement was associated with enhanced activation of AAV-specific CD8+ T cells due to increased antigen presentation on class I major histocompatibility complex (MHC) molecules (54,66,72).
An intriguing hypothesis is that, in addition to the vector DNA genome, double-stranded RNA (dsRNA) may participate in the induction of innate immunity to rAAV (73). According to this study, dsRNAs are produced by the promoter activity of the ITRs. Accumulation of dsRNA would, in turn, stimulate the MDA5 sensor in human hepatocytes transduced with AAV, leading to the expression of type I IFNs. Interestingly, blockade of MDA5 decreased the IFN response and improved transgene expression in transduced cells in vitro (73). Although this hypothesis is not yet supported by clinical data, it may explain why cellular responses are sometimes initiated weeks after vector administration in clinical trials, a timeframe consistent with dsRNA synthesis in vivo (51,74).
Adaptive Immune System Activation Following rAAV Administration
The induction of an adaptive immune response requires a longer time than innate immunity and is considered the second barrier against pathogens. In return, adaptive immune responses are antigen-specific and eliminate pathogens while generating immunological memory.
T and B lymphocytes are activated after the molecular recognition of an antigen presented by APCs (75). After activation, lymphocytes expand and differentiate into effector cells that specifically inactivate or clear antigens through the induction of humoral or cytotoxic responses. When the levels of circulating antigen are reduced by the mounting immune response, memory T and B lymphocytes are generated and can respond to subsequent antigenic stimulation in a more efficient and faster manner (75). After rAAV vector administration, both transduced cells and professional APCs present capsid-derived epitopes to cytotoxic CD8+ T cells via MHC class I (50,56,66,76). Activated CD8+ T cells may clear rAAV-transduced cells, thus inducing inflammation in the target organ and affecting the gene transfer outcome (45,51,77,78). The concurrent presentation of capsid-derived MHC class II epitopes by professional APCs activates CD4+ T helper cells, which facilitate humoral and cell-mediated immune responses (67). Indeed, experience from clinical trials indicates that rAAV vector administration leads to the development of anti-AAV IgG and NAbs (42), likely preventing vector readministration. Administration of immunomodulatory regimens (79) or B-cell depletion prior to gene transfer (80) has been effective in blocking humoral immune responses to rAAV in the preclinical setting. To this aim, rituximab in combination with rapamycin is currently being tested in humans as a strategy to enable vector re-dosing (NCT02240407).
Clinical experience indicates that, to some extent, rAAV vector immunogenicity is dose-dependent (81,82). Low vector doses appear to be managed, in most cases, by short courses of corticosteroids or other mild immunosuppressive regimens, with rescue of transgene expression (82). Accordingly, a dose-dependent increase in AAV antigen presented on MHC class I, together with higher CD8+ T cell activation, was reported in vitro (76,83). Nevertheless, the deleterious effects of the anti-capsid cellular response, and in particular the very slow onset of the T cell response, are not fully recapitulated by animal models. This has prevented investigators from formulating predictive models of rAAV vector immunogenicity and, to some extent, has hindered the development of immunomodulatory protocols specific for gene transfer.
IMMUNE RESPONSES AGAINST THE TRANSGENE PRODUCT
A large part of the gene therapy strategies for monogenic diseases aim at replacing a mutated gene with a corrected copy to restore its function and correct the disease phenotype. The potential development of an immune response against the transgene depends on several variables, including the tissue targeted by gene transfer, the host genetic background, and the extent of residual expression of the donated gene. Gene transfer in the context of missense mutations and residual endogenous expression of the full-length protein is unlikely to induce anti-transgene immune responses, although some preclinical studies suggest that even single amino acid variations can be recognized by the host immune system (84). Conversely, gene transfer in the context of stop mutations with no residual protein expression is more likely to result in anti-transgene immunity, owing to the absence of central tolerance against the transgene product itself.
In the clinical setting, anti-transgene immune responses have been documented, so far, in only a few clinical trials, mostly after intramuscular delivery of rAAV vectors. In particular, evidence of T cell-mediated anti-transgene cytotoxic responses was documented in a phase I/II trial of intramuscular gene transfer in Duchenne muscular dystrophy patients (85). In this trial, AAV-mediated transfer of a mini-dystrophin transgene resulted in poor expression and was associated with the development of T cell responses directed against transgene epitopes or, possibly, with the recall of a pre-existing anti-dystrophin T cell response. Similarly, decreased transgene expression and transgene-specific cytotoxic T cells were reported after intramuscular delivery of alpha-1 antitrypsin with an rAAV vector in one subject (86), although most of the clinical trial participants achieved long-term expression of the transgene (87). In other clinical trials, the impact of immune responses on treatment outcomes was less clear. Finally, in a phase I/II trial of intracranial delivery of an rAAV5 vector for mucopolysaccharidosis type IIIB, anti-transgene T cells were also reported (88). Taken together, these clinical data suggest that disease-specific conditions, e.g., the ongoing inflammation in muscular dystrophies (85), are likely to increase transgene immunogenicity after gene transfer.
The apparent inconsistency between the potential immunogenicity of the transgene and the low number of actual reports of anti-transgene immune responses may also be explained by the fact that most clinical trials, so far, were performed: (i) in subjects already exposed to protein replacement therapy prior to gene transfer; (ii) in subjects with residual endogenous expression of the gene targeted by gene transfer; (iii) with gene transfer restricted to immune-privileged compartments like the eye, the liver, or the brain; and (iv) with gene transfer administered together with an immunomodulatory regimen.
One important determinant of anti-transgene immune responses is the tissue distribution of transgene expression. The selectivity of gene expression for a given target tissue is the result of the combined tropism of the AAV capsid, the route of vector administration, and the specificity of the promoter included in the transgene expression cassette. In general, intramuscular administration and strong ubiquitous promoters are more likely to induce anti-transgene immune responses than systemic administration and tissue-specific promoters (89,90). Another layer of complexity in evaluating the potential immunogenicity of gene transfer is that, in some tissues such as muscle (91), inflammation due to the underlying disease might confer a higher risk of triggering transgene-directed immune responses. Conversely, in tissues that are immune-privileged per se, owing to the presence of barriers that reduce antigen presentation and immune cell trafficking (e.g., the eye or the nervous system) or to their particular immunological milieu (e.g., the liver), the overall risk of encountering anti-transgene immune responses is low.
The liver, due to its constant exposure to non-self antigens, has peculiar immunological properties that prevent uncontrolled immune activation. Several studies of gene transfer with rAAV in both small and large animals indicated that hepatocyte-restricted transgene expression induces a robust, antigen-specific peripheral tolerance (60,92,93). In animal models, liver-induced immunological tolerance has been exploited to counteract deleterious immune responses induced by gene transfer targeting more immunogenic tissues, such as muscle (89,94). The different antigen-presenting cells (APCs) resident in the liver are involved in the tolerogenic effect observed after liver gene transfer. In particular, Kupffer cells, the macrophages resident in the liver, seem to have a less mature phenotype than other professional APCs (95,96). This, together with the secretion of the anti-inflammatory cytokine IL-10 by Kupffer cells (97-99), leads to poor T cell activation. Antigen presentation through MHC class I expressed on hepatocytes has been associated with incomplete CD8+ T cell activation and increased exhaustion and apoptosis (89,93,94,100-103). Liver sinusoidal endothelial cells (LSECs) can also act as professional APCs and promote tolerance through the induction of regulatory T cells (Tregs) (104,105). Tregs play an essential role in tolerance induction after liver gene transfer, as demonstrated by the increased transgene immunogenicity observed after Treg depletion (60,89,92,106). Consistently, increasing Treg expansion with rapamycin treatment favored the induction of liver-mediated tolerance even in the presence of pre-existing anti-transgene immunity (107,108). Other mechanisms, such as the induction of CD8+ regulatory T cells (109), the degradation of T cells in hepatocytes (110), and CD4+ T cell anergy (111), have been proposed to contribute to the establishment and maintenance of liver tolerance.
IMMUNE RESPONSES TO rAAV VECTORS IN CLINICAL TRIALS AFTER INTRAVENOUS INFUSION OF rAAV VECTORS

Liver Gene Transfer - The Experience With Hemophilia B
The largest set of clinical data available on rAAV-mediated gene transfer for the treatment of liver diseases derives from hemophilia B studies. Hemophilia B is an ideal target for rAAV gene therapy for several reasons. First, the transgene, human coagulation factor IX (hFIX), can be expressed in a variety of tissues including muscle and liver, the latter being its natural site of synthesis, and low levels of transgene expression (around 5% of normal) are sufficient to greatly reduce the impact of the disease on the quality of life of the patients. Another important advantage is that hFIX is small and, based on the experience with protein replacement therapy, seems to have a relatively lower immunogenicity potential than, for example, human coagulation factor VIII. Finally, the disease is very well characterized, small and large animal models of hemophilia B are available, and methods and endpoints to evaluate the efficacy of a given treatment are well established.
The first demonstration that hFIX can be secreted by human hepatocytes following rAAV vector-mediated gene transfer came from a seminal clinical trial in which 7 subjects with severe hemophilia B received, through the hepatic artery, a single-stranded rAAV2 vector carrying the hFIX transgene under the control of a liver-specific promoter (45). The clinical trial was designed with three increasing dose cohorts of 8 × 10^10 vg/kg, 4 × 10^11 vg/kg, and 2 × 10^12 vg/kg, respectively. Therapeutic levels of hFIX expression were reported only in the first patient who received the highest vector dose. However, unlike in animal models, which showed stable expression of hFIX over time (51,112), in humans transgene expression started to decline 4 weeks after vector injection. This decline was associated with a self-limited increase in liver transaminases and the detection of circulating AAV-specific CD8+ T cells (45,50). In a second patient dosed at 2 × 10^12 vg/kg, no transgene expression was observed, possibly due to a pre-existing anti-AAV2 humoral immune response (45).
This first clinical trial demonstrated that rAAV vectors were safe and efficient at targeting the liver, although transgene expression was only transient. Both the small and large animal models used in preclinical research failed to predict this negative outcome linked to an anti-AAV capsid cellular response. Nevertheless, this clinical trial provided unique information for the future use of rAAV technology for gene transfer in humans. A second clinical trial was then carried out using the rAAV8 serotype for the expression of hFIX (81). The improved transduction of hepatocytes achieved with this serotype (113) was combined with a self-complementary genome and a codon-optimized transgene sequence to optimize expression in the liver (114).
In this second trial, participants were screened for pre-existing humoral responses against AAV8, and only seronegative patients were included. Three doses of the vector, ranging from 2 × 10^11 vg/kg to 2 × 10^12 vg/kg, were infused through a peripheral vein. Differently from the previous trial, hFIX expression was detectable and reached 1-4% of normal in the low- and mid-dose cohorts (81). In the first subject dosed in the highest-dose cohort, therapeutic hFIX levels (approximately 8-10% of normal) started to decline 8 weeks post-infusion and, similarly to the previous trial, an elevation of liver enzymes and an increase in circulating capsid-specific T cells were detected. A tapering course of steroids was administered to control the liver enzyme elevation, which allowed the rescue of transgene expression (81). Of the six additional participants enrolled in the high-dose cohort, four developed a transient transaminitis that rapidly resolved after transient treatment with prednisolone (74). In this study, most participants, and in particular those from the high-dose cohort, showed a significant reduction in annualized bleeding episodes in the absence of recombinant hFIX prophylaxis (74), with stable transgene expression documented for up to 10 years (115).
Results obtained in a third clinical trial for hemophilia B further support the role of the total capsid dose as a determinant of rAAV vector immunogenicity. Subjects seronegative for anti-AAV antibodies received a relatively low dose (5 × 10^11 vg/kg) of an AAV vector expressing a hyperactive variant of hFIX (hFIX-R338L). At this dose, only 2 out of 10 participants had an elevation of liver enzymes, which was successfully controlled with corticosteroids. In this study, rAAV administration resulted in therapeutic transgene expression in all enrolled subjects (78). rAAV5 was also used to express hFIX in the hepatocytes of hemophilia B patients. This vector, produced in a baculovirus system, was injected at doses up to 2 × 10^13 vg/kg in 10 hemophilia B patients (116). Therapeutic levels of hFIX were observed in most of the treated patients. ALT elevation, reported in 3 out of 10 patients, was neither associated with T cell activation detected by ELISPOT nor correlated with the presence of pre-existing NAbs, and was treated with corticosteroids with no measurable decrease in hFIX transgene expression. Similarly, in a clinical trial for hemophilia A, the infusion of up to 6 × 10^13 vg/kg of an rAAV5 vector expressing human coagulation factor VIII resulted in ALT elevation in several enrolled subjects (117). The impact of liver enzyme elevation on transgene expression in this study is still being debated, although a decrease in factor VIII levels has been detected in several participants followed long-term (118).
Corticosteroid treatment given in response to ALT elevation usually controlled the anti-AAV vector immune response and stabilized transgene expression. However, in some cases this approach failed to control anti-capsid immune responses. One example is a clinical trial in which 7 hemophilia B subjects received an scAAV8 vector expressing the hyperactive hFIX-R338L variant (119) at doses ranging from 2 × 10^11 vg/kg to 3 × 10^12 vg/kg (NCT01687608). Sustained transgene expression (hFIX activity of about 20%) was reached in only one participant, while all other patients enrolled in the trial either had no expression or lost transgene expression within 5 to 11 weeks post vector infusion despite corticosteroid treatment, without any evidence of anti-hFIX antibody formation (120). Vector immunogenicity was possibly dependent on the elevated CpG content of the transgene expression cassette. Indeed, transduction of primary human liver cells with this vector induced the secretion of more Th1-oriented chemokines than a CpG-null version of the same vector (121). In a second clinical trial, an rAAVrh10 serotype expressing hFIX was administered to six seronegative hemophilia B patients at doses of 1.6 × 10^12 and 5 × 10^12 vg/kg (122). A transient expression of hFIX was also reported in this study. After vector administration, 5 out of 6 injected individuals had ALT elevation associated with loss of transgene expression despite corticosteroid treatment. Four of them showed low anti-capsid and anti-hFIX responses as measured by IFN-γ ELISPOT assay, possibly reflecting the high doses of corticosteroids they received. Subject 6 had a higher ALT elevation, associated with a strong CD4+ IL-2+ IFN-γ+ T cell response against an epitope spanning the hFIX mutation and with increased inflammatory cytokines in the serum (123); however, no anti-hFIX humoral immune response was documented in this subject.
Taken together, the clinical experience accumulated in hemophilia B trials indicates that even mild immune responses to the vector may clear transgene expression from the liver. These immune responses are controlled by corticosteroid treatment in most cases. However, in some clinical trials, expression of the hFIX transgene was only transient regardless of corticosteroid treatment, potentially due to a higher intrinsic immunogenicity of the vectors used.
AAV Gene Transfer for Neuromuscular Diseases
Neuromuscular diseases of genetic origin constitute a heterogeneous group of diseases in terms of pathophysiology, tissues involved, age of onset, and clinical manifestations. Clinical experience accumulated with gene replacement using rAAV vectors indicates that, regardless of the disease, targeting muscle or the central nervous system requires high vector doses in the range of 1 × 10^14 vg/kg. In recent years, improvements in large-scale rAAV vector manufacturing have made it possible to deliver the large doses of AAV vectors needed in patients with neuromuscular diseases.
One of the most exciting results in the field of rAAV-mediated gene transfer was the recent approval of Zolgensma (AveXis, Novartis) for the treatment of spinal muscular atrophy (SMA) type I (124). The efficacy of this drug was initially proven in a pivotal clinical trial involving 15 patients with SMA type I (125). An rAAV9 vector expressing the SMN1 gene under the control of a ubiquitous promoter was infused intravenously at doses of 6.7 × 10^13 and 2.0 × 10^14 vg/kg. In the first patient dosed at the low vector dose, a robust ALT elevation (31 times the upper limit of the normal range) was reported, although it was controlled by corticosteroids. After this first observation, prophylactic corticosteroid treatment was applied (30 days, starting 1 day before vector infusion), and ALT elevation was reported in only 3 patients within the high-dose group (125). At the same time, a high frequency of capsid-specific T cells was detected in peripheral blood by IFN-γ ELISPOT (126). However, neither the ALT elevation nor the presence of capsid-specific T cells was associated with reduced vector efficacy. The excellent efficacy profile and the mild adverse events reported in this first clinical trial were confirmed at long-term follow-up (125,127,128) and in a phase III trial (ClinicalTrials.gov: NCT03306277). Based on these results, a dose-finding clinical trial was launched to evaluate the safety and efficacy of intrathecal administration of Zolgensma in SMA type 2 patients (ClinicalTrials.gov: NCT03381729). So far, 31 patients have received the treatment at three vector doses ranging from 6 × 10^13 to 2.4 × 10^14 total vector genomes (129). Results from the first two dose cohorts indicated an amelioration of motor function with mild adverse events. Despite corticosteroid treatment, elevated ALT and AST were reported in one patient and were considered related to the treatment. However, this trial was placed on partial hold by the FDA after the detection of possible toxicity in dorsal root ganglia neurons in two independent preclinical studies performed in large animals (130,131). Importantly, this unexpected toxicity was never reported in patients who received the same vector by peripheral vein (125) or in a different trial of rAAV gene transfer for giant axonal neuropathy (ClinicalTrials.gov: NCT02362438), in which intrathecal delivery of rAAV9 was accompanied by transient corticosteroid treatment and a longer course of tacrolimus and rapamycin.
Additional information on the administration of high doses of rAAV in pediatric patients derives from a phase I/II clinical trial for X-linked myotubular myopathy (MTM, ClinicalTrials.gov: NCT03199469) and three studies for Duchenne muscular dystrophy (DMD, NCT03375164, NCT03362502, NCT03368742). In the MTM study, doses up to 3 × 10^14 vg/kg of an rAAV8 vector expressing the MTM1 transgene under the control of a muscle-specific promoter were administered to patients less than 5 years old (132). Clinically meaningful improvement, together with robust protein expression and recovered histology, was reported. In this study, a prophylactic corticosteroid regimen was applied. Although the vector was generally well tolerated, increased liver enzymes, creatine kinase, and troponin were reported. Both anti-transgene and anti-capsid immune responses were detected after vector administration, although it is unclear whether they correlate with the clinical outcomes. Further studies and long-term follow-up of participants will help elucidate the relevance of the measured immune responses.
Variable outcomes have been reported across trials of systemic rAAV vector delivery for DMD. One clinical trial used an rAAVrh74 vector to express a microdystrophin transgene under the MHCK7 muscle-specific promoter (ClinicalTrials.gov: NCT03375164). In this trial, 3 out of 4 DMD patients who received a dose of 2 × 10^14 vg/kg had elevated liver enzymes, which were controlled by increasing the corticosteroid dose (133). So far, no data have been provided by the sponsor on specific responses against the capsid or the transgene, although sustained transgene expression, together with amelioration of circulating creatine kinase levels (133), suggests no impact of immune responses on muscle transduction. A randomized, double-blind, placebo-controlled study was recently opened to strengthen the data of this pilot study (ClinicalTrials.gov: NCT03769116). Two weeks after vector injection, a case of rhabdomyolysis was reported in a participant from this second study. However, the study's drug safety monitoring board reviewed the data and recommended that the study continue, suggesting that the adverse event was possibly unrelated to the investigational product.
Two additional studies of systemic rAAV vector delivery for DMD presented a more complex clinical picture. In the first one, an rAAV9 vector expressing a micro-dystrophin under the control of a muscle-specific promoter was infused intravenously in 6 adolescent patients at two doses, 5 × 10^13 and 2 × 10^14 vg/kg (ClinicalTrials.gov: NCT03368742). Despite micro-dystrophin expression, this trial was placed on hold because 2 patients, one in the low-dose and one in the high-dose cohort, developed acute kidney injury and activation of the complement system with signs of cardiopulmonary decline (134). In a second trial for DMD, 1 × 10^14 vg/kg or 3 × 10^14 vg/kg of an rAAV9 vector expressing a mini-dystrophin under the control of a muscle-specific promoter was injected in 6 adolescent patients from 6 to 12 years of age (ClinicalTrials.gov: NCT03362502). Mini-dystrophin expression levels above 20% of wild type were observed at 2 months post vector infusion, while activation of the immune system, as measured by neutralizing antibody levels and T cell responses by ELISPOT, was documented in all participants (135). As in the previous clinical trial, one of the participants showed symptoms of complement activation and acute kidney injury that required treatment with a complement inhibitor (eculizumab). Importantly, the sponsor observed that the activation of complement was associated with a rapid antibody response against the vector (135).
The data obtained from clinical trials in neuromuscular diseases support the concept that high doses of rAAV vectors are in general well tolerated and have the potential to treat rare genetic diseases affecting muscle or the central nervous system, particularly when these high doses are administered during early childhood. However, in adolescent patients, complement activation, possibly due to exaggerated anti-vector immune responses, may represent a limitation to the application of gene therapy at such doses. It should be noted that this hypersensitivity to rAAV was identified in DMD patients who, owing to ongoing muscle degeneration and the underlying inflammation, have a peculiar immunological environment that tends to exacerbate immune responses (91). The disease specificity of these responses is possibly supported by the fact that in a clinical trial for limb-girdle muscular dystrophy type 2E, no instances of immune-mediated toxicities were reported despite the use of a similar vector at comparable doses. Other triggering factors are also being considered to explain these emerging clinical findings, such as the age of the subjects enrolled, the different vectors used in the studies, and possibly contaminants derived from the different processes used to manufacture the clinical lots of rAAV used in the studies. Larger studies will help to better define the determinants of the immunotoxicities observed in these trials.

FIGURE 3 | Immunomonitoring in gene transfer. A broad range of assays can be implemented for immunomonitoring in gene transfer trials. Serum samples or other relevant samples, such as cerebrospinal fluid (CSF), can be used to monitor markers of innate immunity as well as to determine antibody titers before and after vector administration. The cell fraction of peripheral blood is frequently used for both B and T cell assays by ELISPOT. More complex technologies can also be useful, for example to track T cell clones via TCR sequencing or to define transcriptome changes at the single-cell level. Additionally, high-content flow-based assays can be applied for the simultaneous characterization of a large number of surface and intracellular markers. While a lot of information can be gathered by studying the immune response to AAV in peripheral blood, access to tissue samples could help better define the nature of the local immune response in a transduced tissue as well as its impact on vector genome persistence. As many questions remain regarding AAV immunogenicity, the field of AAV gene therapy research needs further efforts to resolve the complexity of capsid-related immune responses. The harmonization of patient immunomonitoring using standard guidelines, and quality controls to check immune assay performance over time and across clinical trials, would greatly facilitate the comparison of data and, subsequently, the understanding of the complexity of anti-AAV immune responses.
CONCLUSION
In recent years, we have accumulated significant clinical experience with AAV vectors. While the study of immune responses in AAV trials has resulted in important advances for the field, much more needs to be done to provide a clear picture of the complex interactions of AAV with the host immune system in the different clinical settings of gene transfer. Of note, relatively little is understood about the host- and vector-dependent factors influencing the development of cytotoxic immune responses that lead to poor gene therapy outcomes in terms of duration of transgene expression. To this end, one limitation of the current immunomonitoring methods is that they rely on the in vitro testing of immune cells isolated from peripheral blood rather than of lymphocytes infiltrating the peripheral tissues transduced with rAAV vectors. A distinct activation profile of tissue-resident cells, and the variability associated with PBMC collection and testing across clinical trials (i.e., the lack of standardization of the assays used for immunomonitoring), may explain the poor correlation between IFN-γ ELISPOT results and immunogenicity outcomes observed in some clinical trials. Additionally, monitoring cytokines other than IFN-γ may help to more effectively track T cell activation following rAAV vector administration (52) (Figure 3). The relatively low number of subjects enrolled in each trial also represents a potential limitation to the study of immune responses after gene transfer, although the analysis of collective results in the clinic, along with preclinical studies, has begun to highlight some of the determinants of vector immunogenicity (e.g., the ability of rAAV vectors to activate innate immunity). A more systematic and standardized approach to immunomonitoring in rAAV trials may further boost our knowledge of vector immunogenicity in humans, particularly with regard to the disease-specific immune context of gene transfer. This will be key to devising strategies aimed at reliably achieving safe and long-lasting therapeutic efficacy following rAAV vector delivery.
AUTHOR CONTRIBUTIONS
GR, D-AG, and FM wrote the manuscript.
Since 1999, dengue outbreaks in the continental United States involving local transmission have occurred only episodically and only in Florida and Texas. In Florida, these episodes appear to be coincident with increased introductions of dengue virus into the region through human travel and migration from countries where the disease is endemic. To date, the U.S. public health response to dengue outbreaks has been largely reactive, and implementation of comprehensive arbovirus surveillance in advance of predictable transmission seasons, which would enable proactive preventative efforts, remains unsupported. The significance of our finding is that it is the first documented report of DENV4 transmission to and maintenance within a local mosquito vector population in the continental United States in the absence of a human case during two consecutive years. Our data suggest that molecular surveillance of mosquito populations in high-risk, high-tourism areas of the United States may enable proactive, targeted vector control before potential arbovirus outbreaks.
KEYWORDS dengue virus serotype 4, transmission, Aedes aegypti, DENV4, flavivirus, mosquito, arbovirus, surveillance, insect-specific viruses

Approximately 40% of the globe is at risk of infection by flaviviruses such as dengue virus (DENV), an enveloped, single-stranded RNA virus transmitted primarily by Aedes aegypti mosquitoes (1,2). Since severe disease from DENV infection can manifest as dengue hemorrhagic fever/dengue shock syndrome, DENV establishment in the continental United States is a major concern for public health agencies (1). In the United States, Florida has experienced increases in local DENV transmission since 2009, driven in part by human and pathogen movement (3). A. aegypti is endemic throughout subtropical Florida, and the vector population has resurged recently, following its near-displacement by A. albopictus (4). Autochthonous DENV infection occurs sporadically, primarily in southern Florida, with limited local cases elsewhere in the state (3).
Recently, reports have indicated that certain insect-specific viruses (ISVs) can negatively impact or enhance arbovirus (including DENV) infections in insect cells and mosquitoes (5-7). Although the impacts of many ISVs on arboviral competence have yet to be determined, the evidence to date clearly indicates that the mosquito virome cannot be ignored and likely influences the risk of autochthonous DENV transmission once the virus is introduced into an area. Therefore, we conducted a metaviromic study of adult F1 (first-generation) female A. aegypti mosquitoes raised from eggs collected in ovitraps in 2016 to 2017 from Manatee County, to assess the presence of any potentially influential ISVs in local mosquito populations outside southern Florida. Although no index human case of DENV4 was reported in the county during 2016 to 2017, we detected and sequenced DENV4, which was maintained vertically for one generation (since the adults were raised in the laboratory from field-caught A. aegypti eggs), in four mosquito populations from Florida's Gulf Coast. We followed up this unexpected finding with genetic analyses to determine the DENV4 strain's likely location of origin, to assess the time frame of virus introduction, and to investigate strain-specific mutations that may have enabled adaptation to and/or persistence within local mosquito populations.
RESULTS
DENV4 and ISVs in A. aegypti mosquitoes from Manatee County, FL. Our metaviromic analysis of female A. aegypti mosquitoes detected DENV4 alongside several ISVs at four sites in 2016 and at only the Anna Maria and Cortez sites in 2017 (Fig. 1). A full DENV4 genome (GenBank accession no. MN192436) was constructed, with overall genome coverage of ~11× across the reads (Fig. 2). We observed that the 2017 DENV4 signal was much lower than the 2016 DENV4 signal for Anna Maria and Cortez (Fig. 3; also, see Fig. S1 in the supplemental material), and although Palmetto had the highest proportion of 2016 reads, this signal was virtually absent in 2017. To confirm DENV4 infection, we amplified and confirmed by direct sequencing the NS2A DENV4 amplicon for the 2016 Longboat and Palmetto mosquito samples. Cumulatively, the data indicate that the drop in DENV4 signal relative to the "metavirome" from 2016 to 2017 was statistically significant (effect size of -2.026; P = 0.035).
The Manatee County A. aegypti RNA metavirome profile (Fig. 3) indicated abundances of Partitiviridae, Anphevirus, Whidbey virus, and cell fusing agent virus (CFAV). Partitiviridae are known to primarily infect plants, protozoa, and fungi, but all of the abundant groups in the metavirome have previously been detected in mosquitoes. We noted that the highest levels of CFAV (at the Anna Maria and Cortez sites) in 2016 were associated with DENV4 persistence into 2017 (P = 0.07109; R^2 = 0.7943). Additionally, Anphevirus signals were notably abundant in the Palmetto samples in 2016 and 2017, coincident with the DENV4 signal loss in Palmetto in 2017.
DENV4 phylogenetic and molecular clock analyses. After analyzing the metavirome, we investigated the genome of the DENV4 strain to determine its likely source and to assess the potential time frame of introduction into Florida. Our first analysis confirmed the phylogenetic signal and the absence of nucleotide substitution saturation (Fig. S2a and b). We subsequently explored the Manatee County DENV4 phylogeny with a 234-genome DENV4 data set constructed from GenBank sequences (see Table S1 in the supplemental material) by maximum likelihood (ML) phylogenetic inference (Fig. 4a). The ML phylogeny showed three clades: two Asian clades and one American clade with two Senegalese strains (MF00438; GenBank accession no. KF907503) and one Thai strain (GenBank accession no. KM190936) at the base (Fig. 4a). Manatee County DENV4 can be classified as DENV4 genotype IIb. The DENV4 genome obtained in Florida clustered most closely with two Haitian isolates from 2014 (GenBank accession no. KT276273 and KP140942) and a cluster of Puerto Rican isolates (Fig. 4a). Further back in time, a Haitian isolate (GenBank accession no. JF262782) collected 20 years earlier also clustered with the Manatee County-associated clade (Fig. 4a).
To estimate the most recent common ancestor (MRCA) for DENV4 entry into Manatee County, FL, as well as the date of divergence of the strain from the Haitian isolates, we performed a molecular clock analysis using a Bayesian evolutionary framework on a reduced data set that included only the "Americas clade" (8). The maximum clade credibility (MCC) phylogeny showed that the time of the MRCA (tMRCA) for the DENV4 Manatee County isolate and the Haitian isolates was 2010 (node A in Fig. 4b). The 95% high posterior density (HPD) interval for this tMRCA suggests that DENV4 may have entered Manatee County sometime between 2006 and 2013. For node B (Fig. 4b), the tMRCA of 1992, with a 95% HPD interval of 1901 to 1994, indicated that the Manatee County and 2014 Haitian strains diverged from the 1994 Haitian DENV4 (GenBank accession no. JF262782) almost a decade before its arrival in Florida. However, strain divergence may have occurred in Haiti and was not necessarily precipitated by introduction into Manatee County. Therefore, the introduction time frame could have been more recent than the estimated tMRCA.
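The study used a Bayesian framework for its molecular clock; as a rough illustration of the underlying idea, the sketch below performs a simple root-to-tip regression (a TempEst-style check for temporal signal), in which the slope approximates the substitution rate and the x-intercept approximates the tMRCA. All dates and distances below are invented.

```python
# Minimal root-to-tip regression sketch, not the Bayesian MCC analysis
# used in the study. Sampling dates and root-to-tip distances are invented.
import numpy as np

dates = np.array([1994.5, 2010.2, 2014.1, 2014.6, 2015.3, 2016.4])
root_to_tip = np.array([0.021, 0.034, 0.037, 0.038, 0.039, 0.040])

slope, intercept = np.polyfit(dates, root_to_tip, 1)
rate = slope                   # approx. substitutions/site/year
tmrca = -intercept / slope     # x-intercept: zero divergence at the root
r2 = np.corrcoef(dates, root_to_tip)[0, 1] ** 2

print(f"rate ~ {rate:.2e} subs/site/year, tMRCA ~ {tmrca:.0f}, R^2 = {r2:.2f}")
```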
DENV4 SNVs/read analyses. Next, we examined the Manatee County DENV4 genome sequences to compare strain variation between years and to identify mutations unique to the strain that potentially enabled local adaptation to and/or persistence in local mosquito populations. Following the MCC phylogenetic analysis, site-specific reads from mosquito populations in Manatee County were analyzed for single-nucleotide variations (SNVs) by determining the number of SNVs/read against the Manatee County consensus genome and other global DENV4 genomes (Fig. 4c). SNV/read values showed only 22 SNVs across the 11,650-nucleotide Manatee County genome across all reads. SNVs per read were substantially higher against the other DENV4 genomes. This indicates the likely persistence of a single strain of DENV4 in Manatee County during the 2016-2017 transmission seasons.
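A minimal sketch of how SNVs per read could be tallied from an alignment is shown below, assuming reads have been mapped to a consensus with a standard aligner and the indexed BAM carries NM tags; the file and contig names are placeholders, and NM also counts indel bases, so this only approximates the authors' SNV/read metric.

```python
# Hypothetical sketch: average mismatches (SNVs) per aligned read against
# a consensus, using the NM tag from a coordinate-sorted, indexed BAM.
# File and contig names are placeholders.
import pysam

def snvs_per_read(bam_path: str, contig: str) -> float:
    total_mismatches, n_reads = 0, 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig):
            if read.is_unmapped or read.is_secondary or read.is_supplementary:
                continue
            total_mismatches += read.get_tag("NM")  # edit distance to reference
            n_reads += 1
    return total_mismatches / n_reads if n_reads else float("nan")

# e.g., reads mapped to the local consensus vs. another DENV4 genome:
# print(snvs_per_read("manatee_2016.bam", "DENV4_consensus"))
```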
Signatures of Manatee County DENV4 adaptation. We then explored selective pressures on the Manatee County DENV4 strain's coding sequence that may be functionally important with respect to the transmission and persistence of DENV4 in Floridian A. aegypti. The DENV4 genome has 5′ and 3′ untranslated regions (UTRs) flanking eight nonstructural protein-coding regions (NS1, NS2A, NS2B, NS3, NS4A, the 2K peptide, NS4B, and NS5) (Fig. 5a). The protein-coding regions of Manatee County DENV4 were compared to those of four Haitian DENV4 genomes from 1994 to 2015 and a 1981 Senegalese DENV4 genome. These were analyzed for all amino acid substitutions between strains, and an analysis of the ratio of nonsynonymous to synonymous evolutionary changes (dN/dS) was conducted comparing the Senegalese DENV4 genome with the Manatee County DENV4 genome (Fig. 5a; see also Table S2). The highest proportions of amino acid substitutions were seen in NS2A and the 2K peptide; concurrently, the highest dN/dS values occurred for the NS2A gene, to the point of weak positive selection (dN/dS > 1) covering the V1238T mutation discussed further below. We then calculated dN/dS for all DENV4 genomes and for genotype II, genotype IIa, and genotype IIb with all available sequences, as well as within the Haiti-Florida and Haiti-Florida-Puerto-Rico clades (Fig. 5b). Purifying selection, which occurs when nonsynonymous mutations are deleterious, dominated, but we found weaker purifying selection in the NS2A and 2K peptide genes, consistent with the Manatee County-to-Senegal dN/dS analysis conducted previously. Values of dN/dS for these genes were also elevated relative to those for flanking genes in genotype IIb and in the Caribbean/Florida-specific groups (Fig. 5b).
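As a worked illustration of the dN/dS statistic used here, the sketch below implements a heavily simplified Nei-Gojobori-style count over two aligned coding sequences (no multiple-hit correction; codons differing at more than one position are skipped). It is not the method used by the authors, who relied on dedicated selection-analysis tools.

```python
# Heavily simplified Nei-Gojobori-style dN/dS sketch: no multiple-hit
# correction, stop-codon mutations counted like any other, and codon
# pairs differing at >1 position skipped. Illustrative only.
from itertools import product

BASES = "ACGT"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
AA = dict(zip(CODONS,
    "KNKNTTTTRSRSIIMIQHQHPPPPRRRRLLLLEDEDAAAAGGGGVVVV*Y*YSSSS*CWCLFLF"))

def syn_sites(codon: str) -> float:
    """Synonymous sites in a codon (out of 3), from its 9 point mutations."""
    syn = sum(AA[codon[:i] + b + codon[i + 1:]] == AA[codon]
              for i in range(3) for b in BASES if b != codon[i])
    return 3 * syn / 9

def dn_ds(seq1: str, seq2: str) -> float:
    S = N = sd = nd = 0.0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s = (syn_sites(c1) + syn_sites(c2)) / 2
        S += s
        N += 3 - s
        diffs = [j for j in range(3) if c1[j] != c2[j]]
        if len(diffs) == 1:                # ignore multi-hit codons
            if AA[c1] == AA[c2]:
                sd += 1                    # synonymous difference
            else:
                nd += 1                    # nonsynonymous difference
    pn, ps = nd / N, sd / S
    return pn / ps if ps > 0 else float("inf")

# Toy alignment: GGA->GGG is synonymous, AAA->GAA is nonsynonymous.
print(round(dn_ds("GGAAAA", "GGGGAA"), 2))  # ~0.29, i.e., dN/dS < 1
```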
Next, we further analyzed coding sequences in specific regions of the genome to investigate specific mutations that may have mediated the entry of Manatee County DENV4 into Florida and its persistence there. The Senegal sequence and the oldest Haitian sequence, from 1994, lack these mutations. In a selective pressure analysis utilizing the aforementioned 234-genome data set, we observed strong background purifying selection, with 143 sites within the NS2A gene found to be under episodic negative/purifying selection. Episodic diversifying/positive selection (evolutionarily preferred nonsynonymous mutation) was detected at two sites corresponding to amino acids 1238 and 1333; both residues localized to transmembrane segments of the protein. This makes V1238T a mutation of note, corresponding to the previous NS2A-associated analysis and detected in different analyses as a point of possible positive selection. The 2K peptide was next analyzed against the four Haitian genomes and the Senegalese genome from the first NS2A-specific analysis (Fig. 5d), and we observed that it had the second highest overall rate of nonsynonymous mutations and a peak of weaker purifying selection (Fig. 5a). There was only one nonsynonymous mutation among the six genomes, which is notable considering the size of the 2K peptide. This was a T2232A mutation present solely in the Manatee County DENV4 sequence.
DENV4 3′ UTR sequence and secondary structure analysis. To complete our genomic analysis of Manatee County DENV4, we examined the 3′ UTR, as this region and its derivative subgenomic flaviviral RNA have been implicated in epidemiologic fitness and fold into conserved structural elements, including the 3′ stem-loop (3′ SL) (Fig. S4). The 3′ UTR, through structural conformations, can affect viral replication in hosts (10). We noted several transition substitutions in the DENV4 IIb lineage prior to its arrival in Florida (node B in Fig. 4b and Fig. S4b). Most of these mapped to either the highly variable region (HVR) or the adenine-rich segments that space the functional RNA elements in DENV 3′ UTRs (11). The U10318C substitution in fNR2 (fNR1 is present only in other DENV serotypes) and the G10588A substitution in the 3′ SL mapped to base-pairing positions. Manatee County DENV4 also underwent a rare transversion (A10478U) at a conserved position in DB2. This substitution favors formation of a new base pair in the structure of DB2. Additionally, an insertion (10467A) occurred in the adenine-rich segment upstream of DB2; this insertion is common to all lineages.
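For readers wanting to reproduce this kind of before/after structure comparison, the sketch below folds a reference and a mutated segment, assuming the ViennaRNA Python bindings are installed; the sequence and substitution position are placeholders, not the actual Manatee County 3′ UTR.

```python
# Folding a placeholder 3' UTR segment before/after a substitution,
# assuming the ViennaRNA Python bindings are available
# (pip install ViennaRNA). Neither the sequence nor the position
# reflects the real Manatee County DENV4 3' UTR.
import RNA

ref = "GGGCUAACAAAUGGAAAGCCC"        # hypothetical segment
mut = ref[:10] + "U" + ref[11:]      # hypothetical A->U transversion

for name, seq in (("ref", ref), ("mut", mut)):
    structure, mfe = RNA.fold(seq)   # MFE secondary structure in dot-bracket
    print(f"{name}: {structure} ({mfe:.2f} kcal/mol)")
```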
DISCUSSION
Our metavirome analysis of A. aegypti from Manatee County, considered in light of previous research on interviral relationships, has revealed potential insight into, and possible new examples of, relationships between human arboviruses and ISVs in a state prone to autochthonous flavivirus transmission. The observed drop in DENV4 relative to the mosquito virome (ISVs) between 2016 and 2017 was statistically significant (P = 0.035), opening the possibility that the ISV profile of individual mosquitoes may influence the persistence of DENV4 in site-specific mosquito populations within the surveyed area. Note that our analysis was conducted on pools of mosquitoes, which limits the conclusions that can be drawn in this study relative to an analysis of individual mosquitoes and interviral dynamics. For example, although Anphevirus (described in our analyses) has been shown to reduce DENV titers in vitro during coinfections, further analysis of this specific dynamic in individual mosquitoes is necessary to fully determine whether Anphevirus coinfection (occurring singly or concomitantly with other ISVs) can mediate multigenerational dengue virus persistence in mosquitoes (12).
With respect to the role of natural infections by insect-specific flaviviruses in the proliferation of pathogenic arboviruses carried by different mosquito vector species, current knowledge remains uncertain. One mosquito-specific flavivirus that we detected, cell fusing agent virus (CFAV), is of particular interest. Coinfection studies performed in vitro with DENV2 and CFAV resulted in enhanced proliferation of both viruses (13). The observed correlation between the persistence of DENV4 infection into 2017 in Anna Maria and Cortez mosquitoes and CFAV abundance in 2016 (Fig. 3) may provide an example of the dynamics described previously by Zhang et al., showing enhanced replication of the two viruses (13). An important caveat is that the research reported by Zhang et al. was conducted in vitro. Conversely, Baidaliuk et al. demonstrated in vivo amplification-restrictive interactions between CFAV and DENV1 (7). How interactions between DENV4 genotype, mosquito genotype, and CFAV genotype ultimately influence the vector competence of Floridian A. aegypti mosquitoes remains to be determined. The observed metavirome patterns set the stage for follow-up studies to characterize the precise nature of ISV-DENV-mosquito interactions with respect to vector competence.
The absence of an index human DENV4 case does not preclude the possibility that DENV4 was transmitted locally. Up to 88% of primary DENV infections are asymptomatic, with DENV4 widely understood to cause primarily subclinical infections (14,15). Importantly, clinically inapparent infections could contribute to 84% of DENV transmission events through mosquitoes, so the threat of local transmission cannot be ruled out (14). It is noteworthy that DENV4 was detected in adult female mosquitoes reared from wild-caught eggs, implicating transovarial transmission (TOT) in local A. aegypti, as has been shown for DENV1 in Key West, FL (16). However, since the DENV4 signal measured in 2017 was lower than that measured in 2016, with two sites losing DENV4 prevalence altogether, TOT alone, even if it played a role in maintaining DENV4 in Manatee County mosquitoes, might have been insufficient to maintain DENV4 from 2016 to 2017. At present, vertical transmission remains only a possibility, especially since this phenomenon has not been fully described outside laboratories.
Furthermore, we suspect that despite Manatee County DENV4's divergence from Haitian strains sometime between 2006 and 2013, it likely did not enter Manatee County until 2014 or later, given its similarity to the 2014-2015 Haitian DENV4 isolates and the fact that TOT is an inefficient process. A recent review evaluating the influence of TOT on DENV epidemiology concluded that the current body of research suggests vertical transmission is likely insufficient to represent an independent mechanism of DENV maintenance (17). Tertiary mechanisms, beyond the ISV composition profile and TOT, could include inapparent human-mosquito infection cycles during the summer transmission (mosquito) season, which may also have contributed to DENV4 persistence in Manatee County A. aegypti. The exact mechanisms of maintenance in mosquitoes, and proof of local transmission, are difficult to elucidate at this juncture, considering that all mosquito samples were processed for rRNA-depleted total RNA sequencing (RNASeq) and reverse transcription-PCR (RT-PCR) (i.e., no live virus could be isolated). Importantly, a comprehensive serological survey, with subsequent confirmation by gold-standard neutralization assay, of the population at the four sample collection sites was not possible within the estimated mean half-life of detectable anti-DENV4 virion IgM or IgG. This limitation was unavoidable since (i) the complete viral genome assembly and orthogonal confirmation occurred more than 2 years after the initial mosquito collections and (ii) there are significant confounders and logistical obstacles (well outside the current scope of the study) that complicate working with transient worker and migrant communities in the sampled area. However, the data, representing the complete assembly and 2-year persistence of an individual strain of DENV4 and supported by results from orthogonal analytical approaches, remain provocative and reveal an unappreciated ecological process for DENV4 transmission in a nonendemic setting.
Tracking and predicting the movement and introduction of arboviruses into the United States, and especially into Florida, can potentially support proactive efforts to increase monitoring and vector control at critical points of introduction into the state. DENV4 has been reported throughout the Caribbean, especially in Puerto Rico and Haiti and, more recently, in Cuba (18). Florida has the largest populations of people of Puerto Rican, Haitian, and Cuban origin and descent in the United States, and there are ongoing efforts to develop effective "sentinel" surveillance programs that can prepare Florida to deal with potential local arbovirus transmission. As expected, our analysis suggests a Caribbean origin for the Manatee County isolate, with movement of DENV4 into Florida from Haiti and, before that, into Haiti from Puerto Rico. These results agree with previous findings depicting the Caribbean as a hot spot for arboviral spread in the Americas (18-20). Diversifying selective pressure in the NS2A gene and the 2K peptide (Fig. 5a and b) experienced by American/Caribbean DENV4 may have contributed to the fixation of mutations driving the adaptation of DENV4 to local infections of human and mosquito populations. The NS2A mutations that characterized the 1998 DENV4 outbreak in Puerto Rico are conserved among the Manatee County genome, the Puerto Rican genomes, and two Haitian genomes (GenBank accession no. JF262782.1 and KT276273.1) (Fig. 5c) (9). The 1981 Senegalese strain, the strain clustering closest to the Manatee County strain among isolates from outside the Americas (Fig. 4a and b), shares none of these mutations with Manatee County DENV4. An in-depth understanding of how putative "hallmark" mutations in arboviruses can lead to increased infection of local A. aegypti mosquitoes is lacking, compelling further study.
We observed the expected 15-nucleotide deletion (Δ15) in the Manatee County DENV4 3′ UTR (see Fig. S4 in the supplemental material), which is present across all circulating DENV4 strains but absent from the extinct genotype I DENV4 lineage (GQ868594_Philippines_1956). Since the Δ15 deletion maps to the HVR, it does not alter the secondary structures required for subgenomic flaviviral RNA (sfRNA) production. However, the HVR is an adenylate-rich unfolded spacer with poor sequence conservation, where no reliable secondary structure can be predicted, as our previous analyses suggested (11). It has been speculated that these spacers favor the correct folding of adjacent functional structured RNA elements. The deletion might change the rate of folding of the downstream functional structured RNA and thus might alter sfRNA production levels. Clearly, a closer molecular exploration of the exact role of this Δ15 deletion is needed.
The potential implications of our findings are intriguing, especially considering that arboviral surveillance of mosquito populations during the extended Florida mosquito season (April to October) is limited. To our knowledge, this is the first reported characterization of a DENV4 infection in native mosquito populations in Florida in the absence of an index human case across 2 years in a specific county. These data highlight the importance of knowing when and where arboviruses are introduced and point to the potential benefit of surveilling local mosquito populations for arbovirus infections prior to an outbreak. Given the increasing number of travel-related arbovirus introductions into Florida alone and the risk of local establishment in the state, we expect that while our report is seminal, it likely represents the tip of the iceberg. Furthermore, in 2019, 16 cases of locally acquired DENV were reported for the state, including an area along the West Central Florida Gulf Coast (21). Among the 335 travel-associated cases in 2019, DENV1 (n = 63), DENV2 (n = 235), and DENV3 (n = 31) serotypes were identified by PCR from 329 samples. DENV serotypes for the local cases can reasonably be predicted to mirror the geographical distribution of serotypes among the travel-associated cases. If these data and our own findings are any indication, the number of "under-the-radar" arbovirus infections of mosquito populations in migration hot spots across the state (and perhaps across other states and regions around the globe) remains significantly underestimated.
Mosquito sample preparation and viral RNASeq.
To avoid cross contamination of genetic material by sympatric A. albopictus in the area and to ensure the preparation of sufficient quantities of RNA extracted from only pristine samples of A. aegypti nulliparous females, we elected not to use adult traps. As such, eggs were collected in ovitraps in 2016 and 2017 (15 May 2016 and 19 June 2017) from four Manatee County sites (Fig. 1). To avoid cross contamination of mosquito viromes, each year eggs from each site were hatched independently in distilled water, reared to adulthood, identified by species, and then frozen. Twenty female mosquitoes were selected from each site, and abdomens were processed as a single pool per site (n = 20/pool) for the four collection sites for a total of eight individual pools. Total RNA was extracted using an AllPrep DNA/RNA minikit (Qiagen), and rRNA was depleted using a NEBNext rRNA depletion kit (New England BioLabs). A NEBNext Ultra II directional RNA library preparation kit (New England BioLabs) was used to prepare shotgun metagenomics libraries. Reverse-transcribed RNA libraries were sequenced using a HiSeq 3000 instrument (Illumina) in 2 × 101 run mode. The data were deposited into the NCBI Sequence Read Archive and Biosample archive under BioProject PRJNA547758.
Initial assembly and metavirome analysis. BBduk (version 37.75; https://sourceforge.net/projects/ bbmap/) was used to trim adaptor sequences and remove contaminants. A. aegypti sequences were removed using BBsplit (https://sourceforge.net/projects/bbmap/) and the A. aegypti Liverpool genome (AaegL5.1). Nonmosquito reads were assembled using Spades (3.11.1) in metagenomics mode (22). For each contig, a local similarity search in protein space was run using Diamond (0.9.17) against the NCBI NR (National Center for Biotechnology Information nonredundant) sequence database (23). Reads were mapped against assemblies using Bowtie (2.3.4.1) and were then sorted/indexed using Samtools (1.4.1) (24,25). Megan 6 was used to assign contigs and read counts to the lowest common ancestor (LCA; the lowest common ancestor on a phylogenetic tree if situated vertically, making the LCA the nearest common ancestor) and to view viral contigs (26). To estimate microbial community abundance, Diamond (0.9.17) was used to search reads against the NCBI NR database, Megan 6 was used to assign read counts to the LCA, and R (3.6.0) package Compositions (1.40-2) was used to create a subcomposition of RNA that was used to quantify RNA virus taxon abundance by site/year (e.g., Palmetto 2016 and Palmetto 2017) from the body of overall compositional RNA data in each site/year data set ( Fig. 3) (23,26,27). Compositional count data from the Megan LCA classification were assessed by the use of ALDEx2 to estimate the statistical significance of the change in DENV4 reads from 2016 to 2017 (26,28,29). ALDEx2 uses a Dirichlet multinomial Monte Carlo simulation to estimate the variance of the centered log ratio (CLR) values for taxa among the reads (28,29). Using the variance of the CLR, ALDEx2 computes P values using Welch's t test and returns an effect size (CLR/variance) for the estimate (28,29). For a determination of the statistical significance of the observed decrease in cell fusing agent virus (CFAV) reads from 2016 to 2017, a linear regression fitted to the CLRs of the Anna Maria and Cortez site DENV4 reads in 2016 and 2017 was utilized to yield an R 2 value and a P value to describe the trend.
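To make the compositional step concrete, the sketch below shows a minimal centered log-ratio (CLR) transform of per-taxon read counts of the kind ALDEx2 operates on. The counts are hypothetical, and the pseudocount replacement of zeros is a simplification: ALDEx2 proper draws Monte Carlo instances from a Dirichlet posterior before computing CLR values and effect sizes.

```python
import numpy as np

def clr(counts, pseudocount=0.5):
    """Centered log-ratio transform of a vector of taxon read counts.

    A pseudocount replaces zeros so the log is defined; this stands in
    for the Dirichlet Monte Carlo sampling that ALDEx2 itself performs.
    """
    x = np.asarray(counts, dtype=float) + pseudocount
    log_x = np.log(x)
    return log_x - log_x.mean()

# Hypothetical read counts for four viral taxa at one site in 2016 vs 2017
# (e.g. CFAV, DENV4 and two other insect-specific viruses).
site_2016 = [1200, 350, 90, 15]
site_2017 = [400, 2100, 80, 20]

# Per-taxon CLR shift between years; a shift that is large relative to the
# within-condition CLR variance is what ALDEx2 reports as a large effect.
print(clr(site_2017) - clr(site_2016))
```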
DENV4 refinement and genome-closing assembly. Two contigs covering most of the genome with a small gap were obtained. To create a closed genome, a data set of genomes for DENV1, DENV2, DENV3, and DENV4 (GenBank accession no. NC_001477.1, NC_001474.2, NC_001475.2, and NC_002640.1) and the two assembled contigs were used. We selected reads sharing a 31-mer with the data set using BBduk (https://sourceforge.net/projects/bbmap/), followed by assembly with Spades in meta mode and classification using Diamond for a complete DENV4 genome (22,23). Read mapping performed with Bowtie revealed incorrect bases near the 3′ end, which were manually corrected (24). The genome was annotated using the Genome Annotation Transfer Utility from the Virus Pathogen Database and Analysis Resource (ViPR) (30,31).
Phylogenetic and molecular clock analyses. Two hundred thirty-four DENV4 genome sequences from GenBank (see Table S1 in the supplemental material) were aligned using MAFFT version 7.407 with the L-INS-I method (32,33). IQ-TREE software was used to evaluate phylogenetic signal in the genomes by likelihood mapping and to infer maximum likelihood (ML) phylogeny based on the best-fit model according to the Bayesian Information Criterion (BIC) (34-36). Statistical robustness for internal branching order was assessed by Ultrafast Bootstrap (BB) Approximation (2,000 replicates), and strong statistical support was defined as represented by BB values of >90% (37).
To estimate when DENV4 entered Florida, we used 145 strains, including all isolates from the Americas, related Asian and African isolates, and randomly reduced oversampled Brazilian isolates. The strains in this data set were not recombinant, as assessed by scanning the alignments for possible recombination points using the RDP, GENECONV, MaxChi, CHIMAERA, and 3Seq algorithms implemented in RDP4 software (available from http://web.cbio.uct.ac.za/~darren/rdp.html) (38). Determinations of correlations between root-to-tip genetic divergence and date of sampling were conducted to assess clock signal before Bayesian phylodynamic analysis (39). Time-scaled trees were reconstructed using the Bayesian phylodynamic inference framework in BEAST v.1.8.4 (40,41). Markov chain Monte Carlo (MCMC) samplers were run for 200/250 million generations to ensure Markov chain mixing, assessed by calculating the effective sampling size (ESS) of parameter estimates. The HKY substitution model was used with empirical base frequencies and gamma distributions of site-specific rate heterogeneity (42). The fit of strict versus relaxed uncorrelated molecular clock models and of constant size versus Bayesian Skyline Plot demographic models was tested (8). Marginal likelihood estimates (MLE) for Bayesian model testing were obtained using path sampling (PS) and stepping-stone sampling (SS) methods (43,44). The best model consisted of a strict clock and a constant demographic size. The maximum clade credibility tree was inferred from the posterior distribution of trees using TreeAnnotator, specifying a burn-in of 10% and median node heights, and was then edited graphically in FigTree v1.4.4 (http://tree.bio.ed.ac.uk/software/figtree/), alongside ggtree, available in R (45).
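The root-to-tip clock-signal check mentioned above can be illustrated with a few lines of code; the divergences and sampling dates below are hypothetical stand-ins, and in practice a tool such as TempEst performs this regression on the actual ML tree.

```python
import numpy as np

# Hypothetical root-to-tip divergences (substitutions/site) against
# sampling dates (decimal years) for a set of DENV4 genomes.
dates = np.array([1981.5, 1994.2, 1998.7, 2006.1, 2012.4, 2016.4, 2017.5])
divergence = np.array([0.021, 0.034, 0.039, 0.047, 0.053, 0.057, 0.058])

# Least-squares fit: the slope estimates the substitution rate, and the
# x-intercept the time of the most recent common ancestor (tMRCA).
slope, intercept = np.polyfit(dates, divergence, 1)
r = np.corrcoef(dates, divergence)[0, 1]

print(f"clock rate ~ {slope:.2e} subs/site/year")
print(f"tMRCA ~ {-intercept / slope:.1f}")
print(f"correlation r = {r:.2f}")  # a strong positive r suggests usable clock signal
```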
Single-nucleotide variation analyses. The viral RNA sequencing reads were mapped onto the complete genomes of seven DENV4 strains. These strains represent all the known DENV4 lineages (accession numbers are provided in Fig. 4c). We also mapped the reads onto the assembled Manatee County DENV4 full genome. The read mapping was performed using the Geneious platform (Geneious Prime version 2019.2.1) and the "map to reference" function with standard settings (Mapper: Geneious; Sensitivity: Highest Sensitivity/Slow; Fine tuning: Iterate up to 5 times; no trim before mapping). The single-nucleotide variation quantification was performed in the same platform using the "find Variation/SNV" function under default settings.
DENV4 genetic analyses. From the alignment of the 234 DENV4 genomes, sequences corresponding to the NS2A gene were extracted to investigate selection pressure and mutations that potentially influenced adaptation to and/or persistence in mosquito populations. Comparative selection and mutation analyses revealed NS2A to be a relatively strong region of potential selection for the Manatee County genome. HyPhy algorithms were used to estimate nonsynonymous (dN) to synonymous (dS) codon substitution rate ratios (ω), with values of ω < 1 indicating purifying/negative selection and values of ω > 1 indicating diversifying/positive selection (46,47). Fast, unconstrained Bayesian approximation (FUBAR) was used for inferring pervasive selection and the mixed-effects model of evolution (MEME) to identify episodic selection (48,49). Sites were considered to have experienced diversifying/positive or purifying/negative selective pressure based on posterior probability (PP) values of >0.90 for FUBAR and a likelihood ratio test result of ≤0.05 for MEME.
To elucidate influential mutations in the Manatee County DENV4 genome that potentially enabled persistence in the local mosquito population, a dN/dS analysis of the Manatee County DENV4 against the closely related but geographically distant 1981 Senegalese DENV4 (GenBank accession no. MF004387.1) was conducted using JCoDA with default settings, a 10-bp sliding window, and a jump value of 5 (50). To further assess the selective pressure throughout coding sequences in the DENV4 lineage that established transmission in Manatee County, we implemented a single-likelihood ancestor counting (SLAC) method using the DataMonkey 2.0 Web application (51,52). That application combines maximum likelihood (ML) and counting approaches to infer nonsynonymous (dN) and synonymous (dS) substitution rates on a site-by-site basis for the different DENV4 coding alignments and corresponding DENV4 phylogeny. The measurements were performed on different alignments that included all strains, only genotype II strains, only clade IIa or IIb strains, or only strains that are closely related to the DENV4 Manatee County strain (multiple alignments of DENV4 coding sequences are available as a Mendeley data set). NS2A and 2K peptide genes were individually aligned and inspected in closely related DENV4 strains (1994). The alignments shown in Fig. S4 in the supplemental material are available with relevant accession numbers in a Mendeley data set (https://data.mendeley.com/datasets/kwszjp63rb/draft?a=e11f9b80-bcfb-443b-918d-3016032ef3bd), as are the GenBank accession numbers for the sequences compared in the alignments shown in Fig. 5c.
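As an illustration of the sliding-window scan of coding sequences described above, the sketch below computes a very crude windowed ratio of nonsynonymous to synonymous codon differences between two aligned coding sequences. Unlike JCoDA or SLAC, it does not normalize by the number of possible synonymous and nonsynonymous sites, so it is only a qualitative stand-in; the sequences in the usage example are invented.

```python
from Bio.Seq import Seq

def crude_dn_ds_windows(seq_a, seq_b, window_codons=10, step=5):
    """Crude sliding-window scan of nonsynonymous vs synonymous codon
    differences between two aligned, in-frame coding sequences.

    Each differing codon is classified by translating it whole; codons
    containing alignment gaps are skipped. No correction is made for the
    number of possible syn/nonsyn sites, unlike Nei-Gojobori-based tools.
    """
    assert len(seq_a) == len(seq_b) and len(seq_a) % 3 == 0
    per_codon = []
    for i in range(0, len(seq_a), 3):
        ca, cb = seq_a[i:i + 3], seq_b[i:i + 3]
        if ca == cb or '-' in ca or '-' in cb:
            per_codon.append(None)
        else:
            same_aa = str(Seq(ca).translate()) == str(Seq(cb).translate())
            per_codon.append('S' if same_aa else 'N')
    ratios = []
    for start in range(0, len(per_codon) - window_codons + 1, step):
        win = per_codon[start:start + window_codons]
        n = sum(f == 'N' for f in win)
        s = sum(f == 'S' for f in win)
        ratios.append((start, float('inf') if s == 0 and n else n / max(s, 1)))
    return ratios

# Invented example fragments: all differences here happen to be synonymous.
a = "ATGGCTAGCTTTGGA" * 4
b = "ATGGCAAGCTTCGGA" * 4
print(crude_dn_ds_windows(a, b, window_codons=5, step=2))
```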
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. | 2020-01-30T09:14:34.446Z | 2020-01-25T00:00:00.000 | {
"year": 2020,
"sha1": "2b879c3e4d8a7883cafa81ab5cd88a6b3ccd7c93",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/msphere.00316-20",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9011eb415cf4a9be661570efdfe93dc73dce3d64",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
10941268 | pes2o/s2orc | v3-fos-license | Pulmonary pathology of pandemic influenza A/H1N1 virus (2009)-infected ferrets upon longitudinal evaluation by computed tomography.
We investigated the development of pulmonary lesions in ferrets by means of computed tomography (CT) following infection with the 2009 pandemic A/H1N1 influenza virus and compared the scans with gross pathology, histopathology and immunohistochemistry. Ground-glass opacities observed by CT scanning in all infected lungs corresponded to areas of alveolar oedema at necropsy. These areas were most pronounced on day 3 and gradually decreased from days 4 to 7 post-infection. This pilot study shows that the non-invasive imaging procedure allows quantification and characterization of influenza-induced pulmonary lesions in living animals under biosafety level 3 conditions and can thus be used in pre-clinical pharmaceutical efficacy studies.
The ongoing emergence of novel pathogens (CDC, 2009; Osterhaus, 2008) calls for the concomitant development of animal models that address their pathogenesis and assess the potential of preventive and therapeutic intervention strategies. For example, the emergence of the 2009 pandemic A/H1N1 influenza virus (pH1N1) highlighted the need for the rapid development of animal models that closely mimic the human infection (Itoh et al., 2009; Osterhaus, 2008; van den Brand et al., 2010). Studies on the pathogenesis of the disease and the timely assessment of the efficacy and safety of the rapidly developed vaccine candidates, antiviral drugs, antibody preparations and immune modulators largely depend on newly developed animal models (Del Giudice et al., 2009; Friesen et al., 2010; Itoh et al., 2009; van den Brand et al., 2010; Zhou et al., 2010). However, in these models the assessment of virus-induced lesions is largely based on findings from an arbitrarily chosen time point after experimental infection. When evaluating infections with a peracute onset, significant early findings may be overlooked unless large numbers of animals are sacrificed at consecutive time points. Especially when dealing with outbred animals, like ferrets, evaluation and integration of consecutive findings from different animals may be speculative. In addition, working with highly pathogenic viruses like the pH1N1 virus when it emerged, or with the highly pathogenic avian H5N1 influenza viruses, is limited to biosafety level (BSL)-3 laboratory settings. The complexity of working under these stringent restrictions also limits the possibilities to work with large numbers of animals sacrificed at consecutive time points. To overcome these limitations we performed repeated computed tomography (CT) scans of ferrets under BSL-3 conditions before and during infection with the pandemic H1N1 influenza (2009) virus. The pattern of influenza virus attachment and replication in the ferret respiratory tract is largely similar to that in humans (van Riel et al., 2007), making influenza virus infection of the ferret the model of choice to study human influenza.
The ferrets (Mustela putorius furo) used were approximately 8 months of age, females, all seronegative for antibodies against circulating influenza viruses and for antibodies against Aleutian disease virus. They were routinely housed and handled under BSL-3+ conditions in negatively pressurized and high efficiency particulate air (HEPA)-filtered biocontainment isolator units, approved by an independent institutional laboratory animal ethics and welfare committee. Animal handling and scans were performed under general injection anaesthesia (ketamine 12.5 mg kg^-1 and medetomidine-HCl 7.5 mg kg^-1 body weight). Eight ferrets were inoculated intratracheally with 10^6 TCID50 of pandemic influenza virus A/Netherlands/602/2009 (pH1N1) as described previously (Del Giudice et al., 2009; van den Brand et al., 2010). The virus was propagated in Madin-Darby canine kidney cell cultures and the infectious dose was determined as described previously, and titres calculated according to the method of Spearman-Karber (Kärber, 1931). Virus shedding was monitored daily by collecting nasal and oropharyngeal swabs that were analysed for determination of viral loads by standard procedures (van den Brand et al., 2010), and expressed as log TCID50. All animals had detectable levels of virus in their upper respiratory tract (Supplementary Fig. S1, available in JGV Online).
The CT scanner used is a dual-source ultrafast CT system (Somatom Definition Flash; Siemens Healthcare) with a temporal resolution of 0.075 s and table speed of 458 mm s^-1; the spatial resolution is 0.33 mm. This CT system requires short acquisition times (≤0.22 s) for data recording of an entire ferret thorax. Such a high temporal resolution enables accurate scanning of living ferrets without the necessity of breath holding, respiratory gating or electrocardiogram (ECG) triggering to generate sharp images. During in vivo scanning the anaesthetized ferrets were positioned in dorsal recumbency in a perspex biosafety container of approximately 8.3 litre capacity that was purposely designed and built (Tecnilab-BMI) (Supplementary Fig. S2a, b, available in JGV Online). The oxygen concentration in the container did not drop below 14 % as measured by oxymetry. All animals had been scanned 3 days prior to virus inoculation to define the uninfected baseline status of the respiratory system. Four ferrets (#3, #4, #5 and #7) were scanned twice (on day -3 and day 3 or 4), two ferrets (#2 and #8) were scanned three times (on days -3, 4 and 7) and two ferrets (#1 and #6) were scanned four times (on days -3, 3, 4 and 7; Supplementary Table S1, available in JGV Online). In humans, CT images of the pulmonary alterations caused by pandemic (2009) H1N1 influenza virus infection have been described previously (Gill et al., 2010; Li et al., 2010; Marchiori et al., 2010; Mollura et al., 2009; Perez-Padilla et al., 2009), and the histopathological nature of these alterations in humans has been evaluated only to a limited extent (Gill et al., 2010; Perez-Padilla et al., 2009). We found consistent bilateral ground-glass opacities in the lungs at all time points of scanning. They were most severe on days 3 and 4 post-infection (p.i.) and showed a reduction on day 7 (Fig. 1). The post-infectious reductions in aerated pulmonary volumes were measured from 3D CT reconstructs using lower and upper thresholds in substance densities of -870 to -430 Hounsfield units (HU). The mean decrease in aerated lung volumes was most pronounced on days 3 (26 cm^3) and 4 (24 cm^3) p.i. compared with day 3 before infection. On day 7 the mean aerated lung volume returned to, and equalled, the baseline value (31 cm^3) from day 3 before infection (Table 1).
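The aerated-volume computation from the HU thresholds above reduces to counting voxels inside the density window and scaling by voxel volume. The sketch below assumes the input array has already been masked to the lungs (plain thresholding would otherwise also count air outside the animal); the synthetic scan is only for demonstration.

```python
import numpy as np

def aerated_volume_cm3(hu_lung, voxel_mm3, lo=-870, hi=-430):
    """Aerated lung volume from a lung-masked CT volume in Hounsfield units.

    Counts voxels whose density falls inside the aerated-lung window used
    in the study (-870 to -430 HU) and converts the count to cm^3.
    """
    mask = (hu_lung >= lo) & (hu_lung <= hi)
    return mask.sum() * voxel_mm3 / 1000.0  # 1 cm^3 = 1000 mm^3

# Demonstration on a synthetic volume (uniform noise, not a real scan).
rng = np.random.default_rng(0)
scan = rng.uniform(-1000, 100, size=(64, 64, 64))
print(f"{aerated_volume_cm3(scan, voxel_mm3=0.33**3):.1f} cm^3")
```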
In addition to CT scanning, we also performed magnetic resonance imaging (MRI) scanning of the ferrets.

Fig. 1. Two rows of four consecutive 3D lung CT images of ferrets #1 and #6 recorded in vivo under BSL-3+ compared with their appearance at necropsy on the far right. At day 3 before infection, the lungs showed the clear aerated baseline condition; at day 3 p.i. with the new pandemic H1N1 influenza virus, marked almost diffuse ground-glass opacities are present that show a gradual reduction towards 7 days p.i. The two photographs taken at necropsy on 7 days p.i. depict the ventral aspect of the lungs, with, in the centre, the hearts still attached to the pulmonary hilus. Both lungs show multifocal reddish consolidated areas of acute inflammation that essentially match the opacities on the CT images taken just before necropsy; non-affected aerated lung tissue is light pink in colour.
The MRI scanner used is a High Definition 3 Tesla clinical scanner (General Electric Healthcare) that requires data acquisition times of such length that motionless imaging, without the application of respiratory gating and/or ECG triggering, is only possible post-mortem. Because of this limitation and the lower spatial resolution compared with CT scanning, the MRI scan proved impractical for use in this animal model and set-up. Accordingly, the MRI results are not presented. However, MRI scanning under BSL-3 conditions could be of value to image other organ systems in vivo that are not hampered by heartbeat or respiratory motion.
Within 1 h after euthanasia by exsanguination from cardiac puncture (time needed for post-mortem MRI scanning) the ferrets were submitted for a full necropsy to compare (histo)pathological data with those that were CT scanned. Animal #4 succumbed spontaneously on day 3 p.i. and had to be necropsied without prior MRI scanning. The entire intact lungs were instilled with, and submerged in, 10 % neutral buffered formalin for fixation and disinfection. The lungs were transversely cut just caudal (approx. 10 mm) of the tracheal bifurcation and matched to the same transversal CT image. Additionally, from each animal, four similarly cut left lung sections were made, not guided by gross lesions. The lung sections were routinely processed, paraffin embedded, and 4 µm-thin micro-sections were stained with haematoxylin and eosin (H&E) for histopathology. All entire slides were evaluated and scored for the extent of alveolar damage/alveolitis and for the extent of alveolar oedema (0, 0 %; 1, <25 %; 2, 25-50 %; and 3, >50 %). For the detection of influenza A virus-infected cells, additionally serially cut micro-sections were stained for influenza A virus nucleoprotein (NP) as described previously (van Riel et al., 2007).
The pulmonary ground-glass opacities corresponded on histology to extensive alveolar oedema admixed with variable proportions of alveolar macrophages, neutrophils, erythrocytes, fibrin and cellular debris. Immunohistochemical staining for viral NP showed infected pneumocytes lining the inflamed and flooded alveoli (Fig. 2). Additionally, there was a moderate necrotizing bronchiolitis and similar but milder bronchitis. Despite the return to baseline values in mean aerated lung volumes on day 7 p.i., there were still histological lesions, mainly in the form of mixed inflammatory cellular infiltrates and type II pneumocyte hyperplasia. Although not statistically significant, the median histopathology scores for the extent of alveolar damage/alveolitis did show a slight decrease from 3 (range 2-3) on day 4 to 2 (range 2-3) on day 7 p.i. Additionally, matching the improvement in lung aeration is a significant (P=0.031) decrease in the median extent of alveolar oedema from 3 (range 2-3) on day 4 to 1.5 (range 1-3) on day 7 p.i. (Table 1).

Table 1. The aerated lung volumes calculated from the 3D CT reconstructs (using lower and upper thresholds in substance densities of -870 to -430 HU) are presented in cm^3 ±SD for all animals individually and averaged on the various days. On 7 days p.i., the mean aerated lung volume returns to, and equals, the mean baseline value of 31 cm^3 from day -3. The decrease in lung aeration from the baseline value of 31 cm^3 on day -3 to 24 cm^3 on 4 days p.i. was not statistically significant (P=0.20). The median extent of alveolar damage/alveolitis (score range 0-3) shows a slight decrease from 3 on 4 days p.i. to 2 on 7 days p.i. The median extent of alveolar oedema (score range 0-3) shows a significant (P=0.031) decrease from 3 at 4 days p.i. to 1.5 on 7 days p.i.

We show that monitoring of pulmonary lesions of pH1N1 influenza virus-infected ferrets under BSL-3 conditions by consecutive in vivo imaging with CT scanning provides valuable data on disease progression and severity that closely coincide with post-mortem data obtained at the same time points from euthanized animals. The ground-glass opacities observed by CT scanning in all infected lungs largely corresponded to areas of alveolar oedema upon necropsy that were most pronounced on days 3 and 4 and decreased towards day 7. As this method involves repeated CT scans of the same animal instead of sacrificing multiple animals at different time points, it results in a refinement of data collection and a significant reduction in numbers of laboratory animals. In other words, the development of respiratory tract lesions of each individual animal can be compared with the situation before infection and followed over time. In this way outbred animals serve as their own baseline control, generating more detailed and relevant data per animal. In addition, the assessment of the severity and extent of the lesions over time will lead to more adequate and objective criteria for the time point of euthanasia. This CT-scanning methodology will not only allow for a more comprehensive study of the pathogenesis of life-threatening infectious diseases, but also for the assessment of the efficacy and safety of vaccination and antiviral strategies against them. In the present pilot study, CT scan opacities corresponded to alveolar oedema; this parameter is, among others, used as a read-out in influenza vaccine efficacy studies (Baras et al., 2011; van den Brand et al., 2011).
Obviously this methodology is not limited to studying the respiratory tract, but could also be exploited for new emerging pathogens with their specific target organs in other animal models. | 2018-04-03T00:21:10.543Z | 2011-08-01T00:00:00.000 | {
"year": 2011,
"sha1": "eaa25998ea8ae6fb5236277766548a8880b9c270",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1099/vir.0.032805-0",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "15f5e677c2ad25855dc87986d325dab841d4fc66",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234448454 | pes2o/s2orc | v3-fos-license | A SIMULATION STUDY ON NEUROMUSCULAR FACTORS AFFECTING CONSECUTIVE MOTOR UNIT ACTION POTENTIAL WAVESHAPE
Quantification of consecutive motor unit potentials (MUPs) is used to diagnose and monitor the progress of neuromuscular pathologies in clinical applications. In this study, a detailed motor unit simulation was conducted to reveal and understand the factors affecting MUPs. Using a volume conductor model and real muscle parameters, normal and pathological MUPs were created. The shape changes observed in consecutive MUPs, called jiggle, were calculated with a quantification method. Increased jitter duration and re-innervation percentage, commonly observed during motor unit loss, increase the jiggle value proportionally. Moreover, increasing fiber density, which varies across different regions of a muscle bundle, decreases the jiggle value. The blocking phenomenon generally observed in re-innervated fibers affects the jiggle value similarly to jitter duration. However, higher blocking levels (50%) of re-innervated motor fibers do not increase the jiggle value beyond that produced by lower blocking levels (20%). In conclusion, the simulation of pathological MUPs showed that it is useful for clinicians in understanding the progress of a neuromuscular pathology and the factors affecting consecutive MUP wave shape.
INTRODUCTION
The analysis of motor unit potentials (MUPs) recorded during voluntary muscle contraction provides important information to help in the diagnosis and characterization of neuromuscular pathology. Besides the analysis of such classical parameters as duration and amplitude, the degree of change in MUP shape at consecutive discharges can be analyzed [1]. This variability depends on the behavior of the multiple SFAPs of the MUP, as seen in single-fiber electromyography (SFEMG). The variability of the intervals in SFAPs, which is called the jitter, is between 10-30 µs in normal muscles [2]. However, the jitter is increased in disturbed neuromuscular transmission, such as in early reinnervation and myasthenic disturbances. If there are more severe disturbances, some SFAPs can be lost as a result of an intermittent failure of transmission at the motor unit (MU) endplate. All these disturbed neuromuscular transmission conditions cause instability or variations in consecutive MUPs, and this is called "jiggle" by Stålberg and Sonoo [3]. They proposed a method to quantify the shape variability and defined two parameters: the normalized mean of median consecutive amplitude differences (CAD) and the median of the cross-correlational coefficient of consecutive discharges (CCC). In the form implied by the definitions below, the mathematical expressions of CAD and CCC are [4]:

$$\mathrm{CAD}=\frac{\frac{1}{m-1}\sum_{i=1}^{m-1}\operatorname{median}_{t}\left|f_{i+1}(t)-f_{i}(t)\right|-C}{\frac{1}{n}\sum_{t=1}^{n}\left|\bar{f}(t)\right|}\tag{1}$$

$$\mathrm{CCC}=\operatorname{median}_{i}\left[\frac{\sum_{t=1}^{n}\left(f_{i}(t)-\bar{f}_{i}\right)\left(f_{i+1}(t)-\bar{f}_{i+1}\right)}{\sqrt{\sum_{t=1}^{n}\left(f_{i}(t)-\bar{f}_{i}\right)^{2}\sum_{t=1}^{n}\left(f_{i+1}(t)-\bar{f}_{i+1}\right)^{2}}}\right]\tag{2}$$

where f_i(t) = amplitude of the i-th waveform at time t, m = number of waveforms, n = number of samples in a waveform, C = noise calculation parameter, f̄(t) is the mean of the consecutive waveforms, and f̄_i is the mean amplitude of the i-th waveform.
For the calculation of jiggle parameters, a total 5 ms analysis window centered at the maximum negative peak was used from 30 ms MUP trace. CAD is actually the abbreviation of "normalized mean of median consecutive amplitude differences" and it expresses the ratio between the area of amplitude difference of the MUP waveform at consecutive discharges and the area of the averaged MUP ( Figure 1).
Figure 1. Jiggle calculation schematic representation [3]

The method and the mathematical functions were based on simulation studies and tested with real electromyographic signals [3,4]. The alignment of the waveforms, the choice of the reference point and the sensitivity of the method were tested with real electromyographic signals. Furthermore, to account for biological and technical noise, the segment of 5 ms nearest to the right endpoint in each trace was used to obtain the C value, using the points contained in an interval of ±20 % of the acquisition gain (Figure 2). C was designed to compensate selectively for smooth baseline fluctuations by excluding the activity from nearby MUPs (recruited MUPs other than the analyzed ones) [4].
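A minimal implementation of the two jiggle parameters, following the verbal definitions above, might look like the sketch below; the exact alignment and noise-correction details of the original method are simplified, and the noise term C is passed in as a precomputed value.

```python
import numpy as np

def jiggle(waveforms, noise_c=0.0):
    """CAD and CCC for consecutive MUP discharges (a simplified sketch).

    `waveforms`: (m, n) array of m time-aligned consecutive discharges,
    each n samples long (the 5 ms window centred on the negative peak).
    `noise_c`: precomputed noise term C from the quiet end-segment.
    """
    w = np.asarray(waveforms, dtype=float)
    m, _ = w.shape
    # CAD: mean over consecutive pairs of the median absolute amplitude
    # difference, noise-corrected and normalized by the mean rectified MUP.
    diffs = [np.median(np.abs(w[i + 1] - w[i])) for i in range(m - 1)]
    cad = (np.mean(diffs) - noise_c) / np.mean(np.abs(w.mean(axis=0)))
    # CCC: median Pearson correlation coefficient of consecutive discharges.
    ccc = np.median([np.corrcoef(w[i], w[i + 1])[0, 1] for i in range(m - 1)])
    return cad, ccc

# Tiny usage example with synthetic discharges.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, 2 * np.pi, 128))
trains = np.stack([base + rng.normal(0, 0.02, 128) for _ in range(10)])
print(jiggle(trains))
```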
The relationship between the jiggle and the jitter, i.e., the temporal dispersion of the waveforms, was also tested with simulations [3]. Simulation studies indicated that the jiggle assessed by this method is proportional to the jitter of the SFAPs. However, the effects of the blocking phenomenon, the percentage of re-innervated fibers and the fiber density on the shape variability of the MUP have not been tested with simulation. For this purpose, a detailed muscle bundle was simulated with real muscle parameters to reveal the effects of blocking, re-innervation and fiber density.
MATERIALS and METHODS
It is impossible to experimentally change the MU properties of a living muscle. Hence, MUP simulation is essential to understand, quantitatively express and interpret the shape changes observed in consecutive MUPs. For this reason, a MUP simulation was created based on the MU parameters obtained in previous studies [5-9].
Simulation Parameters
The muscle fiber density, muscle fiber length, number of muscle fibers and neuromuscular junction (NMJ) positions in a MU were set to create specific MUP to the specific MUs in each muscle group. In addition, the number of healthy and pathological or reinnervated fibers in a MU was set as variable.
The jitter value, NMJ delay, was set as 20µs for normal muscle fibers and 20-300µs for pathological muscle fibers [10]. The concentric needle electrode was placed at the midpoint of the distance between the NMJ and the muscle fiber end (tendon connection) of the MU. Standard muscle and fiber parameters used in simulation are given in Table 1.
Formulation of MUP
In this study, the volume conductor model was used to create the SFAP [11]. The muscle fiber volume conductor model is generally expressed as the convolution of the transmembrane current of a cylindrical muscle fiber and the electrode transfer function. The muscle fiber is assumed to be straight and cylindrical. The extracellular environment is assumed to be infinite with cylindrical anisotropy. The origin of the cylindrical coordinate system is in the cross section of the fiber at the end plate (NMJ) and in the center of the fiber (Figure 3A). The action potential is formed at the endplate and travels along the muscle fiber in both directions as a depolarization wave and ends at the tendon. The depolarization wave in a muscle fiber has to flow through the membrane, and the transmembrane current density is proportional to the second derivative of the intracellular potential [12]. Therefore, it can also be thought that the transmembrane current originates at the endplate and spreads towards the tendons. The end plate (z = 0) current, i_0(t), is shown in Figure 3B, and the potential generated by a point current source located at the origin of the cylindrical coordinate system can be expressed as [13]

$$\varphi(r,z)=\frac{I}{4\pi\sigma_{e}\sqrt{r^{2}+z^{2}}}$$

where I = intensity of the current source, r = radial distance, σ_e = extracellular conductivity (0.063 S m^-1) and σ_i = intracellular conductivity (0.33 S m^-1).
It is assumed that the unit current source emerges at the end plate at t = 0 and moves towards the tendons at a constant speed; the potential generated at the electrode by this source is h(t) (Figure 3C). If i_0(t) (Figure 3B) is divided into n sources of different amplitude, the first source will appear at t = 0 and each subsequent source at the interval Δt. The amplitudes of these sources can be expressed as a_1, a_2, ..., a_n. The first current source emerging at t = 0 propagates towards the tendons and creates the potential a_1 h(t) at the electrode. The potential created by the second current source at the electrode will be a_2 h(t - Δt). So, the total potential formed at the electrode can be expressed as

$$\varphi(t)=\sum_{k=1}^{n}a_{k}\,h\big(t-(k-1)\Delta t\big)$$

As Δt approaches zero and n approaches infinity, the sum becomes the convolution (*) of i_0(t) and h(t):
$$\varphi(t)=i_{0}(t)*h(t)$$

As a result, the SFAP can be expressed as the output of a linear system whose input is the transmembrane current, i_0(t), and whose impulse response is h(t).
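The convolution model lends itself directly to code. The sketch below computes an SFAP from an end-plate current waveform using an isotropic point-source kernel, which simplifies the cylindrically anisotropic medium of the full model, and it models propagation toward one tendon only; the conduction velocity and fiber length are illustrative values.

```python
import numpy as np

def sfap(i0, fs, r_mm, sigma_e=0.063, v_m_s=4.0, fiber_len_mm=60.0):
    """SFAP at a point electrode as the convolution i0(t) * h(t).

    `i0`: end-plate transmembrane current samples, `fs`: sampling rate (Hz),
    `r_mm`: radial electrode distance. h(t) is the potential of a unit
    point source travelling from the end plate toward one tendon at
    conduction velocity `v_m_s`; the isotropic 1/(4*pi*sigma_e*R) kernel
    simplifies the anisotropic medium, and the second propagation
    direction is omitted for brevity.
    """
    n = int(fiber_len_mm / (v_m_s * 1000.0) * fs)  # samples until the tendon
    t = np.arange(n) / fs
    z_mm = v_m_s * 1000.0 * t                      # source position along the fiber
    R_m = np.sqrt(r_mm**2 + z_mm**2) / 1000.0      # electrode-source distance (m)
    h = 1.0 / (4.0 * np.pi * sigma_e * np.maximum(R_m, 1e-6))
    return np.convolve(i0, h)

# Usage: a biphasic toy current waveform sampled at 20 kHz.
fs = 20_000
t = np.arange(0, 0.003, 1 / fs)
i0 = np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 0.001)
potential = sfap(i0, fs, r_mm=0.5)
```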
The MUP of a MU can now be expressed as the summation of the SFAP waveforms created by its muscle fibers:

$$\mathrm{MUP}(t)=\sum_{j=1}^{N}\mathrm{SFAP}_{j}\big(t-\tau_{j}\big)$$

where N is the number of muscle fibers in the MU and τ_j is the latency of the j-th fiber, including its NMJ delay (jitter).
Pathological MUP Simulation and Jiggle Calculation
While simulating consecutive MUPs, firstly, a muscle bundle containing the characteristics of a certain muscle group (fiber density, number of MUs, the number of muscle fibers contained in a MU, muscle fiber length, etc.) was created. The MUs and muscle fibers within this muscle bundle were randomly positioned within anatomical boundaries. Next, the probabilities of jitter generation or blocking of muscle fibers in these MUs were randomly adjusted. The electrode position was placed at a point between the NMJ and the tendon after the concentric electrode parameters [6] were adjusted. Then, the neurons in the MUs forming the muscle bundle were stimulated at a frequency of 6-8 Hz, and sequential MUPs were created.
To observe the effect of jitter on CAD and CCC values, MUs with 10% and 20% re-innervation rate were created. In addition, the probability of blocking of reinnervated muscle fibers was assigned as 0%, and the jitter level of these reinnervated muscle fibers was changed between 50-300µs in steps of 50µs. For normal muscle fibers, the jitter level was set as 20µs. After the MUP waves were created, 30dB intensity white noise was added, which is the most common background noise level in needle electrode recordings in the clinic.
When a motor nerve is severely damaged or loses its function, it cannot innervate the muscle fibers it is attached to. In this case, a neighboring motor nerve begins to form new NMJs by extending new terminal ends to the non-stimulated muscle fibers; this is called re-innervation. Stimulation of the muscle fiber also takes longer than normal because the NMJ connection is not fully formed [14]. The number of re-innervated muscle fibers in a MU will also change the shape of the MUP. In order to reveal the effect of re-innervation on the jiggle value, the percentage of re-innervated muscle fibers was changed between 10% and 50%, and pathological MUP waves were created. While forming the pathological MUP waves, jitter levels of re-innervated muscle fibers were set to 100, 200 and 300µs.
In the process of re-innervation, NMJ formation (the connection of the motor neuron terminal end and the muscle fiber) takes time. During this time, when there is no complete fusion at the NMJ, some of the consecutive stimuli cannot create an action potential in the muscle fiber [15]. Therefore, some of the muscle fibers in a MU form an SFAP while others do not; this condition, called blocking, produces shape changes in consecutive MUP waves. In order to reveal the effect of the blocking percentage on the jiggle value, the percentage of blocking of re-innervated muscle fibers was changed between 10% and 50%, and pathological MUP waves were created by simulation. While forming the pathological MUP waves, jitter values of re-innervated muscle fibers were set to 20, 100, 200 and 300µs. The proportion of re-innervated muscle fibers was set to 20% of all muscle fibers for the pathological MUs. For normal muscle fibers, the jitter value was set to 20µs, the re-innervation rate to 1% and the blocking probability to 1% [16].
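The jitter-and-blocking mechanism described in this and the preceding paragraphs can be sketched as follows: each discharge sums the per-fiber SFAPs after a random latency shift drawn from the fiber's jitter, and a blocked fiber simply contributes nothing to that discharge. The function below is a simplified illustration of that scheme, not the exact simulator used in the study.

```python
import numpy as np

def consecutive_mups(sfaps, jitter_us, block_prob, n_discharges, fs, rng=None):
    """Generate consecutive MUPs from a motor unit's fiber SFAPs.

    `sfaps`: list of equal-length per-fiber SFAP waveforms,
    `jitter_us`: per-fiber jitter SD in microseconds (e.g. 20 for normal
    fibers, 50-300 for re-innervated ones),
    `block_prob`: per-fiber probability that a given discharge is blocked.
    Each discharge sums the fibers' SFAPs after a random latency shift;
    np.roll wraps around, so waveforms should be zero-padded at the ends.
    """
    rng = rng or np.random.default_rng()
    n = len(sfaps[0])
    mups = np.zeros((n_discharges, n))
    for d in range(n_discharges):
        for wave, jit, pb in zip(sfaps, jitter_us, block_prob):
            if rng.random() < pb:
                continue                              # neuromuscular block
            shift = int(round(rng.normal(0.0, jit) * 1e-6 * fs))
            mups[d] += np.roll(wave, shift)           # latency jitter
    return mups
```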
The number of muscle fibers per unit area varies according to muscle groups or muscle bundle regions. Changing the muscle fiber density will change the number of muscle fibers in the recording area of the concentric electrode. Moreover, with increasing muscle fiber density, MUP amplitude also increases [5]. Therefore, normal and pathological muscle models with different muscle fiber densities were created to reveal the effect of muscle fiber density on the jiggle parameters. In order to simulate the human muscle structure appropriately, the simulation was performed by changing the muscle fiber density between 5-30 fibers/mm² [17]. During the simulation, 20% of all muscle fibers were set as re-innervated. Also, 10% of these re-innervated muscle fibers were assigned the possibility of blocking. Pathological MUP waves were obtained by assigning the jitter of re-innervated muscle fibers as 150µs, and jiggle parameters were calculated using equations 1 and 2.
RESULTS and DISCUSSION
Discrimination of normal and pathological consecutive MUPs can be easy for a clinician by looking at the shape of the potentials, but quantification of the shape variability of MUP waveforms provides accurate information about the examined MU.

Figure 4. For normal MUP: jitter value is 20µs, re-innervation ratio is 1% and blocking ratio of re-innervated fibers is 1%. For pathological MUP: jitter value is 100µs, re-innervation ratio is 10% and blocking ratio of re-innervated fibers is 10%. Red waves are the mean of the consecutive MUPs.
As can be seen in Figure 4, the CAD value of the normal MUP wave is 0.056, while that of the pathological MUP wave is 0.294. The CCC values, representing the cross-correlation between consecutive MUP waveforms, support the CAD values. If there is no difference between the consecutive potentials, the CCC value should be 1, and an increasing difference between potentials will decrease the CCC value. In Figure 4, while the CCC value is 0.993 for the normal MUP waveforms, it drops to 0.943 for the pathological MUP because of the increasing consecutive wave shape difference.
With increasing jitter duration, jiggle value (CAD) increases significantly for the case of 10 and 20 % of re-innervation as expected ( Figure 5). The decrease in CCC with increasing jitter duration supports the acquired jiggle values. Increasing jitter value assigned to the SFAPs causes shape variability in consecutive MUPs because of the temporal dispersion between randomly created SFAPs. There is an increase between 20 and 50µs jitter durations for the 10% re-innervation rate but it is not significant.
With these results, an increasing level of a neuromuscular pathology affecting the NMJ or the re-innervation rate of a MU will cause a directly proportional increase in the CAD value. When the re-innervated fiber percentage in a MU increased, the calculated jiggle values increased significantly for all jitter durations (Figure 6). It is assumed that since an increased re-innervation percentage will cause new unstable NMJ formation, it is inevitable to have high jitter durations in such MUs. The jitter duration, which was considered 300µs in the first stage of NMJ formation, will decrease to 100µs or lower as the NMJ connection becomes more stable [18]. Therefore, the obtained results support the assumption mentioned above, because jiggle values for 100µs jitter duration (more stable NMJ) are smaller than for 300µs jitter duration (unstable NMJ).

Figure 6. Simulation of the relationship between jiggle and re-innervation percentage. One-Way ANOVA (Dunnett) test was used for significance. Each re-innervation group is compared with the following increased re-innervation group. (*: p<0.05, significance level is the same for 100µs, 200µs and 300µs).
In the process of NMJ formation, from the initial (unstable) state to the stable state, it is believed that muscle fibers are not always stimulated by consecutive stimulation [10]. This is called blocking, which causes shape changes in successive MUP waves and significantly affects the consecutive MUP wave shape. The simulation results support this assumption. The percentage of blocking, which is accepted as 1% for normal muscle fibers, increases up to 50% for pathological muscle fibers. When the jitter duration is kept constant, the jiggle value increases significantly as the blocking rate rises up to 30%, and remains constant beyond 30% (Figure 7). There is no significant increase between 30 and 50% blocking. This can be explained by the high blocking rate of the fibers causing a drop in the number of SFAP waves that form the MUP waves.

Figure 7. Simulation of the relationship between jiggle and blocking percentage. Re-innervation rate is 20% for the blocking rates 10-50%. Bars represent the SEM values. One-Way ANOVA (Dunnett) test was used for significance. Each blocking percentage is compared with the following increased blocking percentage. (*: p<0.05, significance level is the same for 100µs, 200µs and 300µs; no significance for the 20µs jitter duration).
Muscle fiber density (the number of muscle fibers per unit cross section) varies across certain regions of a muscle bundle [5,17]. Changing the muscle fiber density will change the number of muscle fibers in the recording area of the concentric electrode, and thus the number of SFAPs that create the MUP wave. The increase in muscle fiber density in normal muscle groups does not make a statistical difference in the jiggle value. Because the jitter duration and blocking percentage are very low in normal muscle groups, increasing the muscle fiber density does not create a significant change in consecutive MUP waves. However, as the muscle fiber density increases in pathological muscle groups, the jiggle value decreases significantly at densities of 15-30 fibers/mm² compared with 5 fibers/mm² (Figure 8). This is due to the increased number of muscle fibers in the concentric electrode recording area, which relatively decreases the re-innervated muscle fiber density.

Figure 8. Simulation of the relationship between jiggle and fiber density. For the normal group: jitter value is 20µs, re-innervation ratio is 1% and blocking ratio of re-innervated fibers is 1%. For the pathological group: jitter value is 100µs, re-innervation ratio is 10% and blocking ratio of re-innervated fibers is 10%. One-Way ANOVA (Dunnett) test was used for significance. Each fiber density result is compared with the initial fiber density (5 fibers/mm²). (*: p<0.05, **: p<0.01).
CONCLUSION
Consecutive MUP simulation using real muscle parameters is necessary to understand the factors affecting the function of a MU during neuromuscular pathology. In this study, quantification of consecutive MUPs was used to reveal the effects of jitter duration, re-innervation percentage, blocking percentage and fiber density on the shape of the MUP. Increasing jitter duration between the pathological SFAPs of a MU composing the MUP increases the jiggle value. Similarly, the re-innervated motor fiber percentage in a MU also increases the jiggle value of consecutive MUPs. In the initial stage of the re-innervation process, the fusion of the NMJ is unstable and the blocking phenomenon occurs during consecutive stimulation of the MU. Blocking of some fibers in a MU causes changes in MUPs and increases the jiggle value. The jiggle value increases as the blocking ratio of re-innervated fibers rises up to 30%, but higher blocking ratios do not affect the jiggle value further because of the decreasing number of contributing re-innervated fibers. The fiber density of a muscle bundle varies across its regions; in particular, muscle fibers near the tendons are denser than those in the middle of the bundle. Simulation of this parameter showed that a higher fiber density decreases the jiggle value because of the increasing number of fibers in the concentric electrode active area. Therefore, recording MUPs close to the muscle provides accurate and stable jiggle calculation.
"year": 2020,
"sha1": "f1783b30055ede28282d63c02fdbd7a955db002c",
"oa_license": null,
"oa_url": "https://dergipark.org.tr/en/download/article-file/1413296",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b376da7f43444499e3873aa20cb7204072fbb3be",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267953805 | pes2o/s2orc | v3-fos-license | Assessment of the Efficiency of Tulsi Extract as a Locally Administered Medication Agent and Its Comparison With Curcumin in the Treatment of Periodontal Pockets
Introduction: The use of locally administered medication (LAM) agents such as minocycline, metronidazole, and tetracycline as antimicrobials has drawbacks, including the development of microorganism resistance, exorbitant pricing, and limited accessibility. Thus, there is a need for safer and more affordable alternatives. Numerous natural therapies have been found to be superior in this situation. In this study, the efficacy of tulsi extract as a LAM agent was assessed and it was compared with curcumin, which is currently used for the treatment of periodontal pockets. Methods and materials: There were three categories: each category had 30 sites. Category 1 sites underwent scaling along with root planing (SRP) solely, Category 2 sites received curcumin extract as LAM in the periodontal pocket in addition to SRP, and Category 3 sites received tulsi extract as LAM in the periodontal pocket in addition to SRP. The stent was used to ensure consistent and unbiased measurements on the 30th day after treatment. Clinical attachment level (CAL) and probing pocket depth (PPD) were measured at six points around each tooth. Results: The reduction in values of periodontal parameters such as BAPNA (Nα-benzoyl-DL-arginine-p-nitroanilide) assays, modified sulcus bleeding index (mSBI), gingival index (GI), plaque index (PI), CAL, and PPD in sites within Category 1, Category 2, and Category 3 was statistically significant. The decrease in BAPNA assay results indicates that tulsi extract is more effective than curcumin gel at eradicating red-complex bacteria. Although not significantly different, the decrease in PI and GI was observed to be greater when curcumin jelly was used. This suggests that curcumin jelly has a stronger impact on reducing plaque, which in turn decreases gingival inflammation. Conclusion: Based on the overall results of the study, it can be said that both tulsi and curcumin have similar effectiveness in reducing periodontal markers.
Introduction
There is a need for safer and more affordable alternatives to the use of antimicrobials like minocycline, metronidazole, and tetracycline as locally administered medication (LAM) agents. This is because these antimicrobials have drawbacks such as the emergence of microorganism resistance, exorbitant pricing, and limited accessibility [1,2]. It has been discovered that several natural remedies are superior options in this case. When administered alone or in conjunction with other types of antibiotics, curcumin derived from turmeric (Curcuma longa) has demonstrated anti-inflammatory and antibacterial effects [3-6]. It can be applied topically, given as lozenges, or introduced into the periodontal pocket as a LAM agent. Additionally, when used as a mouth rinse, it is useful in the management of periodontitis [7,8].
The volatile oils, including curcumin, present in C. longa are known to have anti-inflammatory properties. Oral curcumin was found to be as effective as cortisone or phenylbutazone compounds in treating acute and chronic inflammation [9]. When administered orally, C. longa significantly decreased the inflammatory swelling caused by Freund's adjuvant-induced arthritis in rats compared with the control group [6]. Curcumin was found to reduce neutrophil aggregation, which is a marker of inflammation [5]. C. longa, a plant with anti-inflammatory properties, can inhibit neutrophil activity and the generation of prostaglandins from arachidonic acid in inflammatory conditions [4]. Curcumin can also be applied topically to reduce the signs and symptoms of allergic reactions and reactive skin disorders [10]. At relatively low doses of 10-15 µg/ml, curcumin almost entirely suppresses the proliferation of Treponema denticola, Fusobacterium nucleatum, Prevotella intermedia, and Porphyromonas gingivalis. It also has an antimicrobial effect on pathogenic bacteria associated with periodontal disease [10]. It is more suitable as a topical medicine than for oral consumption, owing to its plasma half-life of approximately 6.77 hours after oral ingestion of 10-12 mg of curcumin [11].
Ocimum sanctum (holy basil or tulsi), another herb that is frequently found in mouth rinses, is used for the management of periodontitis [11] due to its antibacterial, immunoregulatory, wound-healing, and anti-inflammatory effects. Eugenol and methyl eugenol are the two essential oils found in tulsi. It has been suggested that tulsi possesses immunomodulatory properties, potentially bolstering the host's response to infections by elevating levels of interferon, interleukin-4, and T helper cells. This mechanism implies a potential enhancement in the body's ability to combat pathogens through the regulation of immune factors [11]. However, tulsi has not yet been studied to determine its efficacy as a LAM agent for treating periodontal pockets. Therefore, this study aims to evaluate the effectiveness of tulsi extract as a LAM agent and compare it to curcumin, which is currently being used to manage periodontal pockets.
Materials And Methods
This clinical study was conducted at the Department of Periodontics, Buddha Institute of Dental Sciences and Hospital, Patna, India, over two months from June 2021 to August 2021. The study was approved by the Institutional Ethics Committee of Buddha Institute of Dental Sciences and Hospital (approval number: IEC/BIDSH/2021/345A). A total of 30 patients were enrolled and informed consent was obtained from them. Inclusion criteria were systemically healthy subjects between the ages of 35 and 55 years, with pocket depths ranging from 5 mm to 8 mm, and a minimum of 20 intact teeth. Three distant locations in different mouth quadrants were included. The study excluded patients who were undergoing systemic antibiotic therapy, using antibacterial mouth rinses, receiving orthodontic treatment, wearing prosthetic teeth, nursing or pregnant, or had an allergy to any of the medications used in the investigation.
To ensure homogeneity among study locations, the first molars of the mandibular arch and maxillary arch were used. In this study, a total of 90 regions were chosen. There were three categories; each category had 30 research locations. Category 1 sites underwent scaling along with root planing (SRP) solely, Category 2 sites received curcumin extract as LAM in the periodontal pocket in addition to SRP, and Category 3 sites received tulsi extract as LAM in the periodontal pocket in addition to SRP.
Study phases
The dose at which tulsi extract exerts antibacterial activity when used as a LAM agent is unknown. Consequently, the investigation was conducted in two stages. In the first stage, the minimum inhibitory concentration of tulsi extract for its antibacterial effect was determined. The second phase was designed to use and compare curcumin gel (Curenext gel®) and an extract of tulsi as LAM medications for managing periodontal pockets. Curenext gel was used as it currently contains curcumin at its optimal concentration.
The Initial Phase
With the aid of freshly distilled water in a gel foundation, various tulsi extract concentrations (2%, 4%, 6%, 8%, and 10%) were prepared and evaluated using the time-kill curve approach. The evaluation was carried out against pathogens commonly associated with periodontal diseases, such as Tannerella forsythia, Porphyromonas gingivalis, and Aggregatibacter actinomycetemcomitans [12,13]. The findings demonstrated that tulsi extract concentrations approaching 10% produced a significant antibacterial effect. Therefore, in the present investigation, 10% tulsi extract was administered as the LAM agent.
The Second Phase
An acrylic stent was used to ensure consistent and unbiased measurements on the 30th day after treatment; it had a notch for inserting the UNC 15 probe (Hu-Friedy Mfg. Co., LLC, Illinois, United States). Clinical attachment level (CAL) and probing pocket depth (PPD) were measured at six points around each tooth. The gingival index (GI), modified sulcus bleeding index (mSBI), and plaque index (PI) were recorded according to Loe and Silness (1963), Mombelli et al., and Silness and Loe (1964), respectively [14-16].
Using sterilized Gracey curettes (Hu-Friedy Mfg. Co., LLC), specimens of subgingival plaque were obtained [17] and transferred in reduced transport fluid for N-benzoyl-L-arginine-p-nitroanilide (BAPNA) testing. By assessing the expression of trypsin-like enzymes in red-complex periodontal bacteria, the BAPNA analysis was used to examine the impact of the LAM agents on decreasing the quantity of red-complex bacteria. The BAPNA test is capable of rapidly measuring and quantifying the activity of red-complex periodontal bacteria present in plaque specimens. This is done by calculating the amount of trypsin-like enzyme in terms of nanomoles of substrate hydrolysed per minute per milligram of wet weight of dental plaque [18,19].
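For illustration, the conversion from a raw BAPNA assay reading to the activity unit quoted above (nanomoles of substrate per minute per milligram of wet plaque) is a Beer-Lambert calculation. The sketch below assumes absorbance is read at ~405 nm and uses ~8800 M^-1 cm^-1 as the molar extinction coefficient of released p-nitroaniline, both of which are assumptions rather than values stated in the text.

```python
def bapna_activity(delta_abs_per_min, path_cm, plaque_mg,
                   eps=8800.0, volume_ml=1.0):
    """Trypsin-like activity from a BAPNA assay reading.

    Converts the rate of absorbance increase at ~405 nm (release of
    p-nitroaniline) into nmol of substrate hydrolysed per minute per mg
    of wet plaque via Beer-Lambert. eps ~8800 M^-1 cm^-1 is a commonly
    quoted value for p-nitroaniline, used here as an assumption.
    """
    molar_per_min = delta_abs_per_min / (eps * path_cm)     # mol/L/min
    nmol_per_min = molar_per_min * (volume_ml / 1000.0) * 1e9
    return nmol_per_min / plaque_mg

# Usage: 0.05 AU/min in a 1 cm cuvette, 1 ml reaction, 2 mg wet plaque.
print(f"{bapna_activity(0.05, 1.0, 2.0):.3f} nmol/min/mg")
```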
Without prior flushing with an antimicrobial mouth rinse, a complete single-session SRP was performed after collecting plaque specimens. Category 1 sites received only SRP treatment, Category 2 sites received SRP followed by Curenext gel (curcumin), and Category 3 sites received SRP followed by tulsi extract. Following the procedure, the pocket was sealed with periodontal packing. Participants were instructed to brush using a modified Bass approach and to refrain from using mouth rinse or any other medications for the duration of the study. On the 30th day after the procedure, each site underwent further evaluation for clinical and microbiological data.
Statistical evaluation
The data were analyzed using the trial version of SPSS Statistics for Windows, Version 17.0 (Released 2008; SPSS Inc., Chicago, United States). The prevalence of the outcome variable and its 95% CIs were calculated. Parametric tests were applied on the assumption that the results of the BAPNA assays, mSBI, GI, PI, CAL, and PPD followed a normal distribution in all three categories; a Wilcoxon signed-rank test (a non-parametric test) was also conducted. Data were compared within categories from baseline to day 30 post procedure using the paired t-test, and between categories using the unpaired t-test.
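The statistical comparisons described above map directly onto standard SciPy calls; the sketch below uses invented PPD readings to show the paired within-category test, the unpaired between-category test, and the Wilcoxon signed-rank alternative.

```python
import numpy as np
from scipy import stats

# Hypothetical PPD readings (mm) at the same sites at baseline and day 30.
baseline = np.array([5.2, 5.4, 5.1, 5.6, 5.0, 5.3])
day30 = np.array([4.4, 4.5, 4.3, 4.6, 4.1, 4.4])

# Within-category change (same sites, two time points): paired t-test.
t_paired, p_paired = stats.ttest_rel(baseline, day30)

# Between-category comparison at one time point: unpaired t-test.
other_category = np.array([4.2, 4.3, 4.1, 4.4, 4.0, 4.2])
t_ind, p_ind = stats.ttest_ind(day30, other_category)

# Non-parametric alternative when normality is doubtful.
w_stat, p_wilcoxon = stats.wilcoxon(baseline, day30)

print(p_paired, p_ind, p_wilcoxon)
```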
Results
The mean value of PPD in Category 1 at baseline was 5.23±0.57, while it was 4.41±0.72 on the 30th day of follow-up. The reduction in PPD was statistically significant (p<0.001). The mean value of PPD in Category 2 at baseline was 5.40±0.67, while it was 4.39±0.84 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of PPD at baseline (p=0.23) and 30-day follow-up (p=1.01) between sites in categories 1 and 2, there was no statistically significant difference. The mean value of CAL in Category 1 at baseline was 4.86±0.43, while it was 4.21±0.68 on the 30th day of follow-up. The change in CAL was statistically significant (p<0.001). The mean value of CAL in Category 2 at baseline was 4.99±0.84, while it was 4.17±0.79 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of CAL at baseline and 30-day follow-up between sites in categories 1 and 2, there was no statistically significant difference.
The mean value of PI in Category 1 at baseline was 2.36±0.52, while it was 1.12±0.41 on the 30th day of follow-up. The change in PI was statistically significant (p<0.001). The mean value of PI in Category 2 at baseline was 2.26±0.63, while it was 0.79±0.42 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of PI at baseline and 30-day follow-up between sites in categories 1 and 2, there was no statistically significant difference. The mean value of GI in Category 1 at baseline was 1.77±0.37, while it was 0.81±0.36 on the 30th day of follow-up. The change in GI was statistically significant (p<0.001). The mean value of GI in Category 2 at baseline was 1.69±0.30, while it was 0.76±0.30 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of GI at baseline and 30-day follow-up between sites in categories 1 and 2, there was no statistically significant difference.
The mean value of mSBI in Category 1 at baseline was 1.91±0.52, while it was 1.24±0.85 on the 30th day of follow-up. The change in mSBI was statistically significant (p<0.001). The mean value of mSBI in Category 2 at baseline was 1.84±0.56, while it was 0.84±0.60 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of mSBI at baseline and 30-day follow-up between sites in categories 1 and 2, there was no statistically significant difference. The mean value of BAPNA in Category 1 at baseline was 3.65±1.32, while it was 1.63±1.01 on the 30th day of follow-up. The change in BAPNA was statistically significant (p<0.001). The mean value of BAPNA in Category 2 at baseline was 3.65±1.70, while it was 1.12±1.18 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of BAPNA at baseline and 30-day follow-up between sites in categories 1 and 2, there was no statistically significant difference (Table 1). The mean value of PPD in Category 3 at baseline was 5.26±0.52, while it was 4.22±0.69 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of PPD at baseline and 30-day follow-up between sites in categories 1 and 3, there was no statistically significant difference. The mean value of CAL in Category 3 at baseline was 4.90±0.52, while it was 3.92±0.44 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of CAL at baseline and 30-day follow-up between sites in categories 1 and 3, there was no statistically significant difference.
The mean value of PI in Category 3 at baseline was 1.97±0.61, while it was 0.94±0.51 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of PI at baseline and 30-day follow-up between sites in categories 1 and 3, there was no statistically significant difference. The mean value of GI in Category 3 at baseline was 1.72±0.37, while it was 0.82±0.42 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of GI at baseline and 30-day follow-up between sites in categories 1 and 3, there was no statistically significant difference. The mean value of mSBI in Category 3 at baseline was 1.97±0.46, while it was 0.77±0.52 on the 30th day of follow-up. The difference was statistically significant (p<0.001). On comparing the values of mSBI at baseline and 30-day follow-up between sites in categories 1 and 3, there was no statistically significant difference.
The mean value of BAPNA in Category 3 at baseline was 3.61±1.53, while it was 0.57±0.52 on the 30th day of follow-up. The change in BAPNA was statistically significant (p<0.001). On comparing the values of BAPNA at baseline and 30-day follow-up between sites in categories 1 and 3, there was a statistically significant difference (p=0.001). This showed that SRP with tulsi reduced the quantity of red complex bacteria more than SRP alone (Table 2). On comparing the values of PPD, CAL, PI, GI, mSBI, and BAPNA at baseline and 30-day follow-up between sites in categories 2 and 3, there was no statistically significant difference. However, the values were lower in SRP with curcumin as compared to SRP with tulsi (Table 3). On intra-category comparison, there was an improvement in parameters such as PPD, CAL, PI, GI, mSBI, and BAPNA in all three categories.
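As an illustration only, the sketch below mimics the two kinds of comparisons reported above — paired baseline-versus-follow-up tests within a category, and between-category tests at a single time point — on hypothetical per-site values; the article does not specify its exact test procedure in this section, so paired and independent t-tests are assumed.

```python
# Illustrative sketch of the intra- and inter-category comparisons
# described above, using hypothetical per-site measurements; the
# original study does not list its exact test procedure here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical probing pocket depth (PPD) values for one category:
# baseline ~ 5.2 +/- 0.5 mm, day 30 ~ 4.2 mm, same 20 sites.
ppd_baseline = rng.normal(5.2, 0.5, size=20)
ppd_day30 = ppd_baseline - rng.normal(1.0, 0.4, size=20)

# Intra-category comparison: paired test on the same sites over time.
t_paired, p_paired = stats.ttest_rel(ppd_baseline, ppd_day30)
print(f"baseline vs day 30 (paired): t={t_paired:.2f}, p={p_paired:.4f}")

# Inter-category comparison at one time point: independent-samples test.
ppd_other_cat = rng.normal(5.3, 0.6, size=20)
t_ind, p_ind = stats.ttest_ind(ppd_baseline, ppd_other_cat)
print(f"category A vs B at baseline: t={t_ind:.2f}, p={p_ind:.4f}")
```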
Discussion
The use of antimicrobials such as minocycline, metronidazole, and tetracycline as LAM agents has drawbacks, owing to the development of microbial resistance, exorbitant pricing, and inaccessibility [20,21]. Thus, there is a need for safer and more affordable alternatives. Numerous natural therapies have been found to be superior in this situation. In this study, the efficacy of tulsi extract as a LAM agent was assessed and compared with curcumin, which is currently used for the treatment of periodontal pockets.
The reduction in values of periodontal parameters such as BAPNA assays, mSBI, GI, PI, CAL, and PPD in sites in Category 1 was statistically significant in the current study. These findings were similar to the results of research carried out by Cugini et al., Yang et al., and Dhalla et al. [21-23]. They also indicated that SRP on its own is effective in reducing parameters associated with periodontal disease.
Curcumin derived from turmeric (C. longa) has been proven to have anti-inflammatory and antibacterial effects when used alone or in combination with other antibiotics [24,25]. This treatment regimen demonstrates efficacy in combating periodontitis through multiple modalities, including topical application, lozenges infused with LAM for direct application to periodontal pockets, and a complementary mouth rinse [8]. The volatile oils in C. longa, which contain curcumin, are recognized for their anti-inflammatory properties. Oral curcumin has been shown to be as effective at treating acute inflammation as cortisone or phenylbutazone is at treating chronic inflammation [20,24]. When taken orally, C. longa significantly decreased the inflammatory swelling caused by Freund's adjuvant-induced arthritic conditions in rats, as compared with the control group.
Curcumin was found to reduce neutrophil aggregation, which is a marker of inflammation [5]. In inflammatory conditions, C. longa, which possesses anti-inflammatory properties, can suppress neutrophil activity and prostaglandin production from arachidonic acid. Curcumin can also be applied topically to alleviate the signs and symptoms of allergic reactions and reactive skin conditions [25,26]. Curcumin has an antibiotic effect on pathogenic bacteria linked to periodontal disease and effectively inhibits the growth of T. denticola, F. nucleatum, P. intermedia, and P. gingivalis at relatively low dosages of 10-15 µg/ml [20]. Due to its plasma half-life of approximately 6.77 hours following oral administration of 10-12 mg of curcumin, it is better suited as a topical medication rather than for oral consumption [20,24].
The reduction in values of periodontal parameters such as BAPNA assays, mSBI, GI, PI, CAL, and PPD in sites in Category 2 was statistically significant in the current study, consistent with earlier reports. Another plant that is frequently included in mouth rinses is O. sanctum (tulsi), which is used for its antibacterial, immunoregulatory, wound-healing, and anti-inflammatory properties in the treatment of periodontitis [27]. Both methyl eugenol and eugenol are present as essential oils in tulsi. The effectiveness of tulsi as a LAM agent for treating periodontal pockets had not previously been investigated [28]. The reduction in values of periodontal parameters such as BAPNA assays, mSBI, GI, PI, CAL, and PPD in sites in Category 3 was statistically significant in the current study. These findings were similar to the results of research carried out by Gupta et al. and Hosamane et al. [27,28]. They found that tulsi with SRP is effective in reducing parameters associated with periodontal disease.
The decrease in BAPNA assay results indicates that tulsi extract is more effective than curcumin gel at eradicating red complex bacteria. Although not significantly different, the decrease in PI and GI was observed to be greater when curcumin jelly was used. This suggests that curcumin jelly has a stronger impact on reducing plaque, which in turn decreases gingival inflammation. We can, therefore, conclude from the current study results that both herbs have comparable efficacy in reducing periodontal markers. On comparing the values of BAPNA assays, mSBI, GI, PI, CAL, and PPD at baseline and 30-day follow-up between Category 1 sites and the other two categories, a more favorable response in terms of periodontal parameters was observed in Categories 2 and 3. However, statistical analysis revealed no significant difference between the categories, except for BAPNA, where significantly greater activity against red complex bacteria was observed with tulsi as the LAM agent.
Our study has some limitations due to its small sample size and short duration. However, this study paves the way for additional investigations into herbal treatments for periodontitis.
Conclusions
All of the treatment approaches reduced periodontal pockets and improved the clinical and microbiological markers relative to baseline. The significant improvement in mean plaque scores at the sites where curcumin gel was applied as LAM suggests that curcumin has a more effective impact on plaque control than SRP alone. The fact that tulsi extract as LAM significantly improved BAPNA assay results and mSBI scores indicates that tulsi extract is beneficial in reducing the count of red complex bacteria and subsequent periodontal pocket bleeding. Both curcumin gel and tulsi extract have comparable clinical and microbiological control capabilities.
TABLE 1: Comparison of periodontal parameters in study participants of Category 1 and Category 2 at baseline and on the 30th day post procedure
PPD: Probing pocket depth; CAL: Clinical attachment level; PI: Plaque index; GI: Gingival index; mSBI: Modified sulcus bleeding index; BAPNA: N-benzoyl-L-arginine-p-nitroanilide. Values given as mean±SD
TABLE 2: Comparison of periodontal parameters in study participants of Category 1 and Category 3 at baseline and on the 30th day post procedure
PPD: Probing pocket depth; CAL: Clinical attachment level; PI: Plaque index; GI: Gingival index; mSBI: Modified sulcus bleeding index; BAPNA: N-benzoyl-L-arginine-p-nitroanilide. Values given as mean±SD
TABLE 3: Comparison of periodontal parameters in study participants of Category 2 and Category 3 at baseline and on the 30th day post intervention
PPD: Probing pocket depth; CAL: Clinical attachment level; PI: Plaque index; GI: Gingival index; mSBI: Modified sulcus bleeding index; BAPNA: N-benzoyl-L-arginine-p-nitroanilide. Values given as mean±SD | 2024-02-27T17:19:38.145Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "d51059fa006f8de4b212111d35c68e051edbe43a",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/189900/20240221-32185-8dca9i.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2aeb09d14ad34db0479a3238819c7517efad1c0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23389458 | pes2o/s2orc | v3-fos-license | Experiment-friendly kinetic analysis of single molecule data in and out of equilibrium
We present a simple and robust technique to extract kinetic rate models and thermodynamic quantities from single molecule time traces. SMACKS (Single Molecule Analysis of Complex Kinetic Sequences) is a maximum likelihood approach that works equally well for long trajectories as for a set of short ones. It resolves all statistically relevant rates and also their uncertainties. This is achieved by optimizing one global kinetic model based on the complete dataset, while allowing for experimental variations between individual trajectories. In particular, neither a priori models nor equilibrium have to be assumed. The power of SMACKS is demonstrated on the kinetics of the multi-domain protein Hsp90 measured by smFRET (single molecule Förster resonance energy transfer). Experiments in and out of equilibrium are analyzed and compared to simulations, shedding new light on the role of Hsp90's ATPase function. SMACKS pushes the boundaries of single molecule kinetics far beyond current methods.
The ability to reveal conformational state sequences at steady state is a unique feature of single molecule time traces. Conformational kinetics is detectable in or out of equilibrium, which enables direct calculation of thermodynamic quantities. Single molecule Förster resonance energy transfer (smFRET) is one of the most common methods to do so. According to the current standard analysis of kinetic smFRET trajectories, state sequences are deduced using hidden Markov models (HMM) (1-3) and rates are then obtained from single-exponential fits to the respective dwell time histogram of every observed state. This standard approach is feasible under the following two conditions: First, every state has a characteristic FRET efficiency. Second, all transition rates are similar. In this case, there is a sampling rate at which every state is reached many times before irreversible photo-bleaching. Both requirements are broken by regular proteins, which commonly exhibit rates on diverse timescales and conformations that are experimentally indistinguishable, but differ kinetically (kinetic heterogeneity) (4-6). As a consequence, multi-exponential dwell time distributions are obtained. The interpretation of such distributions may lead to erroneous conclusions (see below). With our new experiment-friendly approach, we overcome these problems by training one global HMM based on a set of experimental time traces. The procedure copes with experimental shortcomings and kinetic heterogeneity. Further, it provides several means of model evaluation including error quantification. Finally, we demonstrate how to deduce kinetics and thermodynamics of the heat-shock protein Hsp90.
Results
Rate extraction from an ideal model system. Holliday junctions (7) have become a widely used model system for conformational dynamics studied by smFRET. These DNA four-way junctions alternate constantly between two equilibrium conformations (8). Such dynamics were recorded by a custom-built objective-type total internal reflection fluorescence (TIRF) microscope (Fig. 1A) with alternating laser excitation (ALEX) (9). An example trace is shown in Fig. 1B. As expected for a two-state system, the FRET histogram shows two peaks (Fig. 1C) and the dwell time histograms are well fit by single-exponential functions (Fig. 1D). In this case, all standard methods work well and the extracted rates will be correct.
Rate extraction from typical protein systems. In contrast, the situation is more complicated for proteins, which usually adopt significantly more than two states (10). As an example, we show equivalent single protein time traces revealing conformational changes of the heat-shock protein Hsp90 (11) (Fig. 1E). This homo-dimeric protein fluctuates between N-terminally open and closed conformations (12), resulting in two peaks in the FRET histogram (Fig. 1F). The fluctuations occur on a broad range of timescales, resulting in very long and short dwells, and generally fewer transitions per trace (here 3 on average). Despite the two apparent FRET populations, both dwell time distributions are multi-exponential (Fig. 1G). Yet, no systematic change in FRET efficiency from fast to slow dwells is observed (Fig. S1). Such behavior (hereafter referred to as degenerate FRET efficiencies) is indicative of truly hidden states that cannot be separated by FRET efficiency, but differ kinetically. The kinetic analysis is complicated by the limited detection bandwidth of smTIRF experiments. It is restricted by the exposure time, on the one side, and the mean observation time - limited by photo-bleaching - on the other side. While enzymatic anti-bleaching agents largely increase the observation time of DNA-based samples, they are much less effective against bleaching of all-protein systems. Furthermore, their use with protein systems is problematic, as they might interact with the protein under study. Accordingly, the detection bandwidth spans less than a factor of 200 at a reasonable signal to noise ratio - independent of the sampling rate applied. In this situation, the classical dwell time analysis ignores large parts of the data, because only dwells with clearly defined start and end points are considered. As a consequence, predominantly long dwell times are missed, resulting in transition rates that are systematically overestimated. Already in a two-state system, deviations of more than a factor of two occur (Fig. 2A). Importantly, even so-called static traces (without any transition) contain kinetic information. They occur in the experiment as a result of the finite observation time, especially if both fast and slow processes occur. In other words: the presence of at least two transitions per trace is an inappropriate and misleading criterion for trace selection. Yet it is the intrinsic requirement of dwell time analysis. Moreover, the connectivity of states is completely ignored by dwell time analysis. Please note that the limitations of dwell time analysis were recognized in the patch clamp field more than 20 years ago (13). Nevertheless, it is still the standard analysis in the smFRET field today.
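A minimal simulation can make this censoring bias concrete. The sketch below (with hypothetical rates of our own choosing) draws exponential dwells, truncates traces at a random bleach time, and fits only the dwells with observed start and end points, reproducing the systematic overestimation described above.

```python
# Minimal simulation of the censoring bias described above: dwells are
# drawn from an exponential distribution, traces are truncated by
# photobleaching, and only dwells with observed start *and* end points
# enter the dwell-time fit. Rates and bleach time are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
k_true = 0.05          # true exit rate (1/s)
k_bleach = 0.03        # photobleaching rate (1/s)

complete_dwells = []
for _ in range(2000):
    t_bleach = rng.exponential(1.0 / k_bleach)   # trace length
    t = 0.0
    while True:
        dwell = rng.exponential(1.0 / k_true)
        if t + dwell > t_bleach:                 # dwell cut off by bleaching
            break                                # -> discarded by DT analysis
        complete_dwells.append(dwell)
        t += dwell

k_est = 1.0 / np.mean(complete_dwells)           # MLE of an exponential rate
print(f"true rate {k_true:.3f}/s, dwell-time estimate {k_est:.3f}/s")
# The estimate is systematically larger than k_true because long dwells
# are preferentially censored.
```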
A better solution for typical protein systems. In view of the experimental reality, we developed a new Single-Molecule Analysis for Complex Kinetic Sequences - short: SMACKS. It combines all experimentally available information in one HMM, which allows us to investigate important thermodynamic concepts that go significantly beyond dwell time analysis. Such an HMM consists of invisible or "hidden" kinetic states that generate certain detectable signals (e.g. high FRET, low FRET) with a given probability. The sequence of states is assumed to be memory-less, i.e. the probability of a certain transition depends only on the current state. Any time-homogeneous Markovian analysis requires stationarity - but not thermodynamic equilibrium. An HMM is parameterized by one start probability πi per state, transition probabilities aij between all hidden states assembled in the transition matrix A, and a set of so-called emission probabilities B that link the hidden states to the observables (14,15). By exploiting the original two observables - donor and acceptor fluorescence - instead of the FRET efficiency (only one observable), the robustness with respect to uncorrelated noise is significantly increased. These fluorescence signals are appropriately described by 2D Gaussian probability density functions (PDFs), bi(xt), parameterized by the means µi and the covariance matrix Vi - all in dimensions of donor and acceptor fluorescence. Representative emission PDFs are graphed at the right hand side of Fig. 1B,E. The mathematically available parameter space for emission probabilities is further restricted by physical knowledge about FRET. Namely, the mean total fluorescence intensity is required to remain constant within one trace (Eq. 1 and SI Methods), whereas experimental variations between individual molecules are tolerated.
Itot = Dµi + Aµi = const. for all states i [1]

Here the donor and acceptor intensities (with means Dµi and Aµi) were corrected for background, experimental cross-talk and the gamma factor beforehand. The resulting allowed "FRET-line" is displayed in the emission graphs (Fig. 1B,E right). In the classical HMM implementation (14), the model λ(π, A, B) is iteratively rated by the forward-backward algorithm and optimized by the Baum-Welch algorithm until convergence to maximum likelihood. The Viterbi algorithm is used to compute the most probable state sequence for every trace given the previously trained model. In contrast to earlier published ensemble approaches (16-19), SMACKS works without additional (hyper-) parameters or prior discretization. The full procedure was tested on various synthetic datasets generated by known input models, in or out of equilibrium, with or without degenerate FRET efficiencies. Synthetic data contained noise, photo-bleaching, randomly offset individual traces and a realistic dataset size (see SI Methods and example data in Fig. S2). SMACKS resolved accurate transition rates despite degenerate FRET efficiencies, where neither dwell time derived rates nor error estimates were meaningful (Fig. 2B).
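For illustration, the following sketch (hypothetical numbers, our own variable names) evaluates one such 2D Gaussian emission density bi(xt) in donor/acceptor space and checks the Eq. 1 constraint on the state means; it is not the authors' Igor Pro implementation.

```python
# Sketch of a 2D Gaussian emission density b_i(x_t) in donor/acceptor
# space, plus a check of the constant-total-intensity constraint (Eq. 1).
# All numbers are hypothetical; variable names are our own.
import numpy as np
from scipy.stats import multivariate_normal

# One hidden state: mean donor/acceptor counts and covariance matrix V_i.
mu_i = np.array([300.0, 700.0])          # (donor, acceptor) means
V_i = np.array([[90.0, -40.0],           # anticorrelated donor/acceptor
                [-40.0, 90.0]])          # noise typical of FRET

b_i = multivariate_normal(mean=mu_i, cov=V_i)
x_t = np.array([310.0, 690.0])           # one observed (D, A) sample
print("emission density b_i(x_t):", b_i.pdf(x_t))

# Eq. 1: the state means of one molecule must lie on the "FRET line"
# D_mu + A_mu = I_tot (constant total intensity).
I_tot = 1000.0
assert np.isclose(mu_i.sum(), I_tot)
```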
Demonstration of SMACKS using experimental data.We start with a set of smFRET time traces obtained with alternating laser excitation (ALEX) that were selected and corrected as previously described(20) (SI Methods).Namely, the donor and acceptor intensities ( !, ! ) satisfy Eq. 2. However, previous smoothing is not required.
An apparent state model can be deduced from visual inspection of the FRET time traces and the FRET histogram. This model (2 states for Hsp90) is used in a first trace-by-trace HMM optimization to train individual emission PDFs on each molecule separately. The trained parameters are examined visually by comparing the resulting Viterbi path to the input data. Notably, by searching for flat plateaus, HMM echoes a characteristic requirement for single-molecule fluorescence data. For static traces (here 34% of all traces), a model with more than one state will not converge sensibly. Therefore, static traces are included using the mean emission PDFs of the remaining dataset (see fifth molecule in Fig. 1E).
As a next step (Fig. 3A), an ensemble HMM run is performed to optimize the start and transition probabilities based on the entire dataset, while holding the predetermined, individual emission PDFs fixed. While different strategies have been tested, this solution worked equally well for experimental and simulated data. The kinetic heterogeneity found in Hsp90 is investigated by comparing different state models including duplicates and triplicates of the apparent states (Fig. 3B). Similar to others (1,3,21), we then use the Bayesian information criterion (BIC) (22) for model selection (see SI Methods). We find that Hsp90's conformational dynamics are best described by a 4-state model with 2 high FRET (closed) and 2 low FRET (open) states. This is consistent with the bi-exponential dwell time distributions shown in Fig. 1G.
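A minimal sketch of this model-selection step is given below; the log-likelihoods and parameter counts are hypothetical placeholders for the values an HMM training run would return.

```python
# Sketch of model selection with the BIC (smaller is better), comparing
# HMMs with different numbers of hidden states. Log-likelihoods here are
# hypothetical placeholders for the values returned by HMM training.
import numpy as np

n = 50_000  # total number of data points in the dataset

def bic(log_likelihood: float, k_free_params: int) -> float:
    return -2.0 * log_likelihood + k_free_params * np.log(n)

# k counts free start/transition parameters; emissions are fixed per trace.
candidates = {          # states: (hypothetical ln L, free parameters)
    2: (-61_000.0, 3),
    4: (-60_200.0, 15),
    6: (-60_150.0, 35),
}
for n_states, (lnL, k) in candidates.items():
    print(f"{n_states} states: BIC = {bic(lnL, k):,.0f}")
# The 4-state model wins: the extra likelihood of 6 states does not
# justify its additional parameters.
```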
Once the optimal number of states is deduced, the model is further refined by inspecting the Viterbi paths. The transition map (Fig. 3C left) shows the quality of both the original input data and the state allocation based on the obtained model. It reveals the clustering of the transitions in FRET space. Importantly, the transition map itself cannot report on the number of states in the model, because it is the consequence of a predetermined model. The occurrence of all transitions is shown in a 2D histogram (Fig. 3C middle). For a system functioning at thermodynamic equilibrium, detailed balance requires that the transition histogram is symmetric about the main diagonal. Out of 12 possible transitions in a fully connected 4-state model, only 8 cyclic transitions are populated for Hsp90 with ATP. Despite the reduced number of free parameters, a cyclic 4-state model fits the data with equal likelihood. This is in line with the maximal number of theoretically identifiable transitions (8 in the case of 2 open (o) and 2 closed (c) states (23)).
While being difficult to interpret in the context of Hsp90, a cyclic o-c-o-c model would theoretically fit the data equally well. Further information on the interpretation of degenerate state models is given in (23-25).
Model evaluation. In most previous kinetic studies on smFRET, the only reported error estimates were the uncertainties of fit coefficients from fitting dwell time distributions, disregarding systematic overestimation and variations throughout the dataset (Fig. 2).
In contrast, we propose three tests to assess the reliability of the results from the above procedure. First, the most illustrative test for the consistency of the trained model with the original data is "re-simulation" using the obtained transition matrix, the experimental bleach rate and degenerate states (here 2o, 2c). Fig. 3D (left) shows very good agreement between the re-simulated and the experimental dwell time distribution. FRET histograms can be re-simulated, too. Second, the convergence of the HMM to the global maximum is tested by using multiple random start parameters (26). In all attempts, the parameters converged to the same maximum likelihood. Third, confidence intervals quantify the precision of the obtained rates, considering the finite dataset, experimental noise and dataset heterogeneity (Fig. 3D). In summary, the accuracy and precision of the kinetic state model deduced by our semi-ensemble HMM approach was demonstrated by threefold evaluation. A reliable state model is necessary to take the next step and resolve kinetic and thermodynamic information from proteins in or out of equilibrium.
The kinetic model of Hsp90. Hsp90 is a slow ATPase (28) and changes between open and closed conformations at room temperature. In Fig. 4, kinetic results for Hsp90's conformational dynamics are compared under different nucleotide conditions (2mM ADP, ATP, AMP-PNP) or without nucleotides (apo). The quality of the input data and the resulting state allocation is visible on the transition map in FRET space (Fig. 4A). It is evident that Hsp90's conformational changes are less defined in the absence of nucleotides. The same FRET efficiencies are detected in all experiments: Elow = 0.1, Ehigh = 0.8 (Fig. 4D). Consistently, the transitions cluster around these efficiencies. The optimal state model contains four states under all conditions (Fig. 4E): states 0/1 are long-/short-lived low FRET states, states 2/3 are short-/long-lived high FRET states, respectively (Fig. 4C). While four links are required to describe Hsp90's conformational dynamics in the presence of ATP (Fig. 4B,C), under apo, ADP and AMP-PNP conditions models with three links are sufficient (detailed in SI Note 1) (23). The rates under apo and ADP conditions are similar to those in the presence of ATP (Fig. 4F). Only with the non-hydrolysable ATP analogue, AMP-PNP, the rates between both short-lived states are inverted. This is in agreement with the pronounced shift towards the closed conformation observed in the FRET histogram in the presence of AMP-PNP (Fig. 4D).
Exploring energy coupling. Protein machines, such as Hsp90, use external energy (e.g. from ATP hydrolysis) and therefore operate out of equilibrium. A central question is where (in the conformational cycle) energy consumption couples into protein function.
Based on SMACKS, we can address this question quantitatively. It boils down to determining the free energy difference over closed cycles (29,30) (in units of thermal energy, kT):

ΔGtot = kT Σ(cycl.) ln(aij/aji), where the sum runs over all transitions i→j of the closed cycle. [3]

As expected and required, we find that in the absence of an external energy source, Hsp90's conformational dynamics are at equilibrium. At first sight unexpected, we find for Hsp90 in the presence of ATP ΔGtot = 0.9 ± 0.9 kT. This indicates that the energy of ATP hydrolysis is not coupled to the observed conformational changes, which is consistent with earlier results (12).
A schematic energy landscape is shown in Fig. 4G. The 3D illustration highlights SMACKS' ability to split two observable FRET states into four states based on their distinct kinetic behavior. Quantitative energies are shown in Fig. 4H.
Experimental limits for resolving energy coupling. Clearly, the accuracy of the resolved ΔGtot depends on the size of the dataset. Especially for systems away from equilibrium, very slow ("reverse") rates can occur. Due to the finite dataset, only few respective transitions are observed, resulting in large relative errors for these small rates. In this case, an alternative formulation of ΔGtot using the number of transitions Nij(trans.) found by the Viterbi algorithm is more robust:

ΔGtot = kT Σ(cycl.) ln(Nij(trans.)/Nji(trans.)) [4]

Eq. 4 represents a lower bound for the free energy difference, given the finite dataset (zero transitions are set to one to avoid poles). If all rates are well resolved, Eqs. 3 and 4 yield the same result. In the following, two limit cases for the coupling of conformational changes to ATP hydrolysis are considered systematically (ΔGtot = 30 kT for ATP to ADP hydrolysis assuming 1% ADP, 3mM Mg2+, 250mM KCl and 100% efficiency) (31). In the first case (Fig. 5A), the full 30 kT are introduced within one step, whereas in the second case (Fig. 5B), the energy is successively released over four steps, comparable to contributions by ATP binding, hydrolysis and ADP or Pi release, proposed e.g. for the human mitochondrial F1-ATPase (32). Although realistic mechanisms will be a mixture of the two, these ideal cases allow for a systematic calculation of the maximally observable free energy ΔGmax as a function of the dominating forward rate (Fig. 5A,B bottom). Even in the absence of noise and degenerate states, the observed free energy difference is limited by the finite dataset size. The same is true for the more realistic model shown in Fig. 5C: Eq. 4 applied to discrete state sequences yields 20.5 kT of the original 30 kT. This is because very unlikely transitions do not occur throughout the dataset (Fig. 5C bottom). After including all the experimental shortcomings and degenerate FRET efficiencies, SMACKS recovered ΔGtot = (12 ± 2) kT. This is 58% of the free energy that was actually present in the synthetic data. In view of these results, we stimulated Hsp90's hydrolysis rate more than tenfold by its co-chaperone Aha1 (33). If we had missed out on the directionality due to the slow ATPase rate, this should ultimately allow us to resolve putative energy coupling. Fig. 5D shows that even highly stimulated hydrolysis does not induce conformational directionality in Hsp90: ΔGtot = −0.4 ± 1.2 kT in the presence of 3.5 µM Aha1. Our results strengthen the notion that Hsp90's large conformational changes are mainly independent of ATP hydrolysis.
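The following numeric sketch applies Eqs. 3 and 4 to a hypothetical 4-state cycle; the rates and transition counts are invented for illustration.

```python
# Numeric sketch of Eqs. 3 and 4: the free energy dissipated around a
# closed cycle from the ratio of forward/backward rates (or, more
# robustly, transition counts). The 4-state cycle below is hypothetical.
import numpy as np

# Transition rates k[i, j] (1/s) around a cycle 0 -> 1 -> 2 -> 3 -> 0.
k = {(0, 1): 0.20, (1, 0): 0.05,
     (1, 2): 0.20, (2, 1): 0.05,
     (2, 3): 0.20, (3, 2): 0.05,
     (3, 0): 0.20, (0, 3): 0.05}

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
dG_rates = sum(np.log(k[i, j] / k[j, i]) for i, j in cycle)   # Eq. 3, in kT
print(f"Delta G from rates: {dG_rates:.2f} kT")               # 4*ln(4) ~ 5.5

# Eq. 4: the same quantity from Viterbi transition counts N[i, j];
# zero counts would be set to one, giving a lower bound for finite data.
N = {(0, 1): 40, (1, 0): 10, (1, 2): 38, (2, 1): 9,
     (2, 3): 41, (3, 2): 11, (3, 0): 39, (0, 3): 8}
dG_counts = sum(np.log(max(N[i, j], 1) / max(N[j, i], 1)) for i, j in cycle)
print(f"Delta G from counts: {dG_counts:.2f} kT")
```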
Discussion
SMACKS is a novel HMM approach, which resolves all relevant rates that characterize the observed conformational dynamics from a set of (short) smFRET time traces. The underlying states are identified by their FRET efficiency or kinetic behavior or both. SMACKS is a tailor-made solution for the wide family of protein machines that are clearly more challenging than DNA prime examples. It represents a significant advance that enables direct quantification of the energy coupled to conformational changes. This progress is achieved by the following six key features: (i) SMACKS exploits the original fluorescence signal of the FRET donor and acceptor as 2D input. The FRET-specific anticorrelation provides significantly increased robustness with respect to uncorrelated noise. This unique information is lost in 1D FRET trajectories.
(ii) SMACKS tolerates experimental intensity variations between individual molecules, while at the same time the transition rates are extracted from the entire dataset. (iii) SMACKS minimizes the bias of photo-bleaching, because it determines transition rates based on their occurrence in the dataset. Thus, the range of detectable timescales can be expanded by increasing the dataset. (iv) SMACKS performs the entire analysis on the experimental (i.e. noisy) fluorescence data. In fact, the knowledge about a given data point's reliability is used to weight its contribution accordingly. Therefore, SMACKS is robust enough to handle realistic noise levels in protein systems. (v) SMACKS identifies hidden states that share indistinguishable FRET efficiencies, but differ kinetically. (vi) SMACKS quantifies the precision of extracted rates. The precision is limited by the dataset size and signal quality, but it is not compromised by systematic overestimation, which contrasts with earlier studies. The ATP-dependent molecular machine Hsp90 served here as an illustrative test case. SMACKS shed new light on the enigmatic and controversially discussed ATPase function (34). Clearly, the N-terminal conformational dynamics are not coupled to ATP hydrolysis, even in the presence of the co-chaperone Aha1. Further reaction coordinates will be explored by SMACKS to elucidate driven conformational changes and finally uncover the role of Hsp90's slow ATPase function. In summary, our results demonstrate how SMACKS provides new power and confidence for the kinetic analysis of single molecule time traces in general. In particular for smFRET studies on sophisticated protein systems, SMACKS is unparalleled. We anticipate that SMACKS will reveal drive mechanisms in a large number of protein machines.
Methods
Hsp90 or Holliday junctions, specifically biotinylated and labeled with fluorescent dyes (Atto550/Atto647N maleimide), were immobilized on a passivated and Neutravidin coated fused silica coverslip that shows no auto-fluorescence upon ALEX (532nm or 635nm) in TIRF geometry, using an EMCCD for detection. Measurements were performed at 5Hz at 21°C. More detailed descriptions are given in SI Methods. Monte Carlo simulations and HMM calculations were run in Igor Pro (Wavemetrics) on an ordinary desktop PC. Synthetic data contained Gaussian noise (σ = 0.3*signal), random offsets (±0.2*signal), degenerate FRET efficiencies (two low / two high), a sampling rate of 5Hz and a bleach rate of 0.03Hz (see Fig. S2). All formulae utilized in semi-ensemble HMM (forward-backward, Baum-Welch and Viterbi algorithms) with continuous observables in 2D are included in SI Methods. The complete source code together with example data will be available shortly after publication at: http://www.singlemolecule.uni-freiburg.de/SMACKS .
General implementation of semi-ensemble HMM. In the following, we include all formulae required for the implementation of semi-ensemble HMM as demonstrated herein. For more general introductions to HMM, please refer to the respective literature (14,15). Forward-backward, Baum-Welch and Viterbi algorithms were implemented for continuous observables and multiple dimensions. Numerical underflow or overflow is prevented by logarithmic renormalization. Recursive calculations are sped up by multi-threading (processing several time-traces in parallel). All software was written in IgorPro v6.3 (Wavemetrics) and calculations were run on an iMac (Apple, 2014, 2.9 GHz Intel i5 processor, 16GB RAM) or a comparable Windows PC. A typical optimization (4 states, >100 traces) took less than an hour.

Implementation of the Baum-Welch algorithm. The basis for calculating the updated parameters are γt(i) and γt(i,j), the respective probabilities for a given state or transition at a given time point:

γt(i) = αt(i) βt(i) / P(O|λ)
γt(i,j) = αt(i) aij bj(xt+1) βt+1(j) / P(O|λ)

The parameter update equations for a single trace take the standard form:

πi = γ1(i)
aij = Σt γt(i,j) / Σt γt(i)
µi = Σt γt(i) xt / Σt γt(i)
Vi = Σt γt(i) (xt − µi)(xt − µi)ᵀ / Σt γt(i)

The ensemble parameters πi, aij are updated based on all N time-traces by summing the respective γ terms over all traces n = 1...N before forming the above ratios.
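As an illustration of the ensemble update, the sketch below pools per-trace posteriors into one transition-matrix re-estimate; array shapes and names are our own convention, and the toy posteriors stand in for real forward-backward output.

```python
# Sketch of one ensemble Baum-Welch re-estimation step for the transition
# matrix, pooling the per-trace posteriors gamma_t(i) and gamma_t(i,j)
# over all N traces (names and shapes are our own convention).
import numpy as np

def update_transition_matrix(gammas, xis):
    """gammas: list of (T_n, S) state posteriors per trace.
    xis: list of (T_n - 1, S, S) transition posteriors per trace."""
    S = gammas[0].shape[1]
    num = np.zeros((S, S))      # expected transition counts, all traces
    den = np.zeros(S)           # expected occupancies, all traces
    for g, x in zip(gammas, xis):
        num += x.sum(axis=0)
        den += g[:-1].sum(axis=0)
    return num / den[:, None]

# Two hypothetical traces of different lengths, S = 2 states.
rng = np.random.default_rng(2)
gammas = [rng.dirichlet([1, 1], size=T) for T in (50, 30)]
xis = [g[:-1, :, None] * g[1:, None, :] for g in gammas]  # toy posteriors
A_new = update_transition_matrix(gammas, xis)
print(A_new)          # rows sum to 1
```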
γt(i) = αt(i) βt(i)/P (O|λ) γt(i, j) = αt(i) aij bj(xt+1) βt+1(j)/P (O|λ) The parameter update equations are: The ensemble parameters Πi, Aij are updated based on all N time-traces: Implementation of the FRET-constraint.In a true FRET time-trace, the total fluorescence Itot (i.e. the sum of the corrected acceptor and donor signals, A xt and D xt, with respective means A µi and D µi) must remain constant in each state: This physical constraint is introduced into Baum's optimization formalism using Lagrange multipliers.Because the resulting update equations for Gaussian distributions are coupled in µi and Vi, we exploit the fact that the difference between Gaussian and Poissonian means is negligible for TIRF signals.Therefore, the update equation for constrained Poissonian distributions (35) can be utilized to optimize µi : Implementation of the Viterbi algorithm.The most probable state sequence s * with maximal production probability P * (O|λ) is deduced from the δ and ψ variables.initiation: back-tracking: The Bayesian information criterion (BIC).The BIC (22) selects for a model that describes the data well, while keeping the model complexity moderate.This is achieved by balancing the likelihood L(λ|O) against the number of free parameters k, with n, the number of data points: Its applicability in the context of SMACKS was confirmed using synthetic data of known input models (see below).
Simulations.
Discrete state sequences were obtained by a Monte Carlo simulation based on a given transition matrix; photo-bleaching was included by exponential trace length distributions. For comparison with experimental data, corresponding minimal trace lengths were used (typically 30 data points). As in the experiment, synthetic data contained Gaussian noise (σ = 0.3*signal), random offsets (±0.2*signal), degenerate FRET efficiencies (two low / two high), a sampling rate of 5Hz and a bleach rate of 0.03Hz. See example data in Figure S2.
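A compact version of this generator might look as follows; the transition matrix and intensity levels are toy values chosen to mirror the stated noise, offset and bleach settings.

```python
# Sketch of the synthetic-data generator described above: discrete
# Markov state sequences truncated by an exponential bleach time, then
# turned into noisy, randomly offset fluorescence traces (toy values).
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.96, 0.04],          # per-frame transition matrix at 5 Hz
              [0.02, 0.98]])
means = np.array([[300.0, 700.0],    # (donor, acceptor) per state
                  [700.0, 300.0]])

def simulate_trace(bleach_rate=0.03, dt=0.2, min_len=30):
    T = max(min_len, int(rng.exponential(1.0 / bleach_rate) / dt))
    states = np.empty(T, dtype=int)
    states[0] = rng.integers(2)
    for t in range(1, T):
        states[t] = rng.choice(2, p=A[states[t - 1]])
    signal = means[states]
    noise = rng.normal(0.0, 0.3 * signal)             # sigma = 0.3 * signal
    offset = rng.uniform(-0.2, 0.2) * signal.mean(0)  # per-trace offset
    return states, signal + noise + offset

states, trace = simulate_trace()
print(len(states), trace.shape)
```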
Confidence intervals. For each transition probability aij, the parameter space around the maximum likelihood estimator (MLE) is scanned while keeping the remaining parameters fixed at the MLE. The modified models λ are compared to the MLE models λMLE by successive likelihood ratio (LR) tests (3,27):

LR = 2 [ln L(λMLE|O) − ln L(λ|O)]

The 95% confidence bound (CB) is reached where LR crosses the respective significance level for one degree of freedom (df).
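The scan can be sketched as below, with a toy binomial likelihood standing in for the full HMM likelihood; the counts and step size are hypothetical.

```python
# Sketch of the likelihood-ratio confidence bound described above: one
# transition probability is scanned around its maximum-likelihood value
# until LR = 2[ln L_MLE - ln L] crosses the chi-squared threshold for
# one degree of freedom. `log_likelihood` is a stand-in for a full HMM
# likelihood evaluation.
import numpy as np
from scipy.stats import chi2

def log_likelihood(a_ij, n_trans=40, n_stay=960):
    # Toy binomial likelihood of one transition probability.
    return n_trans * np.log(a_ij) + n_stay * np.log(1.0 - a_ij)

a_mle = 40 / 1000.0
lnL_mle = log_likelihood(a_mle)
threshold = chi2.ppf(0.95, df=1)           # ~3.84

upper = a_mle
while 2.0 * (lnL_mle - log_likelihood(upper)) < threshold:
    upper += 1e-4                          # scan until LR crosses threshold
print(f"a_ij = {a_mle:.4f}, 95% upper bound ~ {upper:.4f}")
```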
Sample Preparation. Hsp90: Yeast Hsp90 dimers (UniProtKB: P02829) supplied with a C-terminal zipper motif were used to avoid dissociation at low concentrations (13). Previously published cysteine positions (36) allowed for specific labeling with donor (61C) or acceptor (385C) fluorophores (see below). Both constructs were cloned into a pET28b vector (Novagen, Merck Biosciences). They include an N-terminal His-tag followed by a SUMO-domain for later cleavage. The QuickChange Lightning kit (Agilent) was used to insert an Avitag for specific in vivo biotinylation at the C-terminus of the acceptor construct. E. coli BL21star cells (Invitrogen) were co-transformed with pET28b and pBirAcm (Avidity) by electroporation (Peqlab) and expressed according to Avidity's in vivo biotinylation protocol. The donor construct was expressed in E. coli BL21(DE3)cod+ (Stratagene) for 3h at 37 °C. Off-axis beams are removed by an optical slit and achromatic slit lenses (achromatic doublets, Qioptiq). Finally, donor and acceptor fluorescence is split, spectrally filtered again and individually focused side by side onto the EMCCD (iXonUltra, Andor) by best form silica lenses (Qioptiq). Translation stages (Newport) were used to fine-tune the lens positions. Dichroic mirrors and filters were purchased from AHF analysentechnik AG. Optical mounts, lenses, mirrors and further components were purchased from Thorlabs unless stated differently. Measurements were performed in a PEGylated fluid chamber built from two coverslips and Nescofilm by compression at 70 °C.
To avoid auto-fluorescence background in the red detection channel, a silica coverslip was used (Spectrosil2000, Heraeus, manufactured by UQG Ltd.). The refractive index change (glass: 1.52, silica: 1.46) was adjusted for geometrically. Image acquisition and optical shutters were synchronized by a custom-built electronic circuit and a master trigger to achieve 100ms exposure to each ALEX channel (alternating laser excitation).
Selection & Correction of smFRET Time Traces. We exclude incomplete FRET pairs and photo-physical artifacts, such as blinking, by using ALEX for data acquisition. Selection criteria for single molecule time traces are flat plateaus in all three fluorescence channels (donor and acceptor fluorescence after donor excitation and acceptor fluorescence after acceptor excitation), as well as single step bleaching. Hence, a terminating low FRET region (before photo-bleaching) is no longer rejected. Fluorescence time traces are corrected for background offsets, leakage, direct excitation and the gamma factor (20).
Fig. 1. Conformational dynamics of a DNA (left) and a protein system (right) measured by smFRET. (a) Schematic TIRF setup with ALEX and close-up of a single Hsp90 molecule, labeled with a FRET pair (donor dye orange, acceptor red) and immobilized within the evanescent field (see Methods). (b, e) FRET efficiency (E) trajectories (black) obtained from single molecule fluorescence time traces (donor green, acceptor orange, in arbitrary units). Fluorescence of the acceptor dye after direct laser excitation (gray) excludes photo-blinking artifacts. The HMM derived state sequence (Viterbi path) is displayed as overlays (low FRET white, high FRET gray). A zoom is included in (b). Emission PDFs in dimensions of donor and acceptor fluorescence are shown next to the corresponding time traces. The allowed "FRET line" (red) is determined for every molecule individually (cf. main text). (c, f) FRET histograms with Gaussian fits as indicated. (d, g) Cumulative dwell time histograms with single-exponential fits (black, frequency weighted). Static traces (e, bottom) are described using the mean emission PDFs of the complete dataset (cf. main text). Although the FRET histogram (f) shows two populations, the dwell time distributions (g) clearly deviate from a single-exponential fit (total of 163 molecules).
Fig. 2. Accuracy of rates from dwell time analysis compared to ensemble HMM. (a) Discrete state sequences were simulated for 2-state models (left) with different rates, k01, k10 (200 state sequences with 5Hz sampling rate and 0.03Hz bleach rate). The deviation of the determined k01 from the input k01, as obtained by dwell time (DT) analysis and ensemble HMM, is shown as a function of both input rates (maximum relative deviation out of 5 simulations per data point). Relative errors of the rates along the indicated lines are shown as a function of the mean number of transitions per trace (right). Black lines serve as a guide to the eye. Clear systematic deviations occur already for the simplest model. (b) More complex models are more realistic for protein systems. The depicted 4-state model (left) was used to generate synthetic data with equal FRET efficiencies for states 0/1 and 2/3, respectively. See SI Methods for details and Fig. S2 for example data. The size of the circles in the state models (a and b, left) is proportional to the state population. The arrow widths are proportional to the transition rates. Right: comparison of the transition rates determined by DT analysis and SMACKS. DT analysis only provides half the transition rates, which are in addition far off the input values.
Fig. 3. SMACKS workflow. (a) The model parameters (start probability π, transition matrix A, emission PDFs B) and a set of donor (green) and acceptor (orange) fluorescence time traces constitute the basis of ensemble HMM (overlays represent the Viterbi path). To allow for inter-molecule variations of the fluorescence signal, the individual emission PDFs are predetermined in a trace-by-trace HMM run and held fixed during optimization of the kinetic ensemble parameters. (b) Degenerate FRET states are included as multiples of experimentally discernible states (here low FRET and high FRET). The Bayesian information criterion (BIC) identifies the optimal model (here 4 states). (c) The transition map (left) relates the mean FRET values before and after each dwell found by the Viterbi algorithm (initial state 0, 1, 2, 3 in red, green, blue, pink, respectively). The most frequent transitions occur between the two least populated (short-lived) states. The transition histogram (middle) reveals the frequency of each transition in the dataset. Excluding transitions that do not occur leads to a cyclic model (right). (d) The obtained model is critically evaluated in three ways: First, dwell time distributions are reproduced by re-simulating the model (left, experimental data: green, simulated: gray). Second, random start parameters uncover potential local likelihood maxima, and random subsets reveal dataset heterogeneity (middle, subsets: green, complete set: black). Third, confidence intervals measure the precision of the obtained rates considering the finite dataset, experimental noise and dataset heterogeneity (right). A flow chart of SMACKS can be found in Fig. S3.
Fig. 4. Kinetic and thermodynamic results under varied nucleotide conditions. (a) Transition maps locate transitions in FRET space (initial state 0, 1, 2, 3, in red, green, blue, pink). FRET E scatters less in the presence of nucleotides. (b) Occurrence of transitions. (c) Derived kinetic state models: states 0/1 (2/3) show low (high) FRET efficiencies, respectively. Arrow widths represent the size of the transition rates. Circle sizes represent relative state populations. A 4-link model is favored with ATP, while 3 links are sufficient for all other datasets. (Rates to remain in a given state are not depicted.) (d) The FRET histogram shows only small differences for apo (black and shaded black), ADP or ATP data, whereas with AMP-PNP (purple) a shift to the closed conformation is observed. (e) A 4-state model represents all four datasets best according to ΔBIC values, color code as in (f). (f) Deduced rates and confidence intervals. (g) Qualitative cartoon of the 3D energy surface of Hsp90 in the presence of ATP. SMACKS reveals "hidden" states that are kinetically different while sharing the same FRET efficiency. A large energy barrier hinders transitions through the midpoint. (h) Quantitative 2D projection of the energy landscape shows the differences between the nucleotides, color code as in (f). Energy levels were calculated from transition rates, whereas well widths are arbitrary. A typical attempt frequency for proteins, 10
Fig. 5. Quantifying energy coupling. (a, b) Two limit cases of systems driven by the hydrolysis of 1 ATP: in (a) the external energy is absorbed between states 0 and 3. All remaining rates are set to 0.05Hz. In (b) the external energy is introduced sequentially over 4 identical steps. Respective state models (top), energy scheme (center), and theoretical detection limit for free energies as a function of the forward rate (bottom). Simulated values (green) result from Eq. 4 applied to 200 discrete state sequences with 5Hz sampling rate and 0.03Hz bleach rate. They scatter about the expectation value of ΔGmax (black line) calculated as explained in SI Note 2. (c, d) State model (top), transition map (center) and transition histogram (bottom) for synthetic data (c) simulating the flow introduced by coupling to the hydrolysis of 1 ATP = 30kT, or for experimental data (d) of Hsp90 + ATP stimulated by the co-chaperone Aha1.
HMM conventions and parameters. The indices i, j denote states. t are discrete time steps and T is the total time of a trajectory. O is the set of observables and xt is a specific observable at time t in d dimensions (herein d = 2 for donor and acceptor fluorescence). The complete set of parameters, λ(π, A, B), consists of: start probabilities πi, transition probabilities aij assembled in the transition matrix A, and Gaussian emission probability densities with means µi and covariance matrix Vi, both in d dimensions. TIRF data is appropriately described by Gaussian emissions (in place of Poissonian), because each time bin contains much more than ten photons including noise. bi(xt) denotes the emission density value for a specific observable value at a given time t (scalar, may be >1).

Implementation of the forward-backward algorithm. The forward and backward variables, α and β, are auxiliary probabilities prerequisite for the Baum-Welch algorithm below.

initiation: αt=1(i) = πi bi(xt=1), βt=T(i) = 1
recursion: αt+1(j) = [Σi αt(i) aij] bj(xt+1), βt(i) = Σj aij bj(xt+1) βt+1(j)
termination: P(O|λ) = Σi αt=T(i)

P(O|λ) is called the production probability of the data given the model. It is equivalent to the likelihood of the model given the data L(λ|O).
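A scaled (log-renormalized) forward pass of the kind described under "logarithmic renormalization" above can be sketched as follows; the toy model values are hypothetical.

```python
# Sketch of the scaled forward recursion: normalizing alpha_t at every
# step prevents numerical underflow, and the production probability
# P(O|lambda) is recovered from the accumulated log scaling factors.
import numpy as np

def forward_log_prob(pi, A, b):
    """b: (T, S) emission densities b_i(x_t). Returns ln P(O|lambda)."""
    alpha = pi * b[0]
    log_P = np.log(alpha.sum())
    alpha /= alpha.sum()                    # renormalize to avoid underflow
    for t in range(1, len(b)):
        alpha = (alpha @ A) * b[t]
        log_P += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_P

pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05], [0.10, 0.90]])
b = np.array([[0.9, 0.1]] * 5 + [[0.2, 0.8]] * 5)
print(forward_log_prob(pi, A, b))
```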
Figure 1: FRET efficiency vs. dwell time plots for a Holliday junction or Hsp90 (163 molecules) as indicated. Low FRET dwells in red; high FRET dwells in blue. No correlation is visible in either plot. In contrast to the Holliday junction, Hsp90's conformational changes occur over a much broader time range.
Figure 2: Representative synthetic data generated by a 4-state model with 2x2 degenerate FRET efficiencies as described in the Online Methods. Fluorescence intensity of the donor (green), acceptor (orange) and acceptor after direct excitation (gray). Calculated FRET efficiency (E) in black and Viterbi path as overlays (two states: high FRET gray, low FRET white). Static traces (second trace) occur as a consequence of fast and slow rates together with a finite observation time due to photo-bleaching. FRET efficiency spikes occur in the simulations as well as in the experiment. In contrast to 1D HMM based on FRET, they have no effect on fluorescence-based 2D HMM, as can be seen from the Viterbi paths.
TIRF setup: An objective-type TIRF setup was built to measure smFRET. Green and red excitation lasers (532nm, Compass 215M, Coherent and 635nm, Lasiris, Stocker Yale) were aligned, expanded and focused onto the back-focal plane of an apochromat TIRF 100x objective (Nikon, NA=1.49). The collected fluorescence is separated from excitation light by notch filters. | 2016-05-27T12:44:24.000Z | 2016-05-27T00:00:00.000 | {
"year": 2016,
"sha1": "27c6466fc2940d0c86bf9b2b74ce616ccf39c7ac",
"oa_license": "publisher-specific-oa",
"oa_url": "http://www.cell.com/article/S0006349516307457/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "7f70b10973b95afb94c4de47c0d9f35c299fef6f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
]
} |
54638928 | pes2o/s2orc | v3-fos-license | If we build it, who will benefit? A multi-criteria approach for the prioritization of new bicycle lanes in Quebec City, Canada
Many cities across the world are actively promoting cycling through investments in cycling infrastructure, yet ensuring that the benefits from these investments are distributed equitably across a region, rather than accruing to only one group, is an important social goal. The aim of this study is to develop a methodology that can help in identifying where new bicycle facilities can be built in a region while prioritizing investments for those who need them most. The study uses Quebec City, Canada, as an example, since the city has recently made a strong commitment to provide safe and attractive bicycle infrastructure to its residents. It also uses a GIS-based grid cell model to identify priority areas for cycling investment in different parts of the city. This is followed by a proposal for a new set of facilities based on a multi-criteria approach. These proposed facilities are then evaluated through a level of usage analysis to determine which routes will provide the maximum benefit to existing and potential cyclists. Finally, an equity analysis is conducted to evaluate whether the new facilities will meet some of the travel needs of individuals residing in socially deprived neighborhoods. This step in the evaluation process proposes a new social equity component in bicycle planning processes. This research can be of value to planners, engineers and policymakers working toward investments in bicycle facilities because it shows the full process of planning and evaluating different cycling facilities while incorporating social equity considerations.
Introduction
The desirability of neighborhoods that facilitate the use of active modes of transportation is growing; however, these neighborhoods may be out of reach for individuals who may highly benefit from the affordability, convenience and potential health benefits offered by active travel. To reap the social, economic and environmental benefits associated with bicycle-friendly cities, many cities across North America are actively promoting investments in cycling infrastructure. However, recent evidence of an inequitable distribution of cycling infrastructure investment has been observed (Flanagan, Lachapelle, & El-Geneidy, 2016), and concerns of infrastructure improvements as a reflection of gentrification efforts have been expressed (Lubitow & Miller, 2013). The objective of this paper is to present a practice-ready methodology to identify and prioritize locations for cycling infrastructure investments in Quebec City, Canada. It is important to note that the methodology presented in this study was developed in collaboration with the local planning authority in Quebec City. Using a GIS-based model, we develop a prioritization index to identify high priority areas for the provision of new bicycle lanes, where we consider multiple criteria related to the safety and connectivity of the existing network, and where the demand for cycling trips exists. We then identify the optimal locations for new facilities based on the high-priority areas. To measure which proposed facilities would provide the maximum benefit to existing cyclists as well as individuals who may choose to convert from other transportation modes to cycling, we rank each proposed facility according to how many users can be expected to use each facility. In the last section of the study, we conduct an equity analysis of the proposed bicycle facilities to evaluate how these facilities will serve socially deprived neighborhoods and demonstrate how we can identify a list of facilities that will have the maximum benefit to a community.
In the following section of this paper we discuss the relevant literature on bicycle planning and the concept of equity in transportation planning. This is followed by an overview of the study area. We then proceed to the analysis section, where the prioritization index is explained and presented, the proposed bicycle lanes are presented and evaluated, and the equity analysis is presented. In the final section, we discuss the findings and draw conclusions.
Literature review
Bicycle infrastructure planning
A growing body of literature suggests that in order to foster a bicycle culture and increase bicycling rates, appropriate infrastructure is essential (Pucher & Buehler, 2016; Pucher, Dill, & Handy, 2010). Pucher and Buehler (2008) conducted a thorough review of cycling infrastructure, cycling rates and policies in the Netherlands, Denmark and Germany, concluding that a comprehensive approach to bicycling, including the provision of safe and stress-free infrastructure, integration with other modes, and policies to restrict driving, is necessary to make cycling a safe and highly attractive mode of transportation. While cities in North America face resistance to policies that restrict driving or reduce parking, evidence indicates that expanding the bicycle network and increasing bicycle parking can have significant effects on cycling levels (Buehler & Pucher, 2012). After studying 90 of America's largest cities, Buehler and Pucher (2012) concluded that there is a significant positive relationship between commuting rates and the availability of bike lanes and paths after controlling for land use, climate, socioeconomic factors, gasoline prices and public transit supply. A large body of research has pursued a greater understanding of how different facilities and environments affect cyclists' travel behavior. Findings on this topic appear to be dependent on a cyclist's trip purpose, characteristics among cyclists, as well as the type of bicycle facility (Aultman-Hall, Hall, & Baetz, 1997; Broach, Gliebe, & Dill, 2011; Howard & Burns, 2001; Larsen & El-Geneidy, 2011). Selecting what type of bicycle facility to build should carefully consider the policy goal of a region (Larsen & El-Geneidy, 2011). If the goal is to encourage new and novice cyclists, planners should prioritize new facilities with greater physical separation from automobile traffic. However, while this research is an integral part of the planning of new bicycle infrastructure, planners first need to determine the locations where bicycle infrastructure interventions are needed. Literature that investigates how to locate and systematically prioritize where to build new cycling infrastructure has been limited to date. Rybarczyk and Wu (2010) posited that a comprehensive analysis of bicycle planning should consider both the supply, or the quality and safety of the network, as well as an understanding of the demand potential of a route. Accordingly, the authors integrated both these criteria in bicycle network planning in Wisconsin, USA. In a study of the cycle network in the Athens metropolitan region, Greece, Milakis and Athanasopoulos (2014) developed a methodology for cycle network planning using participative multi-criteria GIS analysis to incorporate the views of cyclists when choosing optimal network segments for new facilities. In a third study that presented a tool for bicycle facility planning, Larsen, Patterson, and El-Geneidy (2013) developed a GIS-based model that allows for flexibility to include data that is relevant to the study context and readily available, and that produces an easily interpretable grid-cell image of levels of priority across a region that is easily communicable to decision-makers and the general public. Using Montreal, Canada, as a case study, the authors considered five indicators to prioritize cycling investment: observed and potential bicycle trips, stated preferences of cyclists regarding priority network improvements, bicycle-vehicle collision data and the network connectivity. The indicators were weighted
equally and combined to create a prioritization index. However, none of the studies mentioned above looked at the equity impacts of these new facilities. In the present study, we develop a prioritization index similar to Larsen et al. (2013), using multiple criteria that are relevant to the Quebec case study. Building on this methodology, we present a method to rank proposed bicycle facilities according to their potential usage. We then evaluate the equity impacts of each proposed facility. However, first we will define the concept of equity in a transportation planning context.
Equity and planning
Equity can be broadly defined as the distribution of benefits and costs, and whether that distribution is considered fair. In 1996, Metzger defined equity planning as a responsibility that planners have to "influence opinion, mobilize underrepresented constituencies, and advance and perhaps implement policies and programs that redistribute public and private resources to the poor and working class in cities" (Metzger, 1996, p. 113). Failure to include social outcome goals in sustainability initiatives has been found to result in the inequitable distribution of the intended benefits (Flocks, Escobedo, Wade, Varela, & Wald, 2011). A social outcome goal in the setting of bicycle planning may refer to favoring new bicycle infrastructure projects in socially disadvantaged neighborhoods, which is considered a vertical equity approach. Vertical equity follows a progressive means of planning, where the division of benefits should be directed towards those with the greatest potential need. Bicycles have great potential to provide an equitable, efficient and affordable means of transportation. However, with an inadequate supply of bicycle lanes and paths, users will likely experience a riskier network for bicycling. While perceptions of cycling as a fringe mode of transport or a last alternative to driving remain, cycling is being adopted for recreational purposes by the affluent, and for commuting purposes by millennials (Flanagan et al., 2016). Following an investigation of the distribution of cycling infrastructure investment in two American cities, Chicago and Portland, Flanagan et al. (2016) revealed disparities in cycling infrastructure investments in both cities. The authors observed that low income census tracts have been less likely to receive investment than more privileged areas. To mitigate these disparities, greater consideration of how infrastructure is distributed is warranted.
To evaluate how well transportation plans will help to attain certain goals, various performance indicators such as accessibility measures have been considered (El-Geneidy, Cerdá, Fischler, & Luka, 2011; Manaugh & El-Geneidy, 2012). Manaugh and El-Geneidy (2012) evaluated how the 2007 Montreal transportation plan provides benefits for socially isolated and disadvantaged neighborhoods by evaluating accessibility and time-savings by public transit to job locations. To our knowledge, no studies have incorporated equity principles throughout the bicycle planning process. For governments interested in providing public transit infrastructure in an equitable manner, Manaugh and El-Geneidy (2012) recommend asking three questions. First, where are the under-served populations located? Second, where are their places of employment? Third, how can they be better served? In this study, we consider these three questions, yet in a bicycle planning framework, to demonstrate how the inclusion of equity principles in the provision of new cycling facilities can be used when prioritizing among projects to ensure that transportation plans are equitable.
Study area
Quebec City, Canada, is the second most populous city in the province of Quebec, with a population of approximately 541,000 in 2015. In 2015 the City revealed its Vision for Bicycle Travel (Ville De Quebec, 2016), in which it envisions a safe and connected bicycle network to attract and encourage cycling for everyday purposes, such as commuting. Reflecting the densely built city, 64% of trips made in Quebec are less than 5 km; however, around 65% of these trips are made by personal automobile (Ville De Quebec, 2016).
Prior to 2008, a majority of cycling paths in Quebec were intended for recreational purposes. However, in 2008 the City began to further develop the cycling network with the intention of promoting cycling for utilitarian trips. In 2016, the cycle network in Quebec consisted of over 424 km of facilities, whose locations are presented in Figure 1. A major challenge for the City is to design and build new cycling infrastructure in a densely built and historic city. While the historic design is part of the charm of Quebec, there are many issues and challenges that the City must address to design new bicycle facilities and improve the current network. Quebec City is divided by an escarpment into what are often referred to as lower and upper Quebec, and this change in elevation can be difficult for cyclists. Despite these challenges, the City is currently looking to extend the bicycle network. To best address these challenges, a systematic method to locate new cycling facilities is necessary. In this study, we use a multi-criteria approach to prioritize the locations of new bicycle lanes. Note that in this study we do not consider facility type in the recommendation; rather, we focus on where the city should build new infrastructure that will be most useful to current residents.
Methodology, application, and results
The objective of this study is to devise a practice-ready methodology to guide the planning of new cycling facilities in an urban region. As there are currently 424 km of existing bicycling infrastructure in Quebec, we intend to provide recommendations of locations to expand the current cycling network as well as to identify areas that are currently underserved. The analysis is organized around four main steps. First, we develop a prioritization index using a Geographic Information System to identify regions for infrastructure investments. Second, we identify where to propose new bicycle infrastructure based on the prioritization index results. Third, to prioritize among proposed projects, we evaluate the potential usage of each facility. Lastly, we consider the proposed routes from a vertical equity perspective to measure and prioritize the proposed facilities according to their potential impact on socially disadvantaged groups. To begin our analysis, we developed a multi-modal network that merges the bicycle network with the street network and accounts for topography and the preferences of cyclists.
Development of bicycle network
The network used in this study was created to include all links in the street network that are available for bicycle travel, as well as off-street bicycle and multiuse paths and private roads open to bicycles. Using the elevation nodes of the DMTI Spatial streets dataset, we determined the elevation gain or loss for each street link. As previous research has found that cyclists have a strong sensitivity to slope (Broach et al., 2011), we modeled changes in elevation in our network, with the assumption that cyclists would alter their route, if possible, to avoid large gains in elevation, and applied a slope penalty to the length of each street link. We also modified each road segment length to route a trip on a bicycle lane or cycle track, when possible. While there appears to be no consistent threshold in the literature regarding how far an individual is willing to diverge from their shortest path, Larsen and El-Geneidy (2011) found that respondents added on average 34% to their trip distance, although this varied based on facility type. Knowing that not all bicycle lanes and bicycle paths are perceived to be equal in quality and safety, and that not all cyclists will choose to divert their route for bicycle infrastructure, we modeled routes so that an individual may travel an extra distance of 30% to have part or a majority of their trip on an off-street bicycle path, and allowed a route divergence of 20% from the shortest path for an on-street bicycle lane. Furthermore, when generating routes, one-way streets were opened to bi-directional travel for cyclists, as select bicycle lanes in Quebec City are designed to allow for bi-directional travel. Following these adjustments to the street segment lengths, we applied this network in the Network Analyst tool in ArcGIS, using the shortest path algorithm, to determine routes for all trips in the subsequent analyses.
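To make the routing scheme concrete, the following is a minimal sketch of a slope- and facility-adjusted network. The paper's exact elevation formula is not reproduced here, so the linear grade penalty and its coefficient are assumptions for illustration; the 30%/20% acceptable diversions are treated as equivalent discounts on the perceived cost of facility links, which holds for trips made largely on those facilities. All link data are placeholders.

```python
# Sketch of a slope- and facility-adjusted routing network (assumptions noted).
import networkx as nx

def effective_length(length_m, elev_gain_m, facility):
    """Return a perceived link cost in meters.

    length_m   : physical link length
    elev_gain_m: elevation gain along the link (0 if flat or downhill)
    facility   : 'path' (off-street), 'lane' (on-street), or None
    """
    # Hypothetical slope penalty: inflate cost with the grade so that steep
    # links are avoided when a flatter alternative exists (coefficient assumed).
    grade = elev_gain_m / length_m if length_m > 0 else 0.0
    cost = length_m * (1.0 + 10.0 * max(grade, 0.0))

    # Facility discounts: cyclists accept a 30% longer trip for an off-street
    # path and 20% for an on-street lane, modeled here as cost discounts.
    if facility == "path":
        cost /= 1.30
    elif facility == "lane":
        cost /= 1.20
    return cost

G = nx.DiGraph()
# Example links: (from_node, to_node, length in m, elevation gain in m, facility)
links = [(1, 2, 500, 0, "path"), (2, 3, 400, 12, None), (1, 3, 950, 0, "lane")]
for u, v, length, dz, fac in links:
    G.add_edge(u, v, weight=effective_length(length, dz, fac))
    G.add_edge(v, u, weight=effective_length(length, 0, fac))  # reverse: no gain (simplification)

route = nx.shortest_path(G, 1, 3, weight="weight")
print(route)
```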
Prioritization index for new bicycle infrastructure
To identify high-priority areas for the provision of new bicycle lanes, we consider a range of indicators related to where cycling trips are expected to take place in the city, the connectivity of the current network, and stated opinions and safety concerns of existing cyclists in Quebec. In all, seven indicators are evaluated and are discussed below. This method was adapted from Larsen et al. (2013), who presented a method for bicycle facility planning in Montreal, Canada. The selection of the indicators used in this study builds on the criteria used by the authors above; however, later in the process the selection was slightly modified according to the Quebec City context and based on feedback from the local planning authority, as this exercise was developed in collaboration with them.
Existing and potential cycling trips
The first series of indicators used in the analysis were the routes of commuting trips by existing and potential cyclists. In this study, we focus on commuting trips, as this aligns with the goal of the region to increase the share of bicycle use for commuting trips. Furthermore, detailed data on commute trips is widely collected and readily available. We observed existing cycling trips from two data sources. The first was a survey on bicycle transportation in Quebec City conducted in 2015 by the Transportation Research at McGill (TRAM) group. From this survey we selected individuals who had made a cycling trip in the past month and who provided the coordinates of their home and work or school location. We then used these coordinates to assign the approximate route that each individual had used to commute to work, using Network Analyst in ArcGIS. The network described above was used to route the trips, and considered changes in elevation as well as the existing bicycle lanes and cycle tracks. We excluded any home or work/education locations that were outside of the study area, which was the Quebec Census Metropolitan Area (CMA). A total of 1071 trips were evaluated, with an average trip length of 6.9 km. The second data source used to evaluate commuting trips made by bicycle in Quebec was the 2011 Quebec Regional origin-destination (OD) survey provided by the Réseau de transport de la Capitale (Réseau de Transport de la Capitale, 2011), the public transit provider in the Quebec City area. This survey is conducted over the phone and sampled 7% of the population. We observed the routes of 379 trips with a mean trip length of 4.5 km, after removing home or work locations outside of the CMA.
The second type of trip that we were interested in was short commute trips by travel modes other than bicycle. According to the 2011 OD survey, 71 percent of recorded trips were made by automobile, as either driver or passenger, which demonstrates the auto-dominant travel behavior in Quebec City. We define potential cycling trips as non-bicycle trips that are short enough to be converted to bicycle. Specifically, these were trips under 5.8 km in length, the 75th percentile distance of all commuting bicycle trips evaluated from the 2011 OD survey. A trip of 5.8 km would take on average 22 minutes by bicycle at an average pace of 16 km/h (El-Geneidy, Krizek, & Iacono, 2007).
Two sources of trip data were used to evaluate potential cycling trips: the 2011 OD survey and the 2011 National Household Survey (NHS) Commuter Flows data (Statistics Canada, 2011). In total, 19,511 home-work trips under 5.8 km from the 2011 OD survey were analyzed. The 2011 NHS Commuter Flows data records the census tract (CT) pair of an individual's home and work location and indicates how many people have the same commute pattern. For this data, we used the centroid of each CT and determined the route between centroids. The travel mode used for these trips is unknown, so we can assume that some of them were made by bicycle; however, the proportion made by bicycle would be low given the low bicycle commuting mode share in Quebec City (1.3% in 2011 (Statistics Canada, 2015)). A total of 1140 trips were under 5.8 km and included in our analysis.
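As a brief illustration of how the potential-trip threshold above can be derived and applied, the sketch below computes a 75th-percentile cutoff from observed bicycle commute distances and filters non-bicycle trips against it. The distance arrays are placeholders, not survey data; on the real OD data the cutoff was 5.8 km.

```python
# Sketch: derive the potential-trip distance threshold and filter trips.
import numpy as np

bike_trip_km = np.array([2.1, 3.4, 4.5, 5.0, 5.8, 6.9, 8.2])  # observed bike commutes
threshold_km = np.percentile(bike_trip_km, 75)                 # 5.8 km on the real data

other_mode_km = np.array([1.2, 4.9, 6.3, 5.5, 12.0])           # car/transit/walk commutes
potential = other_mode_km[other_mode_km <= threshold_km]       # candidate bike trips

# Sanity check on the stated travel time: 5.8 km at 16 km/h is about 22 minutes.
print(threshold_km, potential, 5.8 / 16 * 60)
```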
Feedback of existing cyclists and network connectivity
The next two indicators we incorporated were drawn from the feedback of existing cyclists about the current network. First, cyclists were asked to specify where they would like the city to install a new bicycle lane: "What street in Quebec City is in most need of a bicycle path or lane?" In subsequent questions the respondent was asked to specify the segment of that street they were referring to. Second, cyclists were asked to identify the most dangerous intersection in the city from their experience of cycling in Quebec: "What intersection in Quebec City is in most need of improvements for cyclists?" These survey questions provide invaluable feedback with regard to where to prioritize investments in bicycle infrastructure in Quebec City.
The final indicator used in the prioritization index was related to the connectivity of the existing bicycle network. A large portion of recent bicycle facilities in Quebec City were built in isolation, often in conjunction with other road infrastructure projects, and as a consequence the current bicycle network is very fragmented (as seen in Figure 1). Therefore, we identified locations in the bicycle network that are not connected to other facilities, which may be referred to as dangling nodes, as priority locations for developing increased connectivity of the bicycle network.
Combining, weighting and spatially aggregating indicators to a grid
Once the data for the seven indicators was prepared, the next step was to spatially aggregate this information to a grid of 300 m by 300 m cells overlaid on the study area. For example, we spatially joined the existing and potential cycling trips to the grid cells to determine how many unique trips passed through each cell. A higher number of trips passing through a particular grid cell suggests that the area is a suitable location for infrastructure investments. For the dangerous intersections indicator, we measured the number of individuals who identified an intersection located in the respective grid cell. Once all indicators were spatially aggregated to the grid cells, each indicator was standardized.
The standardized score of each of the seven indicators is presented in Figure 2.
To combine all seven indicators into one priority index we used a weighting scheme, applying a higher weight to the indicators related to where existing cyclists identified dangerous intersections and what streets are in greatest need of bicycle lanes, and to the connectivity of the network. These three indicators were given a weight of 2. In regard to observed and potential cycling trips, we applied a weight of 0.5 to potential cycling trips and a weight of 1 to existing bicycle trips. The weights applied can vary based on a region's priorities and can help reduce some of the bias that might be present due to counting potential cyclists from two data sources. While each of the indicators was deemed important for developing the prioritization index, the weighting scheme applied here allowed the stated preferences of cyclists to be given a higher priority in the index, given the utility of user feedback in understanding the optimal location for bicycle infrastructure investments (Milakis & Athanasopoulos, 2014). The final result is presented in Figure 3. Looking closely at the final priority index, there is a corridor located in downtown Quebec that could be considered a priority zone, where future cycling investments would likely benefit the greatest number of existing and potential cyclists. Additional high-priority corridors include north-south connections between downtown and residential neighborhoods located north of downtown, such as Charlesbourg and Les Rivieres. The high priority for infrastructure investment in the downtown part of Quebec is evident across all seven indicators. It is particularly evident that cyclists want additional bicycle facilities downtown, where the most frequently identified dangerous intersections are located. Furthermore, the high demand of both observed and potential cycling trips downtown is evident in Figure 2; downtown has a strong concentration of employment and is where Quebec City's major university is located. The use of a 300-meter grid cell enables us to locate an approximate area where facilities are needed. The next step is to focus on these high-priority grid cells and recommend along which streets passing through them bicycle facilities should be proposed. This step of the methodology is explained in detail in the next section.
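A condensed sketch of the index construction described above follows: each indicator is aggregated to grid cells, z-score standardized, weighted (2 for the cyclist-feedback and connectivity indicators, 1 for existing trips, 0.5 for potential trips) and summed. For brevity, the two existing-trip sources and the two potential-trip sources are collapsed into one column each, and all counts are placeholders.

```python
# Sketch of the grid-cell prioritization index (placeholder data).
import pandas as pd

# One row per 300 m x 300 m grid cell with raw indicator counts.
cells = pd.DataFrame({
    "existing_trips":   [120, 15, 3, 60],
    "potential_trips":  [400, 80, 10, 150],
    "requested_lanes":  [9, 1, 0, 4],
    "dangerous_inters": [6, 0, 0, 2],
    "dangling_nodes":   [1, 2, 0, 1],
})

weights = {
    "existing_trips": 1.0,
    "potential_trips": 0.5,
    "requested_lanes": 2.0,
    "dangerous_inters": 2.0,
    "dangling_nodes": 2.0,
}

z = (cells - cells.mean()) / cells.std()  # standardize each indicator
cells["priority"] = sum(w * z[col] for col, w in weights.items())
print(cells.sort_values("priority", ascending=False))
```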
Proposed new bicycle lanes
After closely examining the prioritization index, we recommended locations for new bicycle lanes. The proposal of new facilities was based on where the grid cells with the highest priority index were located, and on determining where new bicycle lanes are needed within those cells. In the high-priority downtown corridor, the current network is highly fragmented and in need of a more direct east-west route as well as better north-south connections. Therefore, the proposed bicycle lanes downtown were based on the high-priority grid cells and on considering where new lanes would improve the connectivity of the current bicycle network. We then considered corridors in the upper city where the existing network is fragmented and there appears to be a strong need for new bicycle infrastructure. Finally, we carefully examined the network visually to identify where gaps in the network are and where connections to the existing network are needed in order to increase the safety of the network. Figure 4 presents the proposed bicycle lanes overlaid on the grid-cell prioritization index. To verify the proposed bicycle lanes, we consulted individuals with local knowledge of the city and with cycling experience in Quebec, as well as the local planning authority. Feedback on the proposed lanes was predominantly related to a lack of feasibility of proposed streets (i.e., streets that are too narrow or busy bus corridors). Furthermore, the feedback indicated how to best utilize the current infrastructure, for example by connecting the existing elevator to a bike lane (as there is a very steep gradient between the lower and upper city). After revising our proposed bicycle lanes, a map of the proposed new network is displayed in Figure 5. While the proposed network is an ideal one, Quebec City has a limited budget and can only accommodate a certain number of projects every year. Accordingly, setting a priority among the proposed links is an important step towards constructing these new facilities in phases.
Measuring usage of proposed facilities
Following the proposal of new bicycle lanes, we estimated the potential usage of each facility. In other words, we wanted to predict which new bicycle investment would serve, and thus benefit, the greatest number of people, allowing us to prioritize among the proposed facilities. To estimate potential usage, we considered trips of existing cyclists who are either currently using these roads without bicycle lanes or who may alter their route to use these new facilities. Secondly, we considered the number of potential cyclists who may use these new bicycle lanes. The data for these trips came from the 2011 origin-destination survey of home-work or educational trips. Here we used the expansion factor of each trip provided in the survey, which is derived from the number of occupied dwellings in the 2011 census by stratum and by household characteristics. Note that to evaluate each lane, we in some cases combined small street segments into one lane for analysis, or divided long streets into multiple segments to understand which street segments are most important.
To determine the route of these trips we accounted for elevation as well as on-street bicycle lanes and off-street bicycle tracks, consistent with the previous analysis of routing existing and potential cycling trips used in the prioritization index. Note that we adjusted our network to include the proposed bicycle lanes and adjusted the length of these segments according to the 20 percent diversion rate, assuming these are on-street routes. Using the existing and potential cycling trips, we measured how many more trips would occur on these street segments if a bicycle lane was added (since the diversion rate would route cyclists on these lanes if the distance of the diversion allowed), and calculated the increase in trips on each segment. In Table 1 we present the estimated number of additional trips that are predicted to take place on each proposed bicycle lane. The proportion of additional trips made on each proposed lane by existing cyclists is displayed in the table, to contextualize the number of existing cyclists who are either currently using the street without a bicycle lane or who we predict would divert their path to use the new bicycle lane. We present the 10 proposed lanes with the highest expected increase in users in Table 1, although the estimated number of trips on all proposed lanes is displayed in Figure 6, where graduated symbols represent the estimated usage of each proposed bicycle lane.
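The usage-increase measure can be sketched as follows: route every existing and potential cycling trip on the network with and without the proposed facility and compare the expansion-weighted counts of trips traversing the segment. The structure below is an assumption about implementation, not the authors' code; `route_trip` stands in for the shortest-path routing described earlier.

```python
# Sketch of the expansion-weighted usage-increase measure for a proposed lane.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Trip:
    origin: int
    destination: int
    expansion_factor: float  # OD-survey weight (households represented)

def trips_on_segment(trips: List[Trip], network, segment: Tuple[int, int],
                     route_trip: Callable) -> float:
    """Expansion-weighted count of trips whose route uses `segment`."""
    total = 0.0
    for t in trips:
        edges = route_trip(t.origin, t.destination, network)  # list of (u, v) edges
        if segment in edges:
            total += t.expansion_factor
    return total

def usage_increase(trips, base_net, proposed_net, segment, route_trip):
    """Trips gained on `segment` once the proposed lane is added there."""
    before = trips_on_segment(trips, base_net, segment, route_trip)
    after = trips_on_segment(trips, proposed_net, segment, route_trip)
    return after - before, before
```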
The results indicate that, for example, on Av Cartier, 378 cyclists are estimated to use this street segment without a bicycle lane; however, if a bike lane were built on this segment, usage is expected to rise to 2298 users, a six-fold increase, so it should be considered a high-priority bicycle lane. The expected usage increase on Rue St Vallier E is 1531 additional trips, which is more than double the existing usage. The 10 proposed bicycle lanes presented are all predicted to be highly used by cyclists. However, some of the proposed bicycle lanes had a very low estimated usage increase, all of which were located in suburban neighborhoods. This is potentially a result of larger trip distances for individuals traveling from suburban neighborhoods to downtown. Alternatively, the low increase in usage may be because these proposed lanes are located on small street segments that connect existing bicycle paths that cyclists are already using, or because these proposed lanes are located along the most direct path and, similarly, cyclists are already using these streets. Nevertheless, even the projects with low predicted increases in usage would improve the bicycle network and likely have positive impacts on cycling rates in Quebec City.
Note that the predicted numbers are for one-way home-to-work/school trips, which mostly represent early morning usage. This methodology is applied only to these morning trips to demonstrate its potential as a tool for bicycle infrastructure planning. To derive a complete measure of all-day usage, return trips and trips for other purposes can be included in future studies. A limitation of the analysis described above is the lack of equity considerations, which will be addressed in the next section.
Equity analysis
To identify CTs with high proportions of socially deprived households, whose residents we suggest may particularly benefit from improved cycling facilities if those facilities are useful for their daily commuting needs, we developed a social disadvantage index comprising four indicators. Collected from 2011 Census data, the four variables are: median household income, unemployment rate, percentage of the population that has recently immigrated (past five years), and percentage of households that spend 30 percent or more of their total household income on shelter costs. To ensure that these four variables identify similar groups, a Pearson correlation matrix was used, which found high correlations (over 0.42) among the selected variables. The selection of these four variables followed a method used by El-Geneidy et al. (2016), Foth, Manaugh, and El-Geneidy (2013), and Sánchez-Cantalejo, Ocana-Riola, and Fernández-Ajuria (2008). The four variables were then standardized (Z-score) and combined with equal weight to derive an index of social disadvantage. This index identifies CTs with high proportions of low-income households, unemployed individuals, recent immigrants and households spending a high proportion of their income on rent. However, as we used aggregate census data, we must be cautious when interpreting these findings, as not all individuals residing in these socially deprived areas are in fact socially deprived, while not all socially disadvantaged individuals live in an area classified as socially deprived (Townsend, Phillimore, & Beattie, 1998). The index values were then grouped into deciles, where each decile contains 10% of the CTs. The decile containing the CTs with the highest level of social disadvantage was identified for further analysis; these CTs are displayed in Figure 7. Of particular interest to us was where individuals from these CTs travel for their morning commute. Using 2011 origin-destination data, we considered all trips from home for the purpose of work or education that are currently made by bicycle (existing cyclists), as well as all home-work or education trips less than 5.8 km that are currently made by other modes of travel (potential cyclists). To route these trips, we used the same network, modified to include the proposed bicycle lanes, with the same penalties for elevation and incentives for the use of bicycle facilities.
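The social disadvantage index lends itself to a short sketch: z-score the four census variables, combine them with equal weights, and keep the top decile. The paper does not spell out the sign convention, so reversing income (low income raises disadvantage) is an assumption here; census values and column names are invented for illustration, and `ct.corr()` can reproduce the Pearson correlation check mentioned above.

```python
# Sketch of the social disadvantage index (invented census values).
import pandas as pd

ct = pd.DataFrame({
    "median_income": [28000, 55000, 41000, 72000, 33000, 25000],
    "unemployment":  [0.11, 0.05, 0.07, 0.03, 0.09, 0.13],
    "recent_immig":  [0.08, 0.02, 0.04, 0.01, 0.06, 0.09],
    "shelter_30pct": [0.35, 0.18, 0.25, 0.12, 0.30, 0.40],
})

z = (ct - ct.mean()) / ct.std()   # standardize (Z-score)
z["median_income"] *= -1          # assumed: low income raises disadvantage

ct["disadvantage"] = z.sum(axis=1)  # equal weights
# In practice there are hundreds of CTs; deciles split them into ten groups.
ct["decile"] = pd.qcut(ct["disadvantage"], 10, labels=False, duplicates="drop")
most_deprived = ct[ct["decile"] == ct["decile"].max()]
print(most_deprived)
```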
Similar to the above analysis, using all existing and potential cycling trips, we estimated how many additional trips would be made on the proposed bicycle lanes. However, here we are interested specifically in home-work trips originating in a CT in the highest social indicator decile (the 10 percent most socially disadvantaged CTs). By using the home-work trips of these individuals, we are able to better understand their travel behavior and what bicycle facilities would best serve them. Based on the potential number of additional cycling trips made by socially deprived individuals on each new bicycle facility, we then devised a new ranking of proposed facilities, and the 10 lanes with the highest estimated use are presented in Table 2. Figure 8 displays the predicted usage of each proposed bicycle lane, where the line thickness represents the estimated usage increase of trips originating in the most socially deprived decile. Examining the proposed bicycle lanes according to which lanes would best serve socially deprived groups, we see modest changes in the ranking when compared to the first ranking of the bicycle lanes according to all potential trips (both existing and potential cycling trips). For example, the proposed bicycle lane on Rue St Vallier E is estimated to generate the highest number of trips originating in socially deprived CTs, although it was ranked second based on total estimated usage (an additional 1531 trips). In other words, the proposed bicycle lane on Rue St Vallier E is estimated to be highly used by all existing and potential cyclists as well as to benefit socially disadvantaged individuals in their commute to work. In this study context, most of the proposed bicycle lanes that are estimated to be highly used by residents will also serve CTs with high proportions of socially disadvantaged individuals. The proportion of the increased usage of each lane attributable to existing cyclists, relative to potential cyclists, is very low compared to the estimated usage of all cycling trips. This suggests that a very limited number of commuting trips originating in socially deprived neighborhoods are currently being made by bicycle.
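The equity re-ranking can reuse the usage-increase sketch above, restricting trips to those originating in the most deprived decile. A minimal sketch, assuming `usage_increase` and `route_trip` from the earlier block and a hypothetical `origin_ct_decile` attribute joined onto each trip from the disadvantage index:

```python
# Sketch: re-rank proposed lanes by usage increase among trips that start
# in a census tract in the most socially deprived decile (decile 9 here).
def equity_ranking(lanes, trips, base_net, proposed_net, route_trip):
    deprived = [t for t in trips if getattr(t, "origin_ct_decile", None) == 9]
    scores = {}
    for lane in lanes:
        gain, _ = usage_increase(deprived, base_net, proposed_net, lane, route_trip)
        scores[lane] = gain
    return sorted(scores, key=scores.get, reverse=True)
```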
The above ranking can be readily applied by city officials to set priorities according to which bicycling facilities will be used most by all existing and potential cyclists. It can also be used to prioritize new infrastructure that will be used most by socially disadvantaged groups, avoiding the issues other regions have faced by advancing projects only in socially advantaged neighborhoods.
Discussion and conclusion
For cities that are looking to foster a cycling culture by expanding their bicycle network, this study presents a methodology that can guide the planning and provision of new bicycle infrastructure. Using Quebec City as a case study, we used a multi-criteria approach to identify areas suitable for intervention. The data considered in this study included observed and potential bicycle trips, the presence of dangling nodes or disconnected segments in the existing bicycle network, and suggested streets for new bicycle facilities as well as dangerous intersections identified through a cycling survey. The weighting scheme we applied assigned a higher weight to disconnected segments in the bicycle network and to cyclists' feedback on where new facilities are needed and which intersections cyclists perceive as dangerous. No weight adjustment was applied to existing cycling trips; however, to prioritize the importance of existing trips over potential cycling trips (as not all potential cyclists will choose or are able to switch modes), a lower weight was assigned to potential cycling trips. In future applications of this method, the weights applied to each indicator can be adjusted according to the goals or priorities of the project or the respective stakeholders, and a sensitivity analysis to measure the influence of different weighting schemes is recommended. Furthermore, while we recognized the importance of responses from cyclists in our study, similar to previous work (Milakis & Athanasopoulos, 2014), future studies of this sort should also actively involve infrequent or novice urban cyclists, because this group of cyclists is growing, and failing to recognize this new cohort may be detrimental for North American cities (Burke & Scott, 2016). Furthermore, cyclists are heterogeneous (Damant-Sirois, Grimsrud, & El-Geneidy, 2014) and cannot be expected to react similarly to efforts to increase cycling (Larsen & El-Geneidy, 2011). Once the seven indicators were combined and the final prioritization index was prepared, we were able to identify areas in Quebec where facilities are needed. This method worked particularly well for identifying priority corridors, as was similarly observed by Larsen et al. (2013).
In the second phase of our analysis, we recommended where new bicycle lanes should be constructed according to the prioritization index. While suggesting what bicycle facility type to build was outside the scope of this study, the proposed bicycle lanes are intended to guide planners to where infrastructure investments are needed. In order to rank the proposed lanes according to the estimated usage of existing and potential cyclists, we estimated how many more trips would occur on these street segments if a bicycle lane were added, and ranked the proposed bicycle lanes based on the change in trips on each segment. The ranking of proposed lanes can be very beneficial for prioritizing projects, especially when faced with budgetary constraints, coordination with other capital projects, and the limited time during the year that is suitable for construction. The use of performance measures to evaluate transportation projects has been demonstrated in previous studies using accessibility indicators (El-Geneidy et al., 2011) and reductions in travel time (Manaugh & El-Geneidy, 2012). However, the impact of bicycle lanes is a less tangible measure. While the number of users that will benefit from new bicycle lanes is a valuable measure for prioritizing projects, additional measures should be explored in the future.
After ranking projects according to potential usage by cyclists, we conducted an equity analysis to evaluate how the proposed bicycle facilities will benefit socially disadvantaged neighborhoods. We considered areas where high proportions of socially disadvantaged residents live, and where these individuals travel for work, and subsequently estimated how the proposed facilities will benefit socially deprived areas. According to the estimated increase in trips from socially deprived areas, we devised a new ranking of proposed facilities according to their potential benefit to these users. In this study, we determined that most of the proposed bicycle lanes estimated to be highly utilized by potential and existing cyclists would also be highly beneficial to areas with high proportions of socially disadvantaged individuals. Considering disadvantaged areas early in the planning process ensures that socially deprived groups are carefully considered at all stages of planning, rather than only evaluating after the fact how well a transportation plan meets equity goals (Manaugh & El-Geneidy, 2012).
Overall, this work is intended to provide planners and engineers with an empirical methodology to determine where new bicycle facilities are needed and how to measure their potential benefit to both existing and potential cyclists. The city of Quebec envisions cycling as a means of developing an attractive, dynamic, sustainable and efficient city. As exemplified in other North American cities, a greater supply of bicycle paths and lanes is linked to higher rates of commuting by bicycle (Buehler & Pucher, 2012). Consequently, the main contribution of this paper is to present a methodology for determining where bicycle lanes should be added and to provide a measure of the number of individuals who will benefit from the implementation of these new cycling projects. For the application of this approach in different contexts, we recognize that data availability can vary. While access to a rich source of data will enhance this approach to prioritizing bicycle infrastructure, in the absence of data such as user-identified priority corridors, the methodology presented in this paper can still effectively guide planners or practitioners through the full process of planning and evaluating different cycling facilities while incorporating social equity principles.
As an important final step in the planning of new bicycle lanes, this work must be followed by a complete traffic analysis prior to moving forward with the proposed bicycle lanes. To ensure that the impact of new bicycle infrastructure on vehicle traffic is carefully understood prior to implementation, a study similar to Burke and Scott (2016), evaluating the impacts of installing bicycle facilities on roads, is required. The traffic study may alter the configuration of these bike lanes.
Figure 1: Context map of the existing bicycle network
Figure 2: Standardized result of each indicator
Figure 3: Final priority index
Figure 4: Proposed facilities overlaid on the priority results
Figure 5: Proposed new bicycle network
Figure 6: Map of predicted usage of new bicycle facilities
Figure 7: Identification of socially disadvantaged areas
Figure 8: Map of predicted usage of new bicycle facilities for trips originating in the most socially deprived decile
Table 1: Estimated increase in trips using proposed bicycle lanes
Table 2: Estimated increase in trips originating in socially deprived neighborhoods (columns: rank; all-users rank; existing usage; predicted usage; estimated usage increase; proportion of new usage from existing cyclists)
"year": 2018,
"sha1": "fa772a0230e5675bd63e49553fcbc26095d8ca78",
"oa_license": "CCBYNC",
"oa_url": "https://www.jtlu.org/index.php/jtlu/article/download/1115/1004",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fa772a0230e5675bd63e49553fcbc26095d8ca78",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Deeping in the Role of the MAP-Kinases Interacting Kinases (MNKs) in Cancer
The mitogen-activated protein kinase (MAPK)-interacting kinases (MNKs) are involved in oncogenic transformation and can promote metastasis and tumor progression. In human cells, there are four MNK isoforms (MNK1a/b and MNK2a/b), derived from two genes by alternative splicing. These kinases play an important role in controlling the expression of specific proteins involved in cell cycle, cell survival and cell motility via regulation of the eukaryotic initiation factor 4E (eIF4E), but also through other substrates such as heterogeneous nuclear ribonucleoprotein A1, polypyrimidine tract-binding protein-associated splicing factor and Sprouty 2. In this review, we provide an overview of the role of MNKs in human cancers, describing the studies conducted to date to elucidate the mechanisms involved in the action of MNKs, as well as the development of MNK inhibitors in different hematological cancers and solid tumors.
Introduction
The mitogen-activated protein kinase (MAPK)-interacting kinases (MNKs), serine/threonine kinases encoded by two genes (MKNK1 and MKNK2), were identified simultaneously in mice [1] and in humans [2] in 1997. In humans, each gene gives rise to two isoforms from an alternatively spliced transcript, named MNK1a/MNK1b and MNK2a/MNK2b [3,4] (Figure 1). All the MNK isoforms have a nuclear localization signal (NLS) in the N-terminal region that mediates binding to the α-importin protein, which allows the kinase to enter the nucleus. MNK1a and MNK2a are both primarily cytoplasmic, whereas MNK1b and MNK2b localize partially to the nucleus. MNK1a contains a nuclear export signal (NES) that ensures its cytoplasmic localization [5], whereas MNK1b lacks this NES, being both cytoplasmic and nuclear [4]. MNK2a and MNK2b lack this NES, but other mechanisms ensure the cytoplasmic localization of MNK2a, whereas MNK2b is largely nuclear [5,6]. The N-terminal region also contains the binding domain for the eukaryotic initiation factor 4G (eIF4G), a protein that is part of the eukaryotic initiation factor 4F (eIF4F) complex. The eIF4F complex consists of three factors: eIF4E; eIF4A, an ATP-dependent RNA helicase; and eIF4G, a scaffold protein for the assembly of eIF4E and eIF4A. MNKs efficiently phosphorylate their substrate eIF4E when both proteins are bound to the eIF4G scaffold protein, which brings them into proximity [7]. MNK1a and MNK2a present a canonical MAP kinase binding motif in the C-terminal region, although their sequences differ slightly (LARKR and LAQRR, respectively), such that MNK1a binds both extracellular signal-regulated kinases (ERK) 1/2 and p38 kinases, while MNK2a associates only with ERK1/2 [1]. However, MNK1b and MNK2b, truncated isoforms with C-terminal regions distinct from the longer isoforms, lack the MAPK binding domain and seem to be independent of the upstream kinases (ERK1/2 and p38 MAP kinases) [4]. Regardless of the presence or absence of the MAPK binding motif, the four isoforms present different basal activities. The basal activity of MNK2a is higher than that of MNK1a, due to the sustained association of MNK2a with ERK1/2, whereas MNK1a has low basal activity but is responsive to ERK1/2 and p38 activation [8]. MNK1b and MNK2b have been shown to have high and low basal activity, respectively [4,6,9].
The central region of MNK1 and MNK2 corresponds to the catalytic domain of the protein, with 78% similarity in the amino acid sequence between them. The active sites are highly conserved, with two threonine residues (209 and 214 in MNK1, and 244 and 249 in MNK2) that make up the activation loop of the kinase. These threonines are followed by prolines, making them susceptible to phosphorylation by MAPKs, a characteristic that MNKs share with the MAPK-activated protein kinases (MK2, MK3 (or pK3) and MK5), the p90S6 protein kinase (RSK) and the mitogen- and stress-activated protein kinase (MSK). However, they also have particularities in their catalytic domains. MNKs have a DFD (Asp-Phe-Asp) motif that marks the beginning of the activation loop, instead of the DFG (Asp-Phe-Gly) motif that other kinases have. In addition, the domains of MNK1 and MNK2 contain two insertions, one located between the DFD motif and the activation loop and another next to the APE motif (Ala-Pro-Glu) [10]. Studies of the three-dimensional structures of MNK1 and MNK2 indicate that the activation loop of MNK2 acquires an unusual open conformation and the DFD motif interferes with ATP binding [10]. On the other hand, the activation loop of MNK1 is self-inhibiting, since it contains a Phe230 that moves Phe192 from the DFD motif to the ATP binding site, preventing its binding [11].
In mice, only the MNK1a and MNK2a isoforms have been identified, and both proteins are expressed in all adult tissues except the brain, where MNK2 levels are very low. In comparison with the rest of the tissues, the expression of both proteins is rather abundant in skeletal muscle [1]. In humans, the expression of MNK1a is higher in the liver, pancreas, heart and placenta. MNK1b is expressed in all tissues studied to a similar extent, except skeletal muscle, where its levels are low [4]. MNK2 isoforms have a wide distribution in all the tissues studied, except the brain and heart, where their levels are low. However, like MNK1a, the levels of both isoforms, MNK2a and MNK2b, are higher in the pancreas [3].
MNKs Substrates
The only well-characterized substrate of the MNKs is the eukaryotic initiation factor 4E (eIF4E). eIF4E was first identified as a substrate of the MNKs both in vitro and in vivo [1,12]. Later, in vivo studies with knock-out mice for Mnk1 and Mnk2, in which phosphorylation of Ser209 was not detected under any type of activation, corroborated that MNK1/2 are the only kinases for eIF4E [13]. eIF4E is part of the eIF4F complex; it specifically binds the 5′ cap structure of eukaryotic cytoplasmic mRNAs and subsequently recruits the mRNA to the ribosome (reviewed in [14,15]).
The effect of eIF4E phosphorylation on eIF4F activity is controversial. Initially, it was thought that phosphorylation of Ser209 of eIF4E by MNKs, which normally occurs in response to agents that activate protein synthesis, increased its affinity for the cap of mRNAs to form a more stable eIF4F complex. However, several reports support the idea that eIF4E phosphorylation markedly reduces its affinity for capped mRNA [16][17][18]. In the nucleus, eIF4E promotes the nuclear export of a subset of specific mRNAs [19]. Borden's laboratory has demonstrated that the phosphorylation of nuclear eIF4E seems to be an important step in the control of mRNA transport [20]. Consistently, several findings support that eIF4E phosphorylation can play a role in the transport of cyclin D1 mRNA from the nucleus to the cytoplasm, which drives cell transformation.
To date, other MNK substrates have been identified in addition to eIF4E (hnRNP A1, PSF, cPLA2 and Spry2), but their relevance in vivo is not yet established (Figure 2). The heterogeneous nuclear ribonucleoprotein A1 (hnRNP A1) is a very abundant nuclear protein that plays an important role in mRNA metabolism. Although it is a nuclear protein, it shuttles between the nucleus and the cytoplasm through a non-classical nuclear localization domain called the M9 motif. MNKs phosphorylate hnRNP A1 at Ser192 and Ser310 in response to T-cell activation. Inhibition of MNK activity with CGP57380 decreases the release of tumor necrosis factor (TNF) α and the interleukins IL-6 and IL-1β in response to anisomycin (a p38 MAPK agonist) in human keratinocyte cultures [21], and causes a decrease in TNFα production by macrophages after treatment with multiple agonists of the Toll-like receptor (TLR) family [22]. MNK1 participates in the regulation of TNFα synthesis through phosphorylation of hnRNP A1, which decreases its ability to bind AREs (3'UTR regions of mRNA rich in A and U residues) in the TNFα mRNA, causing derepression of its translation [23]. In addition, hnRNP A1 is phosphorylated in response to osmotic stress through p38 MAPK, so MNKs would also regulate the translation of specific mRNAs in the cell stress response [23,24]. The polypyrimidine tract-binding protein-associated splicing factor (PSF) is a nuclear protein involved in RNA transcription and processing. Together with p54nrb, another DNA/RNA binding protein, it forms a transcription/processing factor involved in multiple nuclear processes and in tumorigenesis [25]. It also positively regulates the translation of the Myc family of oncogenes, among other processes [26]. Buxade et al. identified PSF as a new intracellular substrate of MNKs in vitro [27]. They identified two phosphorylation sites in PSF, Ser8 (preferentially phosphorylated by MNK2) and Ser283. PSF interacts with mRNAs containing AREs, and phosphorylation by MNKs increases its binding to TNFα mRNA in vivo, although it does not affect the stability or nuclear/cytoplasmic localization of PSF or TNFα mRNA [27]. A more recent study has revealed a role for MNKs in TNFα synthesis through control of the abundance of its mRNA [28], although the involvement of PSF and/or hnRNP A1 has not been determined. The cytoplasmic phospholipase A2 (cPLA2) plays a key role in the production of eicosanoids, which participate in immunity and inflammation. MNK1 phosphorylates cPLA2 at Ser727 in vitro [29], a phosphorylation regulated by the p38 MAPK signaling pathway. This phosphorylation activates cPLA2, which releases arachidonic acid from glycerophospholipids for the production of eicosanoids. Sprouty (Spry) proteins are a group of membrane-associated proteins that suppress the activation and/or signaling of ERK. MNK1 phosphorylates Spry2 at Ser112 and Ser121, stabilizing Spry2 and prolonging its ability to inhibit ERK signaling [30].
Figure 2. Mechanism of action of MNKs. Activation of MNKs occurs through the Ras/Raf/ERK signaling pathway and the p38 MAPK pathway. Likewise, activation of the PI3K/AKT/mTOR pathway in response to growth factors, among others, stimulates the binding of MNK to mTORC1, regulating the formation of the mTORC1/TELO2/DDB1 complex. MNKs phosphorylate eIF4E and other substrates, controlling the expression of specific proteins involved in cell growth, apoptosis and metastasis.
MNK and Cancer
The relationship between eIF4E and cell growth control and neoplastic transformation was first described in 1990 [31]. These authors demonstrated that overexpression of eIF4E in NIH3T3 cells promotes the growth of colonies in soft agar and produces tumors when the cells are inoculated into mice. In addition, inhibition of eIF4E reduces tumor growth and malignancy in experimental models [32]. The increased expression of eIF4E preferentially induces the translation of proteins involved in cancer, such as vascular endothelial growth factor (VEGF) and fibroblast growth factor (FGF), which facilitate angiogenesis; Bcl-2, which participates in cell survival; metalloproteases (MMPs), involved in invasion; and c-Myc, cyclin D1, ornithine decarboxylase (ODC) and the human double minute 2 homolog (HDM2), which regulate cell growth [19,20,33-36].
eIF4E overexpression has been shown in a variety of cancers, including breast, bladder, colon, head and neck, kidney, lung, skin, ovarian and prostate cancers, compared to healthy tissues, and has been related to disease progression (reviewed in [14]). In addition, elevated levels of phosphorylated eIF4E have been found in human cancer tissues obtained from patients with lung, head, colorectal and gastric cancers and primary pancreatic ductal adenocarcinoma [37,38]. Several studies have established that the phosphorylation of eIF4E on Ser209 by MNK1/2 is an absolute requirement for the oncogenic action of eIF4E. The inhibition of MNK activity reduces colony formation in human breast cell lines [39]. On the other hand, overexpression of the oncogene HDM2 in cancer cells is regulated by eIF4E, such that overexpression of eIF4E promotes the export of the HDM2 mRNA in a MAP kinase- and MNK1-dependent manner [35]. In addition, Wendel et al. have shown that the overexpression of a constitutively active MNK1 diminishes apoptosis and accelerates the development of tumors in an experimental mouse model, while an inactive mutant reduces the development of these tumors [36]. Ueda et al. have demonstrated that the absence of MNK1/2 does not alter the normal development of mice, although it delays tumor progression [40].
The activity of eIF4E is also regulated by its availability to participate in the initiation of translation, through binding to the 4E-BP proteins, which form an inactive complex with eIF4E, inhibiting its binding to eIF4G and thereby preventing the formation of the eIF4F complex required for initiating protein synthesis [41]. The mammalian target of rapamycin complex 1 (mTORC1) regulates the assembly of the eIF4F complex through the phosphorylation of 4E-BPs, which causes their dissociation from eIF4E; eIF4E then binds eIF4G, where it becomes available for phosphorylation by MNKs.
The PI3K/AKT/mTOR signaling cascade is one of the most frequently deregulated pathways in cancer, often as a result of genetic alterations and/or mutations [42]. This pathway plays a key role in tumor cell proliferation, survival and development, and its deregulation is closely linked to tumorigenesis and to the sensitivity and resistance to cancer therapies. Growth factors, mitogens and cytokines activate phosphatidylinositol-3 kinase (PI3K), which initiates a cascade of cellular events. The 3-phosphoinositide-dependent kinase-1 (PDK1) activates the protein kinase AKT (also called protein kinase B) which, by means of the inactivation of the tumor suppressor complex TSC1/2, activates the mammalian target of rapamycin (mTOR) complex 1, mTORC1. The activation of PDK1 and AKT by PI3K is negatively regulated by PTEN, a tumor suppressor gene that is usually mutated or silenced in human cancers [43,44]. The loss of the phosphatase and tensin homolog (PTEN) causes the activation of AKT and of mTORC1 signaling. mTORC1 phosphorylates the 4E-BPs and also promotes the activation of the kinase S6K, which phosphorylates ribosomal protein S6 [45]. There is evidence suggesting a compensatory feedback mechanism that links the PI3K/AKT/mTOR and MNK/eIF4E pathways. Thus, it has been demonstrated that prolonged treatment of tumor cell lines or patients with rapamycin, an mTOR inhibitor, or its synthetic analogs (temsirolimus and everolimus) inhibits mTOR function but, controversially, increases eIF4E phosphorylation and AKT activation, resulting in resistance to mTOR-targeted therapy through a compensatory feedback mechanism between the AKT/mTOR and MNK/eIF4E pathways [46][47][48][49].
MNK1 seems to play an important role in the interplay between both pathways (PI3K/AKT/mTOR and MNK/eIF4E) (Figure 2). Some studies suggest that MNKs have an additional role in post-transcriptional gene expression by controlling 7-methyl-guanosine cap-independent translation at internal ribosome entry sites (IRES) located in the 5'-UTR (untranslated region) of some mRNAs in response to mTOR inhibitors. These inhibitors make tumor cell growth dependent on IRES-mediated, cap-independent translation. Shi et al. demonstrated that MNK is a key regulator of rapamycin-induced IRES activity [50]. In another report, these authors demonstrated that MNK1, but not MNK2, regulates IRES-dependent c-Myc translation in multiple myeloma (MM) cells during endoplasmic reticulum (ER) stress [51]. These authors propose that MNK1 stimulates the binding between two of the Myc IRES trans-acting factors (ITAFs), hnRNP A1 and RPS25, during ER stress, facilitating ribosomal loading onto the c-Myc IRES and leading to IRES-dependent translation. Brown et al. have shown that MNK promotes viral IRES-dependent translation of the polio/rhinovirus recombinant PVSRIPO [52], a fundamental requirement for the cytotoxic efficacy of this new oncolytic immunotherapy, currently in phase II clinical trials in adult patients with recurrent grade IV malignant glioma (NCT02986178). The authors propose that MNK, via the stimulation of mTORC1, exerts the inhibition of mTORC2 and AKT, which negatively regulates the Ser/Arg (SR)-rich protein kinase (SRPK) and its substrates, the SR-rich proteins, which are involved in mRNA splicing, export and translation, including viral IRES translation [52,53]. Recently, Brown and Gromeier have demonstrated that MNKs stimulate mTORC1 [54]. MNK binds to mTORC1, which promotes association with TELO2 (a stabilizer of PI3K-related kinases (PIKKs)), and this interaction modulates mTORC1:substrate binding. It is interesting to point out that both mechanisms described above represent a point of convergence between the PI3K/AKT/mTOR and MAPK/MNK pathways that is independent of eIF4E phosphorylation.
MNK isoforms are overexpressed in several types of cancer, such as glioblastoma and lung, liver, ovarian and breast cancer [46,55-59], and their high expression levels are associated with worse prognosis [56-59]. To extend these results, we have performed Kaplan-Meier analysis on mRNA expression data from The Cancer Genome Atlas (TCGA) project (https://tcga-data.nci.nih.gov/tcga/). There is a significant correlation between high MNK1 expression and unfavorable overall survival in kidney, liver and prostate cancer, and between high MNK2 expression and unfavorable overall survival in low-grade glioma and prostate cancer patients (Figure 3). The importance of the overexpression of MNK1 or MNK2 for progression and survival in cancer could depend on the balance between both protein kinases in each tissue, as well as on the ratio between the spliced isoforms a and b. Thus, Maimon et al. found that the expression of MNK2a is decreased in breast, lung and colon tumors, while MNK2b is correspondingly increased [60]. Interestingly, these authors reported that the MNK2 splice variants have opposing roles in tumor development: MNK2a acts as a tumor suppressor while MNK2b has a pro-oncogenic role [60]. The antagonism between MNK2a and MNK2b could also occur for the MNK1 isoforms. In triple-negative breast cancer, MNK1b overexpression was associated with shorter overall and disease-free survival times, and its overexpression by gene transfection facilitates proliferation, migration and invasion in breast cell lines [59].
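A sketch of the survival comparison described above is given below, assuming a median split on expression (the cutoff used for the TCGA analysis is not stated in the text) and a hypothetical input file of survival times, event indicators and expression values; it relies on the `lifelines` package.

```python
# Sketch: Kaplan-Meier comparison of high vs. low MKNK1 expression
# (median split and file name are assumptions, not the authors' pipeline).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tcga_clinical_expression.csv")  # hypothetical TCGA export
df["group"] = (df["MKNK1_expr"] > df["MKNK1_expr"].median()).map(
    {True: "high", False: "low"})

kmf = KaplanMeierFitter()
ax = None
for name, sub in df.groupby("group"):
    kmf.fit(sub["os_months"], event_observed=sub["os_event"], label=name)
    ax = kmf.plot_survival_function(ax=ax)  # overlay both survival curves

res = logrank_test(
    df.loc[df.group == "high", "os_months"],
    df.loc[df.group == "low", "os_months"],
    event_observed_A=df.loc[df.group == "high", "os_event"],
    event_observed_B=df.loc[df.group == "low", "os_event"])
print(res.p_value)  # significance of the survival difference
```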
Although previous studies were aimed at the use of eIF4E as a therapeutic target, the fact that this protein has a fundamental biological role in protein synthesis in normal cells is an obstacle to these strategies. Given that eIF4E and its phosphorylation are associated with processes linked to tumor progression and metastasis in a broad range of tumor types, and that MNKs are not essential [13], pharmacological inhibitors directed against MNKs appear to provide an effective anti-tumor strategy that is not detrimental to non-tumor cells. Furthermore, the combination of MNK and mTOR inhibitors increases the anti-tumor response by inhibiting cell proliferation and inducing apoptosis compared to monotherapy, which has increased the number of studies on combined therapies. We summarize the inhibitors against MNK1/2 described for cancer therapy (Table 1) and the clinical trials currently in progress with MNK inhibitors (Table 2).
Figure 3. Kaplan-Meier overall survival analyses of TCGA data. LGG: brain lower grade glioma; PRAD: prostate adenocarcinoma.
MNK in Hematological Cancers
Hematological malignancies as a whole rank third among cancers globally, after lung and breast cancer. Among the several hematological cancer types, leukemias, lymphomas, and myelomas are the most frequent [113].
The World Health Organization (WHO) defines chronic myeloid leukemia (CML) as a chronic myeloproliferative neoplasm characterized by the presence of the Philadelphia chromosome and the fusion oncogene Bcr-Abl. Inhibition of the Bcr-Abl kinase by imatinib results in durable responses in early-stage CML patients, but less so in late-stage disease, and patients can develop drug resistance. Joint inhibition of MNK and Bcr-Abl, with the MNK inhibitor CGP57380 and imatinib, inhibits polysome assembly, decreasing proliferation and survival [119]. In patients who develop blast crisis (BC-CML), life expectancy is still less than 12 months [120]. The use of imatinib with MNK inhibitors prevents eIF4E phosphorylation in vivo with an antiproliferative effect that could help to combat late-stage disease and to understand other pathways and cellular processes that are dysregulated by Bcr-Abl [119]. In addition, pharmacological targeting of the MNK and mTORC1 kinases, employing rapamycin together with novel MNK inhibitors (MNK1/2 inhibitors 53-54, or MNKI-4 and MNKI-57) or niclosamide (an anthelminthic drug), abolished cell growth by triggering apoptotic cell death and abrogated eIF4E phosphorylation, which may offer a new therapeutic opportunity [76,96,110,121].
Acute lymphocytic leukemia (ALL) consists of the uncontrolled proliferation of an immature cell clone of lymphoid lineage (lymphoblasts) that invades the bone marrow and infiltrates multiple organs and tissues. In this type of hematological cancer, MNK1 overexpression and phosphorylation have been described to activate eIF4E, up-regulating downstream molecules such as MCL-1, c-Myc, survivin and cyclin-dependent kinase (CDK) 2. MNK1 inhibition with CGP57380 prevents these events and can also overcome the eIF4E activation induced by everolimus, sensitizing T-ALL cells to apoptosis [63].
On the other hand, pharmacological inhibition of Bruton's tyrosine kinase (BTK) is effective against a variety of B-cell malignancies. In 2016, a dual BTK/MNK inhibitor called QL-X-138 was developed, showing anti-proliferative effects in vitro and in patient-derived primary cells, particularly for the treatment of chronic lymphocytic leukemia (CLL) [108]. Diffuse large B cell lymphoma (DLBCL) is one of the most common types of lymphoma and accounts for approximately 30-40% of non-Hodgkin lymphoma cases. It is a fast-growing lymphoma with a high proliferation rate and aggressive behavior, with 30% of patients relapsing or refractory to first-line treatment [122]. Most of the MNK inhibitors used in this type of hematological cancer also block eIF4E phosphorylation, as happens in others. Recently, Reich et al. discovered an MNK inhibitor, eFT508, that blocked eIF4E phosphorylation and pro-inflammatory cytokine production without affecting proliferation in vitro and in vivo [107]. This inhibitor is being evaluated in a phase II clinical trial in lymphoma (Table 2). Likewise, Prohibitin (PHB) overexpression is associated with tumor aggressiveness. MNK inhibition by FL3, a synthetic flavagline and ligand of PHBs, produced antitumor activity in vitro and in vivo by inhibiting MNK-dependent eIF4E phosphorylation. This MNK1 inhibition reduced Bcl-2 and c-Myc expression, inducing apoptosis, which could allow the treatment of rituximab-resistant diseases [109].
Figure 4. MNK in hematological cancers. Niclosamide, an anthelminthic drug, affects eIF4E phosphorylation acting upstream of the ERK/MNK/eIF4E pathway. FL3 inhibits PHB, affecting MNK/eIF4E and also eIF4A, and therefore eIF4F complex formation. QL-X-138 targets BTK, affecting the PI3K/AKT/mTOR pathway and MNK1 phosphorylation. CGP57380, pyridine derivatives and SEL201 are also used in combination with rapamycin to enhance mTOR-targeted therapy. Red squares: inhibitors.
Multiple myeloma (MM) is a malignant plasma cell disorder characterized by clonal plasma cell proliferation in the bone marrow and overproduction of monoclonal paraprotein in the blood and/or urine [123]. In 2013, Mehrotra et al. established the regulatory role of MNK pathways as positive effectors in the generation of the antineoplastic effects of type I IFNs in myeloproliferative neoplasms (MPNs) [97]. However, it remains to be elucidated whether downstream effectors of the MNK kinases other than eIF4E are involved in the generation of IFN responses.
MNK in Breast Cancer
According to the WHO, breast cancer is the most frequent cancer among women, affecting 2.1 million women each year, and it also causes the greatest number of cancer-related deaths among women. In 2018, an estimated 627,000 women died from breast cancer, approximately 15% of all cancer deaths among women [113]. This underscores the need for new therapies.
MNKs have a key role in breast cancer. MNK1a and MNK1b protein levels and MNK1b mRNA levels are higher in breast tumor samples than in healthy tissues. In fact, MNK1b levels are significantly increased in triple-negative breast cancer (TNBC) tumors and are associated with poorer overall and disease-free survival [59]. MNK inhibition resulted in reduced proliferation and cell viability, supported by downregulation of cyclin D1 and cell cycle arrest, in MDA-MB-231 cells in response to resorcylic acid lactone (RAL) analogs, CGP57380, rhodanine analogs or pyridine derivatives, and/or to a combination of MNK and PI3K inhibitors [55,98,124–126]. However, the use of different inhibitors reveals other processes affected by MNK blockade. VNLG-152, a retinamide derivative, and its racemic form VNLG-152R degraded MNK1 and blocked eIF4E phosphorylation, producing a decrease in colony formation, migration and invasion, inducing cell death by apoptosis and affecting the cell cycle in MDA-MB-231 and MDA-MB-468 cells, and suppressed the growth of MDA-MB-231 tumor xenografts and of a TNBC patient-derived xenograft (PDX) model [84–86]. The compound MNK-7g is able to inhibit eIF4E phosphorylation and to block the migration of MDA-MB-231 cells without affecting proliferation [99]. However, eIF4E involvement does not always occur after MNK inhibition, so different inhibitors of the kinase may or may not affect eIF4E. The aptamers (ssDNA molecules that adopt tertiary structures) apMNK2F and apMNK3R, which inhibit translation and decrease cell viability, migration, and colony formation in MDA-MB-231 cells [112], and ferrocene analogs, MNK inhibitors that reduce cell viability and spheroid growth [127], have no effect on eIF4E phosphorylation. Thus, the study of possible MNK substrates other than eIF4E is still ongoing. One group identified MNK1 as a target of YB-1, which is overexpressed in trastuzumab-resistant cell lines, and proposed that this role of MNK1 is independent of eIF4E phosphorylation. They demonstrated that MNK1 levels are regulated by phospho-YB-1, which acts downstream of RSK signaling, and that RSK also directly phosphorylates MNK1. Therefore, inhibiting YB-1 function increased sensitivity to trastuzumab by reducing the expression of MNK1, EGFR, MET, CD44 and anti-apoptotic proteins such as MCL-1, cIAP1 and cIAP2 [128]. In another study, a novel mechanism by which MNKs could control the expression of specific proteins was proposed, showing that MNK inhibition increased the association of CYFIP1 with eIF4E. MNKI-1 inhibited eIF4E phosphorylation and the migration of mouse embryonic fibroblasts (MEFs), MDA-MB-231 and SCC25 cells, with little or no effect on cell viability or proliferation [100]. Using the same inhibitor, Tian et al. demonstrated that MNK1 inhibition decreased the phosphorylation of the metastasis suppressor NDRG1 in MDA-MB-231 cells [64] (Figure 5).
The XIAP protein is an apoptosis inhibitor that is overexpressed in high-grade breast cancer and in inflammatory breast cancer (IBC) patient tumors. XIAP is necessary for the constitutive activation of the NFkB pathway in IBC, and the XIAP-NFkB axis directly correlates with the tumor growth rate in vivo. Interruption of MNK signaling led to a reduction in XIAP expression. XIAP mRNA contains an IRES, and MNK regulation may function to facilitate XIAP translation in IBC. Thus, XIAP acts as a link between MAPK and NFkB signaling to control IBC proliferation and tumor aggression [129]. MNK1/NODAL has been identified as a key signaling axis regulating breast cancer progression and recurrence as metastatic disease. MNK1 controls NODAL protein levels, possibly at the level of mRNA translation. The data showed a positive correlation between MNK1 activity and the expression of NODAL and vimentin, regulators of invasion and metastasis. MNK1 inhibition with SEL201 could block NODAL signaling to suppress disease [103]. However, all the possible downstream factors and MNK interactions are yet to be discovered.
The activation of the MNK/eIF4E/β-catenin axis is involved in the breast cancer cell response to chemotherapy. A study has proposed β-catenin as a new eIF4E-targeted tumor-promoting gene, like MCL-1 and cyclin D. These authors showed that CGP57380 and cercosporamide prevent chemotherapy-induced eIF4E phosphorylation and β-catenin activation, inhibiting proliferation and inducing apoptosis of breast cancer cells in vitro and in vivo [65].
Figure 5. MNK in breast cancer. Most of the MNK inhibitors act by inhibiting eIF4E phosphorylation, although there are some exceptions: novel retinamides act through MNK degradation, and the aptamers apMNK2F and apMNK3R and ferrocene analogs act through an unknown mechanism independent of eIF4E phosphorylation. MNK inhibitors induce an increase in the CYFIP1/eIF4E association and a decrease in diverse pro-tumorigenic proteins. PP242, an mTOR inhibitor, and pyridine derivatives are also used in combination to enhance mTOR-targeted therapy. Red squares: inhibitors.
MNK in Lung Cancer
Lung cancer is the most common cancer and the world-leading cause of cancer-related death [113]. Histologically, lung cancer is divided into two main types: small-cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC). SCLC is relatively uncommon (15% of all lung cancer cases) but more metastatic and aggressive than NSCLC. NSCLC is less aggressive but more frequent (85% of all cases) and can be divided into three subtypes: adenocarcinoma (50%), squamous cell carcinoma (30%) and large cell carcinoma (10%) [130].
The active form of MNK1, p-MNK1 (Thr197/202), and phosphorylated eIF4E, p-eIF4E, are increased in lung cancer and correlate with poor overall survival in NSCLC patients [66,131]. In addition, high levels of p-MNK1 might act as an independent poor-prognosis biomarker for these patients [66]. On the other hand, MNK2 is overexpressed in NSCLC, promoting cell proliferation, migration and invasion in vitro and in vivo through the 4E-BP1/eIF4E and ERK/eIF4E pathways. High expression of MNK2 correlates with lymph node metastasis and poor overall survival rates in patients with NSCLC [56]. The MNK2a isoform represents a tumor suppressor mechanism that is lost in some lung tumors [60]. All these data suggest that targeting MNK proteins might be a potential therapeutic strategy in NSCLC patients.
Some MNK inhibitors have been developed in the last few years with satisfactory results, such as BAY 1143269 [61], a potent MNK1 inhibitor identified by high-throughput screening. BAY 1143269 is more potent than CGP57380 and cercosporamide, the classical MNK1/MNK2 inhibitors [67,77]. Treatment with BAY 1143269 inhibits eIF4E phosphorylation and leads to cell cycle deregulation in NSCLC cell lines through a G0/G1 arrest and reduced expression of several cell cycle factors, including different cyclins. Treating NSCLC cell lines with BAY 1143269 also decreases their migratory potential, induces apoptosis and causes a reduction in several key factors of the epithelial-mesenchymal transition (EMT). In addition, this MNK1 inhibitor shows anti-cancer activity as monotherapy in different NSCLC cell lines and PDX models, and combination therapy with chemotherapeutics such as docetaxel significantly improves anticancer activity compared to monotherapy in vivo (Figure 6). BAY 1143269 started to be evaluated in clinical trials as a combination therapy with docetaxel in NSCLC in 2015 (Table 2). Moreover, merestinib (LY2801653), a multi-kinase inhibitor with activity against MNKs among other protein kinases, inhibits tumor growth and metastasis in NSCLC models [78–80], and is currently in phase II clinical trials, not only in NSCLC but also in AML, biliary tract and colorectal cancer (Table 2). Targeting proteins other than MNKs, both the mTORC1/4E-BP1 and MNK1/eIF4E axes are inactivated in NSCLC cell lines by the exportin 1 (XPO1) inhibitor KPT-330, which disrupts the eIF4F translation initiation complex through downregulation of mTOR and MNK1 and of the phosphorylation of mTOR, p70S6K, 4E-BP1, MNK1 and eIF4E [132]. Moreover, TPDHT is a bromophenol-thiazolylhydrazone hybrid that inhibits proliferation in a variety of tumor cells, especially the human lung cancer cell line A549, in which it also induces apoptosis and G0/G1 phase arrest, and it shows antitumor activity in vivo.
TPDHT targets eIF4E, disrupting the eIF4E/eIF4G interaction through the ERK/MNK/eIF4E pathway [133].
On the other hand, the PI3K/AKT/mTOR pathway is frequently activated in NSCLC and involved in lung tumorigenesis [134,135]. Currently, several inhibitors of the PI3K pathway have been developed and are undergoing evaluation in preclinical and clinical studies (reviewed in [136]), especially mTOR inhibitors such as rapamycin and its analogs (rapalogs). However, prolonged treatment of human lung cancer cells with rapalogs results in resistance to mTOR-targeted therapy through the compensatory feedback mechanism between the AKT/mTOR and MNK/eIF4E pathways [48,49]. Therefore, combination therapy with both mTOR and MNK inhibitors might be an effective strategy to enhance mTOR-targeted cancer therapy in NSCLC. Thus, Wen et al. showed that the MNK inhibitor CGP57380 abrogates the eIF4E phosphorylation induced by the mTOR inhibitor RAD001 (everolimus) in NSCLC cells, and the combination of both inhibitors exerts synergistic inhibitory effects on cell proliferation and colony formation and inhibits tumor growth of lung cancer xenografts [66]. In addition, apoptosis is induced by decreasing several anti-apoptotic factors, including MCL-1, whose expression is remarkably increased and correlates with poor prognosis in NSCLC patients and is reduced after CGP57380 treatment in lung cancer cells [68]. Thus, MCL-1 might act as a novel biomarker in these patients.
MNK in Prostate Cancer
Prostate cancer (PCa) is the second most frequent cancer in males and the fifth leading cause of cancer-related death in men [113]. Although there is no direct evidence for MNK overexpression in prostate cancer patient samples, a few studies have shown that elevated eIF4E levels and hyperphosphorylation, which reflect high MNK activity, correlate with disease progression [137,138].
The involvement of the MNK/eIF4E pathway in prostate tumor progression has also been reported in vitro and in animal models. Furic et al. have shown that a knock-in mouse expressing a non-phosphorylatable form of eIF4E was resistant to prostate tumor development in a mouse model based on loss of PTEN function [137]. Polysome profiling of MEFs isolated from the knock-in mouse and of PCa cell lines treated with CGP57380 allowed the identification of MNK/eIF4E-dependent mRNAs such as VEGFC, BIRC2, MMP3, and NFKBIA, encoding pro-tumorigenic proteins whose decrease was associated with resistance to tumor development. Microarray analysis of polysome-associated mRNAs of a different PCa cell line using CGP57380 demonstrated that MNKs are required for the translation of mRNAs involved in the hypoxia response (specifically the transcription factor HIF1α) and cell cycle progression (CDK2, CDK8, CDK9, KAP1, RASSF1, PCNA and PIAS1), which may explain the antiproliferative effect of MNK inhibition in vitro [47] (Figure 7). MNK1 silencing also caused a decrease in the viability of PCa cells [87].
The androgen receptor (AR) is a main driver of PCa development and progression. Initially, patients are treated with androgen-deprivation therapy, but PCa often progresses to castration-resistant prostate cancer (CRPC), with worse prognosis. D'Abronzo et al. have shown that AR inhibition led to an increase in eIF4E phosphorylation and cap-dependent translation that confers resistance to AR antagonists such as bicalutamide [139]. Moreover, CGP57380 or eIF4E depletion sensitizes CRPC to anti-androgen therapies. Since AR plays a pivotal role in prostate cancer and the MNK/eIF4E pathway is involved in cancer resistance and progression, dual AR/MNK inhibition seems a promising PCa therapy. Two families of dual AR/MNK inhibitors, novel retinamides and galeterone analogs, have proven effective in PCa.
Galeterone and galeterone analogs are AR antagonists/degradation inducers and CYP17 inhibitors that lead to the ubiquitin-proteasomal degradation of MNKs and consequently decrease eIF4E phosphorylation. Galeterone and its analog VNPT55 inhibit PCa cell migration and invasion through the downregulation of EMT markers such as Snail, Slug, N-cadherin, vimentin and MMP-2/-9 and the up-regulation of E-cadherin [89]. The next-generation galeterone analogs (NGGAs) VNPP414 and VNPP433-3β induce apoptosis; inhibit proliferation, migration, and invasion by modulating EMT and stem cell markers in PCa cells; and suppress tumor growth in a CRPC xenograft mouse model. Furthermore, galeterone and the NGGAs are effective in enzalutamide-, docetaxel- and mitoxantrone-resistant PCa cells and have a synergistic effect with docetaxel and enzalutamide [90]. Many other MNK inhibitors have shown promising antitumor effects in prostate cancer models, such as 3-azido withaferin A [140], a dual modulator of the Ras/MNK and PI3K/AKT/mTOR pathways, 2H-spiro[cyclohexane-1,3′-imidazo[1′,5-a]pyridine]-1′,5′-dione derivatives [98], and eFT508, currently in phase II trials for patients with advanced CRPC (Table 2).
Figure 7. MNK in prostate cancer. Novel retinamides and galeterone and its analogs promote MNK proteasomal degradation, whereas CGP57380 and pyridine derivatives inhibit eIF4E phosphorylation. Inhibition of MNK by CGP57380 decreases the translation of diverse pro-tumorigenic proteins and TOP mRNAs, which is further reduced by concomitant rapamycin treatment. mTOR and AR inhibitors increase MNK activity as a resistance mechanism. Red squares: inhibitors; green squares: activators.
Bianchini et al. have demonstrated that the compensatory feedback between the PI3K/AKT/mTOR and RAS/MAPK/MNK pathways occurs in prostate carcinomas and preserves tumor progression [47]. PCa cell lines with low AKT/mTOR activity have high levels of eIF4E phosphorylation and a stronger antiproliferative response to MNK inhibition than to rapamycin, while PCa cells with mutated PTEN and a constitutively activated AKT/mTOR pathway have lower eIF4E phosphorylation levels that can be robustly induced by mTOR inhibition and counteracted by co-treatment with CGP57380. In CRPC, resistance to rapamycin was also associated with upregulated eIF4E phosphorylation induced by AR inhibition [139]. In PCa cells, rapamycin-induced eIF4E phosphorylation is mediated by an increase in MNK2 activity dependent on phosphorylation at Ser437 [141]. MNK inhibition alone reduced the polysomal recruitment of terminal oligopyrimidine (TOP) mRNAs, which carry a common sequence at the 5′ end and encode ribosomal proteins and components of the translational machinery. The translation of these mRNAs is mainly regulated by mTORC1 activity in response to growth factors. Concomitant treatment with CGP57380 and rapamycin has additive effects in reducing the polysomal recruitment of TOP mRNAs. This result suggests additional translational control of TOP mRNAs by the MNK/eIF4E pathway. Moreover, simultaneous mTOR and MNK inhibition suppresses protein synthesis, cell proliferation and cell cycle progression, with a decrease in cyclin D1, cyclin A and cyclin B [47].
MNK in Gastrointestinal Cancer
Gastrointestinal cancer includes malignant conditions of the gastrointestinal tract and other organs involved in digestion, including the esophagus, stomach, biliary system, pancreas, small intestine, large intestine, rectum and anus (colorectal tract). Altogether, these tumors are responsible for more cancer deaths than those of any other body system.
Pancreatic ductal adenocarcinoma (PDAC) is the most common type of pancreatic cancer and the seventh leading cause of cancer death, with one of the highest mortality/incidence ratios among solid tumors [113]. The advanced stage at diagnosis and limited or ineffective chemo- and radiotherapies contribute to the poor outcome. Gemcitabine, one of the first-line chemotherapies used in PDAC, has limited efficacy due to the development of chemoresistance, partially associated with the dysregulation of mRNA translation. Adesso et al. showed that gemcitabine upregulates the oncogenic splicing factor SRSF1 and promotes splicing of MNK2, increasing the MNK2b isoform, which confers drug resistance by increasing eIF4E phosphorylation [38]. Accordingly, MNK inhibition by CGP57380, and MNK2 or SRSF1 silencing, synergistically enhance the anti-tumor effect of gemcitabine by promoting apoptosis. Moreover, MNK inhibition increases the cytostatic effect of cisplatin and rapamycin in PDAC cells. MNKs may be involved in PDAC pathogenesis and progression, as MNK1 is highly expressed in pancreatic acinar cells in mice and is activated upon induction of pancreatitis, a major risk factor for the development of PDAC [69]. Furthermore, increased eIF4E phosphorylation is associated with higher tumor grade, early disease onset and worse prognosis [38]. MNKs also play an important role in regulating EMT in PDAC cells. Pharmacological inhibition and genetic depletion of MNKs, mainly MNK2, reduce protein expression of the EMT-activating transcription factor ZEB1, leading to the reversion of EMT and a decrease in the migration and invasion of PDAC cells. In addition, MNK inhibition decreases cell growth and reverses EMT, increasing E-cadherin mRNA levels and decreasing vimentin mRNA levels, in human PDAC organoids. Kumar et al. also showed that 5-fluorouracil-chemoresistant PDAC cells, which have undergone EMT and have increased ZEB1 levels, are sensitive to MNK inhibition and harbor fewer cancer stem cells after CGP57380 treatment [70]. Kwegyir-Afful et al. demonstrated that galeterone and its analogs (VNPT55, VNPP414, and VNPP433-3β) synergize with gemcitabine, inhibit the migration, invasion and proliferation of gemcitabine-naïve and gemcitabine-resistant PDAC cells through downregulation of MNK1/2, and suppress tumor growth of PDAC xenografts in mice [91] (Figure 8). Based on these results, galeterone is currently in clinical trials (phase II) in advanced PDAC, alone and in combination with gemcitabine (Table 2).
Colorectal cancer (CRC) is the fourth most commonly diagnosed cancer and the second cause of cancer death in the world [113]. Several MNK1/2 inhibitors have been tested in CRC, such as cercosporamide, which suppresses eIF4E phosphorylation in colon cancer cell lines and blocks the growth of HCT116 colon carcinoma xenograft tumors [77]; the 6-hydroxy-4-methoxy-3-methylbenzofuran-7-carboxamide derivatives 5o and 8k, which also exhibit anti-proliferative activity and block eIF4E phosphorylation in the CRC HCT-116 cell line [82]; 42i, a pyridine-aminal derivative synthesized as an MNK1/2 inhibitor, which significantly blocks eIF4E phosphorylation in the colon cancer CT-26 cell line and inhibits tumor growth in a CT-26 allograft model [83]; and 4t, a pyridine derivative with anti-proliferative activity against CRC cell lines, among others [98]. Targeting proteins other than MNKs, two compounds have been found to downregulate MNK1 in CRC, CDKI-73 [142] and metformin [143]. Regarding MNK inhibitors in clinical trials, the MNK1/2 inhibitor eFT508 is being evaluated in phase II in colorectal cancer patients, alone or in combination with avelumab, and merestinib is being evaluated in phase I in combination with ramucirumab in colorectal cancer patients (Table 2). High expression of MNK1 is more frequent in hepatocellular carcinoma (HCC) tissues than in normal liver tissues and correlates with poor overall survival. This MNK1 overexpression enhances proliferation, migration and invasion in HCC cell lines [58]. Cercosporamide blocks eIF4E phosphorylation, inhibiting proliferation and angiogenesis in HCC cell lines, and, in combination with cisplatin, results in greater efficacy than either drug alone in vitro and in vivo [144]. Regarding MNK inhibitors in clinical trials, eFT508 was evaluated in a phase II trial in hepatocellular carcinoma patients, and merestinib is being evaluated in phase I in combination with cisplatin and gemcitabine in cholangiocarcinoma and biliary tract carcinoma patients, among others (Table 2).
MNK in Brain and CNS Tumors
Gliomas are the most common primary brain tumors in adults and arise from the glial tissue. Based on histological criteria, the WHO has classified diffuse gliomas into lower-grade astrocytomas or oligodendrogliomas and high-grade astrocytomas, also known as glioblastoma multiforme (GBM), the most prevalent and aggressive type of brain cancer [145]. Clinical studies have demonstrated higher MNK1 protein expression in GBM tumor samples and glioma cell lines than in non-tumorous brain tissue and normal human astrocytes, respectively. Microarray analysis also revealed upregulated MNK1 transcript levels in GBM and lower-grade astrocytomas, without changes in MNK2 expression [46]. Moreover, there are higher levels of p-MNK1 and its substrate p-eIF4E in astrocytoma tissues compared to normal brain tissues, which were associated with tumor recurrence. Furthermore, p-MNK1 levels were positively correlated with p-eIF4E levels, which in turn are associated with astrocytoma grade, tumor size, and unfavorable prognosis, and inversely correlated with overall survival rates [146]. Across GBM subtypes, both the MKNK1 and MKNK2 genes are highly expressed in mesenchymal-subtype GBM, while only MKNK1 expression correlates with the mesenchymal glioma stem cell marker CD44, and upregulation of both genes predicts poor survival in GBM [81,147]. Several studies have shown an oncogenic role for MNK1 and MNK2 in glioma development. MNK1 knockdown, as well as inhibition by CGP57380, decreases oncogenic activity in vitro and in vivo, along with a significant reduction in eIF4E phosphorylation, in human glioma cell lines [40,46]. Primary central nervous system lymphoma (PCNSL) is an uncommon subtype of extranodal non-Hodgkin's lymphoma that arises inside the central nervous system. Muta et al. reported overexpression of both phosphorylated and unphosphorylated eIF4E in samples from PCNSL patients and demonstrated the contribution of the MNK1/eIF4E pathway to the growth of brain malignant lymphoma cells in vitro and in a mouse xenograft model using CGP57380 [71].
MNKs might regulate a specific set of genes depending on the cancer type or the particular signaling triggered by different therapies. Some specific MNK1 targets have been described in glioma. Microarray analysis of polysome-associated RNAs in the MNK1-depleted BS125 GBM cell line revealed that MNK1 regulates the translation of proteins involved in TGFβ (transforming growth factor β) signaling. In particular, SMAD2, one of the main TGFβ signal transducers, was decreased after MNK1 knockdown or inhibition by CGP57380 and was positively correlated with MNK1 expression in GBM samples. Inhibition of SMAD2 protein synthesis led to decreased vimentin expression, which could be associated with MNK1 control of cellular motility [46]. There is another point of convergence between the TGFβ and MNK1 pathways. Several studies have described the activation of MNK1 through increased phosphorylation of the upstream p38 and ERK1/2 kinases after TGFβ treatment [46,148], which facilitates the translation of pro-metastatic mRNAs such as Snail and MMP3 [149]. In PCNSL, treatment with CGP57380 reduced cyclin D1 expression, which was associated with lower proliferation rates [71] (Figure 9).
As stated above, targeting mTOR and MNK simultaneously may increase efficacy against cancer. Thus, the combination of CGP57380 and rapamycin or RAD001 (everolimus) has an additive antiproliferative effect in glioma cell lines and reduces tumor growth in an orthotopic GBM mouse model [46,72]. In the same way, depletion of MNK1 by specific siRNAs enhanced sensitivity to rapamycin, while MNK1 overexpression reduced its inhibitory effects, with a positive correlation between MNK1 protein levels and rapamycin resistance [46]. Concomitant treatment of RAD001 with CGP57380 or MNK1 knockdown abrogates RAD001-induced eIF4E phosphorylation and additionally inhibits MNK1-dependent phosphorylation of 4EBP1 at serine 65. Consequently, the 4EBP1-eIF4E binding ratio is increased, which results in a marked decrease in global protein translation that could be related to the additive growth-inhibitory effects [46,72].
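Terms such as "additive" and "synergistic" used throughout this section can be quantified with simple reference models. The sketch below applies the Bliss independence criterion to hypothetical single-agent and combination inhibition values; the numbers are invented for illustration and do not come from the cited studies.

```python
# Bliss independence: expected combined inhibition of two independently
# acting drugs. fa and fb are fractional inhibitions (0..1) of each agent.
def bliss_expected(fa: float, fb: float) -> float:
    return fa + fb - fa * fb

fa, fb = 0.40, 0.30          # e.g., MNK inhibitor alone, mTOR inhibitor alone
observed = 0.75              # inhibition measured for the combination

expected = bliss_expected(fa, fb)   # 0.58 under independence
excess = observed - expected        # > 0 suggests synergy, ~0 additivity
print(f"expected {expected:.2f}, observed {observed:.2f}, excess {excess:+.2f}")
```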
Medulloblastoma is an embryonal tumor of the cerebellum and among the most frequent malignant childhood brain tumors [150]. In medulloblastoma cell lines, targeting rapamycin-induced, MNK2-mediated eIF4E phosphorylation enhanced the antineoplastic effect, and this phosphorylation is independent of the canonical MAPK MNK-activating pathway [73].
Additional studies have reported MNK-mediated altered regulation of translation initiation as a resistance mechanism to other antitumor glioma therapies, such as temozolomide (TMZ) [151], arsenic trioxide (ATO) [147] and ionizing radiation [152], all of them related to increases in cap-dependent translation. TMZ, a chemotherapeutic DNA-damaging drug currently used in the standard treatment of GBM, increases eIF4E phosphorylation in glioma cells. ERK and MNK inhibitors, as well as specific MNK1 depletion, inhibit TMZ-induced eIF4E phosphorylation and enhance the antiproliferative effects of TMZ, suggesting a key role of ERK/MNK1/eIF4E in TMZ resistance. Quantitative phosphoproteomic analysis in the presence of TMZ has shown that MNK affects the phosphorylation status of proteins involved in the cellular response to stress and DNA damage, and has allowed the identification of MNK-dependent eIF4G1 phosphorylation sites, which are necessary for eIF4E phosphorylation and the TMZ resistance mechanism of glioma cells [151]. ATO, an FDA-approved drug for leukemia, is currently in phase I/II clinical trials in GBM. In GBM cells, ATO increases eIF4E phosphorylation and the translation of anti-apoptotic mRNAs by directly binding and activating MNK1. In addition, resistance to ATO was positively correlated with eIF4E phosphorylation in an intracranial GBM PDX model, while patients from an ATO clinical trial with lower ATO response had higher MNK activity. CGP57380 and MNK1 silencing prevent ATO-induced eIF4E phosphorylation and increase the antiproliferative effect of ATO, which points to the MNK1/eIF4E pathway as a resistance mechanism to ATO [147]. These results suggest an attractive approach to overcoming resistance mechanisms or sensitizing cancer to chemotherapy by targeting MNK activity in combination drug therapies.
Figure 9. MNK in brain and CNS tumors. CGP57380, cabozantinib and merestinib act by inhibiting eIF4E phosphorylation. MNK1 knockdown or inhibition by CGP57380 reduces SMAD2 expression and consequently TGFβ canonical signaling. MNK activity is increased by TGFβ non-canonical signaling and by antitumor glioma therapies such as TMZ, ATO and mTOR inhibitors. CGP57380 in combination with everolimus inhibits MNK1-dependent phosphorylation of 4EBP1 and enhances mTOR-targeted therapy. Red squares: inhibitors; green squares: activators.
Some studies have evidenced a synergistic effect of MNK inhibition combined with other targeted therapies in central nervous system tumors. In malignant peripheral nerve sheath tumors (MPNSTs), a rare and aggressive sarcoma subtype of neural origin, Lock et al. have demonstrated high MNK/eIF4E activity in primary human tumors and an enhanced antineoplastic effect of the MEK inhibitor PD901 combined with MNK knockdown or inhibition in vitro and in vivo, through a mechanism dependent on eIF4E phosphorylation levels [111]. Furthermore, MNK inhibition has additive antitumor effects in combination with alpelisib, a PI3Kα inhibitor, in medulloblastoma mouse xenograft models [153].
Merestinib inhibits tumor growth and has anti-angiogenic and anti-proliferative effects in a subcutaneous and intracranial GBM xenograft model [78,81]. Antitumor effects of merestinib in GBM lines and patient-derived mesenchymal glioma stem cell lines are related to the inhibition of eIF4E phosphorylation and a decrease in global protein synthesis and cyclin D1/D2 translation [81]. Cabozantinib, another MET/multikinase inhibitor with MNKs as direct targets, exhibited anti-tumor activity in vitro and in a genetically engineered mouse MPNST model by suppressing eIF4E phosphorylation [111].
MNK in Other Solid Tumors
MNK activity in thyroid cancer has mainly been associated with drug resistance mechanisms. In thyroid cancer cell lines, an increase in MNK-dependent eIF4E phosphorylation is observed after DNA damage by cisplatin [154] or by radionuclide therapy with radiolabeled gastrin analogs [151], and after inhibition of BET proteins, which function as transcriptional co-activators of oncogenic genes [155]. MNK inhibition or knockdown blocks this increased eIF4E phosphorylation and enhances the antitumor effects of these therapies in vitro [151] and in vivo [154,155]. In addition, MNK inhibition or genetic depletion alone inhibits proliferation and induces apoptosis of anaplastic thyroid cancer cells, the most aggressive type of thyroid cancer, and reduces tumor growth in a mouse xenograft model [154].
In skin cancer, melanoma is the most fatal type due to its metastatic and aggressive characteristics. MNK1 activity is associated with invasion and metastasis in different types of melanoma, including KIT-mutant [104] and BRAFV600E-mutant melanoma [105]. The MNK1/2 inhibitor SEL201 inhibits invasion both in melanoma cells and in in vivo melanoma models [102,104,105]. Yang et al. have demonstrated that MNK1 regulates melanoma metastasis through the upregulation of the angiopoietin-like 4 (ANGPTL4) protein, a regulator of MMPs, thus enabling the subsequent expression of MMPs that promote melanoma cell invasion [105].
MNKs are also involved in gynecologic cancers. Thus, higher levels of MNK1 in epithelial ovarian cancer indicate poorer clinical outcomes, and MNK1 knockdown and inhibition decreased ovarian cancer cell viability [57]. Liu et al. demonstrated that MNK1 is involved in the resistance of ovarian cancer cells to chemotherapy [156]. They observed increased phosphorylation levels of ERK, MNK1, and eIF4E in ovarian cancer cells exposed to chemotherapy, as well as in ovarian cancer patients; p-eIF4E overexpression resulted in resistance, whereas eIF4E depletion sensitized ovarian cancer cells. In cervical cancer, MNK regulates the Wnt/β-catenin pathway. Activated MNK/eIF4E induced the activation of Wnt/β-catenin in cancer cells, but not in normal cervical cells, through increased eIF4E expression and phosphorylation, which promoted growth and migration. In parallel, MNK inhibition prevented eIF4E-mediated Wnt/β-catenin activation, decreasing cervical cancer growth, migration, and survival. The combination of CGP57380 and paclitaxel achieved greater activity than either drug alone [101].
There is overexpression of p-eIF4E and p-MNK1 in nasopharyngeal carcinoma (NPC) compared to non-cancerous nasopharyngeal epithelial tissues, which is associated with lymph node metastasis and poor survival [157,158]. In NPC, β-catenin signaling is aberrantly activated, which is a factor of poor prognosis, and p-eIF4E has been reported to activate the Wnt/β-catenin pathway. CGP57380 decreases proliferation, colony formation, migration and invasion in NPC cell lines and in vivo through the downregulation of nuclear β-catenin. Accumulation of β-catenin in the cytoplasm enhances intercellular adhesion and reduces the expression of EMT markers such as vimentin, N-cadherin and Slug [74]. Pyrimidine derivatives have also acquired relevance in NPC, such as 12j, an MNK1 inhibitor that blocks kinase activity, inhibits cell proliferation and induces apoptosis in NPC and renal cell carcinoma (RCC) cells [75]. In clear cell RCC, one of the most common neoplasms of the kidney, high levels of p-eIF4E are associated with a longer recurrence-free interval after nephrectomy. In human RCC cell lines, eIF4E phosphorylation depends mainly on the MNK2a isoform, and its inhibition with CGP57380 or specific genetic depletion of MNK2a enhanced migration, invasion and vimentin expression, which suggests that MNK2a may function as a metastasis suppressor [159].
Conclusions
MNK proteins are key actors in tumor progression and metastasis in many human tumors, playing an important role in controlling the expression of specific proteins involved in the cell cycle, cell survival and cell motility. In recent years, new key proteins implicated in tumor biology have been added to those directly regulated by MNKs. Thus, MNKs regulate the translation of proteins involved in cell growth, such as c-Myc, cyclin D1, β-catenin and CDK2; in anti-apoptotic processes, such as MCL-1, XIAP and survivin; and in metastatic processes, migration, invasion, and EMT, like NODAL, SMAD2, NDRG1, ANGPTL4 and ZEB1. The study of the exact mechanisms by which MNKs exert tumorigenic effects in the different cancer types has been highly relevant for considering these proteins as potential therapeutic targets. In fact, it has been shown that, in addition to phosphorylating eIF4E, MNKs can act through other substrates such as hnRNP A1, PSF and Sprouty 2 (Figure 2).
Moreover, MNKs seem to play an important role in the interplay between the Ras/MNK and PI3K/AKT/mTOR pathways, two critical signaling pathways involved in tumorigenesis and chemoresistance that are frequently deregulated in a broad variety of cancers.
From these results, regulating the expression or activity of MNKs has become a therapeutic strategy of enormous relevance. For this reason, in recent years many investigations have aimed at developing MNK inhibitor molecules to neutralize the tumorigenic effect of these proteins. Recently developed inhibitors, some of which are already in different phases of clinical trials, open a window of hope for pharmacological treatment targeting MNKs, in monotherapy or in combination therapy, in many tumors.
"year": 2020,
"sha1": "1e8e309fd689c9bdd39257884e2bb5ba21072861",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/8/2967/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5bb002c259c08f6214fb402c967ecceaa0e55ef5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Interest in cryptic species has increased significantly with current progress in genetic methods. The large number of cryptic species suggests that the resolution of traditional morphological techniques may be insufficient for taxonomic research. However, some species now considered to be cryptic may, in fact, be designated pseudocryptic after close morphological examination. Thus the "cryptic or pseudocryptic" dilemma speaks to the resolution of morphological analysis and its utility for identifying species. We address this dilemma first by systematically reviewing data published from 1980 to 2013 on cryptic species of Copepoda and then by performing an in-depth morphological study of the former Eurytemora affinis complex of cryptic species. Analysis of the published data showed that, in 5 of 24 revisions eligible for systematic review, cryptic species assignment was based solely on the genetic variation of forms, without detailed morphological analysis to confirm the assignment. Therefore, some newly described cryptic species might be designated pseudocryptic under more detailed morphological analysis, as happened with the Eurytemora affinis complex. Earlier genetic analyses of the complex found high levels of heterogeneity without morphological differences, and the complex was argued to be cryptic; however, subsequent detailed morphological analyses allowed several valid species to be described. Our study of this species complex, using in-depth statistical analyses not usually applied when describing new species, confirmed considerable differences between the former cryptic species. In particular, fluctuating asymmetry (FA), the random variation of left and right structures, differed significantly between forms and provided independent information about their status. Our work showed that multivariate statistical approaches, such as principal component analysis, can be powerful techniques for the morphological discrimination of cryptic taxa. Despite increasing cryptic species designations, morphological techniques have great potential in copepod taxonomy.
Introduction
Cryptic species are usually understood to be species that are difficult to distinguish using traditional systematics methods (Knowlton 1993), or species that "are classified as a single nominal species because they are at least superficially morphologically indistinguishable" (Bickford et al. 2007). Traditionally, taxonomists have utilized morphological analysis for the description of species, but new genetic methods have significantly increased interest in cryptic species in recent decades (Jörger and Schrödl 2013). Understanding cryptic biodiversity is important for resolving practical conservation questions, in studies of pathogenic organisms, and for addressing theoretical problems of speciation (Bickford et al. 2007). It is also relevant for ecology, particularly for understanding the fundamental relation between species and their ecological niches (Marrone et al. 2013).
Different researchers have different opinions about the nature of cryptic species. Some authors consider cryptic species to represent the initial stage of speciation, after newly originated forms have obtained reproductive isolation but before they have developed detectable morphological differences; that is, cryptic species are evolutionarily young forms that are genetically more similar than ordinary species in a group. Other authors consider genetic distances between cryptic species to be similar to distances between ordinary species, so they do not represent the initial stage of speciation. Empirical examples may support both opinions. For instance, studies have found that a number of coccolithophores considered to be cryptic are genetically very close to each other (Saez and Lozano 2005; Saez et al. 2008). At the same time, other studies of crustaceans and fish show less genetic similarity among cryptic species (Colborn et al. 2001; Lefebure et al. 2006). In our study, we will not focus on the biological nature of cryptic species but, rather, consider methodological questions.
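The genetic distances discussed above are typically reported as uncorrected pairwise divergences (p-distances) between aligned gene sequences. The following minimal sketch shows such a computation; the two short sequences are invented placeholders, not real haplotypes.

```python
# Uncorrected p-distance between two equal-length aligned sequences,
# ignoring alignment gaps.
def p_distance(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    differences = sum(a != b for a, b in pairs)
    return differences / len(pairs)

s1 = "ATGCGTACGTTAGC"
s2 = "ATGCGTACCTTGGC"
print(f"p-distance = {p_distance(s1, s2):.3f}")  # 2/14 ≈ 0.143, i.e., 14.3%
```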
There is quite a high probability that many species considered to be cryptic are, in fact, pseudocryptic, that is, they are included in this category because of "the inadequacy of the morphological analysis" (Knowlton 1993). This inadequacy is not because of fundamental limitations of morphological methods, but due to insufficient thoroughness in their application during species description. Careful morphological analysis of species originally considered cryptic, based on morphological similarity in conjunction with genetic, ecological, or behavioral differences, can often establish morphological traits sufficient for distinct identification (Gomez et al. 2004;Dayrat 2005;Saez and Lozano 2005;Will et al. 2005;Cardoso et al. 2009). To find such traits, one may need to study different life stages. This is clearly demonstrated by the butterfly Astraptes fulgerator, in which lineages are indistinguishable in the adult stage, but clearly detectable in caterpillars (Hebert et al. 2004). Such cases are more properly termed pseudocryptic species.
Why is it important to differentiate between true cryptic and pseudocryptic species? The existence of true cryptic species shows that morphological analysis has fundamental limitations for discriminating among species. As it is insufficient for describing biodiversity at the species level, nonmorphological techniques such as genetic analysis and investigations of behavioral, physiological, and other traits must be employed. Mayr (1963) assigned great importance to cryptic species (or sibling species in his original terminology) in his attack on the morphological concept of species. However, the existence of pseudocryptic species means that morphological methods may be capable of resolving fine-scale differences among species if their potential is fully utilized. Therefore, the "cryptic or pseudocryptic" dilemma speaks to the resolution of morphological analysis in taxonomical studies, in other words, to the utility of morphological methods for identifying species. According to Knowlton (1993), one might expect that cryptic species are more common in marine environments because it is more difficult to study marine organisms than terrestrial ones. In addition, marine organisms rely on chemical signals for gamete recognition and mate choice, and depend less on vision during reproduction than terrestrial organisms. The copepods selected for this study are typical aquatic organisms possessing these features, and cryptic species are common among them.
We pay special attention to the copepod Eurytemora affinis species complex, in which cryptic species have been intensively studied. E. affinis is distributed in brackish waters of the North Pacific and Atlantic Oceans and in some freshwater lake basins. Until recently, most authors considered it a single species (Rylov 1922; Croskery 1978; Dussart and Defaye 2002). However, subsequent genetic studies and the crossbreeding of animals from different regions showed significant genetic differences (Knowlton 1993; Lee 1999, 2000; Lee and Frost 2002; S. Souissi, pers. comm.). Four main clades (European, Asian, North American, and North Pacific) were observed within the species, with maximum pairwise divergences of 10% in 16S rRNA and 19% in COI genes (Lee 2000). Genetic heterogeneity of 1.7-12.4% in COI and 4-6% in 16S rRNA within each clade was also found (Winkler et al. 2008, 2011). The genetic differences in mitochondrial genes among E. affinis clades correspond to species-level differences in Eurytemora and in Copepoda in general (Bucklin et al. 1998; Lee 2000; Lefébure et al. 2006; McManus and Katz 2009). Lee and Frost (2002) carried out a morphological analysis of samples collected from 43 locations throughout the range to describe the main patterns of morphological variation and compare them with genetic variation. The authors concluded that samples of E. affinis were significantly more heterogeneous genetically than morphologically. They explained this in terms of morphological stasis within the group and concluded that E. affinis represented a complex of cryptic species. However, recently, two separate species, E. carolleeae Alekseev et Souissi and E. caspica Sukhikh et Alekseev, have been described within the E. affinis complex using genetic and morphological techniques (Alekseev and Souissi 2011). Therefore, this complex represents a convenient model for studying relationships between cryptic and pseudocryptic species. Along with traditional analysis of mean values of morphological characters, we also studied fluctuating asymmetry (FA), the random deviations from perfect symmetry, a measure of developmental instability (Zakharov 1989) that represents a stochastic component of phenotypic variance (Lajus et al. 2003). FA is now often used for monitoring stress of different origins (Graham et al. 2010; Beasley et al. 2013). This study has two objectives. First, because no protocol currently exists for assigning cryptic status to a species, we analyze the available literature on cryptic species in Copepoda to discover what justifications appear in practical scientific work. Second, we perform a detailed morphological analysis of forms suggested earlier as cryptic species within the E. affinis complex, based on trait mean values, principal component analysis, and traditional indices, to search for heterogeneity within the groups, to determine or confirm their actual status, and to show the robustness and potential of the methods used. We also compared the samples by fluctuating asymmetry (FA) as an additional morphological marker.
Terminology
The terms used in relation to morphologically similar forms are diverse and numerous. Whenever possible in this study, we use the term "cryptic species" in its most generic sense. However, we must also mention other terminology. "Twin species" or "sister species" are morphologically similar species with minimal genetic distance between them and sharing a common ancestor unique to them (Borkin et al. 2004;Bickford et al. 2007). "Sibling species" are characterized by greater genetic distances than twin species, distances that are similar to those of "usual" species. "Semispecies" are slightly divergent geographical replacement species that may hybridize infrequently where they overlap (Mallet 2001). Clearly, when quantitative criteria of divergence are absent, these terms are used very subjectively. The term "form" in this study refers to taxa of different or unknown rank. "Clade" is used to emphasize monophyletic origin of a taxon.
Systematic review of Copepoda cryptic species
A taxonomical revision may produce different results: (1) confirmation of the status of the "old" form because of insufficient differences between intraspecific forms (if any); (2) subdivision of the "old" species into "usual" species if the differences between forms are great; or (3) designation as cryptic species. Although the total number of revisions was notably greater, here we considered only revisions in the third group.
We analyzed the available data on cryptic species of Copepoda found in literature published in English from 1980 to 2013, performing our last search on 31 January 2014. For public domain Internet and database searches (OvidSP, ScienceResearch, ScienceDirect, eLibrary.ru, Google Scholar, HighWire Press and home pages of scientific publishers Springer, PLOS One, and Blackwell Publishers), we used the keywords "cryptic species," "twin species," "sibling species," "sister species," "semispecies," and "clade." Only publications in peer-reviewed international journals were included.
Initially, the literature search identified 33,230 potentially relevant abstracts, from which 100 were retrieved and 24 were included in this review after full examination. Two researchers performed the literature searches and data extraction: the first extracted data from the listed sources, and the second double-checked this work. Disagreements between researchers were resolved by consensus.
For each publication, we identified the information used to prove that the forms under consideration differed at the species level. Then, we listed any morphological studies that had also been performed. If no morphological analyses had occurred at that time, we identified previous analyses referred to in the study. We used only the authors' terminology. Note that, given the subjectivity of these terms, forms can have different biological natures even when they have the same names.
Sampling and preservation of Eurytemora affinis
The material for this study was collected from surface water layers (1-2 m deep) using a 100-µm plankton net deployed from a boat or from shore. Most samples were preserved in 95% ethanol, but samples from the Caspian Sea were preserved in a 4% formalin solution.
Identification of samples
Genetic identification of samples was accomplished using the mitochondrial COI gene. In Baltic Sea locations (Gulf of Finland, Gulf of Riga, and the Vistula Lagoon), where the two species E. affinis and E. carolleeae coexist, most individuals were identified genetically as described in our previous study. This identification was based on published data (Lee 1999, 2000; Lee and Frost 2002) in which European, American, and Asian forms were described. We provided comparisons of our data with the published sequences of the European and American forms (Alekseev et al. 2009) and deposited sequences in GenBank (HM368364, HM473958-HM474035). In locations where overlap in the ranges of different forms is unknown, we relied on published studies (Lee 2000; Winkler et al. 2008, 2011) where detailed analyses of COI gene sequences have been performed.
Based on genetic data, taxonomical keys using morphological traits were used to identify the newly described species (Alekseev and Souissi 2011), and the same keys were then used to identify the rest of the individuals.
Morphological analyses
Samples used for morphological analysis are described in Table 1. We performed analyses of two datasets using structures of the caudal rami, the protopodite of swimming leg 5 (P5), and the protopodite of swimming leg 4 (P4), which are typically employed in taxonomical studies of Eurytemora (Gaviria and Forro 2000; Suárez-Morales et al. 2008; Dodson et al. 2010; Alekseev and Souissi 2011). Copepod adults were measured under a dissection microscope (Olympus SZX2, Tokyo, Japan) with an ocular micrometer (5-µm resolution). Only females were used for the analyses. Type material of E. carolleeae and E. caspica was studied in the type collection of the Zoological Institute of the Russian Academy of Sciences under reference numbers 55052-55054 and 55060-55063.
The first dataset was analyzed to obtain an overall picture of the morphological heterogeneity of the E. affinis species complex. We used six traits, CrL and CrW (on the caudal rami) and LongSp, Sp1, Sp2, and Sp3 (on P5) (Fig. 1), and included 231 specimens from nine populations and three species: E. affinis, E. carolleeae, and E. caspica. Samples collected at the same location in different years were pooled. We also pooled samples from the Loire River and Gironde estuaries based on their morphological similarity and geographical proximity.
The second dataset was used for an in-depth morphological comparison of the two species E. affinis and E. carolleeae, the latter described by Alekseev and Souissi (2011), which coexist in the Gulf of Finland (Table 1). The analysis involved 58 specimens, and each species was represented by two populations. The number of traits was expanded to 16 (Fig. 1). This comparison focused primarily on the different species (former cryptic species), while the first dataset focused on groups from different localities. The larger set of traits in the second dataset also allowed us to study variation in traditional indices. Our morphological analyses include specimens used in previous studies (Alekseev et al. 2009; Alekseev and Souissi 2011). All traits were measured on both the left and right sides of the body. Mean trait values (averaged between left and right) were processed using principal component analysis. Pairwise comparisons were performed using the Student t-test. Heterogeneity among multiple samples was analyzed using one-way ANOVA. In the case of multiple comparisons, we used the Bonferroni correction (Rice 1989; Armstrong 2014).
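As a rough illustration of this pipeline (our sketch, not the authors' code), the snippet below runs a PCA on a trait matrix and applies Bonferroni-corrected pairwise t-tests to the component scores; the trait values and sample labels are randomly generated placeholders, not the actual measurements.

```python
# Sketch of the PCA-plus-pairwise-t-test pipeline (illustrative only;
# the trait values and sample labels below are placeholders, not real data).
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(231, 6))            # 231 specimens x 6 traits (mean of L/R)
labels = rng.integers(0, 9, size=231)    # assignment to 9 population samples

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                       # specimen scores on PC1..PC6
explained = s**2 / np.sum(s**2)          # fraction of variance per PC

# Pairwise Student t-tests on PC2 scores with Bonferroni correction
pairs = list(combinations(range(9), 2))  # 36 pairwise comparisons
alpha = 0.05 / len(pairs)                # Bonferroni-adjusted threshold
for i, j in pairs:
    t, p = stats.ttest_ind(scores[labels == i, 1], scores[labels == j, 1])
    if p < alpha:
        print(f"samples {i} vs {j}: t = {t:.2f}, significant after correction")
```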
To understand how the small sample sizes of 3-5 specimens used in some previous studies (Lee and Frost 2002) may affect the results of morphological discrimination, we performed a simulation using our first dataset. We calculated the number of significant (P < 0.05) pairwise comparisons for several PCs at sample sizes ranging from 2 to 13. For each sample size we used a different number of trials, avoiding, as much as possible, reuse of the same specimens: for N = 2 and 3 we used five trials; for N = 4, four trials; for N = 5, three trials; for N = 6-10, two trials; and for N = 11-13, one trial. We then averaged the results of the individual trials and divided the obtained averages by the number of significant pairwise comparisons in the initial dataset (N = 231) to standardize them among PCs (the theoretical maximum for a dataset of nine samples is 36). The resulting values were averaged over the different PCs.
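A minimal sketch of this subsampling experiment is given below; it reuses the placeholder `scores`, `labels`, and `rng` arrays from the previous snippet and is not the authors' original procedure.

```python
# Sketch of the sample-size simulation: subsample each group to size n,
# count significant pairwise t-tests on one PC, and average over trials.
import numpy as np
from itertools import combinations
from scipy import stats

def count_significant(scores_pc, labels, n, trials, rng, alpha=0.05):
    counts = []
    for _ in range(trials):
        sig = 0
        for i, j in combinations(range(9), 2):
            a = rng.choice(scores_pc[labels == i], size=n, replace=False)
            b = rng.choice(scores_pc[labels == j], size=n, replace=False)
            if stats.ttest_ind(a, b).pvalue < alpha:
                sig += 1
        counts.append(sig)
    return np.mean(counts)

# Average number of significant comparisons for sample sizes 2..13
for n in range(2, 14):
    print(n, count_significant(scores[:, 1], labels, n, trials=5, rng=rng))
```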
Analysis of fluctuating asymmetry was performed using techniques developed earlier (Lajus and Alekseev 2000). For each trait, FA was calculated as a size-normalized index of the difference between the left and right values (see Lajus and Alekseev 2000 for the exact formula).
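The exact index could not be recovered from this copy, so the sketch below uses one common size-normalized FA index, |l - r| / ((l + r)/2), purely as an assumption; the authors' formula follows Lajus and Alekseev (2000) and may differ (e.g., by partitioning out measurement error).

```python
# Hypothetical FA index (an assumption, not the authors' exact formula).
import numpy as np

def fa_index(left, right):
    """Size-normalized fluctuating asymmetry: |l - r| / mean(l, r)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return np.abs(left - right) / ((left + right) / 2.0)

# Per-trait FA and an overall index as the mean across traits
rng = np.random.default_rng(1)
l = rng.normal(100.0, 5.0, size=(231, 6))    # left-side values (placeholder)
r = l + rng.normal(0.0, 1.0, size=(231, 6))  # right side = left + random noise
fa_per_trait = fa_index(l, r)                # shape (231, 6)
overall_fa = fa_per_trait.mean(axis=1)       # one value per specimen
```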
Results
Cryptic species of Copepoda described in the last three decades

Our literature search showed that during the last three decades, 24 revised copepod species were subdivided into cryptic forms (we use the term "form" here because some authors do not assign species status) (Table S1). Two studies (Conradi et al. 2004; Böttger-Schnack 2005) described sibling species solely on the basis of morphological analysis: the authors interpreted the differences between forms to be below the resolution threshold for describing ordinary species. Five studies used only genetic techniques (three used only biochemical genetic techniques, one used experimental hybridization, and one used a combination of the two methods). In most cases, the authors referred to previous studies that showed the absence of morphological differentiation of forms from the locations where samples were collected; however, no additional morphological analysis of the studied samples was performed. Seventeen studies employed genetic and morphological techniques simultaneously.
Empirical morphological study of Eurytemora species
Overall picture of the morphological heterogeneity of E. caspica, E. affinis, and E. carolleeae

We tested the normality of the distributions of mean values using skewness and kurtosis. Kurtosis and skewness showed significant (P < 0.05) departures from the normal distribution in 8 and 4 cases, respectively, of a total of 54 cases in each dataset. After Bonferroni correction, none of these departures remained significant. On this basis, we used parametric statistics in the further analysis of mean values.
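This normality screening can be sketched as follows (our illustration; the per-trait loop below reuses the placeholder matrix `X` from the earlier snippet, and the count of 54 tests is taken from the text).

```python
# Sketch of the normality screening: test skewness and kurtosis per trait,
# then apply a Bonferroni-adjusted threshold. Reuses the placeholder X above.
from scipy.stats import skewtest, kurtosistest

n_tests = 54                    # traits x samples, as stated in the text
alpha = 0.05 / n_tests          # Bonferroni-adjusted threshold
for k in range(X.shape[1]):     # here: per trait over pooled specimens
    sk = skewtest(X[:, k]).pvalue
    ku = kurtosistest(X[:, k]).pvalue
    print(f"trait {k}: skewness p={sk:.3f}, kurtosis p={ku:.3f}, "
          f"significant: {sk < alpha or ku < alpha}")
```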
To partition out the effect of size, it is common to apply principal component analysis. Our analyses of 6 traits on 231 specimens from nine samples (Table 1) showed that PC1 explained 82.5% of the total variance, PC2 10.0%, PC3 3.2%, PC4 2.0%, and PC5 and PC6 1.1% each. Given that PC1 explained a very high percentage of the total variance and that all traits showed high loadings on this PC (loadings exceeding 0.92 for five traits, except the length of the caudal rami, for which the loading was 0.73), PC1 was interpreted as general size. We interpreted the other PCs as describing different aspects of shape (Bookstein et al. 1985). All PCs except PC6 showed significant differences between samples (P < 0.001) in one-way ANOVA. These differences remained significant (P < 0.01) after Bonferroni correction, indicating that the samples were differentiated not only by size but also by shape. Discrimination among samples 1-9 by shape is clearly evident when specimens are arrayed against the PC2 and PC3 coordinates (Fig. 2). Furthermore, three indices (P5Sp2/Sp3, P5Sp1/Sp2, and caudal rami L/W) showed significant differences, also after Bonferroni correction (P < 0.01), and the majority of pairwise comparisons of samples were significant: of 36 pairwise comparisons of the 9 samples with each other in PC2 and PC3 by Student's t-test, 33 showed significant differences in PC2 and 25 in PC3 (P < 0.05). After Bonferroni correction, 17 and 29 comparisons remained significant (P < 0.05).
Using a t-test to compare the left and right values of six traits from four samples (samples 2, 3, 5, and 7) showed no evidence of directional asymmetry. Therefore, the FA analysis of the other samples did not differentiate between the left and right sides (i.e., both left and right structures were measured but not differentiated). Kruskal-Wallis ANOVA for the FA indices of all six traits and for the overall FA index showed significant differences between samples (P < 0.01; P < 0.05 after Bonferroni correction) (Fig. 3). Pairwise comparisons were significant (P < 0.05) in 19 of 36 cases, of which eleven remained significant (P < 0.05) after Bonferroni correction.
In-depth morphological comparison of the two species E. affinis and E. carolleeae

Analysis of the distribution of mean values in the second dataset did not show significant departures from normality. Kurtosis departed significantly (P < 0.05) from normality in two of 20 cases (ten traits that were not included in the first dataset, in each sample), but neither remained significant after Bonferroni correction. None of the traits showed significant skewness. Analysis of the second dataset showed quite large differences in the mean values of the morphological traits. T-tests showed that 10 of 16 traits differed significantly at the 95% confidence level, and nine at the 99% confidence level (Table 2). After Bonferroni correction, 9 and 5 traits showed significant differences at the 95% and 99% confidence levels, respectively. Such differences could be interpreted as species-level differences. Traditional indices also demonstrated significant differences between samples, and, in general, the differences were more pronounced than among our initial set of traits (Table 3). Eight of 10 indices showed significant differences (P < 0.01), and Bonferroni correction resulted in a shift of the P-level to P < 0.05. In the principal component analysis, significant differences among species were obtained only for PC2, but those were statistically significant at almost any level. Thus, all differences between the American and European samples appeared to be aggregated in PC2 (explaining about 10% of the total variance). This situation is rare; in analyses of correlated morphological traits, differences between samples are usually distributed among several PCs (e.g., Lajus and Alekseev 2000). The traits P4LongSp, CrL, and P5TSp had the highest loadings on PC2 (ranging from 0.6 to 0.7). Differences in these traits among samples were also significant, including after Bonferroni correction (P < 0.01) (Table 2). This suggests that PC2 is not based only on the above three traits but is also correlated with other traits. Discrimination among samples arrayed on the coordinates PC2 vs PC5 (the PCs showing minimal t-test values) was easier than with the indices that showed minimal t-test values (P5 TSp/Sp1 vs. P4 LongSp/Sp2) (Fig. 4A,B).
The number of significant pairwise differences, calculated from simulations using the PCs that showed a significant effect in one-way ANOVA (PC2-PC5), clearly depends on sample size (Fig. 5). At sample sizes of 3-5 specimens, the number of significant pairwise comparisons is only about 40% of the number obtained with the full dataset.
In analyzing fluctuating asymmetry, we compared the two samples using the Mann-Whitney U-test and the Kolmogorov-Smirnov test and found significant differences in FA (P < 0.05) for one of the 16 traits (Table 2): long spine 1 (P4). These differences became non-significant after Bonferroni correction. At the same time, according to the Mann-Whitney U-test, in all 16 traits the sum of ranks of individual fluctuating asymmetry was higher in the European sample than in the American one, a pattern that is statistically significant by the sign test (P < 0.01).
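The sign-test logic across the 16 traits can be checked with a short computation (ours, with the all-16 outcome taken from the text):

```python
# If the European sample has the larger FA rank sum in all 16 traits, the
# two-sided sign (binomial) test gives P = 2 * 0.5**16, well below 0.01.
from scipy.stats import binomtest

res = binomtest(16, 16, p=0.5, alternative='two-sided')
print(res.pvalue)   # ~3.1e-05
```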
Cryptic and pseudocryptic species and their relationships with other taxa
To better understand the relationships between different taxa, as well as the nature of cryptic and pseudocryptic species, it is useful to represent them graphically as coordinates of genetic and morphological distance. Figure 6 is a schematic of this process. The genetic distance axis marks the "average distance between species" and the "average distance between genera." We do not specify the type of genetic distance, which can differ (Cavalli-Sforza and Edwards 1967; Nei 1972; Reynolds et al. 1983); rather, we use the term broadly, assuming that the average genetic distances between species and genera are known approximately for a particular group. Also, we avoid discussing relationships between genetic distance and reproductive isolation (a key parameter of the biological concept of species), but assume they are correlated. Forms with less genetic separation than is characteristic of species are considered intraspecific groups; those with genetic distances on par with typical genera and species are considered species.

[Notes to Tables 2 and 3: trait designations as in Figure 2; t-test values are given at the maximal sample sizes (n = 31 and 27) and at n = 3 (means of 9 trials using different specimens drawn from the initial samples), together with t-test values comparing the samples by fluctuating asymmetry (n = 31 and 27).]
The morphological distance axis has three markers. The first two are the "average distance between species" and the "average distance between genera." Much less formalized than genetic distances, these usually result from a consensus among taxonomists working with that particular group of organisms. The third marker, the "resolution of morphological analysis," is a function of instrumental and statistical error. It may also represent a difference threshold between samples, which could be useful in helping groups of researchers describe intraspecific relationships. For instance, Mann and Evans (2007, p. 248) noted that "...some of the differences are so slight that the species are effectively cryptic," meaning that in some cases morphological differences can be detectable, but insufficient for assignment as ordinary species.
In this coordinate system, intraspecific forms occupy the lower left-hand corner. To the right are forms first described as species based on morphological analysis, but which genetic analysis did not confirm. The upper right-hand corner is occupied by "usual" species described via morphological analysis and confirmed as such by reproductive isolation or genetic distances. Cryptic and pseudocryptic species are situated in the lower right-hand corner, with pseudocryptic species on top. It is important to stress that pseudocryptic species are above the "resolution of morphological analysis," as our work has shown.

[Figure 5 caption: Results of the simulation of sample-size reduction on the significance of differences: the percentage of significant (P < 0.05) pairwise differences between Eurytemora samples in principal components based on six morphological traits, relative to the number of significant differences in the full dataset (9 samples, 231 specimens), averaged over trials and four principal components, vs sample size (see text for details).]
How to reduce subjectivity in the assignment of cryptic species status

Our critical analysis of the literature showed that a variety of criteria are used to assign cryptic status to a species. We distinguished three groups of studies. In the first and most common group, 17 of 24 revisions applied an integrative approach combining both genetic and morphological techniques: genetic differences at the species level were compared with minor (or absent) morphological differences. This is the soundest way to assign cryptic status.
The second group, 5 of 24 revisions, based the determination solely on genetic analysis, relying on morphological results from previous studies, and even on original descriptions that, for most copepods, date from the 19th century (Table S1). This is a weaker basis for discrimination because older taxonomical and statistical methods were more primitive and subtle distinctions between species were poorly known. This is evident in the many recent revisions that have identified new species through improved morphological and statistical techniques alone. Reviewing the species concept in diatoms, Mann (1999) noted that all species initially referred to as cryptic were eventually found to be morphologically distinguishable under in-depth analysis. It seems that the most defensible conclusion that can be based on genetic studies without morphological analysis is the existence of either cryptic or pseudocryptic species, as has been done by Cornils and Held (2014).
The third group included two studies that were based exclusively on detailed morphological analysis and argued that the minor morphological differences observed were not sufficient for status as ordinary species (Boxshall and Self 2011). Broadly speaking, these cases do not contradict existing definitions of cryptic species, which may include criteria such as "difficult to distinguish" (Knowlton 1993) or "at least superficially indistinguishable" (Bickford et al. 2007); but without agreement among experts working with the particular forms, such criteria are too subjective and cannot be separated from the resolution of morphological analysis (Fig. 6).
Combining morphological and genetic analysis is the best way to study a taxon, but even this does not guarantee that a suggested cryptic species is truly cryptic. An example is provided by Rocha-Olivares and co-authors (2001), who proposed cryptic species on the basis of large genetic differences after initial morphological studies had suggested morphological stasis. However, deeper morphological analysis revealed sufficient differences among the studied Harpacticoida, and a number of species were described (Gomez et al. 2004). A similar picture was observed in the E. affinis species complex, which was given the status of cryptic species using an integrative approach (Lee and Frost 2002). These examples show that the integrative approach by itself does not guarantee a reliable conclusion if the morphological analysis is insufficient.
The absence of morphological analysis in the second group of studies considerably increased the chance of pseudocryptic species status, while the use of only morphological methods made the difference between cryptic and ordinary species quite subjective. Thus, in our analysis of the published data, the criteria for assigning cryptic status to a species differed by analytical method and cannot be expected to produce consistent results.
Nevertheless, cryptic species are considered a significant component of biodiversity, likened to the "elephant in the room" (Adams et al. 2014). Knowledge about cryptic biodiversity is not only an important scientific question but also has great implications for nature management in general and for conservation biology in particular (Witt et al. 2006). Therefore, it is important to standardize as much as possible the procedure for assigning cryptic status to a species. Clearly, combining genetic and morphological analysis in the framework of integrative taxonomy (Dayrat 2005; Will et al. 2005; Cardoso et al. 2009) would reduce the number of pseudocryptic species, whereas abandoning morphological analysis would notably increase the chances of eventually changing a species' status from cryptic to pseudocryptic. On the whole, our examination of cryptic species in Copepoda generally confirmed Knowlton's opinion (Knowlton 1993) about the "inadequacy of morphological analysis" usually performed for the description of cryptic species.
Pseudocryptic status of Eurytemora species
Comparative analysis of E. carolleeae and E. affinis showed that the indices have a higher discriminatory power than the initial traits, but lower than the PCs generated by principal component analysis. Similar results were obtained in an earlier comparison of three samples of the freshwater copepod Acanthocyclops signifer Mazepova, 1952, from Lake Baikal (Lajus and Alekseev 2000). As expected, a considerable reduction in sample size decreased the statistical significance of differences between samples and, for sample sizes close to those used by Lee and Frost (2002), we detected far fewer pairwise statistical differences between samples than in the large samples.
Traits for morphological analysis in copepods are often measured on only one side of the body. This simplifies measurement and analysis but results in a loss of information. Firstly, averaging the left and right values gives a lower sampling error than measuring only one side (either left or right). The larger the differences between the left and right structures and the higher the measurement error, the greater the difference between sampling errors based on one or two measurements. In small and difficult-to-measure copepod structures, measurement error can be quite high (Lajus and Alekseev 2000).
Secondly, analysis of left and right values makes it possible to measure fluctuating asymmetry, which may yield additional information about morphological differences between forms. Analysis of FA in the first dataset showed pronounced differentiation of samples, indicating that some factors caused heterogeneity among samples in developmental stability. However, a detailed analysis of the patterns of asymmetry and their drivers was not the goal of this study. Here, we merely demonstrated that this morphological parameter provides additional independent information pertaining to species description. These results show that fluctuating asymmetry analysis supports the pseudocryptic status of forms previously considered to be cryptic species by providing additional evidence of their morphological differentiation.
In our study of the E. affinis species complex, previously considered to be cryptic (Lee and Frost 2002), we confirmed morphological differences between described species. This supports our conclusion that a detailed morphological analysis should be an essential part of justifying cryptic species. As the morphological analyses that formerly comprised species descriptions were usually performed at a lower resolution than is needed to designate cryptic species, it is necessary to use many different traits as well as samples of reasonable size.
Our analyses showed that it is reasonable to apply other analytical methods in addition to traditional morphological indices. Multivariate statistical techniques may increase the resolution of morphological analysis. Analyzing bilateral traits on both the left and right sides reduces sampling error and provides new information on morphological variation: information about developmental stability measured by FA. Combined with the analysis of mean values, FA can be used as an additional morphological marker in population studies of copepods and in revisions of cryptic species status.
Conclusion
Our critical survey of the literature on cryptic species in copepods and our detailed morphological analysis of the E. affinis species complex suggest that not all species considered to be cryptic are truly cryptic. This affirms that the potential of morphological techniques to contribute insights into taxonomy, even using traditional structures, is still far from its limit. New techniques, in particular scanning electron microscopy, can provide an important complementary source of additional characters. How this potential can be met is a broad problem in taxonomy. At a time when the objective need for taxonomists qualified in current methodologies exceeds professional capacity, calls come to invest more resources in this field (Wheeler et al. 2012). Copepods are among the species-rich but small-sized taxa for which the situation is even more difficult than for other groups (Costello et al. 2006). Training taxonomic experts to measure, analyze, and describe such biodiversity requires extensive time and resources. Financial effort is one reason why taxonomists are becoming scarce at some institutions. At the Natural History Museum, London, UK, the number of traditional taxonomists has fallen 12% over the last 15 years due to institutional investments in molecular biological capabilities (Boxshall and Self 2011). A lack of taxonomical expertise, however, cannot be compensated by molecular biological techniques. We agree that the "...notion that anyone with a thermal cycler and DNA sequencer can act as a taxonomist for any group of organism, however appealing the notion might be, is overly optimistic and biologically specious" (Bickford et al. 2007).
"year": 2015,
"sha1": "a9d04cb895e05edab9e0527e15e1d104ab550c8a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/ece3.1521",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9d04cb895e05edab9e0527e15e1d104ab550c8a",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Local contact homology and applications
We introduce a local version of contact homology for an isolated periodic orbit of the Reeb flow and prove that its rank is uniformly bounded for isolated iterations. Several applications are obtained, including a generalization of Gromoll-Meyer's theorem on the existence of infinitely many simple periodic orbits, resonance relations and conditions for the existence of non-hyperbolic periodic orbits.
Introduction
Since the seminal works of Floer [11, 12, 13], several Morse-theoretic methods have been developed in order to understand periodic orbits in Hamiltonian dynamics. In particular, contact homology was introduced in [9] in the bigger framework of symplectic field theory. Its chain complex is generated by good periodic orbits of the Reeb flow, graded by the reduced Conley-Zehnder index, and the differential counts rigid pseudo-holomorphic curves in the symplectization asymptotic to closed orbits.¹ Recall that a periodic orbit is good if it is not an even multiple of a simple periodic orbit whose linearized first return map has an odd number of real eigenvalues less than minus one. Otherwise, it is called bad.
There are different versions of contact homology; see [2] for a survey. We will consider here both cylindrical contact homology and linearized contact homology. The first one is defined, and is an invariant of the contact structure, for nice contact forms, that is, contact forms with no periodic orbits of degree one, zero, or minus one. The latter, in turn, does not impose any condition on the closed orbits, but requires an augmentation for a certain differential graded algebra and depends on the homotopy class of the augmentation. Nice contact forms admit a trivial augmentation, and the linearized contact homology with respect to this augmentation is isomorphic to cylindrical contact homology [2]. Hence cylindrical contact homology will be regarded here as a particular case of linearized contact homology. If the contact manifold admits a strong filling (X, ω) then it induces an augmentation over the rationals under the hypothesis that ω|_{π₂(X)} = 0 and c₁(TX, ω) = 0.
The purpose of this work is to introduce a local version of contact homology and provide some applications. For well known transversality reasons, our results are conditional on the completion of foundational work by Hofer, Wysocki and Zehnder, see [22,23,24].
1.1. Main result. Let us fix from now on a closed co-oriented contact manifold (N^{2n−1}, ξ). Throughout this work we will always assume that the first Chern class of the contact structure vanishes, and we consider only augmentations over the rationals. Let α be a contact form for ξ and γ an isolated periodic orbit of the Reeb flow of α. This means that there is no sequence of periodic orbits converging to γ besides the obvious one.
Date: February 15, 2012.

¹ One should be aware that for homotopically non-trivial periodic orbits the grading is not unambiguously defined and requires fixing an extra structure on (N, ξ); see Section 2.1.2. Throughout this work we fix such a structure.
One associates to γ its local contact homology HC_*(α, γ) by making a small non-degenerate perturbation of α and counting rigid holomorphic cylinders that are contained in the symplectization of an isolating neighborhood K of γ and are asymptotic to good periodic orbits homotopic to γ in K; see Sections 3 and 4 for details and properties of local contact homology. An interesting feature of local contact homology is that, although we relate it to linearized contact homology via Morse inequalities (Proposition 7.4), it does not depend on the augmentation at all. This is due to the fact that there are no holomorphic planes with finite energy in the symplectization of K. Therefore, one cannot have holomorphic curves with more than one negative puncture.
The main result in this work establishes a uniform bound for the rank of the local contact homology of iterations of a periodic orbit.

Theorem 1.1. Let γ be a periodic orbit of the Reeb flow such that γ^j is isolated for every j ∈ N. Then there exists a constant B > 0 satisfying dim HC_*(α, γ^j) < B for every j ∈ N.
The proof is given in Section 6.6 and is based on two main building blocks. The first one is Proposition 6.1, establishing that the rank of the local contact homology of an isolated periodic orbit is less than or equal to the rank of the local Floer homology of its return map. The second one is the main result in [16], which implies the existence of a uniform bound for the rank of the local Floer homology of admissible iterations of an isolated periodic point of a Hamiltonian diffeomorphism.
1.2. Applications. Augmentations are defined for the differential graded algebra associated to a contact form, but we can define homotopy classes of augmentations for the contact structure, as the following discussion shows. We refer to [2] for details. Given two non-degenerate contact forms α₀ and α₁ for ξ and regular almost complex structures J₀ and J₁, a regular cobordism between the pairs (α₀, J₀) and (α₁, J₁) induces a chain map Ψ : (A(α₀), ∂₀) → (A(α₁), ∂₁) between the corresponding differential graded algebras. An augmentation ε for (A(α₁), ∂₁) induces an augmentation Ψ*ε for (A(α₀), ∂₀), and Ψ induces an isomorphism Ψ̄ : HC^{[Ψ*ε]}(α₀, J₀) → HC^{[ε]}(α₁, J₁). It turns out that the homotopy class of Ψ*ε does not depend on the choice of the cobordism (see [8, Theorem 3.2]) and hence we have a natural identification of homotopy classes of augmentations for non-degenerate contact forms for ξ. In this way, one can define the set of homotopy classes of augmentations for ξ as the set of equivalence classes under this identification. The collection of the corresponding linearized contact homologies is an invariant of the contact structure; see [2, Theorem 2.8]. We will abuse notation a bit and denote such an equivalence class by [ε] and its corresponding linearized contact homology by HC^{[ε]}_*(ξ). Let b^{[ε]}_*(ξ) be the rank of HC^{[ε]}_*(ξ) (which may be infinite).
We say that ξ is homologically unbounded if there is a homotopy class of augmentations [ε] for ξ such that there is a sequence of integers |l_i| → ∞ satisfying lim_{i→∞} b^{[ε]}_{l_i}(ξ) = ∞. The following theorem follows easily from Theorem 1.1 and the Morse inequalities in Proposition 7.4. Its proof is given in Section 8.1.

Theorem 1.2. Suppose that ξ is homologically unbounded. Then the Reeb flow of every contact form α for ξ has infinitely many geometrically distinct periodic orbits.
Examples of homologically unbounded contact structures can be obtained from cosphere bundles. More precisely, given a closed oriented manifold M of dimension n, it is proved in [7] that
\[
  HC^{[\epsilon_0]}_{*+(n-3)}(S^*M, \xi_0) \;\simeq\; H_{*}(\Lambda M/S^1, M; \mathbb{Q}), \tag{1.1}
\]
where ξ₀ is the standard contact structure of the unit cotangent bundle S*M, ε₀ is given by the obvious filling of S*M, ΛM is the free loop space of M, and M ⊂ ΛM denotes the subset of constant loops [2, Theorem 4.4].² A result due to Vigué-Poirrier and Sullivan [31] establishes that if M is simply connected then the rank of H_*(ΛM/S¹, M; Q) is unbounded if and only if the homological algebra of M is not generated by a single class. Consequently, we have the following generalization of a celebrated result due to Gromoll and Meyer [18]. It is also proved in [28] for general coefficient fields using symplectic homology.

Corollary 1.3. Let M be a closed oriented manifold such that the rank of H_*(ΛM/S¹, M; Q) is unbounded. Then every fiberwise starshaped hypersurface in T*M has infinitely many geometrically distinct periodic orbits. In particular, the result holds if M is simply connected and its homological algebra over Q is not generated by a single class.
Another source of examples is given by connected sums. Given two contact manifolds (N₁, ξ₁) and (N₂, ξ₂), it is well known that the connected sum N₁#N₂ carries a contact structure ξ₁#ξ₂; see [14]. Moreover, homotopy classes of augmentations [ε₁] and [ε₂] for ξ₁ and ξ₂ respectively induce a homotopy class of augmentations [ε₁#ε₂] for ξ₁#ξ₂. A result due to Bourgeois and van Koert [2, 6] gives the long exact sequence
\[
  \cdots \to HC_{*-1}(S^{2n-3}, \xi_0) \to HC^{[\epsilon_1]}_{*}(\xi_1) \oplus HC^{[\epsilon_2]}_{*}(\xi_2) \to HC^{[\epsilon_1\#\epsilon_2]}_{*}(\xi_1\#\xi_2) \to HC_{*-2}(S^{2n-3}, \xi_0) \to \cdots
\]
where HC_*(S^{2n−3}, ξ₀) is the cylindrical contact homology of the standard contact structure on S^{2n−3}. Since the rank of HC_*(S^{2n−3}, ξ₀) is at most one, we conclude that the connected sum of any contact manifold (admitting an augmentation) with a homologically unbounded contact manifold is homologically unbounded.

Now, fix a non-degenerate contact form α for ξ with an augmentation ε. There is a natural filtration of contact homology given by the action. Given a ∈ R, denote the truncated homology by HC^a_*(α). Notice that it in general depends on the contact form and the augmentation. Following [30, 27], we define the growth rate of HC_*(α) as
\[
  \Gamma_{\epsilon}(\alpha) = \limsup_{a\to\infty} \frac{1}{\log a}\, \log \dim \iota\big(HC^{a}_{*}(\alpha)\big),
\]
where ι : HC^a_*(α) → HC^{[ε]}_*(ξ) is the map induced by the inclusion. The argument in [30, Section 4a] shows that the set {Γ_ε(α) : ε is an augmentation for α} is an invariant of the contact structure. Since our context is different from the one in [30] (in particular, we have to deal with augmentations), we will give a proof of this fact in Section 8.2. The following theorem is proved in Section 8.3.

Theorem 1.4. If there exists a non-degenerate contact form α for ξ with an augmentation ε such that Γ_ε(α) > 1 then every contact form representing ξ has infinitely many geometrically distinct periodic orbits.

² The trivializations of the contact structure over periodic orbits used in [7] send the vertical distribution in the cotangent bundle to a fixed Lagrangian subspace in R^{2n−2}. This fixes the grading in the isomorphism (1.1).
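For orientation, here is a short worked example (ours, not from the original) showing how polynomial growth of the filtered homology determines the growth rate.

```latex
% Worked example (ours, not from the original): polynomial growth.
% Suppose that for constants $c_2 \ge c_1 > 0$ and all large $a$,
\[
  c_1\, a^{d} \;\le\; \dim \iota\big(HC^{a}_{*}(\alpha)\big) \;\le\; c_2\, a^{d}.
\]
% Taking logarithms and dividing by $\log a$ gives
\[
  \Gamma_{\epsilon}(\alpha)
  = \limsup_{a\to\infty} \frac{\log \dim \iota\big(HC^{a}_{*}(\alpha)\big)}{\log a}
  = \lim_{a\to\infty} \frac{d\,\log a + O(1)}{\log a}
  = d,
\]
% so the hypothesis of Theorem 1.4 holds exactly when the exponent $d > 1$.
```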
There are several examples of contact manifolds satisfying the previous hypothesis; see [25]. Our next application is a generalization of a result due to Ginzburg and Kerman [17] on resonance relations. Assume that there exist integers l₋ and l₊ such that HC^{[ε]}_l(ξ) has finite rank for every l ≤ l₋ and every l ≥ l₊. Under this assumption the positive/negative mean Euler characteristic is defined as
\[
  \chi^{+}_{[\epsilon]}(\xi) = \lim_{N\to\infty} \frac{1}{N} \sum_{l=l_{+}}^{N} (-1)^{l}\, b^{[\epsilon]}_{l}(\xi),
  \qquad
  \chi^{-}_{[\epsilon]}(\xi) = \lim_{N\to\infty} \frac{1}{N} \sum_{l=-N}^{l_{-}} (-1)^{l}\, b^{[\epsilon]}_{l}(\xi),
\]
provided that the limits exist. Notice that, by Theorem 1.2, if ξ admits a contact form with finitely many prime periodic orbits then this hypothesis is fulfilled and the limits above always exist.
Given an isolated periodic orbit γ, its local Euler characteristic is defined as
\[
  \chi(\gamma) = \sum_{l} (-1)^{l} \dim HC_{l}(\alpha, \gamma).
\]
The sum above is finite. Now, assume that γ^j is isolated for every j ∈ N. The local mean Euler characteristic of γ is defined as
\[
  \hat\chi(\gamma) = \lim_{N\to\infty} \frac{1}{N} \sum_{j=1}^{N} \chi(\gamma^{j}).
\]
By Theorem 1.1 this limit exists.
Theorem 1.5. Let α be a contact form for ξ with finitely many simple closed orbits. Given any homotopy class of augmentations [ε] for ξ, the positive/negative mean Euler characteristic satisfies
\[
  \chi^{\pm}_{[\epsilon]}(\xi) = \pm \sum_{\pm\Delta(\gamma) > 0} \frac{\hat\chi(\gamma)}{\Delta(\gamma)},
\]
where ∆(γ) is the mean index of γ and the sum runs over the set of simple periodic orbits γ such that ±∆(γ) > 0.
Notice that in the previous theorem we do not assume α to be non-degenerate. When α is non-degenerate, the local mean Euler characteristic of a periodic orbit is easily computed and we obtain

Corollary 1.6 (Theorem 1.7 and Remark 1.10 in [17]). If α is non-degenerate and has finitely many simple periodic orbits then
\[
  \chi^{\pm}_{[\epsilon]}(\xi)
  = \pm \sum_{\gamma \in G^{\pm}(\alpha)} \frac{(-1)^{|\gamma|}}{\Delta(\gamma)}
  \;\pm\; \frac{1}{2} \sum_{\gamma \in B^{\pm}(\alpha)} \frac{(-1)^{|\gamma|}}{\Delta(\gamma)},
\]
where B^±(α) (resp. G^±(α)) is the set of simple periodic orbits with positive/negative mean index whose even iterates are bad (resp. good), and |γ| denotes the degree of γ.
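To make the 1/2 factor concrete, here is a short computation (ours, consistent with the formulas reconstructed above) for a non-degenerate simple orbit.

```latex
% Example (ours): let $\gamma$ be non-degenerate with all iterates good.
% Then $HC_{l}(\alpha,\gamma^{j})$ has rank one in the single degree
% $l = |\gamma^{j}|$, whose parity does not depend on $j$, so
\[
  \chi(\gamma^{j}) = (-1)^{|\gamma|}
  \quad\Longrightarrow\quad
  \hat\chi(\gamma) = \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}\chi(\gamma^{j})
                   = (-1)^{|\gamma|}.
\]
% If instead the even iterates of $\gamma$ are bad, they are discarded from
% the complex and contribute $\chi(\gamma^{2k}) = 0$, so only half of the
% iterates count:
\[
  \hat\chi(\gamma) = \tfrac{1}{2}\,(-1)^{|\gamma|},
\]
% which is the origin of the factor $\tfrac{1}{2}$ in Corollary 1.6.
```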
An application of the previous theorem is the following result. Recall that a periodic orbit is hyperbolic if its linearized first return map has no eigenvalue in the circle. Theorem 1.7. Suppose that there is a homotopy class of augmentations [ ] for ξ such that HC [ ] n−3 (ξ) has finite rank and that there exists a positive integer C such that for every m ∈ N. If a contact form α for ξ has finitely many geometrically distinct closed orbits then there is a non-hyperbolic one.
Examples satisfying these hypotheses can be obtained using Yau's computation of the contact homology of subcritically Stein fillable contact manifolds [34]. More precisely, it is proved in [34] that, given a subcritical Stein domain (V^{2n}, J) such that ∂V = N, the cylindrical contact homology of (N, ξ), where ξ is the maximal complex subbundle of TN, can be computed from the homology of V. One can check from this that if n is even and V has trivial homology in every odd degree then N satisfies the hypotheses of Theorem 1.7 for the positive Euler characteristic. In particular, we get S^{2n−1} with its standard contact structure and n even, recovering a result obtained by Viterbo in [32].
As a byproduct of the proof of Theorem 1.7 we obtain

Theorem 1.8. Suppose that there is a homotopy class of augmentations [ε] for ξ such that 0 < dim HC^{[ε]}_{n−3}(ξ) < ∞. If a contact form α for ξ has finitely many geometrically distinct closed orbits then there is a non-hyperbolic one.
A homology computation in [1] shows that there is a family of inequivalent contact structures on S²×S³ meeting this assumption. The isomorphism (1.1) implies that the unit cotangent bundle of a closed oriented manifold with non-trivial fundamental group and compact universal covering also satisfies this condition. By the aforementioned long exact sequence, the connected sum of one of these manifolds with any contact manifold (N, ξ) such that dim HC^{[ε]}_{n−3}(ξ) < ∞ has a contact structure satisfying this assumption as well.

Organization of the paper. Section 2 furnishes the basic material on pseudo-holomorphic curves necessary for this work. In Section 3 we define isolating neighborhoods of isolated closed Reeb orbits. This notion is crucial in the construction of local contact homology, accomplished in Section 4. The computation of local contact homology is the focus of Sections 5 and 6, where we deal with prime and iterated periodic orbits respectively. Morse inequalities are obtained in Section 7; they are a cornerstone in the proofs of our applications, presented in Section 8. Finally, Appendix A provides technical details about holomorphic curves in symplectizations of stable Hamiltonian structures.
Note. After we started to write the present paper, we became aware that Mark McLean had obtained similar results using symplectic homology [28]. His results can be related to ours using the Bourgeois-Oancea long exact sequence [2]. However, although our techniques lead to similar results, we think they are complementary to McLean's since they furnish an equivariant version of the local homology of closed Reeb orbits.

Acknowledgments. We thank several colleagues for useful discussions regarding this paper. Special thanks to Alberto Abbondandolo for pointing out to us, at the beginning of this work, Ginzburg-Gurel's result in [16] and its relationship with Gromoll-Meyer's theorem, and to Viktor Ginzburg for pointing out a mistake in a first draft of this paper. We also thank the IAS for its hospitality during part of the preparation of this work.
These results were first presented by the first author at the Workshop on Conservative Dynamics and Symplectic Geometry, IMPA, Rio de Janeiro, Brazil, August 1-5, 2011. He thanks the organizers for the opportunity to participate in such a great event.
This material is based upon work supported by the National Science Foundation under agreement No. DMS-0635607. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Preliminaries for Local Contact Homology
2.1. Stable Hamiltonian structures. We start by recalling the concept of a stable Hamiltonian structure from [3].
Definition 2.1. A stable Hamiltonian structure on a (2n − 1)-manifold N is a triple H = (ξ, X, ω) where ξ ⊂ TN is a hyperplane distribution, X is a vector field everywhere transverse to ξ whose flow preserves ξ, and ω is a closed 2-form such that ω|_ξ turns ξ into a symplectic vector bundle and i_X ω = 0.
We refer to X as the Hamiltonian vector field of H.
Remark 2.2. Given such an H one can define a 1-form λ on N by
\[
  \lambda(X) = 1, \qquad \ker\lambda = \xi. \tag{2.1}
\]
It is easy to check that λ ∧ ω^{n−1} is a volume form and ker dλ ⊇ ker ω = RX. In particular L_X λ = 0 and L_X ω = 0. The stable Hamiltonian structure could alternatively be defined as the pair (λ, ω); in this case, X and ξ would be uniquely determined by i_X λ = 1, i_X ω = 0, and ξ = ker λ.
Any contact form α on N induces a stable Hamiltonian structure (ξ, R, Cdα), where R is the associated Reeb vector field, ξ = ker α is the contact structure and C > 0.
2.1.1. Periodic orbits. If x : R → N is a periodic trajectory of X with period T > 0 then x_T(t) := x(Tt) defines an element of C^∞(S¹, N), where S¹ = R/Z. A periodic orbit γ of X is the element of C^∞(S¹, N)/S¹ induced by some x_T as above. We write γ = (x, T) and γ^k = (x, kT) for any k ∈ Z₊. The set x(R) ⊂ N is called the geometric image of γ. If T is the minimal positive period of x then we call γ simply covered. The set of periodic orbits of X will be denoted by P(H). When H is induced by some contact form α we may write P(α), or simply P when the context is clear. For any given K ⊂ N we denote by P(H, K), or P(α, K), the subset of orbits with geometric image contained in K.
The flow {φ_t}_{t∈R} of X induces an ω-symplectic linear flow
\[
  d\phi_t|_{\xi} : \xi_p \to \xi_{\phi_t(p)} \tag{2.2}
\]
on ξ. If γ = (x, T) ∈ P and 1 is not in the spectrum of dφ_T : ξ_{x(0)} → ξ_{x(0)} = ξ_{x(T)} then γ is called non-degenerate. When every γ ∈ P(H) (or γ ∈ P(H, K)) is non-degenerate we call H non-degenerate (on K). The notation γ = (x, T) is ambiguous since the choice of x is not determined: we choose a special point pt_γ in the geometric image of every closed orbit γ of {φ_t} and assume x(0) = pt_γ. Orbits with the same geometric image share the same special point.
2.1.2. Conley-Zehnder indices. Assume for simplicity that H₁(N, Z) is torsion free, and choose a set of generators {C_i}, i = 1, . . . , l. We assume the C_i are represented by 1-dimensional submanifolds, still denoted C_i, and we choose ω-symplectic trivializations of ξ|_{C_i}. These choices will be fixed for the rest of this work. Any γ = (x, T) ∈ P(H) can be seen as a singular 1-chain, which induces a homology class [γ] ∈ H₁(N, Z). There are unique n_i ∈ Z satisfying [γ] = Σ_i n_i C_i. A 2-chain realizing a homology between γ and Σ_i n_i C_i can be used to single out a homotopy class of ω-symplectic trivializations of (x_T)*ξ. A trivialization in this class represents the linearized dynamics of X as a path in the group Sp(2n − 2) starting at the identity. If γ is non-degenerate, this path ends in the complement of the Maslov cycle and has a well-defined Conley-Zehnder index as defined in [29], denoted µ_CZ(γ) ∈ Z. This is independent of the choice of the 2-chain since we assume c₁(ξ) vanishes.
It is convenient to consider the degree of γ, defined by
\[
  |\gamma| := \mu_{CZ}(\gamma) + n - 3. \tag{2.3}
\]

2.1.3. Good orbits. Let γ = (x, T) ∈ P(H) be simply covered. According to [9], if the number of eigenvalues of dφ_T : ξ_{x(0)} → ξ_{x(T)} = ξ_{x(0)} in (−1, 0) is odd (counted with multiplicities) then the even multiples γ^{2k} are called bad orbits. An orbit is called good if it is not bad, and we define
\[
  P_0(H) := \{\gamma \in P(H) : \gamma \text{ is good}\} \quad\text{and}\quad P_0(K, H) := P(K, H) \cap P_0(H). \tag{2.4}
\]
In the case H is induced by a contact form α we write P_0(α) and P_0(K, α) accordingly.
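A minimal example (ours) of the mechanism behind this definition:

```latex
% Example (ours): suppose $\dim\xi = 2$ and $d\phi_{T}$ on $\xi_{x(0)}$ is
% hyperbolic with negative eigenvalues, say
\[
  \operatorname{spec}\big(d\phi_{T}|_{\xi_{x(0)}}\big) = \{-\mu,\, -1/\mu\},
  \qquad \mu > 1 .
\]
% Exactly one eigenvalue, $-1/\mu$, lies in $(-1,0)$ (an odd count), so every
% even multiple $\gamma^{2k}$ is bad: $d\phi_{2T}$ has positive eigenvalues
% $\mu^{2}$ and $\mu^{-2}$, the Conley-Zehnder indices of $\gamma$ and
% $\gamma^{2}$ have different parities, and the even multiples are excluded
% from $P_0(H)$ in (2.4).
```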
2.2. Pseudo-holomorphic curves. We take a moment to review the basic definitions from pseudo-holomorphic curve theory.
2.2.1. Cylindrical almost complex structures. Let V be a compact (2n − 1)-manifold. On R × V there is a natural R-action induced by the maps
\[
  \tau_c : (a, p) \mapsto (a + c, p), \qquad c \in \mathbb{R}. \tag{2.5}
\]
In the language of [3], an almost complex structure J on R × V is cylindrical if τ_c^* J = J for all c ∈ R, and if the vector field R := J∂_a is horizontal, i.e. tangent to {a} × V for all a ∈ R. Since J is R-invariant, the formula Ξ := TV ∩ J(TV) defines a (2n − 2)-dimensional J-invariant distribution in V, and R, seen as a vector field on V, is everywhere transverse to Ξ. Note also that J is a complex structure on Ξ. J is called symmetric if the 1-form λ on V defined by i_R λ = 1 and Ξ = ker λ satisfies L_R λ = i_R dλ = 0. Let Ω be a closed 2-form on V of maximal rank, that is, dim ker Ω = 1. Then a cylindrical J as above is said to be adjusted to Ω if the restriction Ω|_Ξ turns Ξ into a symplectic bundle, J|_Ξ is Ω|_Ξ-compatible, that is, Ω(·, J·) defines a (fiberwise) metric on Ξ, and i_R Ω = 0. Note that if J is cylindrical, symmetric, and adjusted to Ω as above then H = (Ξ, R, Ω) is a stable Hamiltonian structure.
Conversely, a stable Hamiltonian structure H = (Ξ, R, Ω) induces symmetric cylindrical almost complex structures on R × N adjusted to Ω. In fact, choose some Ω-compatible complex structure J on Ξ. Then we have a unique R-invariant almost complex structure J on R × N defined by requiring J∂ a = R and J| Ξ = J, which is the desired almost complex structure. The set of such J will be denoted by J (H). When H is induced by a contact form α we may write J (α) instead of J (H).
Let Λ denote the set of smooth non-decreasing functions φ : R → [0, 1]. We can view an element of Λ as a real function on R × N that depends only on the first coordinate. Similarly, we can view the form λ in (2.1) as a 1-form on R × N. Following [3], for a J-holomorphic map F : S \ Z → R × N (where S is a closed Riemann surface and Z ⊂ S a finite set), one defines the ω-energy of F as
\[
  E_\omega(F) = \int_{S\setminus Z} F^*\omega
\]
and the energy of F as
\[
  E(F) = E_\omega(F) + \sup_{\varphi\in\Lambda} \int_{S\setminus Z} F^*\big(\varphi'(a)\, da \wedge \lambda\big).
\]
All these integrals have non-negative integrands. If 0 < E(F) < ∞ then F is said to be a finite-energy pseudo-holomorphic curve. The elements of Z are called punctures of F, and a puncture z ∈ Z is called removable when F is bounded near z. In this case, an application of Gromov's Removable Singularity Theorem shows that F can be smoothly continued across z.

Let H^± = (ξ^±, X^±, ω^±) be stable Hamiltonian structures on N, fix J^± ∈ J(H^±), L > 0 and J̄ ∈ J_L(J^−, J^+). Assume also there exists a symplectic form Ω on [−L, L] × N that agrees with ω^± on T({±L} × N), up to positive constants, and tames J̄. Consider a smooth map F : S \ Z → R × N which is J̄-holomorphic. Following [3] we define the energy E(F) as a sum of non-negative terms (one for each end of the cobordism and one for the middle piece), where λ^± are the 1-forms associated to H^± as in (2.1). All integrands above are non-negative. Moreover, F is constant if, and only if, E(F) = 0. F is called a finite-energy curve when 0 < E(F) < ∞.
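As a consistency check (ours, using the energy expressions reconstructed above), the trivial cylinder over a closed orbit has vanishing ω-energy and total energy equal to the period:

```latex
% Example (ours): for $\gamma = (x,T)$ consider the trivial cylinder
% $F(s,t) = (Ts,\, x_{T}(t))$ in $\mathbb{R}\times N$. Since $i_X\omega = 0$,
\[
  E_{\omega}(F) = \int F^{*}\omega = 0,
\]
% while $F^{*}(da) = T\,ds$ and $F^{*}\lambda = T\,dt$ (as $\lambda(X) = 1$), so
\[
  E(F) = \sup_{\varphi\in\Lambda} \int_{\mathbb{R}\times S^{1}}
         T^{2}\,\varphi'(Ts)\, ds\, dt
       = T \sup_{\varphi\in\Lambda}\big(\varphi(+\infty) - \varphi(-\infty)\big)
       = T .
\]
% Hence $0 < E(F) < \infty$: trivial cylinders are finite-energy curves.
```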
Finally, we need to define finite-energy J-holomorphic curves for J ∈ J_{L<R}(J), where 0 < L < R, and H = (ξ, X, ω) and J ∈ J(H) are fixed. The correct taming conditions are as follows. We assume J is adjusted to some 2-form ω̃ ∈ Ω²(N) of maximal rank on the neck [L − R, R − L] × N, as discussed in 2.2.1, and assume also that J is tamed by symplectic forms Ω^± on ±[R − L, R + L] × N satisfying the appropriate compatibility conditions. The behavior near the punctures is exactly as in the other cases.
A non-removable puncture z is positive if the corresponding sign ε equals +1, and negative if ε = −1. In any case one says that F is asymptotic to γ at z, and γ is the asymptotic limit of F at z. These definitions are independent of the choice of ψ.
Under the assumption that the Hamiltonian vector fields are non-degenerate, the asymptotic behavior of finite-energy curves in non-cylindrical cobordisms is analogous and we will not describe it here. The reader can easily guess the precise statements in this case.
Isolating neighborhoods of isolated orbits
In this section we discuss the necessary geometric set-up for defining the local contact homology of an isolated orbit. Let H = (ξ, X, ω) be a stable Hamiltonian structure on the (2n − 1)-manifold N, and let x : R → N be a T-periodic trajectory of the vector field X, T > 0. Assume that the periodic orbit γ = (x, T) is isolated.

Definition 3.1. An isolating neighborhood for γ is a compact connected neighborhood K of x(R) with smooth boundary satisfying:
• γ is the only closed X-orbit in K in the free homotopy class of γ (of loops in K);
• there are no closed X-orbits inside K which are contractible in K.
Lemma 3.2. Every isolated γ has an isolating neighborhood.
Proof. Let T_0 be the minimal positive period of x, and take a small compact tubular neighborhood B of the geometric image of x. Arguing indirectly, suppose there are closed X-orbits γ_k ⊂ B, freely homotopic to γ in B and distinct from γ, with periods T_k. If T_k → ∞ then ∫_{γ_k} dt → +∞, which is absurd. Thus we get a bound for the periods T_k = ∫_{γ_k} λ, so that γ_k → γ, contradicting the hypothesis that γ is isolated. This contradiction shows that, possibly after further shrinking B, we get an isolating neighborhood K as in Definition 3.1, since H_1(K, R) = 0.
and Ω ∈ D is symplectic and coincides with ω_± on T({±L} × K) up to positive constants, then the following assertions hold.
(2) There exists C > 0 such that every finite-energy J-holomorphic map F = (a, f) : R × S^1 → R × K as above satisfies E(F) ≤ C. Here we use Ω to define E(F) according to the discussion in 2.2.3.
The proof below makes use of the results from Appendix A.
Proof. Suppose the lemma is not true. Then we find almost complex structures J_n → J and symplectic forms Ω_n → Ω, coinciding with ω_± on T({±L} × K) up to positive constants, and non-constant finite-energy J_n-holomorphic maps F_n = (a_n, f_n) satisfying i), ii) above, but not satisfying the conclusions of the lemma.
Obviously, Ω_n tames J_n on [−L, L] × K if n is large enough. We claim that {E(F_n)} is bounded. In fact, since H^2(K, R) vanishes, there exists a primitive α for Ω on [−L, L] × K. Using the Mayer–Vietoris principle, we find a sequence of primitives α_n, dα_n = Ω_n, on [−L, L] × K with α_n → α. Then, by the properties of Ω_n and Ω, the Ω_n-energy of F_n is controlled by the actions of the asymptotic limits up to a constant c > 0 independent of n, where γ_n^± are the asymptotic limits of F_n at {±∞} × S^1 oriented by the Hamiltonian vector fields. By Lemma 3.3, γ_n^± → γ, so that we get the uniform bound for the last line. The other terms of the energy defined in 2.2.3 are easier to estimate.
We can assume f_n(R × S^1) ∩ ∂K ≠ ∅. By Lemma 3.3, γ_n^± ⊂ int(K). Thus, we find (s_n, t_n) such that f_n(s_n, t_n) ∈ ∂K. We claim that dF_n is C^0_loc-bounded. If not, we use Lemma A.1 to get a non-constant finite-energy J-holomorphic plane with image in R × K. Lemma A.6 applied to the end of this plane gives us a contractible periodic orbit of X inside K, a contradiction to our assumptions.
Let d_n = a_n(s_n, t_n) and define ũ_n(s, t) = (a_n(s + s_n, t) − d_n, f_n(s + s_n, t)). By the above discussion we have C^1_loc-bounds for ũ_n and, consequently, also C^∞_loc-bounds. Up to a subsequence, we can assume there exists a finite-energy non-constant J-holomorphic cylinder ũ = (b, u) such that ũ_n → ũ in C^∞_loc. Letting C ⊂ R × S^1 be compact and arbitrary, we wish to show that ∫_C u^*ω = 0. By Lemma A.6 we find sequences s_j^± → ±∞ such that the loops u(s_j^±, ·) converge to periodic orbits of X (homotopic to γ in K) as j → ∞. But the only such orbit is γ itself, so that ∫_C u^*ω vanishes. Thus E_ω(ũ) = 0 and we can apply Lemma A.5 to ũ to conclude that X has a periodic orbit in K, homotopic to γ in K, which touches ∂K. This is absurd.
We need a version of the above statement for almost complex structures as we stretch the neck. The proof is similar and omitted.
3.2. Special stable Hamiltonian structures.
Definition 3.6. We call H = (ξ, X, ω) special near γ if there exists a small closed smooth tubular neighborhood K of γ as above such that one of the following (mutually exclusive) conditions holds:
i) H is induced by some contact form, i.e., ξ = ker α, X = X_α and ω = C dα, where α is a contact form defined near K and C > 0 is a constant.
ii) The 1-form λ given by (2.1) is closed on K.
If some K is given for which i) or ii) applies then we say H is special for γ and K.
In order to define local contact homology of the pair (H, γ) one needs to slightly perturb H near K, within the class of stable Hamiltonian structures, to make the closed orbits inside K homotopic to γ (in K) non-degenerate. This may not be possible in general. However, when H is special near γ this perturbation can always be performed. This is well known if i) holds. Assume H falls into case ii). Let α be a primitive for ω on K, which exists since H^2(K, R) vanishes. Possibly after replacing α by Cλ + α, with C ≫ 1, we can assume inf_K i_X α > 0 and that α is a contact form on K. The Reeb vector field X_α is a pointwise positive multiple of X. Let α′ be a C^∞-small perturbation of α near K so that all closed α′-Reeb orbits inside K homotopic to γ are non-degenerate. Then ω′ = dα′ is a small perturbation of ω and H′ = (ξ = ker λ, X′, ω′ = dα′), where the vector field X′ is given by i_{X′} ω′ = 0 and i_{X′} λ = 1, is a stable Hamiltonian structure C^∞-close to H. Moreover, since RX′ = RX_{α′}, closed X′-orbits inside K which are homotopic to γ (in K) are non-degenerate.
Remark 3.7. Consider K = S^1 × B and a smooth Hamiltonian H : K → R satisfying dH_t(0) = 0 for all t. Assume 0 is an isolated 1-periodic orbit of the Hamiltonian vector field X_{H_t} characterized by dH_t = i_{X_{H_t}} ω_0. The typical example of a special stable Hamiltonian structure satisfying ii) in Definition 3.6 is H = (ker dt, X_H, ω_H), where X_H = ∂_t + X_{H_t} and ω_H = dH_t ∧ dt + ω_0. Then x_0(t) = (t, 0) is an isolated 1-periodic orbit of X_H. In this case we can perturb the Hamiltonian H to obtain a non-degenerate perturbation of the stable Hamiltonian structure H.
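As a sanity check (a short computation, not spelled out in the source), one can verify directly that H = (ker dt, X_H, ω_H) in Remark 3.7 satisfies the defining conditions of case ii). Using i_{∂_t} ω_0 = 0,

\[
i_{X_H}\,\omega_H
= i_{\partial_t}\big(dH_t\wedge dt\big) + i_{X_{H_t}}\big(dH_t\wedge dt\big) + i_{X_{H_t}}\,\omega_0
= -\,dH_t + 0 + dH_t = 0,
\qquad
i_{X_H}\,dt = 1,
\]

so λ = dt satisfies (2.1) and is closed, while ω_H has maximal rank with ker ω_H = RX_H.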
4. Local contact homology
Contact Homology was originally introduced in [9] inside the bigger framework of Symplectic Field Theory. Following Floer [11] we define a suitable version of what we call the local contact homology of an isolated orbit, see Definition 4.4.
4.1. Defining local contact homology.
4.1.1. Local chain complexes. Throughout Section 4 we fix H = (ξ, X, ω), an isolated closed X-orbit γ = (x, T), and assume H is special for γ as in Definition 3.6. Since γ is isolated, we will fix an isolating neighborhood K of γ. Moreover, as explained in 3.2, perhaps after shrinking K, we can always perturb H to an arbitrarily C^∞-close H′ = (ξ′, X′, ω′) so that all X′-orbits in K homotopic to γ are non-degenerate. We refer to H′ as a non-degenerate perturbation of H. This notion depends on K and on the homotopy class of γ (in K).
Let H′ = (ξ′, X′, ω′) be a small non-degenerate C^∞-perturbation of H. We denote by P(H′, K, γ) and P_0(H′, K, γ) the sets of closed X′-orbits and of good closed X′-orbits in K which are homotopic to γ, respectively. By Lemma 3.3 every γ′ ∈ P(H′, K, γ) is very close to γ if H′ is close enough to H; in particular, they all lie in the interior of K. P(H′, K, γ) is finite if H′ is close enough to H, since there are automatic period bounds for the orbits in P(H′, K, γ). We fix a homotopy class of ω-symplectic trivializations of the bundle x_T^* ξ → R/Z, where x_T : R/Z → N is the map given by t → x(Tt). It distinguishes homotopy classes of ω′-symplectic trivializations of ξ′ along every γ′ ∈ P(H′, K, γ), which are used to compute Conley-Zehnder indices μ_CZ(γ′). Let C_*(H′, K, γ) be the vector space over Q freely generated by P_0(H′, K, γ) and graded by |γ′| = μ_CZ(γ′) + n − 3.
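For orientation, note an elementary consequence of this grading convention (added here for concreteness): when dim N = 3, i.e. n = 2,

\[
|\gamma'| = \mu_{CZ}(\gamma') + n - 3 = \mu_{CZ}(\gamma') - 1 ,
\]

so a non-degenerate orbit with μ_CZ(γ′) = 2 contributes a generator in degree 1.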
In order to define a differential on C_*(H′, K, γ) we need to choose J ∈ J(H) and assume that we can find J′ ∈ J(H′) arbitrarily C^∞-close to J (strong or weak) which is regular for the data (H′, K, γ) in the following sense. Consider the set F(J′, K, γ) of finite-energy J′-holomorphic maps F : R × S^1 → R × N with a positive (negative) puncture at +∞ × S^1 (at −∞ × S^1), with image in R × K, and asymptotic to orbits in P(H′, K, γ). Then we call J′ regular for the data (H′, K, γ) if the linearized Cauchy-Riemann equation at every F ∈ F(J′, K, γ) determines a surjective Fredholm operator in a standard functional analytical set-up; see [33].
Remark 4.1. The above described transversality assumption does not hold in general. The transversality issues in Symplectic Field Theory are expected to be solved by the work of Hofer, Wysocki and Zehnder [22,23,24].
Let γ′, γ″ ∈ P(H′, K, γ), and consider the moduli spaces M_{K,J′}(γ′; γ″) consisting of equivalence classes of triples (t^+, t^-, F), where t^± ∈ S^1 and F ∈ F(J′, K, γ) is asymptotic to γ′ and γ″ at the positive and negative puncture, respectively. Moreover, writing F = (a, f), it is required that the loops f(s, ·) converge to the asymptotic orbits with markers, i.e. f(s, t^±) converges to the corresponding marked points as s → ±∞. Two triples (t^+, t^-, F) and (τ^+, τ^-, G) are equivalent if there exist Δs, Δt ∈ R such that F(s, t) = G(s + Δs, t + Δt) and t^± + Δt = τ^±. Under the above-mentioned regularity assumption, according to Theorem 0 from [33], M_{K,J′}(γ′; γ″) is a smooth orbifold of dimension |γ′| − |γ″|. When γ′ ≠ γ″ these spaces are equipped with a free R-action induced by translations in the first coordinate of the target manifold R × N.
The proof of the following statement will be deferred to the end of 4.1.1.
Under the above assumptions on H′ and J′ we follow [4] and associate to every [t^+, t^-, F] ∈ M_{K,J′}(γ′; γ″) with |γ′| − |γ″| = 1 a sign ε([t^+, t^-, F]) = ±1, by using suitable coherent orientations of these moduli spaces (and comparing them with the orientation given by the infinitesimal R-action). These signs are well-defined even when γ′ or γ″ is a bad orbit. Following [9], we set n(γ′, γ″) to be the corresponding signed count of elements of M_{K,J′}(γ′; γ″)/R for every pair γ′, γ″ ∈ P_0(H′, K, γ) satisfying |γ′| − |γ″| = 1. Set n(γ′, γ″) = 0 otherwise. Finally, we define a linear map d on generators γ′ ∈ P_0(H′, K, γ) by d(γ′) = Σ_{γ″} n(γ′, γ″) γ″. Note that the coefficients in the above sum are integers. The proof of the above statement strongly relies on Lemma 3.4 and will be given below.
Definition 4.4. Let γ be an isolated closed orbit for the special stable Hamiltonian structure H = (ξ, X, ω), and take a small isolating neighborhood K for γ. Let H′ be a small non-degenerate perturbation of H, and let J′ ∈ J(H′) be a small perturbation of J in the strong C^∞-topology which is regular for the data (H′, K, γ). The local contact homology HC(H, γ) is defined as the homology of the complex (C_*(H′, K, γ), d).
The remainder of Section 4 is devoted to showing that this definition does not depend on the choices of K, H′ and J′ with the above properties.
Proof of Lemma 4.2. The argument is standard. If the lemma does not hold we find a sequence C_n ∈ M_{K,J′}(γ′; γ″)/R of distinct elements. Energy bounds for {C_n} are automatically guaranteed by Lemma 3.4, since the data (H′, J′) is assumed arbitrarily close to (H, J). The limiting behavior of the sequence is described by the SFT Compactness Theorem from [3]. Using a primitive for ω on K one constructs an exact symplectic form on R × K taming J′, so that the limiting holomorphic building of the sequence C_n does not contain spheres. Also, finite-energy J′-holomorphic punctured spheres in R × K must have positive punctures. Since there are no contractible X′-orbits in K, Lemma A.1 and Lemma A.6 together imply that the limiting holomorphic building does not contain planes. Hence, it must be a broken cylinder with possibly many levels, all contained in R × K. However, by additivity of the Fredholm indices and regularity of J′ there is only one level, which must be an element of M_{K,J′}(γ′; γ″)/R. But these are isolated, again by regularity, a contradiction.
Proof of Lemma 4.3. The proof follows a standard argument; see [2]. Consider closed X′-orbits γ̄, γ̄′ ∈ P(H′, K, γ) satisfying |γ̄| − |γ̄′| = 2. We need to show first that sequences of elements in M_{K,J′}(γ̄; γ̄′)/R must necessarily converge to a 2-level holomorphic building, its upper level being an element of some M_{K,J′}(γ̄; γ̄″)/R and its lower level belonging to M_{K,J′}(γ̄″; γ̄′)/R. This follows from an argument similar to that given in the proof of Lemma 4.2, using that there are no finite-energy J′-holomorphic spheres in R × K, that limiting holomorphic buildings of sequences in M_{K,J′}(γ̄; γ̄′)/R do not contain planes, and that J′ is regular for the relevant cylinders. Clearly, we used compactness of K and the automatic energy bounds from Lemma 3.4, since (H′, J′) can be as close to (H, J) as we want. Secondly, one must show that all such 2-level broken cylinders arise as SFT-limits of sequences in M_{K,J′}(γ̄; γ̄′)/R. To this end we argue that, since we allow (H′, J′) to be as C^∞-close to (H, J) as we please, all maps F = (a, f) ∈ F(J′, K, γ) asymptotic to orbits γ̄^± ∈ P(H′, K, γ) satisfy f(R × S^1) ⊂ int(K); this follows from an application of Lemma 3.4. Therefore, using the assumed regularity, we can glue a cylinder in M_{K,J′}(γ̄; γ̄″)/R with a cylinder in M_{K,J′}(γ̄″; γ̄′)/R and again obtain a cylinder in M_{K,J′}(γ̄; γ̄′)/R (the projection of the image of the glued cylinder still lies in int(K)). Thus d^2 counts algebraically the boundary points of compact 1-dimensional moduli spaces (broken cylinders with a bad orbit in between cancel each other).
4.1.2. Chain maps. The first step to prove that Definition 4.4 is well-posed is to define suitable chain maps between the chain complexes induced by different perturbations. We consider C^∞-small non-degenerate perturbations H′ = (ξ′, X′, ω′), H″ = (ξ″, X″, ω″) of H, as explained in 4.1.1. Also, we select J′ ∈ J(H′) and J″ ∈ J(H″) C^∞-close to J and regular for the data (H′, K, γ) and (H″, K, γ), respectively.
We assumed H is special for γ. Consequently, according to Definition 3.6, it is either induced by some contact form α, or the 1-form λ as in (2.1) is closed. In the first case we consider Ω_0 = d(e^a α), and in the second case we consider Ω_0 = d(Ae^a λ + α), where α is some primitive of ω on K, A ≫ 1 and a is the R-coordinate. In both cases J is Ω_0-compatible. For any L > 0 we can find a small exact perturbation Ω of Ω_0 on [−L, L] × K which agrees with a positive multiple of ω′ on T({L} × K) and with a positive multiple of ω″ on T({−L} × K). For any fixed L > 0 we may find an almost complex structure J̃ ∈ J_L(J″, J′). Taking J′, J″ sufficiently C^∞-close to J, we can find such J̃ arbitrarily close to J in the C^∞-strong topology. Then J̃ will be Ω-tamed when Ω is a small perturbation of Ω_0 as described above. Ω will be used to define the energy of J̃-holomorphic maps.
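A quick computation (standard, though not spelled out in the source) shows why J is Ω_0-compatible in the contact case i):

\[
\Omega_0 = d(e^a\alpha) = e^a\big(da\wedge\alpha + d\alpha\big),
\qquad
\Omega_0(\partial_a, J\partial_a) = \Omega_0(\partial_a, X_\alpha) = e^a > 0,
\]
\[
\Omega_0(v, Jv) = e^a\,d\alpha(v, Jv) > 0 \quad \text{for } 0\neq v\in\ker\alpha ,
\]

and the mixed terms vanish since α(v) = da(v) = 0 for v ∈ ker α and i_{X_α} dα = 0.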
Analogously as before, consider the space F(J̃, K, γ) of finite-energy J̃-holomorphic maps F = (a, f) : R × S^1 → R × N, with image in R × K and in the homotopy class of γ (meaning that t → f(s, t) is homotopic to γ in K, see 4.1.1), with a positive (negative) puncture at +∞ × S^1 (at −∞ × S^1). We need again to assume that regularity is achieved for such cylinders by arbitrarily small perturbations of J̃ within J_L(J″, J′). After such a perturbation, in the appropriate functional analytical set-up, the linearized Cauchy-Riemann equations at all F ∈ F(J̃, K, γ) determine surjective Fredholm operators. In this case we call J̃ regular for the data ((H′, J′), (H″, J″), K, γ).
Remark 4.5. Such transversality assumptions are not expected to hold in general, and one needs the difficult analytical tools from [22,23,24] in order to achieve transversality in a suitable sense.
Given any γ′ ∈ P(H′, K, γ) and γ″ ∈ P(H″, K, γ), we consider the space M_{K,J̃}(γ′; γ″) of equivalence classes of triples (t^+, t^-, F), where t^± ∈ S^1 and F ∈ F(J̃, K, γ) is asymptotic to γ′ and γ″ at the positive and negative puncture, respectively. The equivalence relation is exactly as discussed in 4.1.1. Under the above transversality assumption, this is a smooth orbifold of dimension |γ′| − |γ″|; see Theorem 0 in [33]. If |γ′| − |γ″| = 0 one can associate signs ε([t^+, t^-, F]) to classes [t^+, t^-, F] ∈ M_{K,J̃}(γ′; γ″): these spaces are 0-dimensional and one has the coherent orientations obtained from [4]. Again, there is no need to assume γ′ or γ″ is good.

Proof. The proof of the above statement is entirely analogous to that of Lemma 4.2. Only note that one can construct an exact symplectic form on R × K taming J̃. For that one uses primitives of ω′, ω″ on K which are C^∞-close to a given primitive of ω. There are automatic energy bounds for sequences in M_{K,J̃}(γ′; γ″) by Lemma 3.4. Hence limiting holomorphic buildings of such sequences contain no spheres. Since there are no finite-energy pseudo-holomorphic planes with respect to J′, J″ or J̃ in R × K (X′ and X″ have no contractible orbits in K, see Lemma A.6), any limiting holomorphic building of a sequence in M_{K,J̃}(γ′; γ″) must be a broken cylinder with possibly many levels. An index argument concludes the proof, since we assume regularity for all relevant cylinders.
Analogously to [9] we define, by these signed counts, a chain map Φ : C_*(H′, K, γ) → C_*(H″, K, γ). As in (4.4), the coefficients in (4.6) are integers.

Proof. The argument is entirely analogous to that for Lemma 4.3. Again the relevant facts are: a) there are no pseudo-holomorphic spheres or finite-energy planes in R × K with respect to J′, J″ or J̃; b) the closures of the images of the projections onto N of all cylinders in F(J′, K, γ), F(J″, K, γ) or F(J̃, K, γ) are strictly contained in int(K). Note that b) is achieved by an application of Lemma 3.4, since J′, J″ and J̃ are allowed to be taken arbitrarily close to J. Lemma 3.4 also provides automatic energy bounds for these cylinders (note our careful choices of taming symplectic forms on [−L, L] × K). Let us fix γ̄ ∈ P(H′, K, γ) and γ̄′ ∈ P(H″, K, γ) satisfying |γ̄| − |γ̄′| = 1. By a), the automatic energy bounds and the assumed regularity for cylinders, SFT-limits of sequences in M_{K,J̃}(γ̄; γ̄′) must be holomorphic buildings of height 2 with two cylindrical levels, one of them J̃-holomorphic and the other holomorphic with respect to either J′ or J″. The upper level is asymptotic to γ̄ at its positive puncture, and the lower level is asymptotic to γ̄′ at its negative puncture. Now b) implies that the height-2 holomorphic buildings just described arise as limits of sequences in M_{K,J̃}(γ̄; γ̄′), since after glueing both levels we must obtain a cylinder with image inside R × K.

We need the path {J̃_t} to be regular for the data ((H′, J′), (H″, J″), K, γ) in the following sense. Let F({J̃_t}, K, γ) denote the set of pairs (t, F), where t ∈ [0, 1] and F : R × S^1 → R × K is a finite-energy J̃_t-holomorphic cylinder with a positive (negative) puncture at +∞ × S^1 (at −∞ × S^1), asymptotic to an orbit in P(H′, K, γ) at the positive puncture and to an orbit in P(H″, K, γ) at the negative puncture. Regularity of {J̃_t} means that, in a standard functional analytical set-up, the linearization of the (t-dependent) Cauchy-Riemann equations at every (t, F) ∈ F({J̃_t}, K, γ) is surjective. We will assume that any path in J_{τ,L}(J″, J′) can be slightly C^∞-perturbed to a regular path. Note that since J̃_0, J̃_1 are already assumed regular, the perturbation may be done keeping the endpoints fixed.
Remark 4.8. The above-mentioned regularity may not always be obtainable. One requires analytical tools from [22,23,24] to describe the precise set-up where the appropriate notion of transversality can be achieved. The proof is entirely analogous to those of Lemma 4.2 and Lemma 4.6 and will be omitted. We need automatic energy bounds for elements of F({J̃_t}, K, γ), which are provided by Lemma 3.4 in view of the special form of our small perturbations of the data (H, J) and of the properties of Ω.
We set n(γ′, γ″) to be the corresponding signed count when |γ′| − |γ″| = −1, and n(γ′, γ″) = 0 when |γ′| − |γ″| ≠ −1. These counts define a degree +1 map which is a chain homotopy between the chain maps induced by J̃_0 and J̃_1. The argument is analogous to the ones given to prove Lemmas 4.3 and 4.7; only note here that moduli spaces M_{K,{J̃_t}}(γ̄; γ̄′) with γ̄ ∈ P(H′, K, γ) and γ̄′ ∈ P(H″, K, γ) satisfying |γ̄| = |γ̄′| do have a genuine boundary, corresponding to elements of M_{K,J̃_0}(γ̄; γ̄′) and M_{K,J̃_1}(γ̄; γ̄′). As before one needs to make strong use of Lemma 3.4 to conclude that certain glued cylinders lie in R × K. For any R > L we can consider the almost complex structure J̃_R defined by glueing suitable translates of J̃_+ and J̃_- along a neck of length 2(R − L). Note that J̃_R ∈ J_{L<R}(J′) ⊂ J_{R+L}(J′, J′), and if J′, J″, J̃_+, J̃_- are sufficiently close to J then J̃_R lies in any given neighborhood of J in the C^∞-strong topology, uniformly in R > L. As is well known, for all choices γ̄^± ∈ P(H′, K, γ) and γ̄′ ∈ P(H″, K, γ) satisfying |γ̄^+| = |γ̄^-| = |γ̄′|, when R is large enough there is a surjective glueing map onto M_{K,J̃_R}(γ̄^+; γ̄^-). Regularity is crucial in order to get this map well-defined. Also, finite-energy pseudo-holomorphic cylinders in K with respect to J̃_± connecting orbits homotopic to γ (in K) have the closure of the projection of their images onto K contained in the interior of K, by Lemma 3.4. So the same holds for the glued cylinders, and this is the reason why they lie in M_{K,J̃_R}(γ̄^+; γ̄^-). How the cylinders are glued is determined by the asymptotic markers at the punctures corresponding to the orbit γ̄′, and different configurations can induce the same glued cylinder. So this map is not injective: the same glued cylinder appears m_{γ̄′} times. Thus, for fixed γ̄^± and γ̄′ as above with |γ̄^+| = |γ̄^-| = |γ̄′|, we have the formula (4.12) relating the corresponding counts. The fact that the orientations are coherent under glueing was used above. It follows from (4.12) that Φ_R = Φ_- ∘ Φ_+, where Φ_± are the chain maps (4.5) induced by J̃_±, and Φ_R is the chain map induced by J̃_R. As explained above, the glued cylinders have images contained in R × int(K). The glueing analysis gives surjectivity of the linearized Cauchy-Riemann operators at the maps parametrizing these glued cylinders, so that J̃_R is regular for the data ((H′, J′), (H′, J′), K, γ) in the sense explained in 4.1.2. Recall that J̃_R can be arranged to lie in a small neighborhood of J (in the C^∞-strong topology).
Note that J̃_R ∈ J_{L<R}(J′) is adjusted to ω″ on the neck [L − R, R − L] × K. Moreover, one can find regular homotopies {J̃_t} ⊂ J_{L<R}(J′) connecting J̃_R to J′ inside an arbitrarily small neighborhood of J in the strong C^∞-topology, since we are allowed to assume J′, J″, J̃_+, J̃_- are arbitrarily close to J. Moreover, the convex combination ω_t := (1 − t)ω′ + tω″ is a path of closed 2-forms near K of maximal rank (since ω′ ∼ ω″), and the path J̃_t can be arranged to be adjusted to ω_t on the neck [L − R, R − L] × K. We can apply Lemma 3.5 to conclude that finite-energy J̃_R-holomorphic cylinders F = (a, f) in R × K connecting orbits homotopic to γ in K satisfy f(R × S^1) ⊂ int(K). Note that we have automatic energy bounds for such cylinders, and that the conclusion we obtained is independent of R > L.
Thus, we can argue as previously explained in 4.1.3 to get a chain homotopy between the chain map Φ_R and the chain map induced by the R-invariant J′. This last chain map induces the identity already at the chain level. In view of (4.13) we conclude that Φ_- ∘ Φ_+ is the identity at the homology level. We have proved:

Lemma 4.11. Suppose that H′, H″ are sufficiently small special non-degenerate C^∞-perturbations of H. Suppose also that J′ ∈ J(H′) and J″ ∈ J(H″) are sufficiently C^∞-close to J and regular for the data (H′, K, γ) and (H″, K, γ), respectively. Then the homologies of (C_*(H′, K, γ), d) and (C_*(H″, K, γ), d) defined above are isomorphic.
It follows from our discussion that there are well-defined graded vector spaces HC_*(H, K, γ, J), given by the homology of the chain complex (C_*(H′, K, γ), d), where K is a small tubular neighborhood of γ and the data (H′, J′) close to (H, J) is carefully chosen as above. We still need to address the independence of HC_*(H, K, γ, J) of J and K, which will be done below.
4.2. Invariance of local contact homology.

Lemma 4.12. Let {H_s = (ξ_s, X_s, ω_s)}_{s∈[0,1]} be a smooth family of stable Hamiltonian structures on a manifold N, and let J_s ∈ J(H_s) be a smooth 1-parameter family of R-invariant almost complex structures. Let γ be a closed X_0-orbit and let K be a small compact tubular neighborhood of (the geometric image of) γ such that for every s ∈ [0, 1] the following hold:
(a) the vector field X_s is a pointwise positive multiple of X_0 on the geometric image of γ;
(b) γ is the only closed orbit of X_s contained in K in its free homotopy class (of loops in K);
(c) X_s has no closed orbit contained in K which is contractible in K;
(d) either H_s is induced by some contact form on K, or the 1-form λ_s associated to H_s as in (2.1) is closed on K (see Definition 3.6).
Then HC_*(H_0, K, γ, J_0) ≅ HC_*(H_1, K, γ, J_1).
In (b) above we abuse notation and view γ as a closed X_s-orbit. This is possible in view of (a).
Proof. It is an immediate consequence of Lemma 4.11 that for every s_0 ∈ [0, 1] there exists ε > 0 such that HC_*(H_s, K, γ, J_s) = HC_*(H_{s_0}, K, γ, J_{s_0}) for all s ∈ [0, 1] satisfying |s − s_0| < ε. In fact, if not, we find a sequence s_n → s_0 such that HC_*(H_{s_n}, K, γ, J_{s_n}) ≠ HC_*(H_{s_0}, K, γ, J_{s_0}) for all n. By our transversality assumptions, there are very small C^∞-perturbations (H_n, J_n) of (H_{s_n}, J_{s_n}) such that H_n is non-degenerate, J_n is regular for the data (H_n, K, γ), (H_n, J_n) → (H_{s_0}, J_{s_0}) in C^∞ as n → ∞, and the conclusions of Lemma 3.4 hold for all J_n. Moreover, the homology of the chain complex (C_*(H_n, K, γ), d), where d is defined using J_n, is HC_*(H_{s_n}, K, γ, J_{s_n}). However, Lemma 4.11 says that these homology groups are also equal to HC_*(H_{s_0}, K, γ, J_{s_0}) when n is large, a contradiction. The conclusion now follows from compactness of [0, 1].
As a consequence we can drop the dependence on J of the local contact homology of the data (H, K, γ, J). It is easy to see that it is also independent of the small tubular neighborhood K in which γ is the only closed Hamiltonian orbit in its free homotopy class (of loops in K). We will write simply HC(H, γ).
5. Local contact homology of isolated prime Reeb orbits
In this section we establish the relation between local contact homology of an isolated prime Reeb orbit and the associated Poincaré return map to a local cross section.
Proposition 5.1. Let α be a contact form on a manifold N , and γ be an isolated prime Reeb orbit. Let Σ ⊂ N be an embedded hypersurface transverse to γ at a point p ∈ γ, so that the local first return map ϕ : (U, p) → (Σ, p) is well-defined on a small neighborhood U of p in Σ. Then HC(α, γ) and HF (ϕ, p) are isomorphic.
In the above statement we denote by HF(ϕ, p) the local Floer homology at the isolated fixed point p of the germ of symplectic diffeomorphism ϕ of the symplectic manifold (Σ, dα|_Σ). The isomorphism in Proposition 5.1 is defined only up to an even shift in the grading, since the grading of the local Floer homology of a germ of Hamiltonian diffeomorphism near an isolated fixed point is only defined up to an even shift; see [16].

Lemma 5.2. There are a tubular neighborhood K ≅ R/Z × B of the geometric image of γ and coordinates in which γ corresponds to R/Z × 0 and α = H dt + λ_0, where H : K → R satisfies H_t(0) = T, dH_t(0) = 0, and λ_0 = (1/2) Σ_{k=1}^{n−1} (q_k dp_k − p_k dq_k).

Proof. First, it is simple to get a tubular neighborhood K ≅ R/Z × B such that α|_{R/Z×0} = T dt and dα restricted to 0 × R^{2n−2} ⊂ T_{(t,0)}(R/Z × B) coincides with ω_0, for all t ∈ R/Z. Now, by a parametrized version of Darboux's Theorem for symplectic forms, we can change coordinates to obtain dα|_{T({t}×B)} = ω_0 for all t. Let the α-Reeb flow be denoted by φ_t. On a small neighborhood U of 0 ∈ R^{2n−2} we find a smooth function τ : [0, 1] × U → R such that φ_{τ(t,z)}(0, z) ∈ {t} × B. The maps ϕ_t(z) = φ_{τ(t,z)}(0, z) defined on U are symplectic embeddings fixing the origin. Hence, we can find a smooth Hamiltonian H_t defined near 0 such that ϕ_t is its Hamiltonian flow. Moreover, H_t can be arranged to be 1-periodic in t and, consequently, defines a smooth function near R/Z × 0. There is no loss of generality in assuming H_t(0) = T. It must satisfy dH_t(0) = 0 since 0 is left fixed. Consider the vector field X_H = ∂_t + X_{H_t}, where dH_t = i_{X_{H_t}} ω_0. By the definition of ϕ_t we get i_{X_H} dα = 0. But the coordinates obtained so far guarantee that dα = β_t ∧ dt + ω_0 for some 1-periodic smooth family of 1-forms β_t defined near 0 ∈ R^{2n−2}. Consequently β_t = dH_t; in other words, dα = dH_t ∧ dt + ω_0.
Let α_1 = H dt + λ_0, so that d(α − α_1) = 0. Moreover, ∫_{R/Z×0} (α − α_1) = 0 and, consequently, we find a smooth function f defined near R/Z × 0 such that df = α − α_1. After subtracting a constant we can assume f = 0 on R/Z × 0. Consider α_s = (1 − s)α + sα_1 and the vector field Y_s = f X_{α_s} where, for each s ∈ [0, 1], X_{α_s} is the Reeb vector field of the contact form α_s. Denoting by ψ_s the flow of Y_s, we get ψ_s^* α_s = α for every s. Moreover, R/Z × 0 is left fixed by ψ_s. Using ψ_1 we obtain the desired coordinates.

Proof of Proposition 5.1. By Lemma 5.2 we may interpolate, through stable Hamiltonian structures H_s satisfying the hypotheses of Lemma 4.12, between the structure induced by α and the structure (ker dt, X_H, dH_t ∧ dt + ω_0), where X_H = ∂_t + X_{H_t}. Since the 2-form is independent of s, the conditions of Lemma 4.12 are fulfilled, so that HC_*(H_s, γ) does not depend on s ∈ [0, 1]. It is easy to check that HC_*(H_1, γ) coincides with the local Floer homology of the isolated 1-periodic orbit 0 of the Hamiltonian H_t, up to an even shift in the grading, since the homotopy class of dα-symplectic trivializations along γ induced by the choice of coordinates given by Lemma 5.2 was not specified. This concludes the argument.
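The key computation behind the flow trick above, spelled out for the reader (it is the standard Moser–Gray scheme, not written out in the source):

\[
\frac{d}{ds}\,\psi_s^*\alpha_s
= \psi_s^*\Big(L_{Y_s}\alpha_s + \tfrac{d}{ds}\alpha_s\Big)
= \psi_s^*\big(d(i_{Y_s}\alpha_s) + i_{Y_s}d\alpha_s + \alpha_1-\alpha\big)
= \psi_s^*\big(df + 0 + \alpha_1-\alpha\big) = 0,
\]

using i_{Y_s}α_s = f·α_s(X_{α_s}) = f, i_{Y_s}dα_s = f·i_{X_{α_s}}dα_s = 0 and df = α − α_1; hence ψ_s^*α_s ≡ α for all s.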
6. Estimating local contact homology
In this section we prove the following statement.

Proposition 6.1. Let α be a contact form on a manifold N and let γ = (x, T = mT_0) be an isolated α-Reeb orbit with multiplicity m and minimal period T_0 > 0. Let Σ ⊂ N be an embedded hypersurface transverse to γ at p = x(0), so that the local first return map ψ : (U, p) → (Σ, p) is well-defined on a small neighborhood U of p in Σ. Then dim HC_*(α, γ) ≤ dim HF_*(ψ^m, p) for every * ∈ Z.
The gradings in HC_*(α, γ) and in HF_*(ψ^m, p) are given by the Conley-Zehnder indices computed with respect to homotopy classes of symplectic trivializations induced by a common homotopy class of dα-symplectic trivializations of ξ = ker α along γ, which we fix from now on.
6.1. Geometric set-up and notation. Let n be defined by dim N = 2n − 1, denote the Reeb vector field of α by R and fix J ∈ J(α). Let K ≅ R/Z × B be an isolating neighborhood for γ equipped with coordinates (t, z), z = (q_1, ..., p_1, ...), such that x(t) = (t/T_0, 0), α coincides with dt on R/Z × 0, dα|_ξ coincides with ω_0 = Σ_i dq_i ∧ dp_i along R/Z × 0, and inf_K i_R dt > 0. Here B ⊂ R^{2n−2} is a ball centered at the origin. This choice of coordinates induces a dα-symplectic trivialization of ξ along γ, which is assumed to be in the homotopy class previously chosen.
Consider a small non-degenerate perturbation α′ of α on K, and J′ ∈ J(α′) a small perturbation of J which is regular for the data (α′, K, γ) as explained in 4.1.1. We denote by P the set of closed α′-Reeb orbits in K homotopic to γ, and by P_0 ⊂ P those which are good. Let C_* = C_*(α′, K, γ) be the Q-vector space freely generated by P_0, graded by the Conley-Zehnder indices. Then J′ can be used to define a differential d on C_* and, by Lemma 4.11, if (α′, J′) is sufficiently close to (α, J) the homology of (C_*, d) is the local contact homology HC(α, γ).
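The covering map Π used below is introduced in a part of the text lost to extraction; presumably it is the standard m-fold cover of the tube (a hedged sketch, with K̃ as assumed notation):

\[
\widetilde K := \mathbb{R}/m\mathbb{Z}\times B \ \xrightarrow{\ \Pi\ }\ K = \mathbb{R}/\mathbb{Z}\times B,
\qquad \Pi(t,z) = (t \bmod 1,\, z),
\]

whose deck transformation group is Z_m, generated by σ(t, z) = (t + 1, z); this is the Z_m-action referred to in (6.2).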
The data (Π^*α′, (id × Π)^*J′) is invariant under this action. The lifts of closed α′-Reeb orbits homotopic to γ are precisely the closed Π^*α′-Reeb orbits which go once around the tube K̃ and, consequently, they are all good. Moreover, their Conley-Zehnder indices coincide with the Conley-Zehnder indices of their projections. Let P̃ be the set of closed Π^*α′-Reeb orbits in K̃ going once around the tube, which coincides precisely with the set of lifts of orbits in P. The elements of P̃ freely generate a Q-vector space C̃_* graded by the Conley-Zehnder indices. Since J′ is assumed very close to J in the C^∞-strong topology, (id × Π)^*J′ determines, in the standard way described in Section 4, a differential d̃ on C̃_*. According to Proposition 5.1, the homology of (C̃_*, d̃) coincides with the local Floer homology HF_*(ψ^m, p).
Orbits in P have possibly many lifts to P̃, and the natural projection is still denoted Π : P̃ → P. The generator σ of the Z_m-action (6.2) induces an obvious action on P̃, and we choose a preferred lift for every element of P. Our notation will be the following: if we write φ̄ to denote an element in P then the chosen preferred lift is ϕ. Every orbit φ̄ ∈ P comes with a marked point pt_φ̄ ∈ 0 × B. Its multiplicity m_φ̄ divides m and φ̄ has precisely p = m/m_φ̄ lifts, which are the orbits in O_ϕ := {σ^jϕ : j = 0, ..., p − 1}. Note that σ^{i+p}ϕ = σ^iϕ for all i. The marked point pt_ϕ is chosen in 0 × B and we set pt_{σ^jϕ} = σ^j(pt_ϕ), so that Π(pt_{σ^jϕ}) = pt_φ̄ for j = 0, ..., p − 1. The elements of O_ϕ are simultaneously called good/bad if φ̄ is good/bad. This terminology might be troublesome since all elements of P̃ are SFT-good (all such orbits are simple), but we will proceed without fear of ambiguity. The map Π : P̃ → P induces a linear map Π_* : C̃_* → C_*.

6.2. Finite-energy cylinders and their lifts. Given η, ζ ∈ P we denote by M(η, ζ) the moduli space of finite-energy J′-holomorphic cylinders in R × K with a positive and a negative puncture, asymptotic to η at the positive puncture and to ζ at the negative puncture, with asymptotic markers. Namely, an element is an equivalence class of triples (t^+, t^-, F), where t^± ∈ S^1, F = (a, f) : R × S^1 → R × K is a non-constant finite-energy J′-holomorphic map with a positive puncture at +∞ × S^1 where it is asymptotic to η, with a negative puncture at −∞ × S^1 where it is asymptotic to ζ, and satisfying lim_{s→+∞} f(s, t^+) = pt_η and lim_{s→−∞} f(s, t^-) = pt_ζ. Two such triples (t^+, t^-, F) and (θ^+, θ^-, G) are equivalent if there exist c, Δs ∈ R and Δt ∈ S^1 satisfying F(s, t) = τ_c ∘ G(s + Δs, t + Δt) and t^± + Δt = θ^±. Differently from the notation in Section 4, here we do quotient out by the R-action on the target. The equivalence class of (t^+, t^-, F) is denoted [t^+, t^-, F]. Moduli spaces M^0(η, ζ) of cylinders in R × K without asymptotic markers are defined as sets of equivalence classes of maps as above, where two maps F, G are equivalent if there exist c, Δs ∈ R and Δt ∈ S^1 such that F(s, t) = τ_c ∘ G(s + Δs, t + Δt). The class of such an F is denoted by [F], and there is a natural surjective forgetful map Δ : M(η, ζ) → M^0(η, ζ).

Let F represent a given class [F] ∈ M^0(η, ζ). Then the group Iso(F) of holomorphic self-diffeomorphisms h of R × S^1 fixing the ends ±∞ × S^1 and satisfying F ∘ h = F can be identified with a subgroup of S^1, since such an h must have the form h(s, t) = (s, t + Δt). Although Iso(F) depends on the representative F, its order w[F] = #Iso(F) depends only on [F]. The choice of F determines a subset of S^1 with m_η elements, which are the possible locations of asymptotic markers at the positive puncture. The group Iso(F) acts freely on this set and, consequently, w[F] divides m_η. Analogously, w[F] divides m_ζ. Note that Z_{m_η} and Z_{m_ζ} act on M(η, ζ) by rotation of the asymptotic markers at the positive and negative punctures, respectively. There is a coherent system of orientations of the spaces M(η, ζ), for all choices η, ζ ∈ P, compatible with glueing. These are defined as in [4] even when η or ζ is bad, and determine signs ε[t^+, t^-, F] = ±1 when |η| − |ζ| = 1. By (6.6), the action of Z_{m_η} on M(η, ζ) by rotating the asymptotic marker at the positive puncture is orientation preserving/reversing when η is good/bad. The analogous statement holds for the action of Z_{m_ζ} by rotations of the asymptotic marker at the negative puncture. Hence these signs descend to signs ε[F] on M^0(η, ζ) only when η and ζ are good.
Moduli spaces of finite-energy (id × Π)^*J′-holomorphic cylinders in R × K̃ asymptotic to orbits in P̃, with or without asymptotic markers, are defined in the same way. However, note that all such cylinders are somewhere injective and there are no non-trivial reparametrization groups.
Choose any φ̄, η̄ ∈ P and set p = m/m_φ̄, q = m/m_η̄. There are well-defined projections between the corresponding moduli spaces, and we define the space M(O_ϕ, O_η) analogously. Any finite-energy J′-holomorphic cylinder F = (a, f) representing an element of M^0(φ̄, η̄) can be lifted to (possibly many) finite-energy (id × Π)^*J′-holomorphic cylinders, since the loops t → f(s, t) go m times around the tube K and the projection (6.1) is pseudo-holomorphic with respect to J′ and (id × Π)^*J′. To be more precise, recall the forgetful map Δ (6.5) and

(Footnote: The reader should note that, in view of Lemma 3.4, if F = (a, f) is a cylinder representing an element of M(η, ζ) for η, ζ ∈ P then f(R × S^1) ⊂ int(K), because J′ is assumed very close to some J ∈ J(α). Hence, assuming regularity, such cylinders can be glued to obtain cylinders which again project into a compact subset of int(K).)
consider the bijection (6.8). For each fixed i ∈ {0, ..., p − 1}, a given choice of asymptotic marker t^+ at +∞ × S^1 uniquely determines a lift F̃ = (ã, f̃) of F to R × K̃ asymptotic to the orbit σ^iϕ at the positive puncture and satisfying f̃(s, t^+) → pt_{σ^iϕ} as s → +∞. After this is done there is no control at the negative puncture: the asymptotic limit σ^jη is forced on us, together with the unique location of the asymptotic marker t^- which satisfies f̃(s, t^-) → pt_{σ^jη} as s → −∞. Let us agree to say that [t^+, t^-, F] ∈ M(φ̄, η̄) lifts to M(σ^iϕ, O_η) when the unique lift F̃ = (ã, f̃) of F satisfying lim_{s→+∞} f̃(s, t^+) = pt_{σ^iϕ} also satisfies lim_{s→−∞} f̃(s, t^-) = pt_{σ^jη} for the uniquely determined σ^jη ∈ O_η that F̃ is asymptotic to at the negative puncture.
Obviously, all cylinders in M(O_ϕ, O_η) are obtained by this lifting procedure from some cylinder in M(φ̄, η̄). The projection Π can be used to pull back the system of coherent orientations of moduli spaces of curves in R × K to a system of coherent orientations of moduli spaces of curves in R × K̃; these are clearly compatible with glueing of curves in R × K̃. The generator σ of the covering group determines a bijection (again denoted by σ) M(σ^iϕ, σ^jη) → M(σ^{i+1}ϕ, σ^{j+1}η). It follows that these bijections preserve the pulled-back orientations for all i ∈ {0, ..., p − 1}.
6.3. A Z_m-action on (C̃_*, d̃) by chain maps. Since m_φ̄ is even when φ̄ is bad, we can consider a linear Z_m-action on C̃_* with generator E defined on the generators of C̃_* by

(6.13) E(ϕ) = σϕ, E(σϕ) = σ^2ϕ, ..., E(σ^{p−1}ϕ) = δ_φ̄ ϕ.

Our goal here is to show

Lemma 6.3. The map E induces a Z_m-action on C̃_* by chain maps.
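Why E generates a Z_m-action (a one-line check filling a step the extraction lost; here δ_φ̄ = 1 if φ̄ is good and δ_φ̄ = −1 if φ̄ is bad is assumed, consistently with the surrounding text):

\[
E^{p}(\sigma^i\varphi) = \delta_{\bar\varphi}\,\sigma^i\varphi
\ \Longrightarrow\
E^{m}(\sigma^i\varphi) = \delta_{\bar\varphi}^{\,m/p}\,\sigma^i\varphi
= \delta_{\bar\varphi}^{\,m_{\bar\varphi}}\,\sigma^i\varphi = \sigma^i\varphi ,
\]

since m/p = m_φ̄ and, when φ̄ is bad, m_φ̄ is even.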
The lemma can be restated as the identity (6.14) E ∘ d̃ = d̃ ∘ E, so we need to understand the differential d̃ qualitatively. Of course, it suffices to prove (6.14) on the generators P̃. Fix φ̄, η̄ ∈ P satisfying |φ̄| − |η̄| = 1, and denote p = m/m_φ̄ and q = m/m_η̄. Each cylinder [F] ∈ M^0(φ̄, η̄) reveals a distinct set of coefficients which, loosely speaking, is the contribution to the differential d̃ of the lifts of F to cylinders in R × K̃ (with all possible choices of asymptotic markers) connecting σ^iϕ to σ^jη.
To be more precise, recall the set M_F ⊂ M(O_ϕ, O_η) discussed in 6.2, obtained from the lifts of F. We write M_F^{ij} = M_F ∩ M(σ^iϕ, σ^jη). The coefficients (6.15) are defined as the signed counts of elements of M_F^{ij}, and we get the corresponding formula for d̃. From now on we view the indices i, j as periodic: i ∈ Z_p and j ∈ Z_q. Recall the functions δ^+ : Z_p → {0, 1}, δ^- : Z_q → {0, 1} from (6.11). With these agreements the map E (6.13) acts on O_ϕ and O_η as in (6.18)–(6.19), and to prove (6.14) it suffices to show that for any [F] ∈ M^0(φ̄, η̄) the identity (6.20) holds. In fact, the map (6.10) maps M_F^{ij} bijectively onto M_F^{(i+1)(j+1)}. Thus the corresponding signed counts agree, which is another way of writing (6.20); in the third equality we used (6.12). This concludes the proof of Lemma 6.3.
To prove the above statement we first fix arbitrary orbits φ̄, η̄ ∈ P, denote p = m/m_φ̄, q = m/m_η̄, and split the argument into three cases.

6.4.1. Case 1: φ̄, η̄ are good. Then for any i ∈ {0, ..., p − 1} we have the chain of equalities (6.21): in the second equality we used (6.18), in the third equality we used (6.17), in the fourth equality we used that φ̄, η̄ are good, in the fifth equality we used (6.9), and in the seventh equality we used that φ̄, η̄ are good and that Δ is surjective. For any F representing some [F] ∈ M^0(φ̄, η̄) we have the p × q matrix of coefficients d_F = (d_F^{ij}) and, according to (6.21), the relevant count is the sum over all possible such matrices of the sum of the elements of the i_0-th line. Fixing F, let Φ_0 = [t^+, t^-, F̃] ∈ M_F^{i_0 j_0} be some reference element (as explained before, the lift F̃ = (ã, f̃) of F is uniquely determined by asking lim_{s→+∞} f̃(s, t^+) = pt_{σ^{i_0}ϕ}; the value of j_0 is forced on us). Recalling the action σ (6.10) we have, by Remark 6.2, that M_F = {σ^kΦ_0 : k ∈ Z}. From now on we consider the variables i and j as periodic: i ∈ Z_p, j ∈ Z_q. Analogously, the indices of the matrix d_F will also be seen as periodic. Note that σ^kΦ_0 ∈ M(σ^{i_0+k}ϕ, σ^{j_0+k}η). However, the set M_F might be larger than {Φ_0, σΦ_0, ..., σ^{lcm(p,q)−1}Φ_0}, since Φ_0 need not be equal to σ^{lcm(p,q)}Φ_0. In fact, let x be the minimal positive integer such that σ^{x·lcm(p,q)}Φ_0 = Φ_0. By (6.10), each walk {Φ_0, σΦ_0, ..., σ^{lcm(p,q)−1}Φ_0} on the matrix d_F corresponds to lcm(p, q)/p rotations of the asymptotic marker at the positive puncture, and to lcm(p, q)/q rotations at the negative puncture. In view of the definition of x, after applying the projection (6.7) we conclude that x can be computed as the minimal positive integer for which there exists y ∈ {1, 2, ...} such that the corresponding rotation numbers match.
Substituting y = 1 above we obtain x = m/(w[F]·lcm(p, q)), since pm_φ̄ = qm_η̄ = m. Note that x given by this formula is an integer, since m/w[F] is a common multiple of p and q, because w[F] is a common divisor of m_φ̄ and m_η̄.
Observe that the path {Φ_0, ..., σ^{x·lcm(p,q)−1}Φ_0} visits the space M(σ^{i_0}ϕ, O_η) exactly x·lcm(p, q)/p = m_φ̄/w[F] times, and each visit contributes with alternating signs to the i_0-th line of d_F; this follows from the formula (6.10) for the action σ. Thus, in order to prove the desired cancellation it suffices to show that m_φ̄/w[F] is even. This follows easily from the fact that the Z_{m_φ̄}-action on M(φ̄, η̄) given by rotations of asymptotic markers at the positive puncture is orientation reversing. Thus (6.22) follows from (6.23) and (6.25). This concludes Case 2 and the proof of Lemma 6.4.
6.5. Conclusion of the proof of Proposition 6.1. Consider the average operator

(6.26) A : C̃_* → C̃_*, A = (1/m)(I + E + ··· + E^{m−1}).

Since E^m − I = 0 we have a decomposition C̃_* = ker A ⊕ im A. By Lemma 6.3 we have a subcomplex (im A, d̃) ⊂ (C̃_*, d̃). Let Q : C_* → C̃_* be defined on generators φ̄ ∈ P_0 by Qφ̄ = A(some element of O_ϕ). Then clearly Q is injective and QΠ_* = A. Here we used that A(σ^iϕ) = 0 whenever φ̄ is a bad orbit, and that C_* is generated by the good orbits. It follows that ker Π_* = ker A; here injectivity of Q was used. Thus we get that Π_*|_{im A} : im A → C_* is a linear isomorphism and a chain map. Consequently, the homology of (C_*, d) is equal to the homology of the subcomplex (im A, d̃), and Proposition 6.1 follows immediately.

6.6. Proof of Theorem 1.1. Let γ = (x, T) be an isolated periodic orbit with multiplicity m. Let Σ ⊂ N be an embedded hypersurface transverse to γ at pt = x(0), so that the local first return map ψ : (U, pt) → (Σ, pt) is well-defined on a small neighborhood U of pt in Σ. Following [16], we say that a positive integer j is admissible for γ if λ^j ≠ 1 for all eigenvalues λ ≠ 1 of dψ^m(pt). It follows from Proposition 6.1 and Theorem 1.1 in [16] that the total rank of HC_*(α, γ^j) is less than or equal to the total rank of HF_*(ψ^m, pt) for every admissible j. Now, suppose that γ is simple and every iterate of γ is isolated. In order to prove Theorem 1.1 it remains only to show that we can write the set of natural numbers as a finite union of sets of admissible integers for some iterates of γ. This is the content of the lemma below, which is extracted from [18, Lemma 2]. For the reader's convenience, we will reproduce its proof. We list the elements of the set {j ∈ N : q does not divide jm_k for all q ∈ Q_k} in strictly increasing order j_k^1 < j_k^2 < ···. Let us prove that j_k^i is admissible for γ^{m_k}. The eigenvalues on the unit circle of dψ^{m_k}(pt) are of the form λ_{p/q}^{m_k}, for some eigenvalue λ_{p/q} = e^{2πip/q} of dψ(pt) lying on the circle. The conclusion follows since λ_{p/q}^{m_k} ≠ 1 is equivalent to the condition that q does not divide m_k, and λ_{p/q}^{j_k^i m_k} ≠ 1 is equivalent to the condition that q does not divide j_k^i m_k. So it remains only to show that ∪_{i,k} {j_k^i m_k} = N, but this is easy and left to the reader.
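A small numerical illustration of admissibility (the orbit γ and return map ψ here are hypothetical, added for concreteness): if dψ^m(pt) has eigenvalues {1, e^{2πi/3}, −1}, then j is admissible for γ exactly when

\[
\big(e^{2\pi i/3}\big)^{j}\neq 1 \ \text{ and } \ (-1)^{j}\neq 1
\quad\Longleftrightarrow\quad
3\nmid j \ \text{ and } \ 2\nmid j ,
\]

that is, when j is coprime to 6: j = 1, 5, 7, 11, ...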
7. Morse inequalities
Fix a homotopy class of augmentations [ε] and let α be a contact form for ξ. The action spectrum of α is given by Σ(α) = {A(γ) : γ is a periodic orbit of α}, where A(γ) = ∫_γ α is the action of γ.
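For example (a standard computation, not taken from the source): on the round three-sphere S^3 ⊂ R^4 with α_0 = (1/2)Σ_{j=1,2}(q_j dp_j − p_j dq_j)|_{S^3}, the Reeb flow is the Hopf flow z ↦ e^{2it}z, every simple orbit has action π, and

\[
\Sigma(\alpha_0) = \{\,k\pi : k\in\mathbb{N}\,\}.
\]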
7.1. Filtered linearized contact homology. In this section we will define filtered linearized contact homology for any defining contact form for ξ. In contrast to the non-filtered homology, it depends on the choice of the contact form and of the augmentation. But an augmentation is defined for the differential graded algebra associated to a non-degenerate contact form, and we have to handle possibly degenerate contact forms.
In order to overcome this difficulty, we will fix a defining non-degenerate contact form α_0 for ξ, and some J_0 ∈ J(α_0) assumed generic enough in order to define a differential graded algebra (A(α_0), ∂_0) whose homology is the (full) contact homology of ξ, as explained in [9]: A(α_0) is the supercommutative graded (by |·|) algebra (with a unit) generated by the good closed α_0-Reeb orbits with Q coefficients (c_1(ξ) is assumed to vanish), and ∂_0 is defined by the (algebraic) count of rigid punctured finite-energy spheres with one positive puncture in the symplectization (R × N, J_0). We also fix a cobordism W^α_{α_0}; we briefly say that W^α_{α_0} is a cobordism from α to α_0. This choice can be compared to the choice of a "filling" of the contact manifold (in the fillable case the filling gives a cobordism to the empty set). Fix an augmentation ε_0 for α_0 with homotopy class [ε]. For shorter notation, we will omit the symplectic form and the almost complex structure of the cobordism, although they will be tacitly assumed.
Let α_± be contact forms for ξ and fix a Riemannian metric on N. Given a constant δ > 0, we say that a cobordism W^{α_+}_{α_-} is δ-small if its data is δ-close, with respect to the fixed metric, to that of a symplectization. Notice that given two δ-small cobordisms, their gluing is 2δ-small.
Given −∞ ≤ a < b ≤ ∞ such that a, b ∉ Σ(α) and a constant δ > 0, let V be a sufficiently small neighborhood of α such that a, b ∉ Σ(α′) for every α′ ∈ V and such that every pair of contact forms in V can be joined by a δ-small cobordism.
Let α′ ∈ V be a non-degenerate contact form and J′ ∈ J(α′) be regular enough to get a well-defined differential graded algebra (A(α′), ∂′) whose homology is the contact homology of ξ. Let also W^{α′}_α be a δ-small cobordism from α′ to α with an almost complex structure in J(J̃, J′).
Proposition 7.1. The constant δ can be chosen such that HC^{(a,b),ε}(α′) does not depend on the choice of the δ-small cobordism from α′ to α or on the non-degenerate contact form α′ ∈ V.
The proof is based on the following proposition.

Proof. Since the action spectrum is a closed subset, there exists κ > 0 such that (a − κ, a + κ) ∩ Σ(α) = ∅ and (b − κ, b + κ) ∩ Σ(α) = ∅. Given c > 0 and a non-degenerate contact form α′, denote by A^c(α′) the subalgebra generated by the periodic orbits of α′ with action less than c.
By our choice of κ we have that Ψ and Φ preserve the subalgebras A^a and A^b and, perhaps after making δ smaller, the same holds for a degree +1 map, induced by a family of 2δ-small cobordisms (assumed regular) joining the gluing of W^{α′}_α with W^α_{α′} to the symplectization of α′, that defines a chain homotopy between Φ ∘ Ψ and the identity. The proof then follows from a standard argument.
Proof of Proposition 7.1. Let δ = δ̃/2, with δ̃ given by the previous proposition. Let V be a neighborhood of α as before, that is, such that every pair of contact forms in V can be joined by a δ-small cobordism. We can assume that the fixed cobordism W^α_{α_0} comes from a family of pairs (α_t, J_t) such that α_t = α for t > 1, α_t = α_0 for t < −1, and α_t is non-degenerate for every t in an open and dense subset of [−1, 1]. Choose a value of t such that α_t is non-degenerate and contained in V, and denote the corresponding contact form by α̃. Choosing α̃ sufficiently close to α, we can write W^α_{α_0} as a gluing of cobordisms W^α_{α̃} and W^{α̃}_{α_0}, with W^α_{α̃} being δ-small. The gluing of a δ-small cobordism W^{α′}_α with W^α_{α̃} gives a δ̃-small cobordism W^{α′}_{α̃}. Now, let Ψ^{α̃}_{α_0} : (A(α̃), ∂̃) → (A(α_0), ∂_0) be the chain map induced by W^{α̃}_{α_0}, and set ε̃ = (Ψ^{α̃}_{α_0})^* ε_0. By the relation Ψ^{α′}_{α_0} = Ψ^{α̃}_{α_0} ∘ Ψ^{α′}_{α̃} we have that ε = (Ψ^{α′}_{α̃})^* ε̃. Consequently, the conclusion follows from the previous proposition.
Proof. First we find σ > 0 and a C^∞-neighborhood V of α such that Σ(α′) ⊂ (σ, +∞) for all α′ ∈ V. Since [a − δ, a + δ] ∩ Σ(α) = {a} if δ < σ/2 is small enough, we can assume, possibly after making V smaller, that a ± δ ∉ Σ(α′) for any non-degenerate α′ ∈ V. Let us fix α′ non-degenerate, and assume J′ ∈ J(α′) is regular enough so that the differential graded algebra (A(α′), ∂′) of contact homology is well-defined. Making choices α_0 and W^α_{α_0} as explained before, we glue a very small cobordism W^{α′}_α to W^α_{α_0} to obtain a cobordism W^{α′}_{α_0} inducing a chain map Ψ : (A(α′), ∂′) → (A(α_0), ∂_0), and we use Ψ^*ε_0 to linearize ∂′ and obtain the differential of linearized contact homology HC^{(a−δ,a+δ),Ψ^*ε_0}_*(α′). The proposition follows from the observation that this linearized differential is defined only by counting spheres with one negative puncture, since the presence of extra negative punctures would drop the action by at least σ > 2δ, giving closed Reeb orbits outside the action interval (a − δ, a + δ). In other words, the linearization Ψ^*ε_0 plays no role; the homology HC^{(a−δ,a+δ),Ψ^*ε_0}_*(α′) is cylindrical in essence. Now an easy compactness argument using the results from the Appendix shows that, after taking δ > 0 smaller, we can assume that the cylinders which define the linearized differential connect closed α′-Reeb orbits inside small (isolating) tubular neighborhoods of the closed α-Reeb orbits with action a.
7.2. The Morse type numbers. The main result in this section provides versions of the weak and strong Morse inequalities for contact homology suitable for our applications: the Morse type numbers c_i are finite for every i ≥ 2n − 3 and satisfy the inequalities (7.1) and (7.2).

Remark 7.7. One easily concludes from the proof of Proposition 7.4 the Morse inequalities for filtered contact homology, that is, with a ∉ Σ(α) and a constant C ≥ 0 that does not depend on i, a and ε_0.
8. Proof of the applications

8.1. Proof of Theorem 1.2. An important ingredient in the proof of our applications is the following lemma, which gives uniform bounds for the Morse type numbers of periodic orbits with mean index different from zero and follows easily from Theorem 1.1.
Lemma 8.1. Let γ be an isolated periodic orbit of the Reeb flow of α with mean index different from zero, such that γ^j is isolated for every j ∈ N. Then there exists a constant B > 0 such that Σ_{j∈N} dim HC_i(α, γ^j) ≤ B for every i ∈ Z.

Proof. Since γ^j is isolated for every j ∈ N, we conclude from Theorem 1.1 that there exists a constant C > 0 such that dim HC_i(α, γ^j) < C for every i ∈ Z and j ∈ N. By (7.3), HC_i(α, γ^j) vanishes unless i ∈ [j∆(γ) − 2, j∆(γ) + 2n − 4], so for fixed i only a uniformly bounded number of iterates j contribute. Now the result follows easily.
Proof of Theorem 1.2. We will prove the result in the case that there exists a positive sequence of integers l_i → ∞ such that b^{[ε]}_{l_i}(ξ) → ∞, since the negative case is analogous. Suppose that there exists a contact form for ξ with finitely many simple closed orbits. By inequality (7.3) only periodic orbits with positive mean index can contribute to c_i for i ≥ 2n − 3. By Lemma 8.1 there exists a constant B > 0 such that c_i < B for every i ≥ 2n − 3. Hence by our assumption and inequality (7.1) we obtain a contradiction.
8.2. Invariance of the growth rate. We will reproduce the argument of Seidel in [30, Section 4a] showing the invariance of the growth rate of symplectic cohomology under Liouville isomorphisms. However, the argument has to be adapted to our context, where we have to deal with augmentations. Let α_0 and α_1 be two non-degenerate contact forms for ξ and let W^{α_0}_{α_1} be a cobordism from α_0 to α_1. This cobordism induces a chain map Ψ : (A(α_0), ∂_0) → (A(α_1), ∂_1). An augmentation ε for (A(α_1), ∂_1) yields an augmentation Ψ^*ε for (A(α_0), ∂_0), and Ψ induces an isomorphism Ψ̄ : HC^{Ψ^*ε}(α_0) → HC^{ε}(α_1). As in Section 7.1, given a > 0 and a contact form α, let A^a(α) be the subalgebra generated by the periodic orbits of α with action less than a. It turns out that there exists a constant D_1 > 0 such that Ψ sends A^a(α_0) to A^{D_1 a}(α_1) for every a > 0.
Exchanging the roles of α_0 and α_1 we obtain a chain homomorphism Φ : (A(α_1), ∂_1) → (A(α_0), ∂_0) that sends A^a(α_1) to A^{D_2 a}(α_0) for every a > 0, where D_2 > 0 is a constant. The augmentations Φ^*Ψ^*ε and ε are homotopic, that is, there exists a derivation K such that ε = Φ^*Ψ^*ε ∘ e^{∂_1∘K + K∘∂_1}. One can check that there exists D_3 > 0 such that the chain homomorphism e^{∂_1∘K + K∘∂_1} sends A^a(α_1) to A^{D_3 a}(α_1) for every a > 0. Now, let D = max_i D_i. It turns out that the induced maps in homology fit into a ladder-shaped commutative diagram whose horizontal arrows are induced by Ψ, Φ and e^{∂_1∘K + K∘∂_1}, with the action windows rescaled by D, and whose vertical arrows are induced by the inclusions. Now, suppose that HC^{Ψ^*ε}_*(α_0) ≅ HC^{ε}_*(α_1) is infinite-dimensional since, otherwise, we would have Γ^{Ψ^*ε}(α_0) = Γ^{ε}(α_1) = 0. Then the ladder yields r(Ψ^*ε, α_0, a) ≤ r(ε, α_1, Da) ≤ r(Ψ^*ε, α_0, D^2 a) for every a > 0, where r(ε, α, a) is the rank of ι(HC^{a,ε}_*(α)). Inverting the roles of α_0 and α_1 we conclude that the set {Γ^ε(α) : ε is an augmentation for α} is an invariant of the contact structure.

8.3. Proof of Theorem 1.4. Suppose that there is a contact form α for ξ with finitely many simple periodic orbits γ_1, ..., γ_r. As in Section 7.4, fix a non-degenerate contact form α_0 for ξ, an augmentation ε_0 for α_0 and a cobordism joining α_0 and α. Choose ε_0 such that Γ^{ε_0}(α_0) > 1. Then, arguing as in Section 8.2, we conclude that

(8.2) lim sup_{a→∞} (1/log a) log dim ι(HC^{a,ε_0}(α)) > 1.

8.4. Proof of Theorem 1.5. We will prove the result for the positive mean Euler characteristic, since the negative case is analogous. Let α be a contact form with finitely many simple periodic orbits, and denote by γ_1, γ_2, ..., γ_r those with positive mean index. We claim that the relevant limit exists. Indeed, let B_m and C_m be the left and right sides of inequality (7.2), respectively, for i = m. Using (7.2) for m + 1 and m and Lemma 8.1, we conclude that there exist constants B and C bounding the oscillation, as claimed. Now, notice that for any periodic orbit γ with positive mean index we have, for every m ∈ N, a chain of four inequalities controlling the total contribution of the iterates of γ up to degree m in terms of ⌊m/∆(γ)⌋, where ⌊x⌋ = max{k ∈ Z : k ≤ x}. The first and fourth inequalities follow from Lemma 8.1. The second and third inequalities hold because if j > (m + 2(n − 1))/∆(γ) then j∆(γ) − 2 > m + 2n − 4, and by (7.3) the local contact homology satisfies HC_i(α, γ^j) = 0 if i ∉ [j∆(γ) − 2, j∆(γ) + 2n − 4], for every j.
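The growth rate Γ^ε used above is defined in a part of the text lost to extraction; presumably (following Seidel's convention in [30], stated here as an assumption) it is

\[
\Gamma^{\varepsilon}(\alpha) = \limsup_{a\to\infty}\frac{\log r(\varepsilon,\alpha,a)}{\log a},
\]

so that the sandwich r(Ψ^*ε, α_0, a) ≤ r(ε, α_1, Da) ≤ r(Ψ^*ε, α_0, D^2 a) immediately yields Γ^{Ψ^*ε}(α_0) = Γ^{ε}(α_1).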
By the previous inequalities we arrive at the claimed formula for the mean Euler characteristic.

Proof of Corollary 1.6. When γ^j is non-degenerate for every j, the claim is easy to see.

8.5. Non-hyperbolic periodic orbits. The proofs of Theorems 1.7 and 1.8 follow from the two theorems below.
Theorem 8.2. Suppose that there are finitely many simple closed orbits, all of them hyperbolic. If dim HC n−3 (ξ) < ∞ then there is no closed orbit with Conley-Zehnder index equal to zero.
Proof. Arguing indirectly, let γ be a closed orbit of index zero. It is well known that the index of a hyperbolic periodic orbit ψ satisfies μ_CZ(ψ^k) = kμ_CZ(ψ) for every k ∈ N. In particular, we conclude that μ_CZ(γ^k) = kμ_CZ(γ) = 0 for every k ∈ N. By equality (8.3) and our assumption that there are finitely many simple closed orbits, we also conclude that there are finitely many periodic orbits of index −1 and 1. Hence, since the differential decreases the action, a chain generated by orbits of index zero and sufficiently big action cannot be exact. In particular, there exists k_0 > 0 such that no chain Σ_i a_i γ^{k_i} is exact as long as k_i > k_0 for every i. Let ψ_1, ..., ψ_N be the periodic orbits of index −1. The set of chains of degree n − 4 can be naturally identified with Q^N. Therefore, given k ∈ N we can identify ∂γ^k with a vector v_k := (a^k_1, ..., a^k_N) determined by the relation ∂γ^k = Σ_{i=1}^N a^k_i ψ_i. Consequently, given k_0 < k_1 < ··· < k_{N+1}, the vectors v_{k_1}, ..., v_{k_{N+1}} must be linearly dependent; that is, there exist rational numbers p_1, ..., p_{N+1}, not all zero, such that Σ_i p_i v_{k_i} = 0. Thus the chain p_1 γ^{k_1} + ··· + p_{N+1} γ^{k_{N+1}} is closed and not exact. Since one can take k_0 arbitrarily large, we conclude that dim HC_{n−3}(ξ) = ∞, a contradiction.

Theorem 8.3. Suppose that the bounds above hold for every m ∈ N, and that there are finitely many simple periodic orbits with positive mean index. Then there is a non-hyperbolic closed orbit.
Proof. Arguing indirectly, suppose that every closed orbit is hyperbolic. In particular, every periodic orbit is non-degenerate. First, let us show that there is at least one periodic orbit with positive mean index: indeed, if there were no such orbit, the positive mean Euler characteristic χ^{[ε]}_+(ξ) would vanish. So let γ_1, ..., γ_r be the simple periodic orbits with positive mean index. By (8.3) we have that ∆(γ_k) = μ_CZ(γ_k) is an integer for every 1 ≤ k ≤ r. Let C′ = lcm{C, 2∆(γ_1), ..., 2∆(γ_r)} and c_i(γ_k) = Σ_j dim HC_i(α, γ_k^j). By the relation (8.3) we can compute c_{mC′}(γ_k) for every 1 ≤ k ≤ r and m ∈ N, where ε_k = 1 if γ_k^2 is good and ε_k = 1/2 if γ_k^2 is bad. From this, and since mC′ = (mC′/∆(γ_k))∆(γ_k) with mC′/∆(γ_k) ∈ N, we arrive at a contradiction with Corollary 1.6.

Proof. Fix an augmentation ε_0 for α with homotopy class [ε]. Since every periodic orbit is non-degenerate and there is no periodic orbit of index zero, the claim follows by a direct inspection of the chain complex.

Appendix A. Consider the R-invariant Riemannian metric g_0 = da ⊗ da + λ ⊗ λ + ω(·, J·) on R × N. Domains in C or R × S^1 are equipped with their standard Euclidean metrics. Norms of maps are taken with respect to these metrics.
The point of the following lemmas is that X may be very degenerate, so the results from [3] are not available if one wants to analyze sequences of J_n-holomorphic maps with bounded energy. In any case, we note that the following arguments are contained in [19].
Remark A.2. The proof of Lemma A.1 shows that one can replace the domain (C, i) of the maps F_n by (D, i) and assume z_n → 0, or by ([0, +∞) × R/Z, i) and assume z_n = (s_n, t_n) satisfies s_n → +∞. The conclusion is exactly the same in both cases.

Lemma A.3. Let F = (a, f) : C → R × K be a non-constant J-holomorphic map satisfying ∫_C f^*ω = 0. Then there exist a (not necessarily periodic) trajectory x of X and an entire function H : C → C such that F = Z_x ∘ H, where the J-holomorphic immersion Z_x : C → R × N is defined by Z_x(s + it) = (s, x(t)). If, in addition, |dF| is bounded then H(z) = αz + β with α ≠ 0.
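For the reader, a two-line check (not in the source) that Z_x is J-holomorphic: writing z = s + it,

\[
dZ_x(\partial_s) = \partial_a, \qquad dZ_x(\partial_t) = \dot x(t) = X\circ x(t), \qquad
J\,dZ_x(\partial_s) = J\partial_a = X\circ x(t) = dZ_x(\partial_t),
\]

so dZ_x ∘ i = J ∘ dZ_x, using that x is an X-trajectory and that J∂_a = X for J ∈ J(H).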
Proof. The identity ∂̄_J(F) = 0 tells us that ∫_C f*ω = 0 ⇒ f*ω ≡ 0, that df takes values in RX ∘ f, and that da(z) = 0 ⇔ df(z) = 0 ⇔ dF(z) = 0. Fix z_1, z_0 ∈ C and any smooth curve z(t) : (−ε, 1+ε) → C satisfying z(0) = z_0 and z(1) = z_1. Let x : R → N be the trajectory of X satisfying x(0) = f(z_0). There is a unique function g(t) defined by df(z(t)) · z′(t) = g(t) X ∘ f ∘ z(t), since df takes values in RX ∘ f. Then Y(t, p) = g(t) X(p) defines a time-dependent vector field on (−ε, 1+ε) × N. Consider h(t) := ∫_0^t g(τ) dτ. Then f ∘ z and x ∘ h solve β̇ = Y(t, β) with the same initial condition, and hence are equal. Since z_1 was arbitrary, it follows that F(C) ⊂ Z_x(C). Assume x is not periodic. Then there exists a unique function H : C → C satisfying F = Z_x ∘ H, because Z_x is 1-1. Since Z_x is a J-holomorphic immersion we conclude, using the similarity principle, that H is holomorphic; see Lemma 2.4.3 from [26]. When x is periodic, the map Z_x descends to a map Z̄_x defined on R × R/TZ, where T > 0 is the minimal period of x. As before, there exists a unique holomorphic function H̄ : C → R × R/TZ satisfying F = Z̄_x ∘ H̄, since Z̄_x is 1-1. Clearly H̄ can be lifted to a holomorphic map H : C → C satisfying F = Z_x ∘ H.
If w ∈ C and ζ ∈ T_w C then there is an estimate |ζ| ≤ k |dZ_x(w) · ζ|, the constant k being independent of w and ζ. This follows very easily from the particular form of the function Z_x and the nature of the metric g_0. Thus |dH| is bounded if |dF| is, and the conclusion follows from Liouville's Theorem.
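To spell out the last step (this is only a restatement of the argument just given, with k the constant above), the chain rule for F = Z_x ∘ H combines with the pointwise estimate as follows:

\[
|dH(w)\,\zeta| \;\le\; k\,\bigl|dZ_x(H(w))\cdot dH(w)\,\zeta\bigr| \;=\; k\,\bigl|dF(w)\,\zeta\bigr|,
\]
so that $\sup_{\mathbb{C}}|dH| \le k \sup_{\mathbb{C}}|dF| < \infty$. Hence $H'$ is a bounded entire function, therefore constant by Liouville's Theorem, and $H(z) = \alpha z + \beta$; since $F$ is non-constant, $\alpha \neq 0$.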
The next lemma is an important characterization of non-constant finite-energy cylinders in cylindrical cobordisms.
Proof. We can think of F(s + it) as defined on C and 1-periodic in t. We claim that |dF| is bounded. If not, let z_n satisfy |dF(z_n)| → ∞ and consider F_n(z) := τ_{−c_n} ∘ F(z + z_n) with c_n = a(z_n). By Lemma A.1 applied to F_n, together with Lemma A.4, we can assume, up to the choice of a subsequence, that there exist r_n → 0^+ and z_n′ ∈ C satisfying |z_n′ − z_n| → 0 and lim inf_{n→∞} ∫_{B_{r_n}(z_n′)} f*ω > 0, contradicting our hypotheses. By Lemma A.3 we find a trajectory x, a sign ε = ±1 and constants T > 0, α = εT + ib, β = a_0 + ib_0 such that F(s + it) = Z_x(αz + β) = (εTs − bt + a_0, x(bs + εTt + b_0)). Since F is 1-periodic in t we have b = 0 and x is T-periodic.
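For completeness, the periodicity step can be written out explicitly (using the constants just introduced):

\[
F(s+i(t+1)) = F(s+it) \;\Longrightarrow\; \epsilon T s - b(t+1) + a_0 = \epsilon T s - bt + a_0 \;\Longrightarrow\; b = 0,
\]
and then, looking at the second component with $b = 0$,
\[
x(\epsilon T (t+1) + b_0) = x(\epsilon T t + b_0) \quad \text{for all } t,
\]
so $x$ is $T$-periodic.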
We wish to show that, as in the contact case [19], for finite-energy curves with image in K it is possible to distinguish between positive and negative punctures, and that non-removable punctures give periodic orbits of X. In order to do so, we assume that ω has a primitive α on a neighborhood of K such that inf_K ι_X α > 0. Note that α is a contact form near K. In the language of [20] this means that X is Reeb-like near K, since it is a positive multiple of the Reeb vector field of α.
Following [19], if F = (a, f) : [0, +∞) × R/Z → R × K is a finite-energy J-holomorphic map, then the limit m := lim_{s→∞} ∫_{{s}×R/Z} f*α exists, since E_ω(F) < ∞. Under these assumptions the following important result, due to Hofer [19] in the contact case, holds.
Lemma A.6. Assume m ≠ 0 and let ε = ±1 be its sign. Then εa(s, t) → +∞ as s → ∞, and for every sequence s_n → ∞ there exist a subsequence n_j → ∞, a periodic orbit γ = (x, T) ∈ P(H) and c ∈ R such that f(s_{n_j}, t) → x(εTt + c) in C^∞ as j → ∞.
We give a proof here since the statement above cannot be found in the literature. Note the difference with the results from [19]: X is not the Reeb vector field of α near K, and ξ is not a contact structure (it might even be integrable).
Proof. First we show that |dF| is bounded. If not, let (ρ_n, t_n) satisfy |dF(ρ_n, t_n)| → ∞ and ρ_n → ∞. Define F_n(s, t) := F(s + ρ_n, t) and write F_n = (a_n, f_n). It follows from E(F) < ∞ that ∫_C f_n*ω → 0 for every compact C ⊂ R × R/Z. A combined application of Lemmas A.1 and A.4 shows that |dF_n| is bounded over compact sets, contradicting |dF_n(0, t_n)| → ∞.
Suppose m > 0. Define F_n(s, t) := τ_{c_n} ∘ F(s + s_n, t) with c_n = −a(s_n, 0). Thus, by the above, F_n is C^1_loc-bounded, and elliptic estimates tell us it is C^∞_loc-bounded. We find n_j → ∞ and a smooth J-holomorphic map u : R × R/Z → R × N such that F_{n_j} → u in C^∞_loc as j → ∞. Clearly E(u) ≤ E(F), E_ω(u) = 0 and image(u) ⊂ R × K. Moreover, u is non-constant since ∫_{{0}×R/Z} u*α = m > 0. By Lemma A.5 we have u(s, t) = (Ts + a_0, x(Tt + b_0)) for some (x, T) ∈ P(H) contained in K. Here we used the fact that inf_K ι_X α > 0 to conclude that the sign ε in Lemma A.5 is +1. Clearly m = T.
We claim that lim inf_{s→∞} ā′(s) ≥ σ. If not, let s_n → ∞ satisfy sup_n ā′(s_n) ≤ σ − ε. Arguing as above we can assume, up to the choice of a subsequence, that f(s_n, t) → x(Tt + c) in C^∞, for some γ = (x, T) ∈ P(H) contained in K and some c ∈ R. Let λ be the 1-form defined by (2.1); integrating f(s_n, ·)*λ then leads to a contradiction with the bound above. This contradiction proves our claim, and thus ā(s) → +∞ as s → +∞. We conclude the case m > 0, since |a(s, t) − ā(s)| is uniformly bounded by sup |a_t| < ∞. The case m < 0 is treated similarly.
The following statement, which we leave without proof, follows easily from the assumption that ω has a primitive on K.
Lemma A.7. If S is a closed Riemann surface, M ⊂ S is finite, and F = (a, f) : S \ M → R × K is a non-constant finite-energy J-holomorphic map, then M ≠ ∅ and at least one point of M is a non-removable positive puncture. | 2013-09-22T18:34:22.000Z | 2012-02-14T00:00:00.000 | {
"year": 2012,
"sha1": "752fdb7e6d38f755e74079e8f36b93ce526aef81",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1202.3122.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5db804a8ec8bb338d07281a7eb5b7430fde09667",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
58939549 | pes2o/s2orc | v3-fos-license | ZEOLITES AS POSSIBLE BIOFORTIFIERS IN MAITAKE CULTIVATION
The levels of Ni, Cu and Mg in Grifola frondosa (also known as Maitake mushroom) fruit bodies produced on zeolite Minazel Plus (MG)-supplemented substrate were measured with inductively coupled plasma optical emission spectrometry (ICP-OES). Two different concentrations of MG were added to the substrate for mushroom cultivation, and the levels of selected metals were measured in the cultivated dry carpophores. The content of Ni increased in fruit bodies produced on the supplemented substrate, while in the case of Cu a pronounced decrease was observed. The Mg level showed both positive and negative trends, depending on the applied concentration of zeolite. MG at a concentration of 1% showed the strongest influence on the observed elements in the cultivated fruiting bodies of the Maitake mushroom.
INTRODUCTION
Trace elements occur in organisms in small quantities (less than 300 μg/g), and they are indispensable for proper physiological and biochemical processes.
To maintain their optimal level in an organism it is necessary to provide an adequate intake through the daily diet. It can be difficult to achieve an optimal intake of microelements considering the differences in food composition and quantity in different parts of the world. Generally, there are two scenarios in trace-element deficiency. In more developed countries, processed and refined food is the predominant form in which food is consumed. During food processing, most of the valuable microelements are lost, and consumption of such foodstuffs does not provide an adequate intake of trace elements (Masironi, 2000). In less developed countries, people generally consume fresh food, but their diet lacks variety and quantity. Another problem with trace-element intake is the wide range of factors that affect the actual and available concentration of an element: interactions among different elements, dose-response effects, oxidation state, chelation, immutability, bioavailability, absorption and excretion, cause-versus-effect relationships, the health condition of the organism, and pharmacologic effects (Bogden, 2000). It is estimated that about 40% of the population across the world is at high risk for one or more trace-element deficiencies (Bogden, 2000). Under these conditions, the consumption of food biofortified with optimally balanced amounts of microelements could be a potential solution.
Copper is regarded as one of the most commonly deficient elements. It is contained in metalloproteins that operate as enzymes, and it is involved in energy utilization, oxidation-reduction reactions and other metabolic reactions. Its deficiency can cause a number of health issues such as failure of pigmentation, ataxia, and cardiac, vascular and skeletal defects. According to the Food and Nutrition Board of the US National Academy of Sciences, the safe and adequate daily intake of copper from food is in the range of 1.5 to 3.0 mg/day for adults. The WHO suggests that an amount of 0.5 mg/kg of body weight is safe, which is up to 10 mg/day; this value is marked as a No Observed Adverse Effect Level (NOAEL) (Hathcock, 2000). Meats, legumes, nuts and seafood are regarded as good sources of copper (Klevay, 1998).
Nickel can be considered an essential element inasmuch as studies link it to the function of vitamin B12, nickeloplasmin and folic acid, and to the activation of numerous enzymes (Nielsen et al., 1993). It is essential for many enzymatic processes in microorganisms, such as hydrogenation, desulfurization and carboxylation reactions (Hausinger, 1994). The suggested dietary requirement for nickel is 25-35 μg/day, while the upper safe intake can be as high as 600 μg/day, since no life-threatening toxic effect has been observed. Usually the daily intake does not exceed 100-150 μg/day (Nielsen, 2000).
Magnesium is an essential element. Its recommended dietary allowance (RDA) is 320-420 mg/day (for females and males, respectively). The human body contains about 21-28 g of magnesium, mostly concentrated in the bones, while in the cells it is the second most abundant cation. It is a cofactor of more than 300 enzymes, and its deficiency is correlated with osteoporosis, hypertension, diabetes, myocardial infarction, atherosclerosis, mood disorders during the menstrual cycle, obstetrical problems, kidney disease, kidney stones, blood clots, bowel disease, cystitis, fatigue and psychiatric disorders (Segado et al., 2003; Singewald et al., 2004; Eisenberg, 1992). It is estimated that about 11% of the world's population have hypomagnesemia, manifesting as different heart disorders, psychiatric conditions, neuromuscular hyperactivity and Ca/K abnormalities (Singh, 1990). Whole grains and vegetables, rather than processed foods, are good sources of this element.
Fungi are widespread in the human diet, especially in Europe and Asia, where people have long collected wild-growing species and used them as food and remedies. Although mushrooms are highly valuable and distinctive in nutrient content, they can be a source of heavy metals such as Hg or Cd, since they are able to accumulate different elements from the growing substrate (Muller et al., 1997). This ability of mushrooms can represent a problem in areas with highly developed industries: soil around such areas is usually very rich in toxic elements, which are then deposited in mushroom mycelia and fruiting bodies (Rudawska et al., 2005). The direct consequence of heavy-metal accumulation is an excessive intake of these elements. Depending on age, gender, immune-system status and the concentration of the specific element, various health conditions can develop. In the case of cultivated species, undesirable elements can be monitored through the choice of substrate, designed not just to keep toxic metals under the allowed limits but also to increase the content of microelements that could contribute to human health. Mushrooms are considered a good source of different nutrients such as proteins, polysaccharides, B vitamins, ergocalciferol, dietary fibers and the essential elements sodium and potassium (Kalač, 2009). Mattila et al. (2001) state that the riboflavin content of common cultivated species like Agaricus bisporus and Pleurotus ostreatus is significantly higher than in vegetables. Numerous recent studies emphasize the medicinal value of mushrooms, especially their β-glucan content.
G. frondosa is one of the selected species with high medicinal potential. According to Roupas et al. (2012), G. frondosa exhibits anticancer, immune-modulating and bone health-promoting activities. It is well known in traditional Eastern medicine; its Japanese name, "Maitake", means "dancing mushroom", which describes its importance for the local population. Modern medicine also appreciates this mushroom, and the increased demand for the fruiting bodies of G. frondosa is understandable. While most studies concern trace-element levels in wild mushroom species, since these are recognized as microelement accumulators that are potentially harmful for consumers if picked near polluted areas and smelters (Sivrikaya et al., 2002), data about cultivated species are scarce.
Zeolite is the general name for a wide group of natural and chemically modified minerals. Zeolites take the form of a 3-D framework constructed from [SiO4] and [AlO4] tetrahedra (Lobo, 2003). They form a well-organized structure with holes and cavities that act as water and ion containers. Zeolites possess a charge excess that derives from Al, and they therefore require external elements for structure stabilization. These external elements are usually Ca and Mg, and they are relatively easily exchanged. The chemical characteristics of zeolites are the direct consequence of their basic structure together with external ions and guest molecules. The most important characteristics of zeolites are their ion-exchange ability and catalytic activity (Pickering et al., 2001). Parham (1984) stated that zeolites possess great potential for improving agriculture, especially as slow-release fertilizers.
The main objectives of this study were to determine the possibility of using zeolites, designated Minazel Plus, as biofortifiers in mushroom cultivation, and to investigate their influence on trace-element levels in the produced fruiting body.
MATERIALS AND METHODS
A pure culture of G. frondosa was obtained from the collection of the Department for Industrial Microbiology of the Faculty of Agriculture, University of Belgrade, Serbia. Mycelia were cultivated on malt extract agar purchased from Himedia. Wheat straw originated from a local wheat field near Belgrade, and oak sawdust was procured from Mt. Goč (Serbia). Sacks for mushroom fruiting-body cultivation were obtained from saco2 (Belgium). The Institute for Technology, Nuclear and other Raw Materials (ITNMS, Belgrade, Serbia) kindly supplied us with Minazel Plus (a modified zeolite) in ready-to-use form. HNO3 (65%) and H2O2 (30%) were obtained from Merck (Germany). Trace-element analysis was performed on an ICP-OES instrument (Spectro Genesis, Genesis FEE, Germany).
G. frondosa cultivation
After boiling, wheat grains were cooled and mixed with CaCO3. Glass jars (1 kg) were filled with the grains and sterilized for 40 min at 121°C. The grains were inoculated with mycelia previously cultivated on malt agar, and the inoculated jars were kept in the dark for two weeks at 25°C. Once the wheat grains were completely covered with white mycelium, this spawn was used for substrate inoculation. For substrate preparation, wheat straw, oak sawdust and malt sprouts were mixed at a 30:50:20 ratio, respectively, and soaked in tap water overnight. To regulate the pH, 0.5% CaCO3 was added; the humidity of the mixture was 75%. Minazel Plus at concentrations of 0.25% and 1% was added into the mixture manually, in three repetitions for each concentration; the control did not contain zeolite. Microsacs were each loaded with 3 kg of prepared substrate and sterilized in an autoclave for 2 h at 121°C. After cooling, the substrate was inoculated with spawn. Cultivation conditions are presented in Table 1. The produced fruiting bodies were collected, brush-cleaned, dried to a constant mass and kept in the dark prior to analysis.
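As a rough illustration of the recipe arithmetic, the sketch below computes component masses per 3 kg microsac from the stated ratios and supplement levels; treating the 3 kg load as the reference mass for the percentages is an assumption made here for illustration only (the actual preparation used soaked material at 75% humidity).

# Illustrative calculation of substrate composition per 3 kg microsac.
load_kg  <- 3
ratios   <- c(wheat_straw = 0.30, oak_sawdust = 0.50, malt_sprouts = 0.20)
base_kg  <- load_kg * ratios                  # straw/sawdust/sprouts mix
caco3_kg <- load_kg * 0.005                   # 0.5% CaCO3 for pH regulation
minazel  <- load_kg * c(control = 0, low = 0.0025, high = 0.01)  # 0.25% and 1% MG
print(round(base_kg, 3)); print(caco3_kg); print(round(minazel, 3))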
Trace-element analysis
Dried samples were subjected to wet digestion in a mixture of HNO3 (65%) and H2O2 (30%) before measurement of the selected elements by ICP-OES.
Statistical analysis
All measurements were done in triplicate and are presented as mean ± standard deviation (SD). The SPSS program (version 12.0) was used for statistical analysis. The data were subjected to a two-factor repeated-measures ANOVA. If significance was established by ANOVA, differences between the means were determined using Duncan's multiple range test (p < 0.05).
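A minimal sketch of this kind of analysis in R is shown below, using a simulated long-format data frame (zeolite level × element × 3 replicates); the data and effect sizes are placeholders, and since Duncan's test (used in the paper, via SPSS) is not in base R, Tukey's HSD is shown as a stand-in post-hoc comparison.

# Two-factor ANOVA on trace-element levels, followed by a post-hoc test.
set.seed(42)
d <- expand.grid(conc = c("control", "0.25%", "1%"),
                 element = c("Ni", "Cu", "Mg"),
                 rep = 1:3)
d$value <- rnorm(nrow(d), mean = 10, sd = 1)   # placeholder mg/kg DW values
fit <- aov(value ~ conc * element, data = d)   # main effects + interaction
summary(fit)                                   # F tests at alpha = 0.05
TukeyHSD(fit, which = "conc")                  # pairwise zeolite-level means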
RESULTS AND DISCUSSION
Concentrations of Ni, Cu and Mg were measured in the cultivated fruiting bodies of Maitake, and the data are presented in Table 3. Zeolite supplementation of the substrate resulted in a change in the profile of trace elements in the mushroom compared to the control. There was no uniform pattern in the concentration changes of the selected metals. Generally, trace elements have shown a high capacity for translocation and incorporation into the mushroom's fruiting body (Jain et al., 1988). Bhatia et al. (2013) reported that Se-biofortification of mushrooms with Se-enriched agricultural residues was very efficient.
Concentration of nickel
Our investigation showed that the nickel content increased in the samples cultivated on supplemented substrate when compared to the control, and the observed change was statistically significant. The greatest increase appeared in samples cultivated on substrate supplemented with 1% zeolite. The measured values ranged from 1.92 to 3.58 mg/kg of dry weight (DW), whereas the content of Ni found in the control was below the limit of detection. Li et al. (2011) reported that the content of nickel was 1.68-3.01 mg/kg DW for Boletus species. In other studies, concentrations of this element in wild-growing species were 1.72-24.1 μg/g DW (Soylak et al., 2005), 0.7-4.2 mg/kg DW (Sarikurkcu et al., 2011) and 44.6-145 mg/kg DW (Demirbas, 2001), while for cultivated Pleurotus sajor-caju it was 16-18 μg/g DW (Sivrikaya et al., 2002). Clearly, our findings correspond to those in the literature. As regards the substrate, the addition of MG resulted in an increase in the nickel content of the fruiting body. The accumulation of nickel could be due to the MG structure, which is a clinoptilolite modified to adsorb and incorporate the NH4+ ion. This natural zeolite shows the greatest affinity for divalent cations (Lobo, 2003), while the most common oxidation state of Ni is +2 (Greenwood, 1997), which ranks it among the preferred elements for ion exchange inside a zeolite's framework. This probably made nickel more available for utilization by the mycelium. Although substrate supplementation proved to support nickel accumulation in the fruiting body, the achieved amounts are within allowed limits.
Concentration of copper
ICP-OES analysis showed that the copper concentration decreased in samples cultivated on zeolite-supplemented substrate. There was no statistically significant difference between the applied concentrations of MG, although 1% MG resulted in a greater decrease in copper. The content of Cu in the control was within the range reported for mushrooms (Kalač, 2010). The same author stated that mushrooms are excellent copper accumulators, and if they grow near Cu smelters the level of this metal can reach 427-505 mg/kg DW. Other authors reported the following: 39-181.5 mg/kg DW (Chen et al., 2009), 5.11-92.5 mg/kg DW (Li et al., 2011) and 6-187 mg/kg DW (Sarikurkcu, 2011) for wild-growing species, while for cultivated Pleurotus species it was 10.5-14 μg/g DW (Sivrikaya, 2002).
Although the copper content in our samples was lower than in most wild species, G. frondosa is marked as a species with an acidic peptide that increases the amount of soluble copper, reflected in increased Cu absorption in the intestine (Shimaoka et al., 1993). This is an advantage of Maitake, because the availability of copper from mushrooms is generally low (Kalač and Svoboda, 2000). Human intoxication with copper from food is relatively rare due to its limited absorption in the intestine (40-60% according to the WHO, 1996). The bioavailability of this metal can be increased if the diet is rich in proteins (100-150 g/day), and since mushrooms are rich sources of proteins, the stated amount could be achieved.
Concentration of magnesium
Zeolite supplementation of the substrate appeared beneficial in the case of the magnesium level. According to the data in Table 3, MG at a concentration of 1% provided a statistically significant increase in magnesium level, while between MG applied at a 0.25% concentration and the control there were no significant differences. The Mg amount was in the interval 1620.08-2438.42 mg/kg DW, which is in agreement with values reported in the literature. Demirbas (2001) stated that amounts of magnesium in wild species from the eastern Black Sea region were 850-1320 mg/kg DW; these values are much lower than in our samples.
Other authors reported values of 0.90-4.54 mg/g DW (Genccelep et al., 2009), while in species from Poland the magnesium concentration was about 0.7 g/kg DW (Rudawska, 2005). Ouzouni et al. (2009) measured magnesium concentrations of 688.7-1150.7 mg/kg DW. We assume that the increase in the Mg level in our samples was due to the presence of Mg as an external element of the zeolite and the possibility of its ion exchange. In addition, the zeolite's ability to hold water in its cavities and pores might be beneficial in creating a proper environment for electrolyte activity in the substrate, making microelements more available to the mycelium and for accumulation in the fruiting body. It seems that the diffusion of ions such as Mg, Ni and Cu was driven by van der Waals interactions between oxygen atoms in the zeolite structure and the diffusing elements (Lobo, 2003).
CONCLUSION
The results of our study clearly imply that the application of zeolite in mushroom cultivation could be beneficial in increasing the concentration of microelements, especially health-beneficial elements such as Mg. Although the absolute amounts of the observed elements increased in our samples, further studies into the bioavailability of specific elements from mushrooms are necessary. Supplementation of the substrate gave positive results, suggesting that zeolites could be considered as possible biofortifiers in mushroom production.
Table 1.
Cultivation conditions for the mushroom G. frondosa.
Table 3.
Concentrations of Ni, Cu and Mg in the cultivated fruit bodies of G. frondosa. All values are means ± SD (n = 3). a-c Values with different superscripts within the same column are significantly different (p < 0.05). | 2018-12-18T09:08:26.557Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "2144a1b533567c973f5e16cb1905ff031ca99639",
"oa_license": "CCBY",
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-46641401123V",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2144a1b533567c973f5e16cb1905ff031ca99639",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
238531130 | pes2o/s2orc | v3-fos-license | Associations of insomnia on pregnancy and perinatal outcomes: Findings from Mendelian randomization and conventional observational studies in up to 356,069 women
Background: Insomnia is common and associated with adverse pregnancy and perinatal outcomes in observational studies. Our aim was to test whether insomnia causes stillbirth, miscarriage, gestational diabetes, hypertensive disorders of pregnancy, perinatal depression, preterm birth, or low/high offspring birthweight (LBW/HBW). Methods and Findings: We used two-sample Mendelian randomization (MR) with 81 single nucleotide polymorphisms instrumenting for a lifelong predisposition to insomnia. We used data (N=356,069) from the UK Biobank, FinnGen, and three European birth cohorts (Avon Longitudinal Study of Parents and Children (ALSPAC), Born in Bradford, and Norwegian Mother, Father and Child Cohort Study). Main MR analyses used inverse variance weighting (IVW), with weighted median and MR-Egger as sensitivity analyses. We compared MR estimates with multivariable regression of insomnia in pregnancy on outcomes in ALSPAC (N=11,745). IVW showed evidence of an effect of genetic susceptibility to insomnia on miscarriage (odds ratio (OR): 1.60, 95% confidence interval (CI): 1.18, 2.17), perinatal depression (OR 3.56, 95% CI: 1.49, 8.54) and LBW (OR 3.17, 95% CI: 1.69, 5.96). For other outcomes IVW indicated potentially clinically important adverse effects of insomnia (OR range 1.20 to 2.43), but CIs were wide and included the null. Weighted median and MR Egger results were directionally consistent, except for MR-Egger for gestational diabetes, perinatal depression, and preterm birth. Multivariable regression showed associations of insomnia at 18 weeks of gestation with miscarriage (OR 1.30, 95% CI: 1.12, 1.51), stillbirth (OR 2.10, 95% CI: 1.20, 3.69), and perinatal depression (OR 2.96, 95% CI: 2.42, 3.63), but not with LBW (OR 0.92, 95% CI: 0.69, 1.24). Key limitations are potential horizontal pleiotropy and low statistical power in MR, and residual confounding in multivariable regression. Conclusions: There is evidence of causal effects of insomnia on miscarriage, perinatal depression, and LBW. We highlight the need for larger studies with genomic data and pregnancy outcomes.
Author summary
Why was this study done?
• Insomnia in pregnancy was associated with higher risks of adverse pregnancy and perinatal outcomes in observational studies.
• It is currently not clear whether insomnia causes adverse pregnancy and perinatal outcomes or whether the unfavourable associations are explained by confounding.
• No Mendelian randomization study has been conducted to explore the associations of insomnia with adverse pregnancy and perinatal outcomes.
What did the researchers do and find?
• We used data on up to 356,069 women from UK Biobank, FinnGen and three birth cohorts, and assessed whether genetic susceptibility to insomnia was associated with stillbirth, miscarriage, gestational diabetes, hypertensive disorders of pregnancy, perinatal depression, preterm birth, low offspring birthweight, and high offspring birthweight in two-sample Mendelian randomization.
• To triangulate with our Mendelian randomization estimates, we conducted multivariable regression in 11,745 women from the Avon Longitudinal Study of Parents and Children, where insomnia was measured in pregnancy.
• We found consistent evidence from Mendelian randomization and multivariable regression that insomnia was associated with higher risks of miscarriage and perinatal depression, and Mendelian randomization also suggested an unfavourable effect on low offspring birthweight.
What do these findings mean?
• Interventions to improve healthy sleep in women of reproductive age might be beneficial to a healthy pregnancy.

Introduction

Insomnia, which affects approximately 10 to 20% of the adult population, is usually defined as difficulty getting to sleep or remaining asleep, or having non-restorative sleep, and such sleep impairment can be associated with daytime sleepiness [1,2]. Physical and hormonal changes during pregnancy increase susceptibility to insomnia [3,4].
Most evidence on the relationship between insomnia during pregnancy and adverse pregnancy and perinatal outcomes has come from observational studies. The most recently updated systematic reviews of observational studies suggest that pregnancy-related insomnia and poor sleep quality are associated with higher risks of gestational diabetes (GD) [5,6], hypertensive disorders of pregnancy (HDP) [6], perinatal depression [7], and preterm birth (PTB) [6]. Other observational studies have shown that specific conditions related to insomnia are also associated with adverse pregnancy and perinatal outcomes. For example, meta-analyses combining four case-control studies showed that going to sleep in a supine position (plausibly exacerbating sleep-disordered breathing that contributes to decreased sleep quality [4,8]) was associated with higher risks of stillbirth [9] and small-for-gestational age (SGA) [10]. Sleep-disordered breathing, obstructive sleep apnoea and restless legs syndrome have also been shown to be associated with higher risks of GD, HDP, large-for-gestational age (LGA) and low offspring birthweight (LBW) [6]. However, it remains unclear whether insomnia causes adverse pregnancy outcomes or whether these associations are explained by confounding, e.g. due to socio-economic status and lifestyle factors. It is also possible that some of these studies reflect reverse causation. For example, all four studies included in the systematic review for perinatal depression were cross-sectional [7]. As disturbed sleep is a symptom of depression, it is unclear whether these studies reflect a causal effect of insomnia or whether insomnia is simply part of the diagnostic criteria. Furthermore, most individual studies focus on just one or two outcomes.
Examining potential effects on a range of adverse pregnancy and perinatal outcomes is important to understand the overall health impact of insomnia during pregnancy.

Three randomized controlled trials (RCTs) assessing the effects of interventions to prevent insomnia on adverse pregnancy and perinatal outcomes have been published [11-13]. All three used cognitive behavioural interventions targeted at reducing insomnia, with the primary outcome being Edinburgh Postnatal Depression Scale scores. The first RCT reported a difference in mean score of -0.21 (95% confidence interval (CI): -0.30, -0.11) in 208 randomized women, with equivalent results for the other two being -0.24 (95% CI: -0.67, 0.19, N=194) and 0.34 (95% CI: -0.26, 0.93, N=91) [11-13]. The small number of RCTs, their small sample sizes, and their directional inconsistency with overlapping CIs make it difficult to draw conclusions, and none of them explored other adverse pregnancy or perinatal outcomes.
Mendelian randomization (MR) provides an alternative way to assess the impact of insomnia on adverse pregnancy and perinatal outcomes by using genetic variants (mostly single nucleotide polymorphisms [SNPs]) as instrumental variables (IVs) for insomnia [14,15]. MR is less prone to confounding than observational studies, as genetic variants are randomly allocated at meiosis and cannot be influenced by the wide range of socio-demographic or behavioural factors which conventionally confound observational studies, nor can they be influenced by health status [14,15].
Under key assumptions (discussed in methods), MR can be used to estimate a causal effect from the associations of the SNPs with the exposure and outcome. In two-sample MR, the SNP-exposure and SNP-outcome associations are estimated using different studies [16]. This approach has previously been used to evaluate causal effects of self-reported insomnia on risks of type 2 diabetes [17,18], hypertension [19] and cardiovascular disease [18,20,21] in non-pregnant populations, but not pregnancy and perinatal outcomes.
The aims of this study are to (I) explore the causal effects of maternal genetic susceptibility to insomnia on stillbirth, miscarriage, GD, HDP, perinatal depression, PTB, LBW, and high offspring birthweight (HBW), using two-sample MR, and (II) compare those findings with conventional multivariable regression analyses of self-reported insomnia during pregnancy with the same outcomes.
Methods
Study populations
This study was undertaken using data from the MR-PREG consortium, which aims to explore causes and consequences of different pregnancy and perinatal outcomes [22]. We used individual-level data from UKB women (N=208,140, recruited between 2006-2010), and mother-offspring pairs from ALSPAC (N=6,826, recruited between 1991-1992), BiB (N=2,940, recruited between 2007-2010) and MoBa (N=14,584, recruited between 1999-2009). To be comparable across all cohorts, only genetically unrelated women of European descent with quality-controlled genotype data (and with singleton offspring in the birth cohorts) were eligible for inclusion in our analyses (S1 Fig in S1 File). We also used summary-level genetic association data from FinnGen, the nationwide network of Finnish biobanks (N=up to 123,579 women) [23]. All studies had ethical approval from relevant national or local bodies and participants provided written informed consent. Details of the recruitment, information on genetic data and measurements of baseline characteristics of each cohort are described in the Supplementary Text (S1 File).
Outcome measures
We explored potential effects of insomnia on eight binary outcomes: ever experiencing stillbirth, ever experiencing miscarriage, GD, HDP, perinatal depression, PTB (gestational age <37 completed weeks), LBW (<2,500 grams) and HBW (>4,500 grams). Full details about how these outcomes were measured and derived in each participating study, and how we harmonised them across studies can be found in S1 Table (S2 File). We were not able to measure pre-eclampsia and gestational hypertension separately, because of the small number of definite cases of pre-eclampsia, and because of differences between studies in data collection and definitions.
In UKB, gestational age was only available for a small subset of women (N=7,280) who delivered a child during or after 1989, the earliest date for which linked hospital labour and perinatal data are available [24]. As a result, numbers with data on PTB are smaller than for any other outcome, and we a priori decided to examine associations with LBW and HBW rather than SGA and LGA. For most outcomes in UKB, women reported their experience retrospectively in a questionnaire completed at recruitment when they were aged 40-60 years.
In the three birth cohorts most outcomes were prospectively obtained (from self-report or clinical records) during an index pregnancy and the perinatal period. The two exceptions were history of stillbirth and miscarriage, which were retrospectively reported at the time of the index pregnancy when women were asked if they had ever experienced a (previous) stillbirth or miscarriage. We explored the possibility of examining associations with miscarriage and stillbirth in the index pregnancy. However, numbers were too small for reliable results in either MR or multivariable regression, and for miscarriage we were concerned about misclassification or selection bias due to women who had experienced a miscarriage prior to recruitment. If multiple pregnancies were enrolled in the birth cohorts, we randomly selected one pregnancy per woman.
Data from FinnGen were available for four of our outcomes: ever experiencing miscarriage, GD, HDP and PTB, which were defined based on International Classification of Diseases codes.
Insomnia measures
Self-reported information on insomnia was obtained from two of the studies. In UKB information on lifetime insomnia was used to generate SNP-insomnia associations in women for use in MR analyses in UKB and the birth cohorts. ALSPAC collected data on insomnia during pregnancy, and this was used for conventional confounder-adjusted multivariable regression. In UKB, insomnia was self-reported at recruitment via the question "Do you have trouble falling asleep at night or do you wake up in the middle of the night?" with responses "never/rarely", "sometimes", "usually" and "prefer not to answer". For our analyses we collapsed these categories to generate a binary variable of usually experiencing insomnia (i.e. "usually" [cases] versus . CC-BY 4.0 International license It is made available under a perpetuity. is the author/funder, who has granted medRxiv a license to display the preprint in (which was not certified by peer review) preprint The copyright holder for this this version posted October 10, 2021. ; https://doi.org/10.1101/2021.10.07.21264689 doi: medRxiv preprint "sometimes" + "never/rarely" [controls]) as this was how the responses were categorised in the published genome-wide association study (GWAS) that we have used to select genetic IVs [18].
In ALSPAC, insomnia in pregnancy was self-reported, at 18 and 32 weeks of gestation, using the question "Can you get off to sleep alright?" with options "Very often", "Often", "Not very often" and "Never". At each time point, we compared "Not very often" + "Never" [cases] versus "Very often" + "Often" [controls]. We acknowledge that the two studies are using different questions and that definitions of insomnia vary across published literature [2]. For ease of reading throughout the paper we refer to results reflecting genetic susceptibility to insomnia (MR) and reporting insomnia in pregnancy (multivariable regression).
SNP selection and SNP-insomnia associations
To identify genetic IVs for insomnia, we searched GWAS published between January 2017 and February 2021 on PubMed and the Neale Lab website [25]. We found seven insomnia GWAS reporting genome-wide significant SNPs (details in S2 Table in S2 File). Of these, we selected SNPs from the largest GWAS (total N = 709,986 women, 29% from UKB and 71% from 23andMe), which provided female-specific results [18]. This GWAS identified 83 loci containing 87 lead SNPs that were robustly associated with insomnia (P-value < 5×10^-8) after pooling UKB and 23andMe women together. We removed 6 SNPs that were correlated with other SNPs (in linkage disequilibrium) at an R² threshold of 0.01 or higher, based on all European samples from the 1000 Genomes project [26]. Associations (reported as log odds ratios [ORs]) of the remaining 81 lead SNPs from the women-only GWAS were extracted and are listed in S3 Table (S2 File).
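A minimal sketch of the kind of greedy pruning this R² rule implies is shown below, on a simulated genotype matrix (individuals × SNPs, columns assumed to be ordered by GWAS p-value); it illustrates the r² < 0.01 threshold rather than reproducing the exact reference-panel procedure used with the 1000 Genomes data.

# Greedy LD pruning: keep a SNP only if its squared correlation with every
# SNP already kept is below the threshold (0.01, as in the text).
prune_ld <- function(G, r2_max = 0.01) {
  kept <- integer(0)
  for (j in seq_len(ncol(G))) {
    if (length(kept) == 0 ||
        all(cor(G[, j], G[, kept, drop = FALSE])^2 < r2_max)) {
      kept <- c(kept, j)
    }
  }
  kept
}
set.seed(1)
G <- matrix(rbinom(1000 * 10, 2, 0.3), nrow = 1000)  # simulated 0/1/2 dosages
prune_ld(G)   # indices of retained, approximately independent SNPs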
We fitted linear regressions to individual-level data from 208,140 UKB women to recalculate SNP-insomnia associations for the two-sample MR analyses, to avoid the non-collapsibility of ORs and difficult interpretation of the MR estimates' units [27]. We adjusted the linear models for genotyping batch, the top 40 principal components (PCs) and women's age.
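A sketch of the per-SNP model follows, on simulated data standing in for the UKB variables (a 0/1 insomnia indicator, allele dosage, batch, age and 40 PCs); all variable names and values here are hypothetical.

# Linear (not logistic) regression of the binary insomnia indicator on a
# SNP dosage, so the coefficient avoids the non-collapsibility of ORs.
set.seed(1)
n   <- 5000
ukb <- data.frame(snp = rbinom(n, 2, 0.3),
                  batch = factor(sample(1:10, n, TRUE)),
                  age = rnorm(n, 55, 8))
pcs <- paste0("PC", 1:40)
ukb[pcs] <- replicate(40, rnorm(n))
ukb$insomnia <- rbinom(n, 1, 0.28)
f   <- reformulate(c("snp", "batch", "age", pcs), response = "insomnia")
fit <- lm(f, data = ukb)
coef(summary(fit))["snp", c("Estimate", "Std. Error")]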
SNP-outcome associations
We estimated the associations between maternal SNPs and outcomes (log odds ratios (ORs) and standard errors) for each of the 81 insomnia-related SNPs. In UKB, we randomly split the women into halves (giving two datasets, A and B) for our split cross-over two-sample MR [28], given that UKB was also included in the GWAS of insomnia. We then estimated SNP-outcome associations in each split sample using logistic regression, adjusting for genotyping batch, the top 40 PCs, and women's age. In the birth cohorts, we estimated the SNP-outcome associations using logistic regression, adjusting for (I) the top 20 PCs and women's age in ALSPAC; (II) the top 10 PCs and women's age in BiB; and (III) genotyping batch, the top 10 PCs and women's age in MoBa. We extracted associations of the 81 SNPs with miscarriage (O15_ABORT_SPONTAN), GD (GEST_DIABETES), HDP (O15_GESTAT_HYPERT), and PTB (O15_PRETERM) from FinnGen, which were generated using SAIGE (mixed-effects logistic regression [29]) adjusting for genotyping batch, the top 10 PCs and women's age [23]. We then meta-analysed those associations from ALSPAC, BiB, MoBa and FinnGen using fixed effects with inverse-variance weights. Three SNPs (rs10947428, rs9943753, and rs117037340) were excluded from the BiB analyses because their minor allele frequencies were lower than 1%.
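The fixed-effects, inverse-variance pooling of per-cohort SNP-outcome log-ORs reduces to a short calculation; a minimal sketch, with the numeric inputs below being hypothetical per-cohort estimates for a single SNP, is:

# Fixed-effects inverse-variance meta-analysis of log odds ratios.
meta_fixed <- function(beta, se) {
  w <- 1 / se^2                       # inverse-variance weights
  b <- sum(w * beta) / sum(w)         # pooled log-OR
  c(beta = b, se = sqrt(1 / sum(w)))  # pooled standard error
}
# e.g. one SNP's association in ALSPAC, BiB, MoBa and FinnGen (made-up values):
meta_fixed(beta = c(0.10, 0.05, 0.08, 0.07), se = c(0.06, 0.09, 0.04, 0.03))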
Assessment of confounders in ALSPAC for multivariable regression
We considered maternal age at time of delivery, education, body mass index at 12 weeks of gestation, smoking status in pregnancy, alcohol intake in the first three months of pregnancy and household occupational social class as potential confounders based on their known or plausible associations with maternal insomnia and pregnancy and perinatal outcomes. Details of confounders were based on maternal self-report, and are fully described in Supplementary Text (S1 File).
Statistical analyses
Two-sample MR
As shown in Fig 1, we conducted two-sample MR analyses of maternal insomnia on pregnancy and perinatal outcomes. In UKB, we conducted a split cross-over two-sample MR [28]. Specifically, we used SNP-insomnia associations from dataset A and SNP-outcomes associations from dataset B (A on B) and vice-versa (B on A), and then meta-analysed the MR estimates from the two together for each insomnia-outcome pair using fixed-effects (with inverse variance weights). For the two-sample MR using the rest of the cohorts, we used SNP-insomnia associations from UKB women and the pooled SNP-outcome associations combining ALSPAC, BiB, MoBa and FinnGen. For each outcome, we pooled MR estimates from all cohorts using fixed-effects (with inverse variance weights), and used leave-one (study)-out analysis to assess the degree of heterogeneity between cohorts.
In the main analyses, we used the MR IVW method, which is a regression of the estimates of the SNP-outcome associations on the SNP-insomnia associations, weighted by the inverse of the variances of the SNP-outcome associations, with the intercept of the regression line forced through zero [30]. The IVW estimates should provide an unbiased estimate of a causal effect in the absence of unbalanced horizontal pleiotropy [30]. To explore potential unbalanced horizontal pleiotropy, our sensitivity analyses included (I) estimating between-SNP heterogeneity (which, if present, may be due to one or more SNPs having horizontal pleiotropic effects on the outcome) using Cochran's Q-statistic and leave-one (SNP)-out analysis, and (II) undertaking analyses with the weighted median [31] and MR-Egger [32] methods, which are more likely to be robust in the presence of invalid IVs. The weighted median method is unbiased so long as less than 50% of the weight comes from invalid instruments (i.e. if one SNP contributing more than 50% of the weight across the SNP-insomnia associations, or several SNPs that together contribute more than 50%, introduce horizontal pleiotropy, the effect estimate is likely to be biased) [31]. MR-Egger is similar to IVW except that it does not constrain the regression line to go through zero; if the MR-Egger intercept is not null it suggests the presence of horizontal pleiotropy, and the MR-Egger slope provides an effect estimate corrected for unbalanced horizontal pleiotropy [32].
However, MR-Egger has considerably less statistical power than IVW. Further details of these MR methods are provided in our previous study [33]. When using MR to assess the effect of maternal exposures in pregnancy on offspring outcomes, results might be biased via a path from maternal genotypes to maternal/offspring outcomes due to fetal genotype [34]. To explore this, we compared SNP-outcome associations with versus without adjustment for fetal genotypes in the pooled birth cohort analyses.
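Both estimators described above reduce to weighted regressions of SNP-outcome on SNP-exposure associations; a minimal sketch on simulated summary statistics (the actual analyses used the TwoSampleMR package, and the vectors bx, by and sey below are hypothetical stand-ins for the 81 SNPs' estimates) is:

# IVW: weighted regression through the origin; MR-Egger: same regression
# with a free intercept, whose estimate indexes unbalanced pleiotropy.
set.seed(7)
bx  <- rnorm(81, 0.02, 0.005)          # SNP-insomnia associations
by  <- 0.5 * bx + rnorm(81, 0, 0.02)   # SNP-outcome log-ORs
sey <- runif(81, 0.01, 0.03)           # standard errors of by
w   <- 1 / sey^2
ivw   <- lm(by ~ 0 + bx, weights = w)  # slope = causal log-OR per unit exposure
egger <- lm(by ~ bx,     weights = w)  # non-null intercept suggests pleiotropy
summary(ivw)$coefficients
summary(egger)$coefficients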
We evaluated the strength of the IVs using both the proportion of variance in maternal insomnia explained by the 81 SNPs (R²) and the F-statistic [35]. We selected SNPs robustly related to insomnia in the general female population rather than in pregnant women. Therefore, we explored associations of the 81 SNPs with women's insomnia measured at 18 and 32 weeks of gestation in ALSPAC using logistic regression, to determine whether those SNPs related similarly to insomnia in pregnancy. We adjusted for the top 20 PCs and women's age.
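Instrument strength can be summarised from these quantities with the standard approximation; the sketch below plugs in the values reported in this study (R² = 0.42%, N = 208,140, k = 81 SNPs), on the assumption that this is the formula intended by the text.

# Approximate F-statistic for k instruments jointly explaining R2 of the
# exposure variance in a sample of size n.
f_stat <- function(r2, n, k) (r2 / (1 - r2)) * ((n - k - 1) / k)
f_stat(r2 = 0.0042, n = 208140, k = 81)   # about 10.8 for this instrument set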
Multivariable regressions in ALSPAC
In ALSPAC, we explored the observational associations of insomnia at 18 and 32 weeks of gestation with the binary outcomes using logistic regression, with adjustment for measured confounders. We regressed stillbirth history on insomnia at 18 and 32 weeks of gestation, and miscarriage history on insomnia at 18 weeks of gestation, assuming that insomnia in the index pregnancy could also reflect sleep in previous pregnancies.
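A sketch of the confounder-adjusted model follows, on simulated data standing in for ALSPAC (the variable names and codings are illustrative, not the study's actual ones); exponentiating the insomnia coefficient gives the OR and a Wald 95% CI of the kind reported below.

# Confounder-adjusted logistic regression of a binary outcome on insomnia.
set.seed(3)
n <- 2000
alspac <- data.frame(
  insomnia = rbinom(n, 1, 0.2), age = rnorm(n, 28, 5),
  education = factor(sample(1:5, n, TRUE)), bmi = rnorm(n, 24, 4),
  smoking = rbinom(n, 1, 0.2), alcohol = rbinom(n, 1, 0.5),
  social_class = factor(sample(1:6, n, TRUE)))
alspac$outcome <- rbinom(n, 1, plogis(-2 + 0.4 * alspac$insomnia))
fit <- glm(outcome ~ insomnia + age + education + bmi + smoking + alcohol +
             social_class, family = binomial, data = alspac)
est <- coef(summary(fit))["insomnia", ]
exp(est["Estimate"] + c(OR = 0, lower = -1.96, upper = 1.96) * est["Std. Error"])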
All analyses were performed using R 3.5.1 (R Foundation for Statistical Computing, Vienna, Austria).
Two-sample MR analyses were conducted using the "TwoSampleMR" R package [26].

Results

The 81 SNPs explained approximately 0.42% of the variance in insomnia among the 208,140 UKB women included in this study (S4 Table in S2 File).
Two-sample MR
In the analysis combining all cohorts, a lifetime tendency to insomnia was associated with higher risks of all outcomes, with point-estimate ORs ranging from 1.20 for GD to 3.56 for perinatal depression (Fig 2). Despite combining data from the largest genetic studies available, estimates were imprecise, with the 95% CIs for all but three outcomes including the null. The three that did not include the null were miscarriage (OR 1.60, 95% CI: 1.18, 2.17), perinatal depression (OR 3.56, 95% CI: 1.49, 8.54) and LBW (OR 3.17, 95% CI: 1.69, 5.96). S2 Fig (in S1 File) shows IVW results for the leave-one (study)-out analysis. Results were broadly consistent, with the point estimates showing some differences for stillbirth, GD and PTB, though CIs were very wide for some outcomes.
Sensitivity analyses using weighted median and MR-Egger for all outcomes were directionally consistent, with the exception of MR-Egger results for GD, perinatal depression and PTB (Fig 2).
Between-SNP heterogeneity in the MR analyses was observed for LBW and HDP (S6 Table in S2 File). Adjusting for fetal genotype did not materially alter the SNP-outcome associations with miscarriage, LBW or HBW; SNP-outcome associations with GD, HDP and perinatal depression were slightly attenuated; and SNP-PTB associations moved slightly away from the null (S6 Fig in S1 File).
Multivariable regression in ALSPAC
S7 Table (S2 File) summarizes the characteristics of the women from ALSPAC who contributed to these analyses. After adjusting for potential confounders, there were associations of insomnia at 18 and 32 weeks of gestation with stillbirth history, miscarriage history (only assessed for 18 weeks), GD (32 weeks only), HDP and perinatal depression, with magnitudes of association similar to those seen in the MR analyses, and with imprecision meaning some 95% CIs included the null (Fig 3). Associations with stillbirth history (OR for insomnia at 18 weeks 2.10, 95% CI: 1.20, 3.69), miscarriage history (OR for insomnia at 18 weeks 1.30, 95% CI: 1.12, 1.51), and perinatal depression (OR for insomnia at 18 weeks 2.96, 95% CI: 2.42, 3.63) were the most reliable, with CIs that did not include the null (Fig 3).
Discussion
To our knowledge this is the first MR study to explore the relationship of insomnia with pregnancy and perinatal outcomes. We interpreted the MR results as reflecting a lifetime tendency to insomnia, on the basis that SNPs are determined at conception and that evidence from similar analyses of other exposures (e.g. blood pressure and C-reactive protein) suggests this is the case [36,37]. We interpreted the multivariable regression results as reflecting associations of insomnia during pregnancy, though we could not distinguish this from pre-existing insomnia as we did not have information on sleep traits before conception. The associations of the insomnia genetic IVs with reported insomnia during pregnancy in ALSPAC provided some support that the exposures in our MR and multivariable regression analyses had some consistency with each other. Overall, our MR results provide evidence that a lifetime tendency to insomnia may increase the risk of stillbirth, miscarriage, GD, HDP, perinatal depression, PTB, LBW and HBW, with ORs of potential clinical importance (range 1.20 to 3.56) for all of these. However, we acknowledge that MR analyses are statistically inefficient, and despite combining all potentially relevant studies in order to increase the sample size, results were imprecise, with all but three outcomes having 95% CIs that included the null. Thus, we have the strongest MR evidence for insomnia increasing the risk of miscarriage, perinatal depression, and LBW. Results for miscarriage and perinatal depression were replicated in multivariable regression analyses, but results for LBW were much closer to the null.
Our findings in both MR and multivariable regression of increased risks of perinatal depression are consistent with the systematic review and meta-analysis of observational studies [7], and with RCTs suggesting that pregnancy interventions to reduce insomnia decrease perinatal depression [11,12].
Whilst previous studies have shown a pregnancy supine sleeping position (which is associated with insomnia) to be associated with stillbirth [6,9], recent systematic reviews have only identified one cross-sectional study (N=222) of the association of insomnia with stillbirth [6,8], and we did not identify any previous studies of insomnia associations with miscarriage. Thus, our novel findings of a potential adverse effect of insomnia on miscarriage and stillbirth add to the existing evidence [6], despite the differing definitions of insomnia and outcomes and the different assumptions of MR and multivariable regression [38].
Several mechanisms have been suggested for why insomnia might influence pregnancy and perinatal outcomes, including insomnia resulting in increased risks of adiposity, insulin resistance and other cardiometabolic outcomes that could then influence related pregnancy outcomes (GD, HDP, LBW and HBW) and influence placentation and hence miscarriage, stillbirth and PTB. MR analyses support effects of insomnia on coronary heart disease, higher glycated haemoglobin, and higher glycoprotein acetyls (an inflammatory marker) in general populations of women and men [20,39,40]. Thus, an increase in cardio-metabolic risk and inflammation may mediate effects of insomnia on miscarriage and LBW, and outcomes for which our MR analyses are currently imprecise. Similarly, MR analyses have found a potential effect of insomnia on depressive symptoms [20], which is coherent with our findings in relation to perinatal depression.
Study strengths and limitations
Key strengths of our study are that (I) it is the first study to use MR to explore potential effects of insomnia on pregnancy and perinatal outcomes; (II) we compared those MR findings with multivariable regression results of insomnia symptoms in pregnancy, adjusting for a priori defined key confounders; and (III) we explored a range of pregnancy and perinatal outcomes. To our knowledge, this is the first study to explore associations with miscarriage and stillbirth, using either multivariable regression or MR.

Our MR analyses may be biased by horizontal pleiotropy, particularly given our previous research showing that SNPs for insomnia are also associated with several factors that could influence pregnancy and perinatal outcomes, including education, age at first live birth, and smoking [33]. We explored this potential with a range of sensitivity analyses, including exploring between-SNP heterogeneity and using the weighted median and MR-Egger methods, which are more robust to such bias than IVW [30]. Results from these sensitivity analyses were broadly consistent with our main analyses but were less precise (i.e. had wider CIs). Adjusting for fetal genotype did not alter results, suggesting that bias due to fetal genotypic effects is unlikely. We did not further adjust for paternal genotype because of limited data with paternal, maternal and offspring genotypes. Furthermore, the most plausible mechanism for paternal genotype to affect pregnancy outcomes is via fetal genotype, which we have adjusted for.
Interpretation of the effect estimates from our MR analyses requires a further assumption of monotonicity in the SNP-insomnia associations. This requires that all of the women with genetic IVs related to higher liability to insomnia symptoms should report more symptoms (compared to those with fewer alleles related to insomnia) -i.e. that they are 'compliers' [41]. Whilst we cannot test this assumption, the similarity of our MR and multivariable regression estimates for miscarriage and perinatal depression, provides some evidence that it may not have been violated for these outcomes.
Both our MR and multivariable regression estimates could be vulnerable to selection bias, which has been extensively discussed in previous papers [42-44]. Specific to UKB, it is a highly selective sample that is healthier and better educated than the general UK adult population [45]. Moreover, information on perinatal depression and PTB was only available in a subsample of UKB women, and such missingness might not be at random [46,47]. By definition our study only includes women who have experienced at least one pregnancy, and if insomnia influences fertility then our results might be biased [48]. However, we are not aware of robust evidence of insomnia (or SNPs related to insomnia) influencing infertility or number of children [49,50], suggesting that any selection bias from only including pregnant women is unlikely to have a meaningful impact on our MR estimates [48,51]. Insomnia was measured via a single self-administered question in both UKB and ALSPAC, which could mean the binary exposure is misclassified. Non-differential misclassification of insomnia would be expected to bias MR results away from the null (given that the attenuated genetic IV-insomnia associations form the denominator), but multivariable regression results towards the null [52,53].
Similarly, there may be misclassification in some of our outcomes because of the absence of universal testing (e.g. GD in ALSPAC [54]), assessment via self-report questionnaires (e.g. BW in UKB) or differences between studies in definitions (e.g. in older women in UKB the gestational age thresholds for defining stillbirth and miscarriage would have differed from those used in the more contemporary birth cohorts). Non-differential misclassification of our binary outcomes would be expected to bias both MR and multivariable regression results towards the null [52,53]. Moreover, the first live-born babies of UKB women are known to be lighter than babies with various birth orders from the more contemporary birth cohorts [55,56].
Both the MR and multivariable regression analyses of miscarriage and stillbirth assessed the associations of ever experiencing either of them, mainly up to the point of recruitment. For the multivariable regression analysis in ALSPAC, insomnia was reported after the outcomes had occurred, and it is possible that a previous miscarriage or stillbirth might have influenced sleep in the index pregnancy (where insomnia was reported), e.g. due to anxiety [3]. The similarity of the multivariable regression and MR results for miscarriage, stillbirth, GD, HDP and perinatal depression suggests that residual confounding is unlikely to have biased the regression results for these outcomes. The attenuation towards the null of the associations for preterm birth, LBW and HBW relative to the MR results suggests possible masked confounding or other biases specifically affecting these but not other outcomes. Despite our having a large sample size, with multivariable analyses larger than most previous studies, several of our MR and multivariable regression estimates are imprecise. Our study is limited to women of European ancestry, and we cannot assume that our results generalize to other populations.
In conclusion, our study raises the possibility of adverse causal effects of insomnia on miscarriage, perinatal depression, and LBW. Interventions to improve healthy sleep in women of reproductive age might be beneficial to a healthy pregnancy. However, we acknowledge the need for further MR studies based on larger GWAS of pregnancy and perinatal outcomes, larger observational studies, and studies in women from ethnic backgrounds other than White European.
Data Availability Statement
We used both individual participant cohort data and publicly available summary statistics. We present summary statistics that we generated from those individual participant cohort data in S4 & S5 Tables. Full information on how to access UKB data can be found at its website (https://www.ukbiobank.ac.uk/researchers/). All ALSPAC data are available to scientists on request to the ALSPAC Executive via this website (http://www.bristol.ac.uk/alspac/researchers/), which also provides full details and distributions of the ALSPAC study variables. Similarly, data from BiB are available on request to the BiB Executive (https://borninbradford.nhs.uk/research/how-to-access-data/). Data from MoBa are available from the Norwegian Institute of Public Health after application to the MoBa Scientific Management Group (see its website https://www.fhi.no/en/op/data-access-from-health-registries-health-studies-and-biobanks/data-access/applying-for-access-to-data/ for details). Summary statistics from FinnGen are publicly available on its website (https://finngen.gitbook.io/documentation/data-download).

Conflicts of Interest

KT has acted as a consultant for CHDI Foundation. DAL has received support from Medtronic LTD and Roche Diagnostics for biomarker research that is not related to the study presented in this paper.
The other authors report no conflicts.
Acknowledgments
This research has been conducted using the UKB Resources under application number 23938. The authors would like to thank the participants and researchers from UKB who contributed or collected data. We are extremely grateful to all the families who took part in this study, the midwives for their help in recruiting them, and the whole ALSPAC team, which includes interviewers, computer and laboratory technicians, clerical workers, research scientists, volunteers, managers, receptionists and nurses. BiB is only possible because of the enthusiasm and commitment of the Children and Parents in BiB. We are grateful to all the participants, teachers, school staff, health professionals and researchers who have made BiB happen. This research has been conducted using MoBa data under application number 2552. MoBa is supported by the Norwegian Ministry of Health and Care services and the Ministry of Education and Research. We are grateful to all the participating families in Norway who take part in this on-going cohort study. We thank the Norwegian Institute of Public Health (NIPH) for generating high-quality genomic data. This research is part of the HARVEST collaboration, supported by the Research Council of Norway (#229624). We also thank the NORMENT Centre for providing genotype data, funded by the Research Council of Norway.
Fig 1. Summary of methods and data contributing to this study.
a Two-sample MR methods include: IVW, MR-Egger, weighted median, and leave-one-out analysis. b Multivariable regression analysis adjusted for maternal age at time of delivery, social class, education, body mass index at 12 weeks of gestation, smoking status in pregnancy and alcohol intake in the first three months of ALSPAC pregnancy. Abbreviations: ALSPAC, Avon Longitudinal Study of Parents and Children; BiB, Born in Bradford; GWAS, genome-wide association study; IVW, inverse variance weighted; MoBa, Norwegian Mother, Father and Child Cohort Study; MR, Mendelian randomization; SNP, single nucleotide polymorphism; UKB, UK Biobank.
Fig 3. Multivariable regression associations of insomnia at 18 and 32 weeks of gestation with adverse pregnancy and perinatal outcomes in Avon Longitudinal Study of Parents and Children (ALSPAC).
a We adjusted for maternal age at time of delivery, education, body mass index at 12 weeks of gestation, smoking status in pregnancy and alcohol intake in the first three months of ALSPAC pregnancy, and household occupational social class. b The numbers of women in adjusted models are slightly smaller than those in crude models due to missingness (<8%) in these covariates. c We only report results for insomnia at 18 weeks for miscarriage as by definition that has to occur early in pregnancy (i.e. prior to 20-24 weeks of gestation in the different populations included in the analyses). | 2021-10-11T01:07:04.580Z | 2021-10-10T00:00:00.000 | {
"year": 2021,
"sha1": "665f7c05ad7812b3b10b145a1f68037029bfc1b3",
"oa_license": "CCBY",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2021/10/10/2021.10.07.21264689.full.pdf",
"oa_status": "GREEN",
"pdf_src": "MedRxiv",
"pdf_hash": "665f7c05ad7812b3b10b145a1f68037029bfc1b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
843956 | pes2o/s2orc | v3-fos-license | Multimodal MRI and cognitive function in patients with breast cancer prior to adjuvant treatment — The role of fatigue
An increasing body of literature indicates that chemotherapy (ChT) for breast cancer (BC) is associated with adverse effects on the brain. Recent research suggests that cognitive and brain function in patients with BC may already be compromised before the start of chemotherapy. This is the first study combining neuropsychological testing, patient-reported outcomes, and multimodal magnetic resonance imaging (MRI) to examine pretreatment cognition and various aspects of brain function and structure in a large sample. Thirty-two patients with BC scheduled to receive ChT (pre-ChT+), 33 patients with BC not indicated to undergo ChT (pre-ChT−), and 38 no-cancer controls (NCs) were included. The examination consisted of a neuropsychological test battery, self-reported aspects of psychosocial functioning, and multimodal MRI. Patients with BC reported worse scores on several aspects of quality of life, such as higher levels of fatigue and stress. However, cortisol levels were not elevated in the patient groups compared to the control group. Overall cognitive performance was lower in the pre-ChT+ and the pre-ChT− groups compared to NC. Further, patients demonstrated prefrontal hyperactivation with increasing task difficulty on a planning task compared to NC, but not during a memory task. White matter integrity was lower in both patient groups. No differences in regional brain volume and brain metabolites were found. The cognitive and imaging data converged to show that symptoms of fatigue were associated with the observed abnormalities; the observed differences were no longer significant when fatigue was accounted for. This study suggests that cancer-related psychological or biological processes may adversely impact cognitive functioning and associated aspects of brain structure and function before the start of adjuvant treatment. Our findings stress the importance to further explore the processes underlying the expression of fatigue and to study whether it has a contributory role in subsequent treatment-related cognitive decline.
Introduction
Neuropsychological and magnetic resonance imaging (MRI) studies show the occurrence of cognitive decline and brain changes following chemotherapy (ChT) in patients with breast cancer (BC) (Pomykala et al., 2013). Preclinical studies support these findings and have demonstrated increased apoptosis in healthy proliferating cells in the central nervous system as well as damage to neural precursor cells (Seigers et al., 2013).
Interestingly, several neuropsychological studies have observed lower than expected cognitive performance in patients with BC already before the start of adjuvant treatment (Lange et al., 2014; Wefel and Schagen, 2012). A small number of MRI studies also explored structural and functional brain differences before exposure to systemic treatment. Of two studies assessing regional brain morphology, one found lower gray matter (GM) density in patients with BC before adjuvant treatment (McDonald and Saykin, 2013). By contrast, Deprez et al. (2012) did not find differences in white matter microstructure between patients with BC about to undergo chemotherapy (pre-ChT+), patients with BC who do not require chemotherapy (pre-ChT−), and no-cancer controls (NCs) after correcting for depressive symptoms.
More consistent results on pretreatment cognitive and brain differences between patients with cancer and controls come from functional MRI (fMRI) studies. These studies found a predominant pattern of prefrontal hyperactivation and slightly lower or similar task performance in patients with BC versus NCs before the start of adjuvant treatment, suggestive of compensatory processes (Cimprich et al., 2010; McDonald et al., 2012). One study found that higher levels of worry were associated with lower cognitive performance and lower brain deactivation in patients with BC (Berman et al., 2014). Three fMRI studies assessing various cognitive functions in one sample of pre-ChT+ and NCs showed that group differences in BOLD activation were dependent on the specific cognitive test performed and the inclusion of several covariates, e.g. cortisol or days since surgery (López Zunini et al., 2013; Scherling et al., 2011, 2012). These studies emphasize the potential relevance of psychosocial and biological factors in cognitive and brain function before the start of chemotherapy. In summary, both neuropsychological and imaging studies point to the potential existence of pretreatment cognitive and brain dysfunction in patients with BC. Psychological and biological mechanisms have been proposed to underlie these impairments, including surgical factors and anesthesia, fatigue, comorbidities, and cancer staging, but as yet no studies have determined and explained this phenomenon convincingly (Ahles et al., 2008; Cimprich et al., 2010; Mandelblatt et al., 2014; Wefel et al., 2004).
The current study sets out to build upon previous findings of cognitive problems and changes in brain function and structure prior to adjuvant treatment by combining neuropsychological testing, MRI, and patient-reported outcomes (PROs). With this set-up, differences in several data types can be related to each other and potential underlying psychological factors can be identified.
Subjects
Participants were patients with BC who had undergone mastectomy or lumpectomy and age-matched no-cancer controls (NC). Patients were either scheduled to receive adjuvant anthracycline-based chemotherapy (pre-CHT+) or did not require chemotherapy (pre-CHT−). Subjects were eligible if they met the following criteria: female, under 70 years of age, sufficient command of the Dutch language, no previous malignancies. Patients additionally had to have a diagnosis of primary breast cancer, no distant metastases, and no other treatment than surgery at the time of baseline assessment. NCs were recruited through participants, as well as through advertisements in the participating hospitals.
The study was approved by the Institutional Review Board of the Netherlands Cancer Institute, serving as the central ethical committee for all participating institutes. Written informed consent was obtained according to the declaration of Helsinki and following institutional guidelines. The experiment was conducted at the Academic Medical Center of the University of Amsterdam and the Spinoza Centre for Neuroimaging.
Follow-up data collection at approximately 6 months after completion of chemotherapy, or at matched intervals, is ongoing and will be presented elsewhere.
Procedures
Seven questionnaires were administered to assess PROs, such as health-related quality of life (QOL), anxiety and depression, mood, stress, cognitive problems, and personality dimensions (Supplementary Table 1). Premorbid verbal IQ was estimated with the use of the Dutch Adult Reading Test (NART) (Schmand et al., 1992). A comprehensive neuropsychological test battery was used, consisting of 18 test indices, grouped into the domains of executive function, attention, visual memory, verbal memory, processing speed, and motor speed (Supplementary Table 2). Assignment into domains was determined based on literature and no outcome could be present in more than one domain.
To objectively assess long-term stress, hair samples were collected and subsequently analyzed by the Department of Biopsychology of the Technische Universität Dresden. Cortisol levels were determined in segments of 2 cm, representing a period of 2 months. Wash and steroid extraction procedures are described elsewhere (Kirschbaum et al., 2009). MRI data were acquired using a 3.0 Tesla Intera full-body MRI scanner (AMC Medical Center) and a 3.0 Tesla Achieva full-body MRI scanner (Spinoza Centre for Neuroimaging) (Philips Medical Systems, Best, The Netherlands). A SENSE 8-channel receiver head coil was used at both locations.
An axial fluid attenuated inversion recovery (FLAIR) scan (TR/TE/TI = 11,000/100/2600 ms, FOV 230 × 230 mm, 27 slices, voxel size 0.9 × 1.4 × 5.0 mm, slice gap 0.5 mm) was acquired to score white matter abnormalities with the visual rating score of Fazekas (range 0-3) (Fazekas et al., 1987). All ratings were performed by a neuroradiologist (L.R.) blind to the clinical data. A T1-weighted three-dimensional magnetization prepared rapid gradient echo (MPRAGE) scan (TR/TE = 6.6/3.0 ms, FOV 270 × 252, 170 slices, voxel size 1.05 × 1.05 × 1.20 mm) was made for anatomical reference and voxel-based morphometry (VBM). Single-voxel proton MR spectroscopy (1H-MRS) was acquired in the left semioval center (SC) and the left hippocampus (HC) to assess neurochemical properties of white and gray matter, respectively. The semioval center allows for data acquisition in white matter alone, and it has been shown to be vulnerable to chemotherapy (De Ruiter et al., 2011). The hippocampus has previously been shown to be affected by cancer treatment and is an interesting brain structure because of its role in memory. The left side of the brain was chosen because of its predominance in cognition. Fully automated point resolved spectroscopy (PRESS) including global shimming (voxel size = 6.0 ml for semioval center and 2.56 ml for hippocampus, TR/TE = 200/35-40 ms, NSA = 64) was obtained. Diffusion tensor imaging (DTI) was acquired in 32 directions (TR/TE = 8136/94 ms, FOV 250 × 250 mm, 64 slices, voxel size 2.23 × 2.23 × 2.00 mm, b-value 1000 s/mm²), covering the entire brain. Functional MRI acquisition was based on T2*-weighted gradient echo planar imaging (EPI) of 38 axial slices (voxel size 2.3 × 2.3 × 2.3 mm, interslice gap 0 mm, matrix size 96 × 96, TR = 2.1 s, TE = 25 ms). We acquired 230 volumes during the Tower of London (ToL) task, 170 for memory encoding, and 125 for memory retrieval.
Artrepair was used to detect and repair artifacts (Mazaika et al., 2009). All fMRI preprocessing was performed using SPM8 (Statistical Parametric Mapping; Wellcome Trust Centre for Neuroimaging, London, UK). Slice timing correction was applied to the ToL and Retrieval images. All fMRI images were reoriented and realigned to the first volume. Individual T1 scans were segmented based on gray matter, white matter, and cerebrospinal fluid. Coregistered EPI and T1 scans were normalized to the Montreal Neurologic Institute (MNI) reference brain with the use of the segmentation parameters. Finally, smoothing was applied using an 8-mm full-width half-maximum Gaussian kernel.
An abbreviated version of the Tower of London (ToL) paradigm by van den Heuvel et al. (2003) was used to assess prefrontal function. During planning, five conditions ranging from one to five moves were presented. A starting configuration and a target configuration were displayed. Each consisted of three colored beads placed on three vertical rods, which could accommodate one, two, and three beads respectively. Subjects were instructed to determine the minimum number of steps required to get from the starting to the target configuration by mentally moving the beads one at a time. In the baseline condition, the number of yellow and blue beads had to be counted. The presentation of trials was self-paced with a maximum duration of 1 min per trial. The task lasted 8 min. Participants were instructed to focus on accuracy rather than on speed. The task was practiced outside the scanner. The Paired Associates memory task was based on a task paradigm by Jager et al. and was shown to reliably activate the parahippocampal region (Jager et al., 2007). During associative learning, subjects were asked to indicate if the person shown in the portrait photo was likely to live in the home interior in the simultaneously presented picture. The baseline condition consisted of three arrowheads pointing to the left or right ("<<<" or ">>>"), indicating a left or right button press respectively. The arrows were superimposed on blurred portrait and interior design pictures to match the visual input of the associative learning condition. The learning and baseline trials were presented in a block design. For the learning condition, six blocks were presented with five trials per block. Stimuli were presented for 5 s. The baseline condition consisted of five blocks with five stimuli, which were presented for 3 s. Directly after the learning part of the task, a recognition test was administered. The baseline trials were the same as in the learning part. For the recognition part, all pictures from the learning phase were shown and subjects were asked to indicate whether they had seen the same combination of pictures before. Sixty percent of the pairs were the same as in the learning phase. The order of the trial types was pseudo-randomized. All stimuli were presented for 4 s.
Statistical analysis
Demographic and clinical variables, PROs, neuropsychological data, MR spectra, and fMRI performance data were analyzed with SPSS 20 (IBM, Armonk, NY) by means of ANOVA or chi-squared test, as appropriate. Age and IQ were included in neuropsychological and fMRI analyses. Age and scanner location were included in analyses of all MRI data. Corrections for multiple comparisons were applied and will be specified per data type.
Neuropsychological data were analyzed in line with International Cognition and Cancer Task Force (ICCTF) guidelines (Wefel et al., 2011). Standardized z-scores for all test indices were calculated based on the mean and standard deviation of the NC group. A cut-off for cognitive impairment, based on the 95th percentile of the NCs, was defined as scores of at least two standard deviations below the mean on at least three test indices (Schagen et al., 2006). The difference in the proportion of impaired subjects was tested using logistic regression.
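As a rough illustration of this rule, the following sketch (hypothetical variable and function names; it assumes all test indices are signed so that higher scores mean better performance) standardizes scores on the control group and flags impairment:

```python
import numpy as np

def icctf_impairment(scores, nc_scores, z_cut=-2.0, min_indices=3):
    """Flag impairment as >= `min_indices` test indices falling at least
    two SDs below the no-cancer control mean, as described above."""
    mu = nc_scores.mean(axis=0)        # control mean per test index
    sd = nc_scores.std(axis=0, ddof=1) # control SD per test index
    z = (scores - mu) / sd             # standardized z-scores per subject
    return (z <= z_cut).sum(axis=1) >= min_indices
```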
In addition, the Mahalanobis Distance (MHD) was calculated as a summary measure of overall performance (Crawford et al., 2011; DeCarlo, 1997; Koppelmans et al., 2012). MHD calculations were based on residual scores, the difference between individual scores and the intercept, adjusted for age and IQ. Residual scores that were greater than their respective mean score were assigned a value of zero, so that negative scores could not be compensated (Koppelmans et al., 2012). The residual scores of the control group were used to calculate a variance-covariance matrix, corresponding to the correlation between tests and the variance within the tests in the control group (DeCarlo, 1997; Koppelmans et al., 2012). The variance-covariance matrix was then used to extract the unique variance of each variable for each subject for all groups. Log2 transformation of the resulting MHD was applied because of the skewness of its distribution, and between-group differences were calculated with an ANOVA. By taking into account the correlations between tests, MHD corrects for multiple comparisons. Domain scores and patient-reported outcomes were corrected for multiple comparisons by lowering the critical p-value to 0.01.
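A minimal numerical sketch of this summary measure is given below (function and variable names are ours; whether the control variance-covariance matrix is computed on truncated or raw residuals is our assumption, as the text does not state it explicitly):

```python
import numpy as np

def log2_mhd(residuals, nc_residuals):
    """Log2-transformed Mahalanobis distance over age/IQ-adjusted residuals.

    residuals:    (n_subjects, n_tests), signed so higher = better.
    nc_residuals: (n_controls, n_tests) residuals of the control group.
    """
    d = np.minimum(residuals, 0.0)        # above-mean residuals set to zero
    d_nc = np.minimum(nc_residuals, 0.0)  # same truncation for controls
    s_inv = np.linalg.pinv(np.cov(d_nc, rowvar=False))  # control var-cov matrix
    mhd = np.einsum("ij,jk,ik->i", d, s_inv, d)  # squared distance per subject
    return np.log2(mhd + 1e-9)            # log2 transform to reduce skewness
```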
Reaction time (RT) for fMRI tasks was calculated for correct trials. For the ToL, all active versus baseline trials as well as a parametric contrast with increasing task load (ToL Load) were modeled. For the Paired Associates task, encoding trials were contrasted to baseline trials, for retrieval, hits were contrasted to baseline. Group differences for contrasts of interest were evaluated with random effects analyses.
MR spectroscopy was analyzed using a standard protocol within LCModel (Provencher, 1993). The standard VBM8 pipeline within SPM8 was used for analysis of MPRAGE images. For fMRI and VBM, whole-brain and ROI differences were considered statistically significant at an FWE-corrected p-value of 0.05. DTI preprocessing and tensor fitting were performed within the FMRIB Diffusion Toolbox (FDT) (part of FMRIB's Software Library (FSL) (Smith et al., 2006)). Diffusion data were 'skeletonized' with tract-based spatial statistics (TBSS, part of FSL (Smith et al., 2006)) and tested nonparametrically (Nichols and Holmes, 2002) (for more detailed information on MRI analyses, see supplemental material).
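For orientation, a hedged sketch of such a DTI pipeline is shown below; it wraps the FSL command-line tools named above (dtifit, the TBSS scripts, and randomise for nonparametric permutation testing). File names, the 0.2 skeleton threshold, and the permutation count are illustrative, and the TBSS scripts expect to be run from a study directory containing the FA images; this is not the authors' actual script.

```python
import subprocess

def run(cmd):
    print(">>", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Tensor fitting with FDT (per subject; file names are placeholders)
run("dtifit -k dwi.nii.gz -o dti -m nodif_brain_mask.nii.gz -r bvecs -b bvals")

# Tract-based spatial statistics
run("tbss_1_preproc dti_FA.nii.gz")  # prepare FA images, move into FA/
run("tbss_2_reg -T")                 # nonlinear registration to FMRIB58_FA
run("tbss_3_postreg -S")             # derive the mean FA image and skeleton
run("tbss_4_prestats 0.2")           # project FA onto the thresholded skeleton

# Nonparametric group inference (design.mat/design.con built with FSL's GLM tools)
run("randomise -i stats/all_FA_skeletonised -o stats/tbss "
    "-m stats/mean_FA_skeleton_mask -d design.mat -t design.con -n 5000 --T2")
```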
Correlation analyses were only performed when significant group differences were found or when a strong relation was expected. In line with existing literature, we calculated the following correlations. Correlations between various PROs were examined. Neuropsychological performance (MHD, domain scores) was correlated with PROs. Voxel-based analyses for BOLD signal, GM volume, and FA and MD were performed within SPM8 and FSL to study associations with specific PROs and neuropsychological performance. Whole-brain FA and MD values were extracted, as well as BOLD signal in significantly different clusters, and these were correlated with neuropsychological performance and PROs.
Participants
A total of 285 participants were eligible to participate in this study, of whom 137 participated. The main reasons given for declining were 'too burdensome' and 'hesitant about MRI'. Four patients were excluded because of incidental findings on MRI scans, and 21 patients were excluded because three or more scans of different modalities were missing. The groups were matched on age and IQ, leaving 32 pre-ChT+, 33 pre-ChT−, and 38 NC subjects for our final analyses (see Fig. 1).
All patient characteristics and PROs are presented in Table 1. As expected, no significant differences were found between groups on age and premorbid IQ. The patient groups also did not differ on time since surgery.
Patient-reported outcomes
One-way ANOVA showed that the three groups differed on physical, F(2,100) = 7.0, p = .001, role, F(2,100) = 12.3, p < .001, and social functioning, F(2,100) = 10.1, p < .001, global quality of life, F(2,100) = 8.6, p < .001, and pain, F(2,100) = 8.2, p < .001 (as measured with the EORTC QLQ-C30). Post hoc analyses demonstrated significantly lower physical, role, and social functioning, lower global QOL, and more pain in pre-ChT+ as well as pre-ChT− compared to NC. A significant difference in fatigue scores (EORTC QLQ-C30) was found between the groups, F(2,100) = 6.2, p = .003; post hoc testing showed more fatigue in pre-ChT− compared to NC, p = .001. A similar pattern was seen in fatigue scores on the day of the assessment, measured with the POMS, but this did not reach significance. Pre-ChT+ patients reported significantly more stress than the control group, F(2,100) = 5.5, p = .006. A trend was seen for patients reporting more anxiety and depression (measured with the HSCL) than controls. Both patient groups were significantly less active than the control group, as measured with the vigor subscale of the POMS, F(2,100) = 7.2, p = .001. The patient groups indicated more mood disturbances on the POMS total scores, which did not reach significance. None of the other PROs showed significant differences between groups.
Neuropsychological assessment
Log2 transformed MHD was significantly different between the groups, F(2,100) = 4.0, p = .021, indicating worse cognitive performance in pre-ChT+ and pre-ChT− compared to controls (see Table 2). However, when fatigue, perceived stress, or anxiety and depression were included in the model, this difference was no longer significant. Domain scores and the proportion of cognitively impaired subjects were not significantly different between any of the groups (Table 2).
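A minimal sketch of this covariate-adjustment step, assuming a hypothetical per-participant data file with columns log2_mhd, group, age, iq, and fatigue, could look as follows (this is not the authors' analysis script):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cognition.csv")  # hypothetical file, one row per subject

# Group effect on log2(MHD), adjusted for age and IQ
base = smf.ols("log2_mhd ~ C(group) + age + iq", data=df).fit()
# Same model with fatigue added: here, the group term lost significance
adj = smf.ols("log2_mhd ~ C(group) + age + iq + fatigue", data=df).fit()

print(sm.stats.anova_lm(base, typ=2))
print(sm.stats.anova_lm(adj, typ=2))
```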
Task-related fMRI
Performance and mean reaction time on the ToL were not significantly different between the three groups (see Supplementary Table 3). For both active versus baseline contrast and task load, all groups showed robust activation of the dorsolateral prefrontal cortex (DLPFC), premotor cortex, precuneus, posterior parietal cortex (PPC), striatum, and cerebellum (see Supplementary Fig. 1 and Supplementary Table 4). Whole-brain as well as ROI analyses showed no significant group differences for the active versus baseline contrast. In the pre-ChT− versus the NC group we observed significant hyperactivation of the dorsomedial prefrontal cortex extending into the DLPFC with increasing task difficulty (see Fig. 2 and Table 3). The pre-ChT+ group demonstrated subthreshold hyperactivation at the same location when compared to NC (see Fig. 2). These differences were no longer significant when fatigue scores were included in the model, while other PROs did not elicit the same effect. ROI analysis did not show any differences between the groups.
Reaction time during memory encoding and retrieval was not significantly different between groups, as was retrieval performance (see Supplementary Table 3). The ventral stream (occipital areas and fusiform gyrus) extending into the parahippocampal gyrus and the hippocampus proper showed significant activation across groups during encoding (see Supplementary Fig. 1 and Supplementary Table 4). During retrieval, the ventral and dorsal stream, and parahippocampal gyrus were significantly activated. Whole brain and ROI analyses revealed no significant group differences during encoding and retrieval.
Structural MRI
Whole brain analyses of regional GM and white matter volume showed no significant differences. GM volume of ROIs in the DLPFC and superior parietal cortex was not significantly different between groups. Voxel-based analyses showed widespread lower FA and higher MD in both patient groups compared to NCs, indicating lower white matter integrity (see Fig. 3 and Supplementary Table 5). The differences in FA were no longer significant when fatigue, but not other PROs, was added to the model.
Correlations
Performance on the verbal memory domain was significantly correlated with HSCL depression scores, r = -.269, p = .006. Processing speed was significantly correlated with physical functioning, r = .290, p = .003, emotional functioning, r = .334, p = .001, global QOL, r = .339, p = .001, and fatigue, r = -.292, p = .003, subscales of EORTC QLQ-C30, HSCL anxiety, r = -.293, p = .003, PSS, r = -.334, p = .001, mood, r = -.418, p < .001, and cognitive complaints, r = .260, p = .008. MHD was significantly correlated with emotional functioning, r = -.295, p = .003, social functioning, r = -.293, p = .003, global QOL, r = -.306, p = .002, and fatigue, r = .267, p = .006, subscales of EORTC QLQ-C30, mood, r = .261, p = .008, and cognitive complaints, r = -.277, p = .005. No significant correlations were found between cortisol levels and self-reported outcomes or test performance. A significant correlation between fatigue and ToL task load BOLD activation across all groups was found in the dorsomedial prefrontal cortex (see Fig. 4 and Table 3). None of the other factors was significantly associated with BOLD signal in any of the tasks. Voxel-based analyses did not show significant correlations between FA and MD and PROs or MHD.

Fig. 2. Tower of London task load contrast. Main task effect and group comparisons with and without fatigue as a covariate (differences were considered statistically significant at cluster-corrected p FWE < .05; shown at p < .001, except for the non-significant difference pre-ChT+ > NC, shown at p < .05; brighter colors indicate higher T-values); pre-ChT+, patients with BC before chemotherapy; pre-ChT−, patients with BC not scheduled to undergo chemotherapy; NCs, no-cancer controls.

Table 2 notes: Values indicate mean ± SD unless indicated otherwise. All analyses were adjusted for age and IQ. Pre-ChT+, patients with BC before chemotherapy; pre-ChT−, patients with BC not scheduled to undergo chemotherapy; NCs, no-cancer controls; MHD, Mahalanobis Distance (higher score indicates worse overall cognitive performance); domain scores are expressed as z-scores; neuropsychological test scores are raw scores. a Higher scores indicate worse performance.
Discussion
To the best of our knowledge, this is the first study combining different MRI modalities, neuropsychological assessment, and PROs to evaluate pretreatment cognitive function and potential mediating factors in patients with various disease stages. Our findings show worse cognitive performance, prefrontal hyperactivation, and lower white matter integrity in breast cancer patients compared to no-cancer controls, and reveal fatigue as an important factor contributing to these results. No significant differences in regional brain volume or brain metabolites were found. Although the sample size of the current study was relatively large, insufficient power might still have played a role considering the arguably subtle effects of various aspects of cancer and treatment on MRI measures.
In agreement with some earlier reports, lower cognitive performance was observed in patients with BC compared to no-cancer controls, but only on a summary measure of cognitive performance and not when group means or percentages of impaired participants were compared. This finding suggests that small deviations across several tests account for the currently observed lower cognitive performance.
Our fMRI findings of prefrontal hyperactivation in patients with BC versus NC during a task of executive function support earlier results and might point to a specific vulnerability of these areas to the effects of cancer and its treatment. Brain hyperactivation has been reported to be due to compensation for white matter damage (Daselaar et al., 2013). Indeed, the patients with BC had widespread lower brain white matter integrity compared to healthy controls. The finding that patients with cancer also differ in their microstructural integrity compared to controls before the start of therapy is new. The only other DTI study observed no differences in white matter integrity between patients with BC and NCs when controlling for depression score, but whether the groups differed when these depression scores were not included was not reported (Deprez et al., 2012).
Interestingly, pre-ChT− patients were most deviant on PROs, brain activation and white matter integrity. As their disease is on average less advanced, these findings make it unlikely that cancer staging was driving the differences between the two patient groups. However, two previous studies have found an association between disease progression and cognitive function (Ahles et al., 2008;Mandelblatt et al., 2014). Combining the results of these studies, the role of cancer staging in cognitive dysfunction remains uncertain.
A second possible explanation for our finding of pretreatment differences might be the side effects of breast surgery or anesthesia. Since both our patient groups had undergone surgery, we could not study this relation directly but looked at time since surgery as a surrogate. Unlike some previous studies (López Zunini et al., 2013;Scherling et al., 2012), we did not find a relation between time since surgery and any of the outcome measures. A general limitation of our study is the lack of a pre-surgery assessment. Breast cancer surgery has previously been found to be associated with lower cognitive performance (Hedayati et al., 2011). These findings indicate that future studies should preferably include a pre-surgery assessment to rule out effects of anesthesia or other surgical factors.
A third factor that could be put forward to explain pretreatment differences is the higher level of comorbidity in cancer patients than in NCs. One report showed that higher rates of comorbidities were associated with cognitive impairment (Mandelblatt et al., 2014). However, patients in that study were on average 15 years older than those in the current sample. Also, the levels of comorbidities were considerably higher than in our sample, as indicated by the low levels of medication use in the current sample.

Fig. 3. Group differences in skeletonized FA and MD, with and without fatigue as a covariate (shown at p TFCE < .05; green, white matter skeleton; red indicates higher statistical significance, yellow indicates lower statistical significance).

Finally, our findings might also be influenced by differences in reasons for participation between the patient groups. About half of the eligible participants were willing to participate in our study, without differences between the patient groups, and no differences were observed with respect to relevant demographics. Still, patients facing chemotherapy are clearly in a different stage of their overall therapy plan than patients for whom chemotherapy is not required. These differences may influence both symptom perception and expression. Patients who have an intense cancer treatment ahead might still operate in 'survival mode'. Patients who do not have the prospect of being exposed to chemotherapy might, in contrast, already have moved to another mental state where they allow negative emotions associated with the disease to surface. Also, it could be that patients not receiving chemotherapy feel that the study is less relevant to them. This could lead to a potential bias in the motivation to participate, with more patients already experiencing cognitive problems in the pre-ChT− group, without influencing participation rates per se.
As mentioned before, symptoms of fatigue appear to be related to the observed impairments in patients with BC compared to NCs. Fatigue levels were markedly higher in cancer patients than in controls. Group differences in cognitive function and various MRI measures did not survive our stringent statistical thresholding when the analyses were adjusted for fatigue levels. Furthermore, fatigue levels were modestly but statistically significantly associated with cognitive function and fMRI.
Higher levels of fatigue in patients with cancer compared to the general population are a common finding (Hofman et al., 2007). Pro-inflammatory cytokines have been associated with cancer-related fatigue (Bower et al., 2011). Moreover, pro-inflammatory cytokines are frequently proposed as a possible mechanism underlying cancer-related cognitive dysfunction, but clinical studies have not yet shown a clear picture regarding cytokines, cognition and/or fatigue and the way in which these factors may influence one another (Cheung et al., 2013; Vardy et al., 2014). It might be that elevated levels of pro-inflammatory cytokines cause fatigue as well as cognitive problems without a direct relation between the two factors. Another explanation could be that fatigue leads to changes in cerebral blood flow, which in turn leads to changes in brain function and structure and consequently has an effect on cognitive function (Ocon, 2013).
In order to obtain better insight into the wide array of factors that seem to be relevant for pretreatment cognitive function, assessment of key aspects of health-related quality of life, i.e. fatigue and distress, has to be taken into account in neuroimaging and neuropsychological studies in patients with cancer.
A major strength of this study is the comprehensive coverage of various outcome measures in one report including different MRI modalities, neuropsychological assessment, and patient-reported outcomes. By combining the data and studying different important factors we present a complete assessment of cognitive dysfunction associated with cancer and treatment. Further, the current study encompasses data from a relatively large sample of patients with BC. This large sample size together with consequent correction for multiple comparisons strengthens the results presented here.
To conclude, our findings show worse cognitive performance, prefrontal hyperactivation, and lower white matter integrity in breast cancer patients compared to no-cancer controls. These results were related to fatigue. The role of fatigue in our data suggests that cancer-related psychological or biological processes negatively influence cognitive functioning and associated aspects of brain structure and function. Because even mild cognitive problems can have functional consequences (Marcotte et al., 2010), these findings should be further investigated in specific hypothesis-driven studies. Our results show the importance of using PROs to understand the cognitive problems patients with BC may already experience before treatment. By further studying these problems, it might be possible to identify patients at risk of developing cognitive dysfunction and determine underlying processes that could be used as a target for interventions.
"year": 2015,
"sha1": "e0886181fa8fcec6a86c136a2e3d2f8c02054f09",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.nicl.2015.02.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0886181fa8fcec6a86c136a2e3d2f8c02054f09",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
201969087 | pes2o/s2orc | v3-fos-license | Acute rheumatic fever diagnosis and management: Review of the global implications of the new revised diagnostic criteria with a focus on Saudi Arabia
Rheumatic fever (RF) is a common cause of acquired heart disease in children worldwide. It is a delayed, nonsuppurative, autoimmune phenomenon following pharyngitis, impetigo, or scarlet fever caused by group A β-hemolytic streptococcal (GAS) infection. RF diagnosis is clinical and based on revised Jones criteria. The first version of the criteria was developed by T. Duckett Jones in 1944, then subsequently revised by the American Heart Association (AHA) in 1992 and 2015. However, RF remains a diagnostic challenge for clinicians because of the lack of specific clinical or laboratory findings. As a result, it has been difficult for some time to maintain a balance between over- and underdiagnosis of RF cases. The Jones criteria were revised in 2015 by the AHA, and the main modifications were as follows: the population was subdivided into moderate- to high-risk and low risk; the concept of subclinical carditis was introduced; and monoarthritis was included as a feature of musculoskeletal inflammation in the moderate- to high-risk population. This review will highlight the major changes in the AHA 2015 revised Jones criteria for pediatricians and general practitioners.
Introduction
Rheumatic fever (RF) is a common autoimmune disorder, particularly in developing countries. It is caused by group A β-hemolytic streptococcal (GAS) infection in genetically susceptible individuals [1]. The five cardinal manifestations of RF outlined by Dr. Jones and published in 1944 were carditis, arthritis, chorea, erythema marginatum, and subcutaneous nodules [2]. These features have been memorized by health professionals for several decades with little amendment over time. In Saudi Arabia, a few studies from different regions have demonstrated the mild nature of acute rheumatic fever (ARF) attacks, with frequent carditis ranging from 58% to 65% of patients with ARF [3][4][5][6]. Incidence of RF has declined remarkably in developed countries, but it has not yet been eliminated. In developing countries, it remains a major health challenge and results in lifelong, devastating sequelae.
In view of the high prevalence of the disease, especially in developing countries, it is important to emphasize the revision of the Jones criteria for pediatricians and general practitioners to improve diagnostic sensitivity and achieve earlier disease detection. This will consequently lead to better clinical outcomes for the disease.
Methods and discussion
Peer-reviewed articles written in English and indexed in PubMed and EMBASE were screened by four investigators to review ARF diagnosis and management. The following keywords were used: acute rheumatic fever, rheumatic heart disease, Jones criteria, acute rheumatic fever management guidelines, and Saudi Arabia. Article selection was not restricted by year of publication.
Epidemiology
RF affects school-age children in the 5- to 14-year age range [7,8]. Although it occurs all over the world, its epidemiology varies widely. Nowadays, the annual incidence ranges from <0.5/100,000 in developed countries to >100/100,000 in developing countries [9]. The mean incidence rate of a first attack of RF is from five to 51 per 100,000 population [10]. The annual incidence rate is lowest in America and Western European countries (<10/100,000), and there is a relatively higher incidence rate in Eastern Europe, Asia, Australasia, and the Middle East (>10/100,000) [10]. Globally, it has been estimated that approximately 500,000 new RF cases occur annually and that about 230,000 people die each year from the disease [11]. The long-term sequela of acute RF is rheumatic heart disease (RHD), which is the most common cause of heart failure in poor populations [9].
Abbreviations: RF, rheumatic fever; RHD, rheumatic heart disease; GAS, group A β-hemolytic streptococcal infection; AHA, American Heart Association; ASO, anti-streptolysin O titer; ESR, erythrocyte sedimentation rate; CRP, C-reactive protein; NSAIDs, non-steroidal anti-inflammatory drugs; penicillin V, phenoxymethylpenicillin; penicillin G, benzylpenicillin.

Over the past few years, the global burden of RHD has dramatically reduced in developed countries [12]. However, RHD remains a significant issue in many developing countries, with approximately 1% of all school-age children showing signs of the disease [12]. Arab Gulf countries, Asia, Africa, the Pacific, and the indigenous populations of Australia and New Zealand are most commonly affected by RHD [12,13]. In Saudi Arabia, published data about the prevalence of RHD are limited. However, it is known that the percentage of children with RHD in Saudi Arabia is still higher than the global rate [14]. In Saudi Arabia, a study was conducted in patients with RF between 1994 and 2003 [15]. The study found that over a 10-year period, 96 children (mean age, 9 years) were diagnosed with RF. The annual incidence was 17 cases in 2 years (1994 and 1995), but only two in 2003, signifying a dramatic decrease in incidence in Saudi Arabia. The average incidence of RF in Saudi Arabia is eight cases annually [15]. Another study conducted on all regions in Saudi Arabia showed a high RF prevalence of 0.3 in 1000 and chronic RHD prevalence of 2.8 in 1000, with an overall rate of 3.1 in 1000 school-age children [4]. Other published studies on different regions in Saudi Arabia reported a high prevalence rate in children older than 5 years [5,6]. RF could be the first attack or a relapse. For example, the study of Abbag et al [6] found that 34 out of 40 (85%) patients had initial attacks and 12 (30%) were relapse cases, whereas Al-Eissa et al's [5] study reported 51 initial attacks out of 67 (76%) children and 22 (32%) relapse cases.
Rheumatic fever diagnostic criteria
The diagnosis of RF is based on Dr. Jones' criteria, which were recently revised (2015). The criteria include major and minor manifestations, and risk stratification has recently been applied to populations, dividing them into low-risk and moderate- to high-risk [9]. The major diagnostic criteria are carditis, arthritis, chorea, erythema marginatum, and subcutaneous nodules, whereas the minor criteria are arthralgia, hyperpyrexia, high erythrocyte sedimentation rate (ESR) and/or high C-reactive protein (CRP), and prolonged PR interval [9]. To diagnose a patient with RF as a first episode of the disease, confirmation of two major criteria or one major and two minor criteria is required, along with evidence of antecedent GAS infection. The diagnosis of subsequent episodes of RF requires either two major criteria, one major and two minor criteria, or three minor criteria [9]. The evidence of GAS infection is confirmed by one of the following: a positive throat culture for GAS; a rising trend in anti-streptolysin O (ASO) titers rather than a single titer result; or a positive rapid group A streptococcal carbohydrate antigen test in a child whose clinical presentation suggests a high pretest probability of streptococcal pharyngitis [9].
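The counting logic of these rules can be summarized in a short sketch (illustrative only; it deliberately ignores how individual manifestations are ascertained and the risk-group-specific definitions discussed below):

```python
def jones_diagnosis(n_major, n_minor, gas_evidence, first_episode=True):
    """Revised Jones criteria decision rule as described above."""
    if not gas_evidence:          # antecedent GAS infection is required
        return False
    if n_major >= 2 or (n_major >= 1 and n_minor >= 2):
        return True
    # a subsequent (recurrent) episode may also rest on three minor criteria
    return (not first_episode) and n_minor >= 3
```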
Difference between low-risk and moderate-to high-risk populations
The differences between low-risk and moderate- to high-risk populations in the major criteria are as follows. Arthritis must be polyarthritis in the low-risk population, whereas in the moderate- to high-risk population it can be polyarthritis, polyarthralgia, and/or monoarthritis. Meanwhile, the differences in the minor criteria are the following. In arthralgia, the number of affected joints is important in risk stratification. Polyarthralgia is considered a minor criterion in the low-risk population, whereas monoarthralgia is a minor criterion in the moderate- to high-risk population. Moreover, an ESR ≥30 mm/h is considered a minor criterion in the moderate- to high-risk population, but in the low-risk population it must be ≥60 mm/h. Regarding fever, ≥38.5°C is considered febrile in the low-risk population, whereas in the moderate- to high-risk population, ≥38.0°C is considered as fever [9]. The revised criteria are summarized in Table 1.
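These risk-dependent thresholds can be written down compactly; the sketch below simply encodes the cut-offs quoted in this paragraph (the dictionary keys and function names are our own):

```python
# Which arthralgia pattern counts as a minor criterion in each risk group,
# plus the fever and ESR cut-offs quoted above
MINOR_CRITERIA_THRESHOLDS = {
    "low_risk":           {"fever_c": 38.5, "esr_mm_h": 60, "arthralgia": "poly"},
    "moderate_high_risk": {"fever_c": 38.0, "esr_mm_h": 30, "arthralgia": "mono"},
}

def fever_is_minor_criterion(temp_c, risk_group):
    return temp_c >= MINOR_CRITERIA_THRESHOLDS[risk_group]["fever_c"]

def esr_is_minor_criterion(esr_mm_h, risk_group):
    return esr_mm_h >= MINOR_CRITERIA_THRESHOLDS[risk_group]["esr_mm_h"]
```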
In Saudi Arabia, a study published in 2009 reported that the most common presentation of the RF major criteria is arthritis, which was present in 73%, followed by carditis in 17%, and chorea in 10% of cases [15]. None of the patients included in the study presented with erythema marginatum or subcutaneous nodules. For the minor criteria, an elevated ESR was most commonly seen in 94% of patients, followed by high grade fever in 83%, prolonged PR interval in 23%, and arthralgia in the absence of arthritis in 11% [15].
Differences between the 1992 Jones criteria and the 2015 American Heart Association criteria
There have been two substantial changes in the recently published 2015 American Heart Association (AHA) criteria compared to the 1992 Jones criteria [21]. One is that susceptible children are divided into two groups on the grounds of epidemiological variation regarding the risk of developing the disease. The reason for this change is that ARF incidence varies significantly from one country to another. Dividing the population into low-risk and moderate- to high-risk groups would help prevent overdiagnosis in low-risk populations and underdiagnosis in moderate- to high-risk populations [9]. The risk stratification depends on the incidence of RF in the area. Low-risk populations are shown in orange and moderate- to high-risk populations in purple on the world map shown in Fig. 1. Further details of each country are listed in Table 2. For example, children aged 5-14 years living in a community with an RF incidence of <2/100,000/year, or children of any age where the prevalence of chronic rheumatic carditis is one or less/1000 per year, are considered low risk (Class IIa, Level of Evidence C). Meanwhile, children living in areas with an incidence of two or more/100,000/year in children aged 5-14 years or a prevalence of chronic rheumatic carditis of more than one/1000/year at any age are considered at a moderate to high risk of developing the disease (Class IIa, Level of Evidence C) [21]. The diagnostic criteria for an initial RF episode in low-risk patients have not changed from the Jones criteria published in 1992. These state that the patient should have two major manifestations or one major plus two minor manifestations [9]. However, in the update published in 2015, polyarthralgia and monoarthritis are considered as major criteria in patients belonging to the moderate- to high-risk group. Moreover, in this risk group, monoarthralgia is considered a minor criterion [9]. The minor criteria stipulate an ESR ≥60 mm in the 1st hour in low-risk individuals but an ESR ≥30 mm/h in moderate- to high-risk patients. There are no other changes in the minor criteria, regardless of the risk stratification group [9].
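A sketch encoding the population-level cut-offs quoted above (function and argument names are ours):

```python
def population_risk(arf_incidence_per_100k, rhd_prevalence_per_1000):
    """Classify a population per the 2015 AHA cut-offs described above:
    school-age ARF incidence and all-age chronic rheumatic carditis
    prevalence; populations not clearly low risk are moderate to high risk."""
    if arf_incidence_per_100k < 2 or rhd_prevalence_per_1000 <= 1:
        return "low_risk"
    return "moderate_high_risk"
```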
Most importantly, regardless of the risk stratification, the latest update recommends using echocardiography with Doppler to diagnose carditis and subclinical carditis in all patients [9]. Echocardiography has a high sensitivity and is more reliable in diagnosing valvular involvement in ARF [51]. In addition, it is recommended to repeat echocardiography in case of uncertainty [9]. Carditis has been accepted globally as a major criterion [9]. However, the concept of subclinical carditis, which is defined as positive findings of mitral or aortic valvulitis on an echocardiogram without heart murmurs or other clinical signs, has emerged as a major criterion [9]. The echocardiographic features of rheumatic carditis are focused on the aortic and mitral valves. The American College of Cardiology has described, in brief, mitral valve regurgitation as detected in two or more views, with jet length ≥2 cm, peak velocity >3 m/s, and pansystolic flow. It has also described aortic valve regurgitation as detected in two or more views, with jet length ≥1 cm, peak velocity >3 m/s, and pandiastolic flow. All four above-mentioned criteria must be met to diagnose mitral valve regurgitation and aortic valve regurgitation, respectively [9]. The detailed echocardiographic features of rheumatic carditis are shown in Table 3 [52].
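Because all four Doppler criteria must be met simultaneously, the check reduces to a simple conjunction (a sketch with our own parameter names, for illustration only):

```python
def pathological_regurgitation(n_views, jet_length_cm, peak_velocity_m_s,
                               pan_phase, valve):
    """Doppler criteria for rheumatic valvular regurgitation as quoted above.

    valve:     'mitral' (jet >= 2 cm, pansystolic) or
               'aortic' (jet >= 1 cm, pandiastolic).
    pan_phase: True if the jet is pansystolic/pandiastolic as appropriate.
    """
    min_jet = 2.0 if valve == "mitral" else 1.0
    return (n_views >= 2 and jet_length_cm >= min_jet
            and peak_velocity_m_s > 3.0 and pan_phase)
```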
In all patients diagnosed with rheumatic carditis, the mitral valve is usually involved, and the most common finding in color flow imaging is mitral regurgitation [53]. Mitral regurgitation in rheumatic carditis is associated with restriction of leaflet mobility and/or ventricular dilatation [53]. Rheumatic carditis does not result in congestive heart failure without hemodynamically significant valve lesions [53]. Moreover, it is observed in patients with rheumatic carditis that valve nodules could show echocardiographic equivalents of rheumatic verrucae [53]. Echocardiography is widely available worldwide, and numerous studies have reported echocardiography/Doppler evidence of mitral or aortic valve regurgitation in patients with ARF despite the absence of classic auscultatory findings. However, expert cardiac sonographers are not widely available, particularly in developing countries.
Arthritis in the 1992 Jones criteria is described as migratory polyarthritis in the larger joints, mainly the knees, ankles, wrists, and elbows, which tends to improve significantly with salicylates or nonsteroidal anti-inflammatory drugs (NSAIDs) [54]. However, in the 2015 update, monoarthritis and polyarthralgia are also considered major manifestations of musculoskeletal inflammation in the moderate- to high-risk population, as described above.

Table 2. Reported data on low and moderate- to high-risk areas.
Impact of AHA 2015 on clinical practice in Saudi Arabia
We believe the study conducted by Kumar et al [55] has provided good insight into how the AHA 2015 update on Jones criteria might impact clinical practice, especially in developing countries. The authors compared the updated AHA criteria to the World Health Organization (WHO) 2004 and Australian guidelines 2012 [52,56] and found that newer criteria that incorporated subclinical carditis and monoarthritis as major criteria led to a modest increase in the diagnosis of ARF cases. Until local data are available for Saudi Arabia, we think a similar situation is applicable to our population.
Treatment of ARF
There are two main goals in treating ARF; the first is to eliminate GAS infection via antistreptococcal treatment and the second is to treat the clinical manifestations such as arthritis, carditis, and chorea [1,57].
Antimicrobial options for eradicating group A β-hemolytic streptococcus infection
(1) Oral phenoxymethylpenicillin (penicillin V) is the anti-streptococcal management of choice for streptococcal pharyngitis. The correct dose of phenoxymethylpenicillin for patients weighing >27 kg is 500 mg two or three times a day for 10 days. Children weighing ≤27 kg should be given 250 mg two or three times a day for 10 days [1,57]. (2) Another choice in anti-streptococcal treatment is benzylpenicillin (penicillin G), which is used only in hospital facilities because it is given intramuscularly. It is given as a single dose of 1.2 million IU for patients with a body weight >27 kg or age >6 years, and 600,000 IU for pediatric patients with a body weight ≤27 kg or age <6 years [1,57]. (3) Also, amoxicillin can be given orally for 10 days at a dose of 50 mg per kg, with a maximum dose of 1 g, every 8 hours. As per AHA guidelines, it is essential to start penicillin treatment promptly, because it can prevent initial episodes of RF up to the 9th day of disease onset [57].
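The weight-based dosing above lends itself to a small lookup sketch (for illustration only, not clinical use; the function name and return strings are ours):

```python
def gas_eradication_dose(weight_kg, route="oral"):
    """Choose a dose per the options above: oral penicillin V for 10 days,
    or a single intramuscular dose of benzylpenicillin (penicillin G)."""
    if route == "oral":
        return "500 mg 2-3x/day" if weight_kg > 27 else "250 mg 2-3x/day"
    return "1.2 million IU IM once" if weight_kg > 27 else "600,000 IU IM once"
```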
Cases in which the patient is allergic to penicillin
In cases of known allergy to penicillin, a narrow-spectrum oral cephalosporin (cefadroxil or cefalexin) should be administered for 10 days. However, in immediate-type hypersensitivity, clindamycin 20 mg/kg/d divided three times daily (max 1.8 g/d) or a macrolide such as clarithromycin 15 mg/kg/d divided twice daily (max 250 mg BID) should be administered orally for 10 days; the exception is azithromycin, which is given for 5 days at a dose of 12 mg/kg once daily (max 500 mg) [57].
One of the causative agents of ARF is GAS infection. A study conducted by Bhardwaj et al. [58] in 2018 on 296 patients with GAS infection examined antimicrobial resistance: 79% of these patients were resistant to tetracycline, 46% to erythromycin, and 9.5% to ciprofloxacin, while 30.6% showed resistance to both erythromycin and tetracycline [58]. In Saudi Arabia, in the most recent study, conducted among 13,750 patients, 7.1% had GAS infection and 1% of them showed antimicrobial resistance to vancomycin [59].
Managing clinical manifestations of ARF
Arthritis: NSAIDs such as acetyl-salicylic acid, ibuprofen (30-40 mg/kg/d), and ketoprofen (1.5 mg/kg/d) are used in cases of arthritis with or without mild carditis [21]. Acetyl-salicylic acid (aspirin) is used for 2-3 weeks at a dose of 100 mg/kg/d, and once the symptoms completely improve, aspirin should be tapered down to 60-70 mg/kg/d. In case of aspirin sensitivity, naproxen at a dose of 10-20 mg/kg/d divided twice daily can be used instead [60].

Carditis: In case of mild cardiac involvement, the treatment is aspirin at a dose similar to that for arthritis mentioned above [60].
How to prevent recurrent attacks of ARF
Relapsing episodes of RF could result in RHD or exacerbate the current cardiac condition. The best way to prevent severe RHD is by preventing relapsing episodes of GAS pharyngitis (secondary prevention). Moreover, a patient with a history of RF episodes will be at higher risk of another episode of RF. Benzathine penicillin G is administered intramuscularly as long-term prophylaxis every 28 days in this case (in high-risk patients, it should be administered every 3 weeks). In children weighing >27 kg, 1.2 million U is administered; however, in children weighing ≤27 kg, 600,000 U is administered. Penicillin V potassium 250 mg orally every 12 hours can be given as an alternative. The duration of the prophylaxis mainly depends on the extent of cardiac involvement; details are summarized in Table 4 [57,61].
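Combining the dose, interval, and duration rules above and in Table 4 gives the following sketch (illustrative only, not clinical guidance; the category labels are ours):

```python
def prophylaxis_plan(weight_kg, cardiac_status, high_risk=False):
    """Secondary prevention per the text above and Table 4."""
    dose = "1.2 million U IM" if weight_kg > 27 else "600,000 U IM"
    interval_days = 21 if high_risk else 28  # every 3 weeks in high-risk patients
    duration = {
        "no_carditis": "5 yr after last episode or until age 21, whichever is longer",
        "resolved_or_mild": "10 yr after last episode or until age 21, whichever is longer",
        "moderate_severe": "10 yr after last episode or until age 40, whichever is longer",
        "relapse_or_high_risk": "lifelong",
        "valve_replacement": "lifelong",
    }[cardiac_status]
    return dose, interval_days, duration
```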
Challenges in diagnosis and management of ARF in Saudi Arabia
A previous study conducted by Al Qurashi [15] showed that GAS was isolated in only 11% of Saudi children, compared to 20-25% in the general literature [62,63], and this has been attributed to previous antibiotic use. The prevalence of brucellosis-related arthritis and post-streptococcal reactive arthritis makes the diagnosis of ARF-isolated arthritis very difficult. Al Qurashi [15] also described a high recurrence rate of ARF because of poor compliance with prophylaxis.
Conclusion
ARF is still prevalent in developing countries. The updated 2015 AHA criteria provide clinicians with more insight to classify the population into two major groups based on risk stratification: low risk and moderate to high risk. The concept of subclinical carditis is becoming a widely accepted major criterion in all patients, regardless of their risk group. Primary prevention with eradication of GAS infection remains the most important step in the management of acute RF, and a high index of suspicion helps in early detection and in avoiding the devastating sequelae of RF. Extrapolating data from developing countries with an ARF risk similar to Saudi Arabia's, and the application of newer criteria incorporating subclinical carditis and monoarthritis as major criteria, might lead to an increase in the diagnosis of ARF cases. However, this is an area that should invite the interest of local researchers.
Table 4. Duration of prophylaxis based on the extent of cardiac involvement.

- Children with no cardiac involvement: prophylaxis should continue for 5 yr after the most recent episode or until the age of 21 yr, whichever is longer.
- Children with preceding carditis and mild residual mitral regurgitation, or a valve lesion that resolved completely: prophylaxis should continue for 10 yr after the most recent episode or until the age of 21 yr, whichever is longer.
- Children with preceding carditis and moderate to severe valve damage: prophylaxis should continue for 10 yr after the most recent episode or until the age of 40 yr, whichever is longer.
- Children with relapses or at high risk of infection: prophylaxis must continue lifelong.
- Children with valve replacement: prophylaxis must continue lifelong.
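Because the prophylaxis duration in Table 4 reduces to a small set of decision rules, they can be expressed compactly in code. The sketch below is a non-clinical illustration of those rules; the category labels and function name are our own.

```python
# Illustrative encoding of the Table 4 rules (not clinical software).
# Category strings are our own labels for the table rows.

def continue_prophylaxis(category: str, age_yr: float, years_since: float) -> bool:
    """True if secondary prophylaxis should continue per Table 4."""
    if category in ("relapses_or_high_risk", "valve_replacement"):
        return True                               # lifelong
    if category == "no_carditis":
        return years_since < 5 or age_yr < 21     # 5 yr or age 21, whichever longer
    if category == "carditis_mild_or_resolved":
        return years_since < 10 or age_yr < 21    # 10 yr or age 21, whichever longer
    if category == "carditis_moderate_severe":
        return years_since < 10 or age_yr < 40    # 10 yr or age 40, whichever longer
    raise ValueError(f"unknown category: {category}")

print(continue_prophylaxis("no_carditis", age_yr=19, years_since=6))  # True
```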
"year": 2019,
"sha1": "bfa3d65eb1aaaef0507d7afc734d2f7172aa2e0d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jsha.2019.07.002",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "847e54b0d1c27f476718ac4851f8ef948458a7ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development of a Rigidity Tunable Flexible Joint Using Magneto-Rheological Compounds—Toward a Multijoint Manipulator for Laparoscopic Surgery
Laparoscopic surgery is a representative operative method of minimally invasive surgery. However, most laparoscopic hand instruments consist of rigid and straight structures, which have serious limitations such as interference by the instruments and limited field of view of the endoscope. To improve the flexibility and dexterity of these instruments, we propose a new concept of a multijoint manipulator using a variable stiffness mechanism. The manipulator uses a magneto-rheological compound (MRC) whose rheological properties can be tuned by an external magnetic field. In this study, we changed the shape of the electromagnet and MRC to improve the performance of the variable stiffness joint we previously fabricated; further, we fabricated a prototype and performed basic evaluation of the joint using this prototype. The MRC was fabricated by mixing carbonyl iron particles and glycerol. The prototype single joint was assembled by combining MRC and electromagnets. The configuration of the joint indicates that it has a closed magnetic circuit. To examine the basic properties of the joint, we conducted preliminary experiments such as elastic modulus measurement and rigidity evaluation. We confirmed that the elastic modulus increased when a magnetic field was applied. The rigidity of the joint was also verified under bending conditions. Our results confirmed that the stiffness of the new joint changed significantly compared with the old joint depending on the presence or absence of a magnetic field, and the performance of the new joint also improved.
INTRODUCTION
Laparoscopic surgery is rapidly replacing traditional open surgery because it is less painful, has shorter recovery times, and has better cosmetic results (Li et al., 2014). However, the current surgical instruments including graspers (Hannaford et al., 2013), laparoscopes, and trocars have disadvantages such as limited flexibility because of their rigid and straight structure. These mechanical restrictions cause internal interferences with other instruments and a limited view of the surgical area. To solve these structural problems, various types of surgical instruments and robots have been developed.
Among the developed methods, multijoint manipulators can improve the degree of freedom (DOF) of movement in a constrained space and thus are widely used not only in the medical field but also in search and rescue operations (Singh and Krishna, 2014). Surgical manipulators comprising rigid links and tendon-driven mechanisms are typical structures of flexible surgical devices (Kim et al., 2014;Rosen et al., 2017;Julie et al., 2019). However, in such types of manipulators, it is difficult to stiffen the arbitrary part of the robot while keeping the distal and the proximal ends floppy (Cianchetti et al., 2014). Takanobu et al. (2007) reported a multiple DOF manipulator for creating space inside the brain in minimally invasive surgery. The manipulator consists of discrete segments that can be tuned with respect to each other using wires and servo motors. However, the manipulator is very complex and expensive because it needs a considerably large automatic system and controller.
To overcome these drawbacks, we devised a new concept for a variable stiffness surgical manipulator using a magneto-rheological compound (MRC) (Tanaka et al., 2010). MRC is a smart material whose rheological properties can be quickly and reversibly tuned by applied magnetic fields. Typical MRCs are prepared by dispersing magnetic particles in a non-magnetic medium (de Vicente et al., 2011). The tunable property of an MRC results from the polarization induced in the suspended particles by an external magnetic field. Polarization causes the particles to form columnar structures parallel to the magnetic field. These chain-like structures then become resistant to external forces (Carlson and Jolly, 2000). This unique feature makes MRCs useful in applications such as brakes (Yu et al., 2016), dynamic vibration absorbers (Komatsuzaki et al., 2016), and even industrial robot grippers (Pettersson et al., 2010; Nishida et al., 2016). Current studies have mainly focused on the dynamic properties of the material (Xu et al., 2013). In our variable stiffness manipulator, however, an MRC is employed under a compressive and static load. In our previous study (Tanaka et al., 2010), we fabricated a prototype of a single joint using a silicon rubber-based magneto-rheological elastomer (MRE) and carried out experiments to verify the basic concept. We found that the stiffness of the joint can be changed by an external magnetic field. However, the rigidity difference of the joint under magnetic and non-magnetic fields was not satisfactorily high. The main reason for this is the dispersion state of the magnetic particles. In the MRE, silicon elastomer was used as the dispersion medium; therefore, the mobility of the magnetic particles was not as high as that in a fluid or gel. In this study, therefore, we changed the medium from the elastomer to a viscous fluid and prepared a novel magneto-rheological gel (MRG), expecting particles with higher mobility and a consequent improvement in the rigidity difference of the joint. In addition, we evaluated the appropriate shape of the electromagnet through which a large magnetic field can be applied to the MRG.
METHOD Basic Concept of the Manipulator
The structural outline of the variable stiffness manipulator is shown in the upper part of Figure 1. It consists of MRG rings, electromagnets for generating the magnetic field, four wires, and spacers. As shown in the figure, the MRG rings are inserted between two electromagnets. Non-magnetic spacers prevent the magnetic field from leaking to the next joint. When an MRG ring is exposed to an attractive magnetic field (AMF), the stiffness of the joint increases. In contrast, when the applied current to the electromagnet is zero, the ring is exposed to a non-magnetic field (NMF), and the stiffness decreases. As a result, only the joints under NMF (e.g., joints 3 and 5) will bend when one of the four wires is pulled, as shown in the lower part of Figure 1.
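The selective-bending principle can be summarized in a few lines of code: a joint participates in bending only while its electromagnet is off. The sketch below is a toy illustration of this logic; the function and the joint states are our own, not taken from the paper.

```python
# Toy illustration of the selective-stiffness principle: pulled wires bend
# only joints without an applied magnetic field (NMF).

def bending_joints(magnet_on: list) -> list:
    """Return 1-based indices of joints that bend when a wire is pulled."""
    return [i + 1 for i, on in enumerate(magnet_on) if not on]

# Joints 3 and 5 left in NMF, as in the example of Figure 1.
print(bending_joints([True, True, False, True, False]))  # [3, 5]
```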
MRG Fabrication
Carbonyl iron particles with an average diameter of 2 µm were selected as the dispersed magnetic particles in this study because they are easily available and commonly used for general MRCs. Commercial glycerol (Kenei Pharmaceutical Co., Ltd.) was used as the non-magnetic medium because it is non-volatile and has low reactivity with silicon, which is the material of the casing. The MRG was fabricated with an 83% weight fraction of carbonyl iron particles by direct mixing. The MRG was sealed in a flexible silicon casing (25 mm external diameter, 5 mm internal diameter, 5 mm height, 0.50 mm thickness; Figure 2A) because it exhibits fluid-like properties under NMF.
Magnetic Field Analysis
The closed magnetic circuit prevents leakage flux and hence can generate an efficient magnetic field. Therefore, we designed a prototype joint using the closed magnetic circuit. A schematic diagram of the joint is shown in the left part of Figure 2A.
The prototype joint comprises an MRG ring, inner and outer cores, and magnetic coils. The magnetic field was analyzed using finite element analysis software ANSYS. The components of the simulation model are shown in the left part of Figure 2B. The MRG ring and cores formed a closed magnetic loop for generating the magnetic field as shown in the middle part of Figure 2B.
Elastic Modulus Measurement
To investigate the validity of the basic concept, we fabricated a prototype single joint based on the above magnetic field analysis. The inner and outer cores were made of pure iron. Polyurethane-coated copper wire with a diameter of 0.30 mm was used for the magnetic coil (186 turns, 20.6 mm external diameter, 15 mm internal diameter, and 7 mm height). The magnetic flux density at the measurement points shown in the right part of Figure 2B was measured in the manufactured joint using a tesla meter. When an electric current of 1.5 A was applied to the coil, the magnetic flux densities were 98, 86, and 73 mT at point 1, point 2, and point 3, respectively.
The elastic modulus of the prototype joint was measured using the static compression test described below. Figure 3A shows the apparatus specially designed for loading weights on the joint. Figure 3B shows the compression test process. First, when the load on the left and right wires is 0 N, as shown in the upper part of Figure 3B, the displacement of the joint in the compression direction is 0 mm, and the distance from the laser displacement meter (LB-02/LB-62, KEYENCE) to the joint end face (x1) is measured. Then, a static compression load parallel to the axis of the joint is applied by adding the same number of weights to the left and right wires, as shown in the lower part of Figure 3B. Under this condition, the distance from the laser displacement meter to the joint (x2) is measured. The difference between x1 and x2 is the displacement of the MRG due to the compression load, and the strain can be obtained by dividing this value by the initial length of the MRG. As a result, we obtained the stress-strain relationships of the joint under different magnetic field conditions. The stress was calculated by dividing the load weight by the cross section of the MRG.
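The data reduction just described (strain from the laser readings, stress from the load and ring cross section, modulus from the slope) can be sketched as follows. The geometry values and load/displacement readings below are placeholders chosen for illustration, not measurements from the paper.

```python
# Sketch of the stress-strain reduction described above (placeholder data).
import numpy as np

L0 = 5.0e-3                                            # initial MRG height [m]
area = np.pi * ((25e-3 / 2) ** 2 - (5e-3 / 2) ** 2)    # ring cross section [m^2]

loads_N = np.array([0.0, 0.42, 0.84, 1.26, 1.68])      # example weights
x1 = 10.00e-3                                          # reading at zero load [m]
x2 = np.array([10.00, 9.96, 9.92, 9.88, 9.84]) * 1e-3  # readings under load [m]

strain = (x1 - x2) / L0                                # compressive strain
stress = loads_N / area                                # compressive stress [Pa]

E = np.polyfit(strain, stress, 1)[0]                   # modulus = slope of fit
print(f"E = {E / 1e3:.0f} kPa")                        # ~111 kPa for this data
```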
Flexion Angle Measurement
To ascertain whether the flexion angle of the joint can be tuned by varying the magnetic field, we measured the flexion angle of the joint under the following conditions. First, the weights (1.74 + 5 × 0.42 = 3.82 N) were loaded only to the right wire under AMF conditions, and the joint angle (θ) was manually measured from the photograph taken from the top. Then, the same procedures were carried out under NMF conditions. To calculate the angle from the photograph, we used ImageJ image processing software.
RESULTS
The stress-strain relationships of the prototype joint under NMF and AMF are shown in Figure 4A. The figure shows that under both NMF and AMF conditions, the stress was nearly proportional to the strain over the entire strain range. The slope of the line corresponds to the elastic modulus. The elastic moduli were determined to be 107 and 563 kPa under NMF and AMF conditions, respectively. MRG exhibits fluidity and easily deforms under load, particularly under NMF. Therefore, the value of the elastic modulus includes the effect of the elasticity of the silicon casing used to enclose the MRG. Figure 4B shows the relationships between the load weight on the right wire and the prototype joint angle (θ). In either case, θ increased with the increase in load weight. As shown in the figure, the value of θ under AMF was considerably smaller than that under NMF, and the largest difference in θ was 10.6° at 3.82 N. Figure 4B also shows photographs of the prototype joint with different weights and magnetic field conditions. The photographs show that the stiffness of the prototype joint can be changed by applying a magnetic field; however, the maximum bending angle under NMF is small, and this value is considered insufficient for practical use.
DISCUSSION
By measuring the elastic modulus, we verified that the stiffness of the prototype joint using MRC can be tuned by an external magnetic field. The elastic modulus changed considerably: the value under AMF was approximately 5.3 times larger than that under NMF. In our previous study, the elastic moduli of the joint under NMF and AMF were 108 and 187 kPa, respectively, and the relative change in the elastic modulus was 1.7 times (Tanaka et al., 2010). In the present study, the elastic modulus of the newly designed joint measured without applying a magnetic field is considered to be significantly affected by the silicon casing; however, a comparison between these two results confirmed that the stiffness of the joint under AMF was improved in the new design. The main factor responsible for this improvement is the change of the MRC matrix from silicon rubber to glycerol. When the properties of an MRC change due to an external magnetic field, the chain-like structures of magnetic particles give rise to resistance against external forces (Carlson and Jolly, 2000). In the previous MRC, the movement of these particles would be hindered by the silicon rubber matrix. In contrast, because the glycerol used as the matrix is a liquid, it hardly prevents movement. Thus, particles could easily move along the magnetic field lines, and the stiffness increased under AMF. In the newly fabricated MRG, we observed sedimentation of the particles caused by the density difference between the dispersed particles and the matrix. The stiffness of the MRG increases as the particles inside align in chains under an applied magnetic field. Therefore, if the particles settle in the casing, the distribution of the particles will have a significant bias, and the ring will be divided into a region that exhibits sufficient rigidity when a magnetic field is applied and a region that does not. This problem can be solved by introducing the so-called "magnetic bias structure" using permanent magnets. Details of this concept are described later.
Various mechanisms have been reported for tuning the stiffness of joints (Loeve et al., 2010). Maghooa et al. (2015) reported a soft manipulator with antagonistic actuation. The manipulator can overcome the limitation of DOF and can create a wide workspace by combining a tendon drive with pneumatic actuation. However, each joint requires four wires to control the joints of the manipulator. This control method increases the complexity of the system, like the multiple-DOF manipulator (Takanobu et al., 2007). In contrast, the system in our manipulator can be actuated using four wires regardless of the number of joints. Although Maghooa et al. (2015) did not mention the responsiveness of the variable stiffness joints in their report, it is likely that changing the pressure inside the manipulator takes some time. In contrast, the characteristics of MRC change rapidly with an external magnetic field (Goncalves and Carlson, 2007). Therefore, the use of MRC for tuning variable stiffness joints is considered effective.
From the results of the flexing joint experiment, we confirmed the possibility of tuning the bending angle by applying a magnetic field. The difference in the angle according to the magnetic field conditions is important for realizing a variable stiffness joint. Figure 4C shows the relations between the angles of the prototype joint under NMF and AMF. Each plot is a measurement of the bending angle under NMF and AMF at a given load weight. In this figure, the line y = x with a slope of 1 indicates 0% performance, because the bending angle at each load weight is the same regardless of the state of the magnetic field. In contrast, the line y = 0 (an ideal line) with a slope of 0 indicates 100% performance, because the angle at each load weight when a magnetic field is applied is 0°. Hence, we quantitatively evaluated the performance of the joint using the slope in this figure. We confirmed that the performance of the new joint was (1 - 0.07) × 100 = 93% and that of the previous joint (Tanaka et al., 2010) was (1 - 0.69) × 100 = 31%. This result shows that the bending stiffness of the newly designed prototype joint improved under a magnetic field. However, the maximum bending angle in the experiment was 11.7°, which does not satisfy the range of motion of a single joint for clinical applications. For example, the tip of forceps used in robot-assisted surgery typically bends to approximately 90°, at the least. Because the current maximum bending angle of a joint is approximately 10°, nine joints would be required to achieve this 90° bending. A larger number of joints requires more components, thereby increasing the complexity of the system; we therefore consider improving the maximum bending angle and increasing the drive range per joint to be important. To solve this problem, the shape of the joint should be optimized to improve its flexibility. Moreover, we observed a high temperature rise in the magnetic coil due to Joule heating. This problem can also be solved by employing the concept mentioned above, that is, introducing the "magnetic bias structure". This structure has been frequently used to decrease power loss due to Joule heating (Jinji and Jiancheng, 2011). If strong permanent magnets are used to maintain the rigidity of the joints, and electromagnets are used only for bending the joints, the duration of the power supply to the electromagnets could be considerably shortened. This would be highly effective not only for solving the power loss problem, but also for the "sedimentation" problem mentioned earlier. This is because the MRG rings would always be exposed to the strong magnetic field formed by the permanent magnets if the magnetic bias structure were used, and sedimentation of the magnetic particles could be prevented. We plan to implement this improvement in future research.
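The slope-based performance measure used above can be computed directly from paired angle measurements. The angle values in the sketch below are placeholders; only the formula, performance = (1 - slope) × 100, follows the text.

```python
# Sketch of the slope-based performance measure (placeholder angles).
import numpy as np

theta_nmf = np.array([2.0, 4.5, 7.0, 9.5, 11.7])  # deg, no magnetic field
theta_amf = np.array([0.1, 0.3, 0.5, 0.7, 0.8])   # deg, field applied

slope = np.polyfit(theta_nmf, theta_amf, 1)[0]    # fit theta_AMF vs theta_NMF
performance = (1.0 - slope) * 100.0
print(f"slope = {slope:.2f}, performance = {performance:.0f}%")  # ~0.07, ~93%
```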
CONCLUSIONS
In this study, we focused on improving the performance of a variable stiffness joint using MRG. To increase the joint stiffness upon applying a magnetic field, an MRG was prepared by mixing glycerol as a dispersion medium with carbonyl iron particles. A prototype single joint was fabricated based on magnetic field analysis. Results of preliminary experiments confirmed that the stiffness of the joint changed significantly compared with the conventional joint depending on the presence or absence of a magnetic field. However, several limitations of the joint, such as the large size and low movable range, need to be resolved for future practical use.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
AUTHOR CONTRIBUTIONS
SK drafted the manuscript and carried out the tests. ST and TK conceived the study. IS fabricated the joint. MN and HN performed the analysis. ST supervised the study and revised the paper.
FUNDING
Part of this research was supported by the Shibuya Science and Sports Culture Foundation (2015).
"year": 2020,
"sha1": "b9d9d57d1154bece05264563ad61d75cae1cc229",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/frobt.2020.00059/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e0ad0196adb9e028cfa63f897251dde9173db235",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Materials Science"
]
} |
EEG signature of breaks in embodiment in VR
The brain mechanism of embodiment in a virtual body has grown a scientific interest recently, with a particular focus on providing optimal virtual reality (VR) experiences. Disruptions from an embodied state to a less- or non-embodied state, denominated Breaks in Embodiment (BiE), are however rarely studied despite their importance for designing interactions in VR. Here we use electroencephalography (EEG) to monitor the brain’s reaction to a BiE, and investigate how this reaction depends on previous embodiment conditions. The experimental protocol consisted of two sequential steps; an induction step where participants were either embodied or non-embodied in an avatar, and a monitoring step where, in some cases, participants saw the avatar’s hand move while their hand remained still. Our results show the occurrence of error-related potentials linked to observation of the BiE event in the monitoring step. Importantly, this EEG signature shows amplified potentials following the non-embodied condition, which is indicative of an accumulation of errors across steps. These results provide neurophysiological indications on how progressive disruptions impact the expectation of embodiment for a virtual body.
Introduction
The integration of an avatar in virtual reality (VR) applications can evoke users' Sense of Embodiment (SoE) towards their virtual avatar. It is often proposed [1][2][3][4][5] that the SoE arises from the congruent association of the following three neurological processes related to the neuroscientific study of bodily self-consciousness: i) the sense of agency [6,7], ii) the sense of body ownership [8][9][10][11], and iii) the sense of self-location [12][13][14]. Importantly, the subjective experience of embodiment is significantly altered if at least one of the three components is not respected [4,5,15]. It was further observed that violations of these conditions occurring after the successful induction of embodiment cause a disruption of SoE. Kokkinara et al. [16] denoted such disruptions as "Breaks", further identified as "Breaks in Embodiment" (BiE) by Porssut et al. [17], thereby providing a working definition for events interrupting embodiment and lowering the level of SoE to the point of impeding the immersive VR experience. These observations outline that the embodiment for a virtual body is, in essence, a simulation of the natural experience of embodying a real body [3,18,19]. As such, the subjective experience of embodiment would be more an expectation that is satisfied than a new feeling that is induced. This view corroborates the observation that, once embodied in an avatar body, "people have some subjective and physiological responses as if it were their own body" [4]. Investigating BiE is key for understanding the cognitive mechanisms of virtual embodiment because, in contrast to studies evaluating the overall subjective experience under given conditions and over relatively long periods of time, BiE are time-locked to a disembodiment event and less sensitive to cognitive biases. Importantly, research on BiE provides the appropriate conditions for the electrical neuroimaging study of their associated brain activity with electroencephalography (EEG), and can thus help in understanding whether the building and disruption of embodiment are continuous or discrete processes. Recent works [17,[20][21][22][23] observing the modulation of Error-related Potentials (ErrPs) induced by disruptions in VR already showed that the investigation of BiE can provide indirect assessments of embodiment. The neural mechanisms observed in these studies, however, overlapped with reactions that are not specific to embodiment, linked for instance to the violation of motor intentions [20][21][22], or did not allow concluding on the specificity of embodiment in the absence of a condition without error or of a contrast with/without embodiment [23].
To circumvent former works' limitations and evaluate how virtual embodiment for an avatar is altered in a BiE event, we introduce a new task aiming at eliciting error perceptions specifically targeting a disruption of the SoE. In particular, we are interested in observing whether the detection of a BiE depends on previous alterations of the embodiment conditions. Showing a modulation of the BiE would indeed reflect a successive accumulation of evidence contradicting the expectation of embodying an avatar. The absence of modulation would conversely depict BiEs as transient points of rupture in an established mental state. To test this hypothesis, we designed an experiment in which participants are asked, following an induction phase providing either reinforcement or violation of agency, to evaluate the presence of a potential disruption of body ownership. EEG was recorded in order to analyse the modulation of the ErrPs provoked by this disruption, and all factors were manipulated in a factorial design.

Participants

Nineteen healthy, right-handed subjects participated in the study (6 women, 22.9 ± 2.1 years (mean ± standard deviation (SD)). All participants had normal or corrected-to-normal vision and gave informed consent prior to their participation. The study was undertaken in accordance with the ethical standards as defined in the Declaration of Helsinki and was approved by the Ethical commission of Canton de Vaud on research involving human subjects (n˚2018-01601).
Experimental protocol
During the experiment, participants wore a head-mounted display with 1440 × 1440 resolution per eye, covering a 110-degree field of view at 90 Hz (The Explorer Headset, Lenovo). To eliminate auditory noise, we used a pair of in-ear headphones with active noise cancelling (QuietComfort 20, Bose) to play non-localized white noise. Participants' full-body motion was captured by a motion capture system with 18 cameras and 17 markers on their body (ImpulseX2, PhaseSpace). Fig 1 shows the experimental setup.

Participants immersed in the 3D environment saw their avatar in first-person view. The virtual chair, table, and 3D avatar were calibrated to match the position, orientation, and dimensions of the actual environment. Both the participant and the avatar held a cylindrical object in their right hand, preventing participants from moving their fingers while maintaining visuo-proprioceptive and visuo-tactile coherence. The VR environment was implemented with Unity 3D (2019.2.0f1). Participants' movements were reproduced through animation of the virtual avatar using FinalIK (https://root-motion.com).

The experiment took place in different phases as follows (Fig 2A). After calibration and training, the experimental blocks started with a reaching task phase of 6 s, with the aim of establishing a baseline giving participants enough time to experience a high level of embodiment towards the virtual avatar. During this phase, participants were instructed to perform reaching movements to four different targets.
The experimental task itself consisted of two steps (Fig 2B). Each trial started with an active induction step during which participants were instructed to turn the hand twice (wrist rotation). In the following monitoring step, a fixation cross appeared above the hand, and participants had to remain still and fix their gaze on the cross. The succession of the two steps is key for our observations; first we induce a sense of embodiment (or not) for the avatar and second, we disrupt this sense of embodiment (or not) while participants are passively monitoring the virtual arm (to prevent any eye movement or motion artifacts in the EEG signals). The experimental conditions thus correspond to the 2 × 2 conditional matrix affecting the induction step, Embodied or Non-Embodied, and the monitoring step, Disruption or No Disruption. In the Embodied condition, the virtual hand followed the participant's hand during the wrist rotation. Conversely, the virtual hand remained still in the Non-Embodied condition. The two conditions of the monitoring step occurred after a randomized fixation time (randomly picked from 0.9 s, 1.5 s, and 2.1 s). These randomized intervals prevented participants from knowing the exact starting time of the next step. In the No Disruption condition, the virtual hand and the physical hand of the participant remained still. In the Disruption condition, the virtual hand autonomously performed a wrist rotation while the physical hand remained still. At the end of the monitoring step (0.6 s after the event, if it occurred), participants were prompted to answer a questionnaire (see the next section, Subjective rating). We employed different tasks in steps 1 and 2 to avoid active motor action of the wrist during monitoring and to temporally align the moment at which participants perceived a Break in Embodiment. Due to time constraints, no questions were asked at the end of step 1. We ensured that participants were embodied or not during this first step thanks to the answers given in the Embodied/No Disruption and Non-Embodied/No Disruption conditions.
Participants performed 360 trials of the experimental task, distributed in 5 blocks of 72 trials. The number of trials in the Embodied and Non-Embodied conditions was identical (50%). Because the successful elicitation of ErrPs depends on the unexpected nature of the occurrence of an event, the experiment presented trials without disruption more often than trials with disruption. The ratio of 33% of Disruption trials for the monitoring step was determined based on previous studies [18,19,[22][23][24]. In each block, we ensured the ratio of each condition, and the trials were counterbalanced to compensate for order effects. Thus, each participant performed 120 trials of Non-Embodied/No Disruption and Embodied/No Disruption, and 60 trials of Non-Embodied/Disruption and Embodied/Disruption. The number of trials was not available to participants.

[Fig 2. A: The experiment consisted of six phases: i) calibration, ii) explanation, iii) training, iv) reaching task, v) experimental task, and breaks. The explanation phase consisted of four trials to illustrate each experimental condition. Subjects performed eight more trials during the training phase to ensure that they understood and performed the task correctly. B: The experimental task is the succession of 2 steps. During the active induction step, two conditions can be presented to the subject, either Embodied or Non-Embodied with the avatar. In the monitoring step, two conditions can occur, producing either a Disruption (the right virtual hand rotates by itself) or No Disruption (nothing happens). In the figures, the grey arm is the avatar's arm movement (as seen by the subject in VR) and the red arm corresponds to the participant's physical arm movement (not shown in VR; shown here for illustration purposes). Following the 2 steps, the participant's subjective rating of body ownership is gathered.]

Participants took a break for at least three minutes after each block, starting with the reaching task again before resuming the experimental tasks.
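The per-block condition ratios above (24 + 24 + 12 + 12 = 72 trials) can be reproduced with a short trial-list generator; the sketch below is our own illustration of such counterbalancing, not the authors' code.

```python
# Sketch of per-block trial-list generation with the stated ratios
# (50% Embodied; 33% Disruption overall). Condition labels are our own.
import random

def make_block(seed=None):
    trials = ([("Embodied", "NoDisruption")] * 24
              + [("NonEmbodied", "NoDisruption")] * 24
              + [("Embodied", "Disruption")] * 12
              + [("NonEmbodied", "Disruption")] * 12)
    random.Random(seed).shuffle(trials)  # randomize order within the block
    return trials

blocks = [make_block(seed=b) for b in range(5)]  # 5 blocks x 72 = 360 trials
n_disr = sum(c == "Disruption" for _, c in blocks[0])
print(len(blocks[0]), n_disr)  # 72 24
```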
Subjective rating
As in previous studies [17,24], subjects were asked to rate their agreement with the affirmation "I felt as if the virtual body was my body" on a visual color gradient scale (see Fig 2B) ranging from "Not at all" (red, on the left) to "Very much" (green, on the right). This measure of body ownership was adapted from previous work [16,25]. Subjects were asked to use the full scale by moving a cursor horizontally with the head and to validate with a trigger button in their left hand. Although no numerical feedback was provided to subjects, answers were recorded on a continuous scale from 0 to 100. Because the experimental design requires a large number of repetitions, we wanted to minimize the number of questionnaire items. We decided not to ask for a subjective rating of the sense of agency as it would only have been useful to confirm that participants were aware of our manipulation, and we had no reason to hypothesize that this would not be the case [17].
As we observed that the data were not normally distributed using a one-sample Kolmogorov-Smirnov test, we performed a two-way repeated measures Friedman ANOVA to investigate the main effects of embodiment (Embodied vs Non-Embodied) and disruption (Disruption vs No Disruption). To investigate the interaction effects, the Wilcoxon signed-rank test was applied for each pair of conditions. The false discovery rate (FDR) was corrected for the six within-group comparisons using the Benjamini-Hochberg procedure. The effect size was computed using the scaled robust Cohen's standardized mean difference (dr) for non-normal residuals [26,27].
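As an illustration, the analysis chain above (Friedman omnibus test, pairwise Wilcoxon signed-rank tests, Benjamini-Hochberg FDR correction) can be sketched with standard SciPy/statsmodels calls; the `ratings` array below is random placeholder data with one value per participant and condition.

```python
# Sketch of the statistical pipeline (placeholder data, standard libraries).
import numpy as np
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

ratings = np.random.default_rng(0).uniform(0, 100, size=(16, 4))  # subj x cond

# Omnibus comparison across the four conditions
chi2, p = friedmanchisquare(*[ratings[:, j] for j in range(4)])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

# Pairwise Wilcoxon signed-rank tests, FDR-corrected (Benjamini-Hochberg)
pairs = list(combinations(range(4), 2))           # six within-group comparisons
pvals = [wilcoxon(ratings[:, i], ratings[:, j]).pvalue for i, j in pairs]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for (i, j), pa, r in zip(pairs, p_adj, reject):
    print(f"cond {i} vs {j}: p_fdr = {pa:.3f}, reject = {r}")
```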
EEG signal processing
EEG signals were recorded throughout the experiment using three synchronized g.USBamp amplifiers (g.tec medical technologies, Austria) and 32 active electrodes placed following the 10/10 international system [28]. The EOG signal was simultaneously recorded with 3 active electrodes placed above the nasion and below the outer canthi of the eyes. The ground electrode was placed on the forehead (AFz) and the reference electrode on the left earlobe. EEG and EOG signals were recorded at 512 Hz.

A channel rejection and subsequent spherical interpolation were performed on the EEG signals based on established methods [29,30]. This process removed 1 ± 1 channels per participant. Subsequently, Independent Component Analysis (ICA) was performed after high-pass filtering (non-causal 2nd-order Butterworth filter with 1 Hz cutoff frequency) to remove artifactual independent components correlated with at least one of the EOG signals. This process removed 2 ± 2 ICs. After artifactual IC removal, signals were projected back to the channel space and low-pass filtered at 30 Hz [31].
Processed EEG signals were then segmented into epochs within a time window of [-0.2, 0.6] s with respect to the onset of the monitoring step. We restricted the subsequent EEG analysis to the monitoring step as participants performed motor actions in the induction step, which may be a confounding factor when evaluating cognitive processes. Contaminated EEG epochs were identified and rejected based on their probability of occurrence [29,30]. Three participants were removed from the subsequent analysis due to the limited number of clean EEG epochs; i.e., fewer than 50% of their epochs were kept after epoch rejection. For the remaining participants, this process removed 3.2 ± 1.6% of trials on average.
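For illustration, the preprocessing chain above can be sketched with MNE-Python; the file name and event code are placeholders, and MNE's default FIR filters stand in for the non-causal Butterworth filters actually used.

```python
# Sketch of the preprocessing chain using MNE-Python (placeholder file name
# and event code; FIR filters stand in for the paper's Butterworth filters).
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file

raw.filter(l_freq=1.0, h_freq=None)                  # 1 Hz high-pass before ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)                 # ICs correlated with EOG
ica.exclude = eog_inds
ica.apply(raw)                                       # project back without them
raw.filter(l_freq=None, h_freq=30.0)                 # 30 Hz low-pass

events = mne.find_events(raw)                        # assumes a stimulus channel
epochs = mne.Epochs(raw, events, event_id={"monitoring_onset": 1},
                    tmin=-0.2, tmax=0.6, baseline=(-0.2, 0.0), preload=True)
```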
In order to identify a specific time window in which EEG was significantly modulated between the conditions, we used a non-parametric cluster-based permutation test [32].
Specifically, we applied this test to the temporal signals at the Pz, Cz, and FCz channels, as these electrodes have been identified as providing maximal modulation induced by BiE [20-23, 33, 34]. Each sample underwent a one-way repeated measures ANOVA with 4 conditions (Embodied/Disruption (ED), Non-Embodied/Disruption (NED), Embodied/No Disruption (END), Non-Embodied/No Disruption (NEND)), from which significant bins (α = 0.01) were further clustered. We applied cluster correction to keep the clusters that were significantly larger than the regions identified during the permutation test (α = 0.01). For post-hoc analyses, we computed the averaged amplitude in each identified time window and performed paired Student's t-tests to compare the mean amplitudes within the clusters. Permutation tests were performed to estimate the significance of the results.
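A minimal sketch of such a cluster-based permutation test at a single channel is shown below, using `mne.stats.permutation_cluster_test`; the per-subject arrays are random placeholders shaped (subjects × time samples).

```python
# Sketch of a cluster-based permutation test across four conditions at one
# channel (random placeholder data; default F-statistic across groups).
import numpy as np
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(0)
data_by_cond = [rng.standard_normal((16, 410)) for _ in range(4)]  # subj x time

F_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    data_by_cond, n_permutations=1000, tail=1, seed=0)

for clu, p in zip(clusters, cluster_pv):
    if p < 0.01:
        print("significant cluster:", clu, "p =", p)
```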
Subjective ratings
Statistical analysis revealed significant main effects of both the embodiment (χ2 = 45.35, p < 0.0001) and disruption (χ2 = 9.53, p < 0.01) factors on the subjective ratings of embodiment. The Embodied condition (73.1 ± 19.9) yielded a higher sense of embodiment than the Non-Embodied condition.
Electrophysiological measures
Fig 4B shows grand-averaged signals at the Pz, Cz, and FCz electrodes for each condition with respect to the onset of disruption events (t = 0), with the time windows showing significant differences between conditions marked in gray. Three different time windows were identified, each of them corresponding to a different event-related potential. The first window corresponds to an error-related negativity [35] (ERN) at Pz. The second window corresponds to an error positivity [36] (Pe, [218-294] ms) at Pz, Cz, and FCz. Finally, the third window corresponds to an N400 wave [35] (N400, [447-600] ms) at Cz and FCz. The presence of these components has been reported in previous ErrP studies in VR setups [20,21]. Topographical representation of the Disruption condition (Fig 4A) revealed focal activation of the parietal, central, and frontal areas for ERN, Pe, and N400, respectively. The event-related potentials of the No Disruption condition did not exhibit any prominent electrophysiological deflections in any of the three channels. We did not observe any effect in the EEG signals associated with the different interval durations.
Regarding ERN, a two-way repeated measures Friedman ANOVA was used to investigate the main effects of the embodiment condition (Embodied or Non-Embodied) and the disruption condition (Disruption or No Disruption); the corresponding post-hoc comparisons are reported in Table 1.
Regarding Pe, we observed a main effect of the disruption condition for all three channels (p < 0.001 for Pz and p < 0.0001 for Cz and FCz). However, we did not observe any main effect of the embodiment condition (p = 0.38 for Pz, p = 0.47 for Cz, and p = 0.52 for FCz). Subsequent post-hoc analyses showed that the Pe amplitude was significantly different between all possible pairs, except the pair Embodied/No Disruption vs. Non-Embodied/No Disruption at the FCz channel (see Fig 4C and Table 2).
Regarding N400, as for the ERN and Pe, statistical analysis revealed a significant main effect of the disruption condition (p < 0.0001 for Cz and FCz). However, we did not observe any main effect of the embodiment condition (p = 0.11 for Cz and p = 0.14 for FCz). Subsequent post-hoc analyses showed that N400 amplitude was significantly different between Non-Embodied/Disruption against Embodied/No Disruption, Non-Embodied/Disruption against Non-Embodied/No Disruption, Embodied/Disruption against Embodied/No Disruption and Embodied/Disruption against Non-Embodied/No Disruption for both Cz and FCz channels (see Fig 4C and Table 3).
Discussion
Our results show that both manipulations, during a first induction step and in a subsequent disruption step, affected the subjective ratings of body ownership. First, we observe a strong and significant effect of the manipulation of agency done in the first experimental step (Embodied vs. Non-Embodied conditions). Second, we also confirm that the visuo-proprioceptive disruption occurring in the second experimental step (with or without disruption) was clearly noticed and led to significantly different ratings in the Embodied condition. Third and most interestingly, this difference is not observed in the Non-Embodied condition, thus showing the influence of the first step on the subsequent manipulation.

[Table 1. Results of the post-hoc analysis for ERN at the electrodes Pz, Cz, and FCz.]

[Table 2. Results of the post-hoc analysis for Pe at the electrodes Pz, Cz, and FCz.]
The electrophysiological signature of ErrPs shows a successful induction of ERN and Pe following our experimental disruption. As expected, no ErrPs were elicited in the absence of a visuo-proprioceptive conflict [37]. More importantly, the ErrP components were modulated by the prior step manipulating the sense of agency. We observe an increased amplitude of ERN and Pe for the Non-Embodied condition compared to the Embodied condition. These results corroborate the principle of the accumulation of errors revealed by Steinhauser et al. [38] and the work of Chang et al. [39,40], who provided neurophysiological evidence of post-error adjustment by showing amplified ErrPs in trials following errors. In line with this observation, our data reveal a mechanism of error monitoring, as for post-error adjustments, accumulating evidence against the expected state and leading to an eventual disruption (i.e. a possible BiE). Extending these observations, our results demonstrate that accumulation of errors can occur when different types of cognitive processes are disrupted, i.e. the sense of agency and the sense of body ownership, suggesting that these different factors accumulate into a global error of the experience of embodiment. This can be further corroborated by the mutual influence between manipulations observed with the subjective ratings.

Furthermore, extending the ERN and Pe results, we observe an N400 in the fronto-central area of the brain. This potential was originally identified when participants perceived semantic errors during linguistic processing [41], or in performance monitoring tasks upon the perception of erroneous actions [42]. Modulations of the N400 were also observed in VR when subjects were prevented from achieving their movement while embodied in a virtual avatar [20,21]. In our study, the amplitude of the N400 is the same regardless of the condition in the induction step, in line with the results of previous similar works [22].
Conclusion
The present study investigated participants' subjective ratings of embodiment and their electrical neuroimaging data in order to reveal the mechanism of breaks in embodiment. In line with previous experiments [23,43,44], our manipulation successfully elicited ErrPs when participants experienced a conflict between their real and virtual bodies (visuo-proprioceptive disruption). Extending these previous works, our results show that the level of SoE and the amplitude of ErrPs are modulated by prior conditions manipulating the sense of embodiment. Specifically, we observe significantly lower subjective ratings of body ownership and larger amplitudes of the ERN and Pe components when participants previously had an unfavorable experience of embodiment (no agency for the avatar hand), as compared to cases following a positive embodiment induction (visuo-motor synchrony with the avatar hand). These results thus tend to show that an accumulation of evidence against the expectation of embodiment leads to stronger reactions to what are otherwise identical disruptions. Importantly, we demonstrate with our two-step manipulation that different conflicts of embodiment (visuo-motor and visuo-proprioceptive) can combine into what could be a unified error monitoring mechanism of embodiment, providing one of the first pieces of evidence of a neural response to a BiE. These observations give some insight into the neural mechanisms of virtual embodiment. If the embodiment for a virtual body were conflicting with the embodiment for the real one, a VR experience would start with a low a priori embodiment for the avatar, which would build up only if conditions are favorable to confirm that the virtual body is the one owned, and which would abruptly break upon any contradicting evidence. What we rather observe is an accumulation of contradicting evidence against the expectation of embodiment for the avatar. Our experimental design, however, does not allow observing the neural response to each successive disruption. This is mostly due to difficulties in performing a clean ErrP analysis at times when participants are performing motor actions (artifacts from movement of participants, widespread brain activity during motor execution), but would be worth investigating with different disruptions and more advanced EEG analyses. Going further with the combination of conditions of disruption in reality and in mixed reality would even allow investigating the specific nature of a break in virtual embodiment, eventually even allowing to determine whether the expectation of embodiment for the virtual body is due to an expectation of continuity from the embodiment for the real body.

[Table 3. Results of the post-hoc analysis for N400 at the electrodes Cz and FCz.]
As an outlook, and since ErrPs are not subject to the same degree of introspection as standard presence questionnaires and can be measured without interrupting for explicit user feedback, they could be used to implicitly detect disruptions of embodiment. This would allow conducting background evaluations of the subject's immersive experience, and could be used for quality assessment of VR systems and paradigms. This approach could also be beneficial for the clinical evaluation of the embodiment of prosthetic limbs, with benefits for reducing phantom limb distortions and pain in hand [45] or leg [46] amputees.
"year": 2023,
"sha1": "7adb569e65ab7c6faaeae57844a6dbdebbb5506e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2fb565536bd7b8a4082012dbbff9a368d048827b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Multiple-Step Melting/Irradiation: A Strategy to Fabricate Thermoplastic Polymers with Improved Mechanical Performance
To fabricate thermoplastic polymers exhibiting improved ductility without the loss of strength, a novel multiple-step melting/irradiation (MUSMI) strategy was developed, taking poly(vinylidene fluoride)/triallyl isocyanurate (PVDF/TAIC) as an example, in which alternate melting and irradiation were adopted and repeated several times. The initial irradiation with a low dose produced some local crosslinked points (not a 3-dimensional network). When the specimen was reheated above the melting temperature, these redistributed in the PVDF matrix, which is an efficient way to avoid high crosslinking density at certain positions and the disappearance of thermoplastic properties. During the subsequent cooling process, the crosslinked domains in the thermoplastic polymer matrix are expected to play double roles in tuning crystal structures to enhance the ductility without reducing strength. On one hand, they can act as heterogeneous nucleation agents, resulting in higher nucleation density and smaller spherulites; on the other hand, the existence of crosslinked structures restricts lamellar thickening, accounting for thinner crystal lamellae. Both smaller spherulites and thinner lamellae contribute to better ductility. At the same time, these local crosslinked points enhance the connectivity of crystal structures (including lamellae and spherulites), which is beneficial to the improvement of strength. Based on the influence of local crosslinked points on the ductility and strength, thermoplastic PVDF with much higher elongation at break and comparable yielding stress (relative to the reference specimen subjected to strong irradiation only once) was prepared via MUSMI successfully.
Introduction
In the wide applications of various materials, mechanical performance plays an important role [1,2]. Much attention has been paid to strength and ductility, which can be evaluated by the parameters of yielding stress and elongation at break, respectively. There have been many strategies to enhance the strength or ductility separately [3,4]. The improvement of one parameter, however, always leads to the loss of the other. This is well known as the strength-ductility trade-off effect [5,6]. Different than conventional metals and ceramics, the mechanical performance of polymers is determined by structures on various scales [7][8][9][10], from the configuration of the polymer chain (the first-order structure) to higher-order crystalline structures such as lamellae and spherulites.
Sample Preparation
PVDF/TAIC specimens were prepared by solution casting. PVDF and TAIC (with weight fractions of 0.5%, 3%, and 10%) were added to DMF and stirred at 80 °C for 3 h to obtain a homogeneous solution with a concentration of 15% (mass fraction). The solution was dried in an oven at 120 °C for 24 h to remove the residual DMF. The dried samples were hot-pressed into films with a thickness of 0.5 mm at 200 °C and 10 MPa. The specimens were vacuum-sealed and then irradiated by γ-rays from a 60Co source with a dose of 10 kGy at room temperature (named as 10 kGy*1, shown in Figure 1). Then, the irradiated specimen was heated to 210 °C, followed by hot-press and irradiation for the second time (10 kGy*2). This process was repeated for the third time (10 kGy*3). The reference specimen was irradiated only once at 30 kGy (17 h) after solution casting and hot-press.
Characterization
A field emission scanning electron microscope (FESEM, Hitachi S-4800, Tokyo, Japan) was used to examine the fracture surface of PVDF/TAIC specimens. A differential scanning calorimeter (DSC, TA, Q2000) was adopted to investigate the thermal behaviors of the specimens. The samples were heated from 30 to 210 °C at a speed of 10 °C/min, held at this temperature for 10 min to erase the previous thermal history, and then cooled to 30 °C at a rate of 10 °C/min. The crystallinity (Xc) was computed via Equation (1) [30]:

X_c = \frac{\Delta H_m}{\Delta H_m^0} \times 100\%   (1)

where ΔHm is the melting enthalpy and ΔHm0 is 290 J/g, the melting enthalpy of perfectly crystalline PVDF. The reaction of TAIC in specimens after γ-ray irradiation was evaluated by Fourier transform infrared spectroscopy (FTIR, Bruker Tensor, Billerica, MA, USA) with a resolution of 2 cm⁻¹. The Instron universal materials testing system (Model 5966) was used for tensile tests at a speed of 10 mm/min. The long periods of PVDF were measured by small-angle X-ray scattering (SAXS, BL16B1, Shanghai Synchrotron Radiation Facility, China). The wavelength of the monochromatic X-ray beam is 1.24 Å. The one-dimensional density correlation function K(z) was calculated by Fourier transformation of the scattering curve, following Equation (2) [25]:

K(z) = \frac{\int_0^\infty I(q)\, q^2 \cos(qz)\, dq}{\int_0^\infty I(q)\, q^2\, dq}   (2)

where q is the characteristic wave number and I is the scattering intensity. The long periods of PVDF crystals were calculated according to Equation (3):

L = \frac{2\pi}{q_{max}}   (3)

where q_max is the position of the scattering peak. The morphologies of PVDF spherulites were observed by a polarizing light microscope (POM, Olympus BX51) with a Linkam LTS 350 hot stage. The samples were heated to 210 °C for 10 min, followed by isothermal crystallization at 150 °C.
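A numerical sketch of Equations (1)-(3) is given below, assuming arrays `q` and `I` hold the Lorentz-corrected SAXS profile; the toy profile, the hand-rolled trapezoidal integration, and the q-range are our own choices for illustration.

```python
# Numerical sketch of Equations (1)-(3) (toy data, illustrative only).
import numpy as np

def _trapz(y, x):
    # Trapezoidal rule, written out to avoid NumPy version differences.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def crystallinity(dHm, dHm0=290.0):
    """Equation (1): Xc = dHm / dHm0, in percent."""
    return 100.0 * dHm / dHm0

def correlation_function(q, I, z):
    """Equation (2): normalized one-dimensional correlation function K(z)."""
    Iq2 = I * q ** 2
    norm = _trapz(Iq2, q)
    return np.array([_trapz(Iq2 * np.cos(q * zi), q) / norm for zi in z])

q = np.linspace(0.05, 2.0, 500)          # toy profile, peak near q* = 0.45 1/nm
I = np.exp(-((q - 0.45) ** 2) / 0.01)
print(f"Xc = {crystallinity(100.0):.1f}%")            # dHm = 100 J/g -> 34.5%
print(f"L = {2 * np.pi / 0.45:.1f} nm (Equation 3)")  # Bragg relation, ~14 nm
```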
Results and Discussion
First of all, it is necessary to assess the thermodynamic miscibility between PVDF and TAIC. For this purpose, blend specimens with various weight fractions of TAIC (up to 10%) were prepared by solution casting, followed by hot-press. In the SEM images (Figure 2A-D) of the fracture surface, there is no obvious aggregation of TAIC. The surface is homogeneous even when the weight fraction of TAIC reaches 10% (Figure 2D). In the DSC curves (Figure 2E), there are two melting peaks located at 169.7 and 175.1 °C in neat PVDF (black curve). Both of these move in the lower temperature direction upon blending with TAIC (red, green, and blue curves). In the result of TAIC 10%, the values of the two melting peaks are 161.9 and 170.1 °C. The remarkable decrease of Tm indicates that the crystallization of PVDF during cooling was influenced significantly because of the existence of TAIC. This is well known as the "Tm depression effect" [31]. This result suggests that PVDF and TAIC exhibit excellent miscibility, which is in good agreement with the homogeneous distribution of TAIC in PVDF shown in the SEM images (Figure 2A-D), the comparable solubility parameters (25.8 for PVDF and 29.2 for TAIC) [27,28], and the reported results, in which the decrease of melting temperature was also observed [26].
Two different irradiation methods were adopted in this work. Specimens were irradiated with the dose of 10 kGy (named as 10 kGy*1), followed by melting at 210 °C, hot-press, and irradiation for the second (10 kGy*2) and third (10 kGy*3) time (Figure 1). This is the so-called "multiple-step melting/irradiation" (MUSMI). The specimen irradiated with 30 kGy only once (30 kGy*1) acts as the reference specimen. During irradiation, radicals can be created in PVDF because of its polarity [32,33], which is the reason for the formation of crosslinked structures in the presence of an agent, e.g., TAIC. In this process, the carbon-carbon double bonds in TAIC participate in the radical reaction. The intensity of the carbon-carbon double bonds in FTIR, therefore, is a good parameter to use to describe the reaction. As shown in Figure 3, the absorbance at 1645 cm⁻¹, corresponding to the characteristic peak of carbon-carbon double bonds, is obvious before irradiation [26]. It decreases upon MUSMI (from 10 kGy*1 to 10 kGy*3). In the green curve (10 kGy*3), the intensity of this peak exhibits a very low magnitude, similar to that in the reference specimen (purple curve in Figure 3). The results discussed above clarify that the reaction between PVDF and TAIC was induced by gamma irradiation, accounting for the intensity decrease at 1645 cm⁻¹. The comparison of the green and purple curves in Figure 3 indicates that the reaction degrees in the specimen of 10 kGy*3 and the reference are comparable. The structures of these, however, are different. In the former, MUSMI produces local crosslinked points distributed in the whole PVDF matrix. The gel fraction, therefore, is close to zero, corresponding to thermoplastic properties. The latter suffered from high crosslinking density in certain regions and cannot be dissolved by DMF completely, indicating the disappearance of thermoplastic properties. When the weight fraction of TAIC reaches 5%, the irradiation of 10 kGy*3 also produces thermoset performance. As a result, our discussion focuses on the specimen of PVDF/TAIC (3%) in the following sections.
The mechanical performances of neat PVDF and PVDF/TAIC were assessed (Figure 4). In neat PVDF, the film prepared by solution casting and hot-pressing exhibits a yield stress of 46 MPa and an elongation at break of 138% (data not shown here). The former is not sensitive to gamma irradiation. The latter, however, depends crucially on it (Figure 4A,C,D). The value increases to 140%, 165%, and 226% upon alternate melting/irradiation (10 kGy) for one, two, and three times, respectively. In the case of PVDF blended with TAIC (Figure 4B-D), the yield stresses of all specimens exhibit similar magnitudes (40-47 MPa, Figure 4D), while the variation of the elongation at break becomes more remarkable (Figure 4C). It reaches a maximum of 393% in 10 kGy*3. By contrast, the failure of the reference specimen occurs at an elongation of only 87%. Three points in these results deserve attention. Firstly, all specimens (including PVDF and PVDF/TAIC) exhibit similar yield stress (Figure 4D); secondly, the values of the elongation at break increase upon further melting/irradiation, while they exhibit a lower magnitude in the reference (Figure 4C); finally, MUSMI is a more efficient way to enhance the ductility without loss of strength relative to strong irradiation.
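For readers who want to reproduce such numbers from raw tensile data, a rough Python sketch follows; taking the first local stress maximum as the yield point is a common convention for necking polymers and is our assumption here, not a procedure stated by the authors:

import numpy as np

def tensile_summary(strain, stress):
    """Extract yield stress and elongation at break from a tensile curve.

    `strain` is engineering strain (e.g. 1.38 for 138%), `stress` in MPa.
    The yield point is taken as the first local stress maximum (typical
    of a polymer that necks); elongation at break is the last strain
    recorded before failure.
    """
    d = np.diff(stress)
    # first index where stress stops rising -> yield point
    turn = int(np.argmax(d < 0)) if np.any(d < 0) else len(stress) - 1
    return {
        "yield_stress_MPa": float(stress[turn]),
        "elongation_at_break_pct": float(strain[-1] * 100.0),
    }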
Both DSC and SAXS were employed to investigate the crystal structures, which play important roles in determining the mechanical performance of polymer materials [30,34]. In the first heating curves of DSC (Figure 5A), the double melting peaks of 10 kGy*1 were located at 165.4 and 172.6 °C, both of which move to lower temperatures upon further melting/irradiation (red and blue curves). In the specimen of 10 kGy*3, the values are 162.0 and 170.3 °C. In the reference, the melting temperatures are similar to those in the specimen of 10 kGy*1. The reason for the variation of melting temperatures will be discussed in the following parts. The crystallinities of the specimens, calculated from the DSC curves, are shown in Figure 5B. All of them exhibit close magnitudes ranging from 33.2% to 35.4%. The crystallinities of PVDF, therefore, are not dominating factors of the strength and ductility. In the Lorentz-corrected SAXS profiles (Figure 5C), there are scattering peaks in all specimens. Based on the peak positions emphasized by arrows and one-dimensional correlation functions, the long periods and lamellae thicknesses can be calculated. As shown in Figure 5D, both long periods and lamellae thicknesses decrease upon further melting/irradiation. The difference between them, representing the thickness of the amorphous parts, remains almost constant (ranging from 7.2 to 7.5 nm). This result indicates that the variation of lamellae thickness dominates the different long periods. Furthermore, the thinner lamellae in Figure 5D account for the lower melting temperatures in Figure 5A, and vice versa (crystallinity, lamellae thickness and melting temperatures are listed in Table 1) [35].
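The quantities discussed here follow standard relations: crystallinity from the DSC melting enthalpy normalised to a literature value for fully crystalline PVDF, and the long period from the Lorentz-corrected SAXS peak via L = 2π/q*. A hedged Python sketch (the 104.7 J/g reference enthalpy and the two-phase estimate l_c ≈ φ_c·L are assumptions; the paper itself uses the one-dimensional correlation function):

import numpy as np

H_M_100 = 104.7  # J/g, literature value for 100% crystalline PVDF (assumption)

def dsc_crystallinity(delta_h_m, w_pvdf=0.97):
    """Crystallinity from the DSC melting enthalpy of the blend,
    normalised to the PVDF weight fraction (3% TAIC -> w_pvdf = 0.97)."""
    return delta_h_m / (w_pvdf * H_M_100)

def long_period_nm(q_peak_inv_nm):
    """Long period from the Lorentz-corrected SAXS peak, L = 2*pi/q*."""
    return 2.0 * np.pi / q_peak_inv_nm

def lamella_thickness_nm(long_period, linear_crystallinity):
    """Crude two-phase estimate l_c = phi_c * L; a stand-in for the
    correlation-function analysis used in the paper."""
    return linear_crystallinity * long_period

# e.g. a peak at q* = 0.30 nm^-1 gives L ~ 20.9 nm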
In the following sections, our attention was paid to the PVDF spherulites. The PVDF/TAIC blends (before or after irradiation) were heated to 210 °C (above the equilibrium melting temperature), then cooled down to 150 °C. POM was employed to examine the well-developed spherulites upon isothermal crystallization at this temperature. In the PVDF/TAIC blend before irradiation, the spherulites are larger, indicating a lower nucleation density (Figure 6A) [36]. When the irradiated specimen was hot-pressed, the subsequent crystallization behavior in the cooling process produced spherulites with smaller diameters (Figure 6B,C). In Figure 6D, there are so many spherulites that it is hard to obtain their exact size and number from the POM images at the current magnification. MUSMI produces the following effects: on the one hand, the number of local crosslinked points increases significantly, which is supported by the intensity decrease of the carbon-carbon double bond characteristic peak in FTIR (Figure 3); on the other hand, the nucleation density during the cooling process exhibits a much higher magnitude, as shown in Figure 6A-D. Obviously, the heterogeneous nucleation effect was enhanced by the crosslinked points resulting from MUSMI, which can be validated by the reference specimen [37]. Relative to MUSMI, the strong irradiation (30 kGy) results in a higher crosslinking density in certain regions. This is the reason for the lower nucleation density and big spherulites during the subsequent cooling process (Figure 6E).
The heterogeneous nucleation effect was validated further by checking the nucleation position during multiple melting-crystallization and the crystallization temperature (Tc) in the cooling process measured by means of DSC (Figures 7 and 8). On the one hand, the PVDF/TAIC blend irradiated with 10 kGy three times (10 kGy*3) was heated to 210 °C, followed by cooling down to 150 °C and isothermal crystallization for 30 s. The corresponding POM images are shown in Figure 7A. There are many spherulites with diameters of several microns. In the image with higher magnification at the indicated position, immature spherulites can be observed. After this specimen was melted for a second time, its crystallization behavior was tracked by POM again. The spherulites occur at exactly the same positions (Figure 7B). The same thing happens upon melting/crystallization for a third time (Figure 7C). This result indicates that the crosslinked PVDF undergoes heterogeneous nucleation during cooling. On the other hand, the crystallization temperatures of PVDF upon MUSMI were measured by DSC in the cooling process (Figure 8). In the specimen of 10 kGy*1, the crystallization temperature is 141.0 °C.
This value increases to 141.8 and 144.3 °C in 10 kGy*2 and 10 kGy*3, respectively. The higher crystallization temperature (Figure 8) and nucleation density (Figure 6) suggest that there are extra heterogeneous nucleation agents in the specimens upon MUSMI. Therefore, it is the local crosslinked points that act as the nucleation agent during the crystallization of PVDF, since there are only PVDF and TAIC in this system [36,37]. In the reference specimen, the high crosslinking density in certain regions results in fewer nucleation points (Figure 6E) and a lower crystallization temperature (Figure 8). As a result, its Tc exhibits a similar value to the specimen of 10 kGy*1, corresponding to the comparable nucleation density and spherulite size shown in Figure 6B,E.
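Where spherulites are countable, the nucleation-density argument amounts to counting nuclei per observed volume. A trivial Python helper, offered only as an illustration (the field-of-view area and film thickness are user inputs, not values from the paper):

def nucleation_density_per_mm3(n_spherulites, area_mm2, film_thickness_mm):
    """Rough volumetric nucleation density from a POM micrograph:
    count the spherulites in a known field of view and divide by the
    observed volume. Only meaningful while spherulites remain sparse
    enough to count individually (not the 10 kGy*3 case above)."""
    return n_spherulites / (area_mm2 * film_thickness_mm)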
According to the discussion above, we can describe the formation of the thermoplastic polymer, as well as the enhanced mechanical performance, as follows (Figure 9). PVDF and TAIC exhibit excellent miscibility, which was confirmed by the DSC and SEM results (Figure 1) [27-29]. After hot-pressing, PVDF crystallizes, expelling TAIC into the interlamellar regions (Figure 9A) [38]. Upon irradiation for the first time, only a part of the TAIC participates in the radical reaction because of the low irradiation dose (10 kGy), producing not 3D networks but some local crosslinked points (Figure 9B) [39,40]. When the specimen is reheated to a temperature above the Tm of PVDF, its crystals collapse, leading to the free diffusion of unreacted TAIC in the molten PVDF matrix. The local crosslinked points can also migrate with the neighboring polymer chains. The melting process, therefore, results in the redistribution of crosslinked points and unreacted TAIC (Figure 9C). The latter participates in the reaction at a "new" position during the following irradiation. In the process of MUSMI, this redistribution is repeated several times, contributing to a uniform density of crosslinked points in the whole specimen. This is the reason for the lower gel fraction and the thermoplastic properties. The local crosslinked points located in different regions produce remarkable effects on the crystallization behavior during the cooling process and on the mechanical performance of PVDF. Firstly, some local crosslinked points act as heterogeneous nucleation agents due to the difference of their chemical structures from the un-crosslinked PVDF matrix (Figures 9D and 7) [41]. This is the reason for the much higher nucleation density, the smaller spherulites (Figure 6), and the elevated crystallization temperatures (Figure 8). Secondly, there are some crosslinked points among the crystal lamellae. Their existence restricts lamellar thickening, accounting for the thinner crystal lamellae (Figure 5D). Finally, some crosslinked points distributed in the interspherulitic or interlamellar regions result in enhanced connectivity among spherulites and crystal lamellae. Both thinner crystal lamellae and smaller spherulites endow PVDF with excellent ductility [42]. The better connectivity among crystals is beneficial to the improvement of strength. The synergism of these produces higher ductility without loss of strength (relative to the reference specimen). The reference specimen, which suffered strong irradiation, exhibited a high crosslinking density in certain regions, accounting for the thicker crystal lamellae, lower nucleation density, bigger spherulites, poor ductility (Figure 4B,D), and thermoset properties.
Conclusions
A multiple-step melting/irradiation (MUSMI) strategy was developed, taking PVDF/TAIC as an example. The alternate melting and irradiation accounts for the redistribution of the local crosslinked points, which is the reason for the thermoplastic properties. During the crystallization of PVDF in the cooling process after melting, the heterogeneous nucleation and restriction effects of these points resulted in smaller spherulites and thinner crystal lamellae, respectively, both of which contribute to the excellent ductility. At the same time, the better connectivity among crystals due to crosslinked points is beneficial to the improvement of strength. As a result of this synergism, the prepared PVDF exhibits enhanced ductility without loss of strength. Our results open up an avenue to fabricate thermoplastic polymers with improved mechanical performance. | 2019-11-07T15:30:11.848Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "3a18321ed539e7eb06b5292c1073a2a8186b0945",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/11/11/1812/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "55664e2125da850067aa7b63c97341103cd92fcf",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
27012881 | pes2o/s2orc | v3-fos-license | Disruption of TR4 Orphan Nuclear Receptor Reduces the Expression of Liver Apolipoprotein E/C-I/C-II Gene Cluster*
Apolipoprotein E (apoE) is synthesized in many tissues, and the liver is the primary site from which apoE redistributes cholesterol and other lipids to peripheral tissues. Here we demonstrate that the TR4 orphan nuclear receptor (TR4) can induce apoE expression in HepG2 cells. This TR4-mediated regulation of apoE gene expression was further confirmed in vivo using TR4 knockout mice. Both serum apoE protein and liver apoE mRNA levels were significantly reduced in TR4 knockout mice. Gel shift and luciferase reporter gene assays further demonstrated that TR4 can induce apoE gene expression via a TR4 response element located in the hepatic control region that is 15 kb downstream of the apoE gene. Furthermore our in vivo data from TR4 knockout mice prove that TR4 can also regulate apolipoprotein C-I and C-II gene expression via the TR4 response element within the hepatic control region. Together our data show that loss of TR4 down-regulates expression of the apoE/C-I/C-II gene cluster in liver cells, demonstrating important roles of TR4 in the modulation of lipoprotein metabolism.
Apolipoprotein E (apoE) is primarily synthesized in the liver, although it is widely expressed in various tissues (1,2). ApoE is an important constituent of plasma lipoproteins, such as very low density lipoprotein and chylomicrons, and serves as a ligand for the receptor-mediated uptake of these lipoproteins by the liver (3). ApoE is involved in the pathogenesis of atherosclerosis through the modulation of cholesterol efflux from macrophages (4), and liver-derived apoE also has access to arterial intima and induces regression of atherosclerotic lesions (5).
ApoE expression has been shown to be modulated by tissue-specific enhancers in different tissues. The nuclear receptor liver X receptor/retinoid X receptor α (RXRα) heterodimer has been reported to regulate expression of the apoE/C-I/C-IV/C-II gene cluster in macrophages by binding to direct repeat (DR) 4 sequences in multienhancer domains located within the gene cluster (6,7). Previous studies using transgenic mice show that in liver, the major source of plasma apoE, the expression of apoE, apoC-I, apoC-IV, and apoC-II is promoted by liver-specific enhancers called hepatic control regions (HCR-1 and HCR-2) (8-10). However, the molecular mechanism controlling HCR regulation of apoE expression in liver remains unclear. Recently apoC-II, one of the members of the apoE/C-I/C-IV/C-II gene cluster, was shown to be induced by the farnesoid X receptor (FXR)/RXRα heterodimer via HCRs (11). However, further study will be needed to determine whether the FXR/RXRα heterodimer also regulates the expression of other apolipoproteins (apoE, apoC-I, and apoC-IV) within the apoE/C-I/C-IV/C-II gene cluster.
Members of the nuclear receptor superfamily are transcription factors that regulate gene expression through binding to specific DNA sequences known as hormone response elements (HREs). These nuclear receptors include those for steroids, thyroid hormones, vitamin D3, and retinoids as well as a large number of orphan receptors with no known ligands (12).
A particular member of the nuclear receptor family, the TR4 orphan nuclear receptor (TR4), is able to regulate the expression of target genes through binding to DR AGGTCA core motif sequences with variable numbers of spacer nucleotides (13-17). TR4 has been shown to be highly expressed in rat primary hepatocytes (18). To understand the physiological role of TR4, we sought to identify TR4 target genes based on the response element binding preferences of the receptor. In vitro binding assays showed that TR4 has the highest affinity for DR1 elements, and we found a DR1 site in the HCR-1, which represents a potential TR4 response element (TR4RE). We hypothesize that the apoE gene, regulated by HCR-1, is a TR4 target gene.
Transcription factors and their effects on gene expression have largely been studied via in vitro binding and transfection assays in cultured cells. However, many genes have multiple response elements for different transcription factors, including nuclear receptors. Nuclear receptors have overlapping binding sites in many genes and often compete with other transcription factors for the same binding site under particular conditions (19-22). In many cases, it is not easy to determine which transcription factors play major roles in vivo.
Here we demonstrate that TR4 can regulate apoE expression via a DR1 element in the HCR-1 of the apoE/C-I/C-IV/C-II gene cluster and have further confirmed the role of TR4 in vivo through analysis of TR4 knockout (TR4KO) mice. Moreover, consistent with previous transgenic mouse studies showing HCR-based gene regulation, TR4 is able to modulate the expression of apoC-I and apoC-II via binding to a response element within the HCR of the apoE/C-I/C-IV/C-II gene cluster.
Immunohistochemistry-Two-month-old C57BL6J mice under pentobarbital anesthesia were perfused with 4% paraformaldehyde in phosphate-buffered saline. The liver was removed after adequate perfusion, and then it was further fixed in 4% paraformaldehyde in phosphate-buffered saline for 6 h. After processing and embedding in paraffin, tissue blocks were cut for staining. The liver sections were rehydrated, washed in phosphate-buffered saline, treated with 3% hydrogen peroxide in methanol for 30 min, blocked in 10% normal goat serum in phosphate-buffered saline for 30 min, and immunostained using the EnVision+ system (Dako, Carpinteria, CA). The primary antibody was a rabbit anti-TR4 polyclonal antibody (150-fold dilution). Preimmune rabbit serum (150-fold dilution) was used as a negative control in adjacent sections. After staining, the sections were developed using a 3,3′-diaminobenzidine substrate kit (Vector Laboratories, Burlingame, CA). Nuclear counterstain was performed with Gill's hematoxylin (Thermo-Shandon, Pittsburgh, PA) in all sections.
Cell Culture and Transfections-HepG2 and COS-1 cells were maintained in Dulbecco's minimum essential medium containing 10% fetal calf serum. Transfections were performed by using the calcium phosphate precipitation method (14) or SuperFect (Qiagen, Valencia, CA). pRL-TK was used to normalize transfection efficiency in the dual luciferase reporter assay system (Promega).
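For reference, normalisation with the pRL-TK control in a dual-luciferase assay reduces to a ratio of ratios. A minimal Python sketch (the example counts are invented for illustration):

def fold_induction(firefly, renilla, control_firefly, control_renilla):
    """Dual-luciferase reporter: normalise firefly counts to the pRL-TK
    Renilla internal control, then express relative to the empty-vector
    control (set to 1.0), as done for the pCMX transfections."""
    return (firefly / renilla) / (control_firefly / control_renilla)

# e.g. fold_induction(5.2e5, 1.0e4, 1.3e5, 1.1e4) -> ~4.4-fold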
Metabolic Labeling of Cells and Immunoprecipitation-HepG2 cells were plated at 5 × 10^5 cells/60-mm dish and transfected with TR4 expression vector or empty vector (pCMX-TR4 or pCMX). After 16 h of transfection, the medium was changed, and cells were grown for another 12 h for recovery. Cells were incubated with serum- and methionine (Met)-free Dulbecco's minimum essential medium for 1 h and then pulse-labeled for 45 min with [35S]Met at 150 μCi/ml in medium. After incubation, the medium was collected, and cells were lysed by addition of 0.5 ml of lysis buffer (10 mM Na2HPO4, pH 7.5, 15 mM NaCl, 1% Nonidet P-40, 1% deoxycholate, 10 μg/ml aprotinin, 0.1 mM phenylmethylsulfonyl fluoride). Proteins newly synthesized in cell lysates and secreted into the medium were determined using trichloroacetic acid precipitation, and the required amount of each sample was aliquoted into a new tube. The volume of each sample was adjusted to 1 ml with lysis buffer, and samples were immunoprecipitated with an anti-apoE polyclonal antibody (Calbiochem). The immunoprecipitated proteins were separated by 10% SDS-PAGE, and radiolabeled apoE was quantified using a PhosphorImager (Amersham Biosciences).
Western Blot Analysis-Serum samples were tested for the presence of apoE by Western blot analyses. Samples were separated by 10% SDS-PAGE. After electrophoresis, proteins were transferred from the gel to Immobilon P transfer membrane (Millipore). ApoE was resolved using an anti-apoE polyclonal antibody (Chemicon) and an alkaline phosphatase-conjugated secondary antibody (Bio-Rad), and then the relative amount of each sample was quantified using a VersaDoc imaging system (Bio-Rad).
RT-PCR-Total RNA was isolated from wild-type and TR4KO mouse livers using TriZol reagent (Invitrogen), and RT-PCR was carried out using SuperScript II (Invitrogen) according to the manufacturer's protocols. Briefly, after denaturation for 5 min at 65 °C in the presence of 0.5 μg of random hexamers, 3 μg of total RNA was reverse transcribed for 1 h at 43 °C with 200 units of SuperScript II in a 20-μl reaction (containing a 0.5 mM concentration of each dNTP). Two microliters of the cDNA sample were used as template for PCR amplification with a forward primer (5′-CAGCAGTTCATCCTAACCAGCCC-3′) specific to a region of exon 3 present in the TR4 gene in wild-type mice as well as in the targeting construct present in TR4KO mice and a reverse primer (5′-CTGCTCCGACAGCTGTAGGTC-3′) specific to a region of exon 5 replaced by the targeting construct in TR4KO mice. Hypoxanthine phosphoribosyltransferase expression was analyzed (primers: forward, 5′-GCTGGTGAAAAGGACCTCT-3′; reverse, 5′-CACAGGACTAGAACACCTGC-3′) as an internal control in the same run.
Northern Blot Analysis-Total RNA was isolated using TriZol reagent (Invitrogen), and Northern blots were performed as described previously (16). An apoE probe was prepared from human apoE cDNA by AatII and DraIII digestion. Probes for mouse apoC-I (GenBank accession number NM_007469), apoC-II (GenBank accession number NM_009695), β-actin, and 18S rRNA were generated by RT-PCR. Membrane-immobilized mRNA was hybridized with radiolabeled cDNA probes, and hybridization signals were quantified using a PhosphorImager (Amersham Biosciences). Ratios of apoE, apoC-I, and apoC-II mRNA levels relative to either 18S rRNA or β-actin were calculated.
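The quantification scheme used throughout this paper (band intensity over loading control, expressed relative to a reference lane) can be written as a short Python helper; the example intensities below are hypothetical:

import numpy as np

def relative_mrna(signal, loading, reference_index=0):
    """Normalise PhosphorImager band intensities to a loading control
    (18S rRNA or beta-actin) and express each lane relative to a
    reference lane (lane 1 in the paper, set to 1.0)."""
    ratio = np.asarray(signal, float) / np.asarray(loading, float)
    return ratio / ratio[reference_index]

# Hypothetical apoE/18S intensities for alternating WT and TR4KO lanes:
# relative_mrna([1.00, 0.71, 0.98, 0.69, 1.03, 0.70], [1.0] * 6)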
TR4 Induces ApoE Gene Expression in Vitro-TR4 has been
shown to be highly expressed in rat primary hepatocytes (18). To verify the physiological relevance for studying the effect of TR4 on hepatic apoE expression, we performed staining of mouse liver tissue with a rabbit polyclonal antibody specific to the N-terminal domain of TR4. In mouse liver sections, the anti-TR4 antibody stained both the cytoplasm and nuclei of hepatocytes with stronger staining in the nuclei compared with the cytoplasm (Fig. 1, A and C, arrows). We also performed staining of adjacent sections with preimmune rabbit serum as a negative control to show whether this staining is TR4-specific. As shown in Fig. 1, B and D, only a weak background appears when staining with preimmune serum. These results supported our interest in further characterizing the potential regulation of hepatic apoE gene expression by TR4.
We then applied a pulse labeling assay in HepG2 cells, transfected with either a TR4 expression vector or an empty vector, to study the effect of TR4 on apoE expression. After transfection and overnight recovery, cells were pulsed with [35S]Met for 45 min, and the levels of newly synthesized apoE were determined by immunoprecipitation using an anti-apoE antibody. As shown in Fig. 1E, addition of TR4 can significantly increase apoE protein expression. This induction was further confirmed at the mRNA level using Northern blot analysis. As shown in Fig. 1F, the level of apoE mRNA was higher in HepG2 cells transfected with a TR4 expression vector than in empty vector-transfected HepG2 cells. Together the data from Fig. 1 demonstrate that TR4 can induce apoE expression at both the protein and mRNA levels.
Hepatic Control Region Contains a TR4 Response Element-Human hepatic control regions (HCR-1 and HCR-2) were identified in the studies of apoE/C-I/C-IV/C-II gene expression using transgenic mice (8-10). HCR-1 and HCR-2 have 85% nucleotide identity and are located 18.4 and 29.4 kb downstream of the apoE transcription initiation site, respectively (Fig. 2A). In studies with transgenic mice, HCRs have been
shown to have critical roles in the regulation of hepatic expression of the apoE/C-I/C-IV/C-II gene cluster. Within the entire HCR-1 774-bp region, the 319 bp at the 5′ terminus confer full HCR-1 functional activity (27), and sequence analysis demonstrated that this 319-bp region contains a DR1 element. Of the DR elements recognized by TR4, the receptor binds to DR1 sequences with the highest affinity (13-17). We next performed a gel shift assay to determine that this DR1 element (GGGGCAGAGGTCA, named TR4RE-DR1-apoE; core motifs are underlined in the original) functions as a TR4RE. As shown in Fig. 2B, in vitro translated TR4 protein formed a specific complex with 32P-labeled TR4RE-DR1-apoE. In contrast, the mock-translated control formed no such complex. We confirmed the specific TR4-DR1 interaction when we replaced in vitro translated TR4 with HepG2 cell nuclear extracts containing endogenous TR4 (Fig. 2B, right panel). This result also confirms that TR4 protein is present in liver cells. We then used luciferase reporter assays to test whether TR4 regulates apoE gene expression through interaction with the HCR-1 region. We first linked HCR-1 to TK-luciferase (pGL-TK-HCR-1-Luc) and tested whether TR4 has any influence on the transcriptional activity of a reporter regulated by HCR-1. This construct shows high basal transcriptional activity in HepG2 cells, and co-transfection of a TR4 expression vector (pCMX-TR4) can induce luciferase activity (Fig. 2C, lane 1 versus lane 2). In contrast, the reporter construct showed very low basal transcriptional activity in COS-1 cells with significantly increased luciferase activity upon addition of the TR4 expression vector (Fig. 2C, lane 3 versus lane 4). The difference in the basal activity of the reporter in HepG2 versus COS-1 cells could be due to variation in the availability of endogenous TR4 and cooperation between endogenous TR4 and other liver-specific transcription factors to modulate HCR-1-regulated gene expression in HepG2 cells.
We next addressed whether the proximal promoter of the apoE gene has any effect on HCR-1 activity. Either pGL-apoE-Luc, containing the apoE gene promoter (−1046/+872 bp) only, or pGL-apoE/HCR-1-Luc, containing the apoE gene promoter fused with the HCR-1 region, was transfected into HepG2 cells in the absence or presence of the TR4 expression vector. TR4 was found to enhance the transcriptional activity of apoE promoter and apoE promoter/HCR-1-driven luciferase reporters (Fig. 2D). However, we were unable to see any further enhancement of the TR4 effect on HCR-1 activity by addition of the apoE proximal promoter (Fig. 2, C versus D). Although TR4 stimulates apoE promoter activity, the induced promoter activity was even lower than the basal activity of pGL-apoE/HCR-1-Luc. This suggests that the apoE promoter may not have an important role in hepatic expression of the apoE gene. Indeed previous reports have indicated that the apoE promoter has no significant role in hepatic apoE expression (8-10). The apoE promoter may have potential TR4 binding sites as suggested by reporter gene assay even though the promoter activity does not have a significant effect on hepatic apoE expression. To define the role of TR4 relative to non-tissue-specific apoE promoter function, further study will be needed.
To further confirm that TR4RE-DR1-apoE within the HCR-1 region can mediate TR4 induction of apoE expression in hepatic cells, we co-transfected a reporter with three copies of the TR4-DR1-apoE element (pGL-TK-(DR1)3-Luc). Luciferase activity was expressed as the fold induction relative to transfection of empty vector (pCMX, set as 1.0-fold) in each reporter gene assay. As demonstrated in Fig. 3, TR4 was able to significantly activate this pGL-TK-(DR1)3-Luc reporter in HepG2 and COS-1 cells (lane 3 versus lane 4). In contrast, TR4 had only marginal induction effects when we replaced TR4RE-DR1-apoE with mutated TR4RE-DR1-apoE (pGL-TK-(mtDR1)3-Luc) or with the parent reporter plasmid (pGL-TK-Luc). Together these data strongly suggest that the DR1 element in HCR-1 is a TR4 response element important for TR4-induced transcriptional activation of the apoE gene.
(Fig. 1, E and F, legend continued: after obtaining the required amount of each sample using trichloroacetic acid precipitation as described under "Experimental Procedures," pulse-labeled medium and cell lysates were immunoprecipitated with anti-apoE antibody. Immunoprecipitated proteins were separated by 10% SDS-PAGE and quantitated by PhosphorImager. F, HepG2 cells were transfected with pCMX-TR4 or pCMX as described above. Total RNA (10 μg) from transfected cells was used for Northern blot analysis, and 28S rRNA stained with 0.004% methylene blue was used as a loading control for the RNA.)
ApoE Expression Is Reduced in TR4KO Mouse Liver-Recently we generated TR4KO mice (in collaboration with Lexicon Genetics Inc.) by the insertion of a LacZ/Neo selection cassette between exons 4 and 5 of the TR4 gene. RT-PCR analysis of total RNAs from wild-type and TR4KO mouse liver tissues confirmed the deletion of TR4 gene exons 4 and 5 in homozygous TR4KO mouse liver tissues (Fig. 4A). To determine whether TR4 could also regulate apoE expression in vivo, we examined apoE expression in wild-type (Fig. 4B, lanes 1, 3, and 5) and TR4KO mice (Fig. 4B, lanes 2, 4, and 6). From Northern blot analysis using three sets of wild-type and TR4KO mouse liver total RNAs, it was found that TR4KO apoE mRNA levels were reduced to 70% of those in wild-type mice (Fig. 4, B and D, left panel). Western blot analysis of serum apoE protein revealed that serum apoE levels of TR4KO mice were decreased by about 50% compared with wild-type serum apoE levels (Fig. 4, C and D, right panel). Considering that most apoE present in the serum is derived from the liver, the low level of serum apoE protein in TR4KO mice could be a result of the reduction of apoE mRNA expression in the liver. In vivo data collected from TR4KO mice confirm our in vitro data from HepG2 cells, which show that TR4 can induce apoE expression.
Influence of Hepatocyte Nuclear Factor 4 (HNF-4) on TR4-induced ApoE Expression-Many nuclear receptors are able to bind to DR1-HRE sites, although the relative binding affinity could be influenced by the nucleotide sequences of core motifs as well as the spacer nucleotide within the particular DR1-HRE (34,35). One of these nuclear receptors is HNF-4. We were interested in determining the relative influence of TR4 and HNF-4 on apoE expression. As shown in Fig. 5A, TR4 highly induced transcriptional activity of the reporter gene (pGL-TK-(DR1)3-Luc), whereas HNF-4 showed only a marginal effect on this reporter gene. To explore the mechanism of the differential induction effects of TR4 and HNF-4 on apoE expression, we performed gel shift assays. As shown in Fig. 5B, in vitro translated HNF-4 bound only weakly to 32P-TR4RE-DR1-apoE (lane 3, arrow). Previous reports have demonstrated that changing the spacer nucleotide from A to G reduces the affinity of HNF-4 for DR1-HRE (34,35). This suggests that the nucleotide sequence of TR4RE-DR1-apoE may contribute to its weak binding affinity for HNF-4. The binding affinity of TR4 for TR4RE-DR1-apoE was decreased when in vitro translated HNF-4 was added together with in vitro translated TR4 and 32P-TR4RE-DR1-apoE (lane 4). This result suggests that HNF-4 may affect TR4 binding affinity for TR4RE-DR1-apoE, although HNF-4 itself shows very weak binding affinity for TR4RE-DR1-apoE. The specific complex consisting of TR4-TR4RE-DR1-apoE was supershifted by addition of an anti-TR4 antibody in the presence of in vitro translated HNF-4 (lane 5, closed arrowhead). Together both reporter and gel shift assays suggest that TR4 may strongly bind to this special DR1 site in the HCR-1 region to induce apoE expression.
(FIG. 4. Expression of the apoE gene is reduced in TR4KO mice. A, RT-PCR analysis of wild-type (+/+) and homozygous TR4KO (−/−) mice using a 5′ primer specific to a region of exon 3 present in the TR4 gene in wild-type mice, as well as to a region in the targeting construct present in TR4KO mice, and a 3′ primer specific to a region of exon 5 deleted in the TR4 targeting construct. The PCR product (323 bp for wild-type) is amplified from wild-type but not from TR4KO cDNAs. A hypoxanthine phosphoribosyltransferase (HPRT) fragment (249 bp) was amplified as an internal control. B, Northern blot analysis of apoE mRNA from wild-type and TR4KO mouse liver samples. Total RNAs (25 μg each) from three sets of wild-type (lanes 1, 3, and 5) and TR4KO (lanes 2, 4, and 6) mouse liver samples were subjected to Northern blot analysis using the indicated 32P-labeled probes followed by quantification with a PhosphorImager. After correction for loading differences using the ratio of apoE to 18S rRNA, the amount of each sample was expressed relative to the sample in lane 1 (set as 1.0). C, serum apoE levels of wild-type and TR4KO mice. Proteins in the serum (1.0 μl) from three sets of wild-type (lanes 1, 3, and 5) and TR4KO mice (lanes 2, 4, and 6) were separated by 10% SDS-PAGE and immunoblotted using an anti-apoE antibody followed by quantification with a VersaDoc imaging system. The apoE level of each sample was expressed as the ratio relative to the sample in lane 1 (set as 1.0). D, relative apoE mRNA (left panel) and protein (right panel) levels were determined relative to the sample in lane 1 (set as 100%). Results are means ± S.D. of samples from three different animals per group (wild-type or TR4KO).)
ApoC-I and ApoC-II mRNA Expression Is Also Reduced in TR4KO Mice-In previous studies with transgenic mice, the apoC-I and C-II genes, located in the same gene cluster as the apoE gene, were reported to be regulated by tissue-specific enhancer regions, HCRs (8-10). Furthermore, Kast et al. (11) reported that the FXR/RXRα heterodimer was able to modulate apoC-II gene expression via an IR1 element in the HCR, a region that partially overlaps with the TR4RE-DR1-apoE element in an arrangement of three consensus hexameric AGGTCA motifs. Since our data showed that TR4 could induce apoE gene expression via binding to the DR1 DNA element in the HCR-1 region, we were interested to see whether TR4 could also regulate the apoC-I and apoC-II genes via this HCR region. Using three sets of liver tissues from wild-type (Fig. 6A, lanes 1, 3, and 5) and TR4KO mice (Fig. 6A, lanes 2, 4, and 6), Northern blot analyses clearly showed that loss of the TR4 gene results in the reduction of apoC-I and apoC-II mRNA levels to 70 and 65% of those in wild-type liver tissues, respectively (Fig. 6, A and B). We next carried out transient transfections of the pGL-apoE/HCR-1-Luc reporter to compare the effects of TR4 and the FXR/RXRα heterodimer on apoE gene expression. As shown in Fig. 7A, TR4 and the FXR/RXRα heterodimer each induced the transcriptional activity of the reporter gene (…, respectively). However, when the FXR/RXRα heterodimer was cotransfected with TR4, the induced transcriptional activity of the reporter gene was reduced to the basal level (lane 4).
Several nuclear receptor signaling pathways may be involved in this suppressive effect between TR4 and the FXR/RXRα heterodimer on transcriptional activity. To determine whether competition between TR4 and RXRα for FXR binding results in these antagonistic effects, we used mammalian two-hybrid assays to test whether TR4 could form heterodimeric complexes with FXR. The expression vector VP16-TR4, consisting of the full-length human TR4 fused to the transcriptional activator VP16, was co-transfected with the pG5-Luc reporter plasmid and either GAL4-FXR or GAL4-RXRα into COS-1 cells. As shown in Fig. 7B, co-transfection of VP16-TR4, VP16-RXRα, GAL4-FXR, or GAL4-RXRα together with either GAL4 or VP16 empty vector showed a low background level of transcriptional activity. Consistent with a previous report (28), significant induction was observed when VP16-RXRα was co-transfected with GAL4-FXR (Fig. 7B, lane 6). However, co-transfection of VP16-TR4 with either GAL4-FXR or GAL4-RXRα showed near background levels, suggesting that TR4 does not form a heterodimer with either FXR or RXRα (Fig. 7B, lanes 7 and 8). This result suggests that TR4 and the FXR/RXRα heterodimer may compete with each other for binding to partially overlapping response elements. To confirm this hypothesis, a gel shift assay was performed using 32P-DR1/IR1. As shown in Fig. 7C, both in vitro translated TR4 and the FXR/RXRα heterodimer could form complexes, but TR4 showed a higher affinity for the 32P-DR1/IR1 probe. As shown in Fig. 7D, the TR4 protein level was similar to or lower than the levels of FXR and RXRα, suggesting that TR4 had a much higher affinity for the DR1/IR1 element than did the FXR/RXRα heterodimer. When in vitro translated TR4 was added together with in vitro translated FXR and RXRα, the TR4 binding band was obviously reduced (Fig. 7C, lane 2 versus lane 6), and the FXR/RXRα-DR1/IR1 complex disappeared (Fig. 7C, lane 5 versus lane 6). To further confirm whether endogenous TR4 can interact with this DR1/IR1 element, we performed a gel shift assay using HepG2 cell nuclear extracts. As shown in Fig. 7E, TR4 binding disappeared when the nuclear extracts were incubated with an excess of cold wild-type DR1 probe but was still intact in the presence of the mutant DR1 probe (Fig. 7E, lane 2 versus lane 3). This binding complex showed a supershift with the addition of anti-TR4 antibody (lane 4). This result clearly shows that endogenous TR4 interacts with TR4RE-DR1-apoE even in the presence of an overlapping IR1, the FXR/RXRα response element.
DISCUSSION
Hepatic apoE expression has previously been reported to be regulated by its liver-specific enhancer region, HCR (8-10). However, the molecular mechanism mediating HCR regulation of apoE gene expression remains unclear. Based on the presence of a potential DR1 response element (TR4RE-DR1-apoE) in the HCR-1, we demonstrated that TR4 induces apoE expression in HepG2 hepatoma cells via the TR4RE-DR1-apoE element in HCR-1. Data from TR4KO mice further confirmed the importance of TR4 in the regulation of apoE expression as well as in the expression of other apolipoproteins, apoC-I and C-II, in the same gene cluster.
Early studies suggested that DR1-HRE sequences could bind various nuclear receptors, including TR4, TR2, retinoic acid receptors, RXRs, peroxisome proliferator-activated receptors, and HNF-4 (14, 29-33). However, the affinity and specificity of these nuclear receptors for DR1-HRE may depend on the sequences of the core motifs and spacer nucleotides of such elements (34,35). HNF-4, a liver-enriched factor, has been shown to play important roles in lipid metabolism and transport via induction of several apolipoprotein genes, including apoE (27, 36-40). While HCR-1 is HNF-4-responsive in gel shift assays, the detailed binding site(s) for HNF-4 in the HCR-1 remain unclear. Here we demonstrate that HNF-4 has only a marginal effect on the modulation of apoE expression via TR4RE-DR1-apoE in the HCR-1, which may explain why we see a strong TR4 effect on apoE expression in HepG2 cells and in the TR4KO mouse model, yet there is no obvious change in apoE expression in HNF-4KO mice, as described in recent studies (41). The reason for the lack of apoE response to HNF-4 may be due to the particular sequence of the potential DR1-HRE site in the HCR-1 region. Although it is considered a DR1 consensus site, the specific sequence of the TR4RE-DR1 in the HCR-1, including the spacer nucleotide G, may be unfavorable to HNF-4 binding. Previous reports have demonstrated that changing the spacer nucleotide from A to G reduces the affinity of HNF-4 for DR1-HRE (34,35).
(FIG. 6 legend, excerpt: … from three sets of wild-type (lanes 1, 3, and 5) and TR4KO mouse liver samples (lanes 2, 4, and 6) were subjected to Northern blot analysis using the indicated 32P-labeled probes followed by quantification with a PhosphorImager. After correction for loading differences using the ratio of apoC-I or apoC-II to either 18S rRNA (for apoC-I) or β-actin (for apoC-II), the amount of each sample was expressed relative to the sample in lane 1 (for apoC-I) or the samples in lanes 1 and 5 (for apoC-II) (set as 1.0). B, relative apoC-I and apoC-II mRNA levels were determined relative to the sample in lane 1 (for apoC-I) or the samples in lanes 1 and 5 (for apoC-II) (set as 100%). Results are means ± S.D. of samples from three different animals per group (wild type or TR4KO).)
Early studies have shown that TR4 might be able to modulate gene expression via binding to AGGTCA DRs with variable-length spacer sequences, from one to five (DR1-DR5) nucleotides, with its highest affinity for DR1 elements (13-17). Here we show that loss of TR4 results in reduction of the expression of the apoE, apoC-I, and apoC-II genes, which are members of the same gene cluster. We do not know whether TR4 regulates apoE, apoC-I, and apoC-II solely through TR4RE-DR1-apoE in the HCR-1. Recently the FXR/RXRα heterodimer was shown to regulate apoC-II expression via an IR1 element that partially overlaps the TR4-DR1-apoE element in HCR-1 (11). This complex arrangement suggests the involvement of various nuclear receptors in transcriptional regulation via the HCR. In support of this idea, previous studies have demonstrated that a variety of nuclear receptors regulate many genes through binding either the same, or partially overlapping, response elements (19-22). At present, the effects of the FXR/RXRα heterodimer on transcription of the apoE gene are unclear even though transcription of the apoE and apoC-II genes is under the control of the tissue-specific enhancer region HCR in the liver. In reporter gene assays in HepG2 cells, we were able to see TR4- and FXR/RXRα heterodimer-mediated transcriptional induction of the apoE gene and that these two receptors show antagonistic effects. Many nuclear receptors can form heterodimers with other receptors, such as RXR, and cross-talk between nuclear receptor signaling pathways has occurred via heterodimerization between nuclear receptors (23, 28, 42-45).
In this study, TR4 shows no interaction with either FXR or RXRα, suggesting that antagonistic effects may arise through competition between TR4 and the FXR/RXRα heterodimer for binding to partially overlapping response elements (DR1/IR1). Recently we have shown that TR4 can bind to a single AGGTCA core motif, preceded by an AT-rich sequence, as a monomer (46). Thus, TR4 may not only bind to TR4RE-DR1-apoE as a homodimer but may also occupy a single AGGTCA motif of the IR1 FXR response element as a monomer, thereby moving the FXR/RXRα heterodimer away from this DR1/IR1 element.
(FIG. 7. The effect of the FXR/RXRα heterodimer on TR4-mediated transcriptional activation. A, a reporter gene construct (300 ng of pGL-apoE/HCR-1-Luc) was co-transfected with constant amounts (150 ng) of TR4 (pCMV-TR4) or FXR and RXRα expression vectors (pSG5-FXR and pCMV-RXRα) into HepG2 cells as indicated. B, mammalian two-hybrid interaction of FXR with RXRα, but not with TR4, in COS-1 cells. COS-1 cells were transiently transfected with 500 ng of pG5-Luc and 200 ng of different fusion plasmids as indicated. The two-hybrid interaction was expressed as fold induction relative to that of the GAL4/VP16-transfected sample (lane 1, set as 1.0-fold). Data presented in A and B represent the mean ± S.D. of at least three individual assays. C, a gel shift assay was performed with in vitro translated TR4, FXR, or RXRα as indicated. The 32P-labeled DR1/IR1 element was used as a probe. Retarded complexes containing TR4 and the FXR/RXRα heterodimer, as well as the supershift induced by an anti-TR4 antibody, are indicated by the open arrowhead, arrow, and closed arrowhead, respectively. Free probes are shown as indicated. n.s., nonspecific binding. D, analysis of in vitro translated products. After expression with [35S]Met in a coupled transcription and translation system (50-μl reaction), 1.5 μl of in vitro translated RXRα and FXR or 3 μl of in vitro translated TR4 were subjected to 10% SDS-PAGE for analysis of relative expression between samples. E, a gel shift assay was performed using nuclear extracts of HepG2 cells with the 32P-labeled DR1/IR1 element. Twenty-fold molar excesses of unlabeled DR1 oligonucleotides (Wt) or mutated DR1 oligonucleotides (mt) were added as competitor DNA (lanes 2 and 3). For the supershift assay, an anti-TR4 antibody was added as indicated. The retarded complex and the supershifted band are indicated by open and closed arrowheads, respectively. n.s., nonspecific binding.)
However, our results and previous studies show that TR4 and the FXR/RXRα heterodimer can regulate expression of apoC-II, another apolipoprotein gene in the apoE/apoC-I/apoC-IV/apoC-II gene cluster (11). This is a puzzling observation even though it has been reported in many studies (19-22). One possibility is that the availability of RXRα may determine the transition from FXR control to TR4 control of hepatic expression of the apoE gene. RXRα is required for heterodimerization with many liver-abundant nuclear receptors, including FXR (28, 42-45). Sequestration of RXRα away from FXR by other nuclear receptors would result in a reduction in the number of FXR/RXRα heterodimers available for transcriptional induction of the apoE gene.
An alternative explanation is that under physiological conditions various metabolic cues may signal TR4 and the FXR/RXRα heterodimer to regulate these genes dynamically, leading to multiple transcriptional responses. Unfortunately the endogenous ligand for TR4 has not yet been determined, so identification of such a ligand may provide further insight regarding the mechanisms of TR4-mediated gene regulation.
Although further study is necessary to determine the mechanism that controls TR4 regulation of apoE, apoC-I, and apoC-II expression, it seems that multiple nuclear receptors, such as TR4 and the FXR/RXRα heterodimer, regulate the same gene via partially overlapping response elements (DR1 and IR1). This may explain why we are unable to see the disappearance of expression of these genes in either TR4KO or FXR knockout mice (11).
In summary, our data demonstrate that TR4 modulates hepatic apoE expression and the expression of other apolipoproteins, such as apoC-I and apoC-II, via the tissue-specific enhancer HCR-1. The present study suggests that TR4 is involved in regulation of lipoprotein metabolism, and the finding of the natural ligand for TR4 will help in understanding the role of TR4 in various complex, hormone nuclear receptor-related processes in the liver. | 2018-04-03T00:53:27.506Z | 2003-11-21T00:00:00.000 | {
"year": 2003,
"sha1": "2519a2ce210cdd5bca72632e6e83ed4226a9cc14",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/278/47/46919.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "84486cb14ca62b48aa1399665a9296a3fa5b2602",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
232101124 | pes2o/s2orc | v3-fos-license | Comparative transcriptome profiles of large and small bodied large-scale loaches cultivated in paddy fields
Fish culture in paddy fields is a traditional aquaculture mode with a long history in East Asia. The large-scale loach (Paramisgurnus dabryanus) grows fast and is well suited to paddy-field aquaculture in China. The objective of this study was to identify differentially expressed genes (DEGs) in the brain, liver and muscle tissues between large (LG, top 5% of maximum total length) and small (SG, top 5% of minimum total length) groups using RNA-seq. In total, 150 fish were collected each week, and 450 fish were collected at the twelfth week from three paddy fields for the experiments. Histological observation found that the muscle fibre diameter of LG loaches was greater than that of SG loaches. Transcriptome results revealed that the high expression genes (HEGs) in LG loaches (fold change ≥ 2, p < 0.05) were mainly concentrated in metabolic pathways, such as "Thyroid hormone signalling pathway", "Citrate cycle (TCA cycle)", "Carbon metabolism", "Fatty acid metabolism", and "Cholesterol metabolism", while the HEGs in SG loaches were enriched in pathways related to environmental information processing, such as "Cell adhesion molecules (CAMs)", "ECM-receptor interaction" and "Rap1 signalling pathway", and in cellular processes such as "Tight junction", "Focal adhesion", "Phagosome" and "Adherens junction". Furthermore, the IGF gene family may play an important role in loach growth, given its different expression patterns between the two groups. These findings can enhance our understanding of the molecular mechanisms underlying the different growth and development levels of loaches in paddy fields.
… been identified from previous studies, such as neuropeptide Y (NPY) and insulin-like growth factors (IGFs)11,12. As the neural body center, the brain plays an important role in regulating various life activities. It can secrete a variety of growth factors (such as GH) to promote the growth of tissues13, or regulate the secretion of growth factors from other tissues (such as the liver)14. The interaction of these hormones is critical to the regulation of muscle growth, which directly determines the body size of the fish. In this study, RNA-Seq was used to identify differentially expressed genes of three tissues (brain, liver, and muscle) between large (LG, top 5% of maximum total length) and small (SG, top 5% of minimum total length) large-scale loaches cultivated in paddy fields with the same genetic background. The purpose of this research is to improve our understanding of the biological mechanisms behind loach growth and to provide a basis for future breeding at the molecular level.
Results
Morphological characteristics and histological observation. The weekly morphological measurements showed that the total length of LG fish was significantly greater than that of SG fish (p < 0.05), and the gap between the two groups widened over the 12 weeks (Fig. 1A). Specifically, the total length of LG fish reached twice that of SG fish in the twelfth week (p < 0.05). The normal distribution of the total length of large-scale loaches at week 12 is shown in Fig. 1B; total length was mainly concentrated between 60 and 80 cm.
The comparisons of muscle histology between the LG and SG fish are presented in Fig. 2, in which the muscle fibre diameter and density clearly differ between the two groups. The muscle fibre diameter and density of the two groups were then measured, and the statistical results are shown in Fig. 3: the muscle fibre diameter was greater in LG fish, while the fibre density was lower (Fig. 3).
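A statistical comparison of such fibre measurements can be sketched as follows (the paper used one-way ANOVA in SPSS; a Welch t-test is an equivalent two-group stand-in, shown here with SciPy and hypothetical measurements in micrometres):

from scipy import stats

def compare_fibres(diam_lg, diam_sg, alpha=0.05):
    """Welch t-test on muscle fibre diameters of the two groups;
    returns group means and whether the difference is significant."""
    t, p = stats.ttest_ind(diam_lg, diam_sg, equal_var=False)
    return {
        "mean_LG_um": sum(diam_lg) / len(diam_lg),
        "mean_SG_um": sum(diam_sg) / len(diam_sg),
        "p_value": p,
        "significant": p < alpha,
    }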
RNA-seq data and annotation of unigenes.
To identify the differentially expressed genes associated with differential body size in large-scale loach, a total of 18 cDNA libraries were constructed from LG and SG fish, generating about 123.63 Gb of sequencing data in total. After trimming and quality control, we obtained an average of 45.79 M clean reads from the 18 cDNA libraries, and all clean Q30 base rates were over 94% (Table S1). After assembly and redundancy removal, 28,620 of the 127,062 unigenes (22.52%) were over 1000 bp in length (mean length: 820 bp; N50: 1546 bp; GC content: 40.90%). More detailed information on the filtered reads is presented in Table S2. Finally, 55,933 unigenes (44.02%) in total were functionally annotated by BLASTx against databases including GO, KEGG, NCBI-nr, and UniProt.
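Assembly statistics such as N50 and GC content follow standard definitions; a short, self-contained Python sketch for computing them from contig lengths and sequences (an illustration, not the authors' pipeline):

def n50(lengths):
    """N50: the shortest contig length at which the cumulative sum of the
    descending-sorted lengths reaches half of the total assembly size."""
    lengths = sorted(lengths, reverse=True)
    half, running = sum(lengths) / 2.0, 0
    for n in lengths:
        running += n
        if running >= half:
            return n

def gc_percent(seq):
    """GC content of a nucleotide sequence, as a percentage."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

# e.g. n50([1500, 1200, 900, 400, 200]) -> 1200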
GO and KEGG enrichment analysis of the DEGs. After DEG analysis, 189, 175 and 320 high expression genes (HEGs) were identified in LG fish brain, liver and muscle, and 241, 481 and 668 HEGs were identified in SG fish brain, liver and muscle, respectively (Fig. 4A-C). After excluding DEGs redundant across two or three tissues, a total of 1861 DEGs were identified among the three samples (Supplementary excel). For the subsequent GO and KEGG analyses, the DEGs were divided into two parts, comprising 684 HEGs in LG fish and 1631 HEGs in SG fish. According to GO terms and previous studies, we finally selected growth-related genes for heatmap analysis, which showed that the IGF gene family may play an important role in loach growth, as its expression levels differed between the two groups (Fig. 4D). These growth-related genes were fibroblast growth factor (FGF1, 6, 7 and 8), fibroblast growth factor binding protein 1 (FGFBP1), myogenic factor 5 (MYF5), myocyte enhancer factor 2 (MEF2), myogenic differentiation (MYOD), insulin-like growth factor (IGF1 and 2), and insulin-like growth factor binding protein (IGFBP1, 2, 3 and 7, IGF2BP1).
GO enrichment analysis was performed to investigate the putative roles of these DEGs, which were classified into biological process (BP), cellular component (CC) and molecular function (MF) categories (Fig. 5A,C). For the BPs, the major categories were cellular process, single-organism process, metabolic process, regulation of biological process, and cellular response to stimulus. The major categories of the CCs were cell, cell part, membrane part, organelle part and extracellular part. The DEGs involved in binding and catalytic activity were also the most represented among the MFs.
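The DEG calls above amount to thresholding a per-gene table at fold change ≥ 2 and p < 0.05. A hedged pandas sketch (the column names and the LG/SG orientation of the fold change are assumptions):

import pandas as pd

def call_degs(df, fc_col="fold_change", p_col="p_value",
              min_fc=2.0, alpha=0.05):
    """Split a per-gene table into LG-high and SG-high DEG sets using
    the thresholds stated in the paper (fold change >= 2, p < 0.05).
    `fold_change` here is assumed to be the LG/SG expression ratio."""
    sig = df[p_col] < alpha
    lg_high = df[sig & (df[fc_col] >= min_fc)]
    sg_high = df[sig & (df[fc_col] <= 1.0 / min_fc)]
    return lg_high, sg_high

# usage (hypothetical table): lg, sg = call_degs(pd.read_csv("brain_lg_vs_sg.csv"))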
In order to identify the functional biochemical pathways of the predicted proteins encoded by the DEGs, KEGG pathway enrichment analysis was performed for the HEGs of both groups (three tissues together). The top 20 enriched KEGG pathways for the HEGs of the two groups are shown in Fig. 5B,D, respectively. KEGG pathway enrichment analysis of HEGs in LG fish (p < 0.05) showed a significant enrichment of metabolic pathways, such as "Thyroid hormone signalling pathway", "Citrate cycle (TCA cycle)", "Carbon metabolism", "Fatty acid metabolism", and "Cholesterol metabolism". The HEGs in SG fish were enriched in pathways related to environmental information processing, such as "Cell adhesion molecules (CAMs)", "ECM-receptor interaction" and "Rap1 signalling pathway", and in cellular processes such as "Tight junction", "Focal adhesion", "Phagosome" and "Adherens junction".
Validation of RNA-seq analysis.
To confirm the RNA-seq data, four genes from each tissue were selected for qRT-PCR validation. These genes included IGFBP1 (insulin-like growth factor binding protein-1), ID1 (inhibitor of DNA binding 1), MDKa (midkine a), and PPA1b (pyrophosphatase 1b). To adjust for variations in starting template, gene expression was normalized against β-actin for each tissue; target gene mRNA levels were then quantified using the 2^−ΔΔCT method, and the level of significance was determined by one-way analysis of variance (ANOVA) in SPSS Statistics 22.0. Although the relative expression levels were not perfectly consistent, the qRT-PCR results provide additional support for the RNA-seq results (Fig. S1).
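To make the 2^−ΔΔCT calculation concrete, the sketch below shows the arithmetic for one target gene normalized to β-actin. The CT values are illustrative placeholders, not the study's measurements, and the SG-vs-LG comparison is only an example of how the control group enters the formula.

```python
# Minimal sketch of the 2^(-ΔΔCT) relative-quantification method.
# CT values below are invented for illustration; β-actin is the
# reference gene, as in the text.

def ddct_fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    dct_test = ct_target_test - ct_ref_test  # normalize test sample to reference gene
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control sample to reference gene
    ddct = dct_test - dct_ctrl               # ΔΔCT relative to the control group
    return 2 ** (-ddct)

# Example: a target gene in SG liver (test) vs. LG liver (control)
fold = ddct_fold_change(ct_target_test=22.1, ct_ref_test=18.0,
                        ct_target_ctrl=24.6, ct_ref_ctrl=18.2)
print(f"relative expression (SG vs. LG): {fold:.2f}-fold")
```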
Discussion
Intra-specific differences were first proposed by Darwin, who argued that individual differences and overproduction within a species or variety are the basis of natural selection 15 . Individual growth varies considerably among species: the coefficient of individual growth variation is 7%-10% in most mammals but approximately 20%-35% in most fish 16 . Differences in fish body size arise mainly from differences in trunk skeletal muscle growth, which depends on the proliferation and hypertrophy of muscle fibers (muscle cells) 17 . During growth, mammalian muscle growth depends mainly on the increase in muscle fiber diameter and much less on the increase in muscle fiber number 18,19 . Unlike mammals and birds, fish muscle fibers retain the ability to proliferate and hypertrophy throughout life 20 . For example, in larval sea bass (Dicentrarchus labrax), white muscle grows mainly through differentiation to form new muscle fibers, whereas in larval guppy (Poecilia reticulata), white muscle grows mainly by increasing the size of existing muscle fibers 21,22 . Considering development in other fish species, we hypothesize that loach muscle grows predominantly by increasing the diameter of muscle fibers (Figs. 2 and 3).
The results of transcriptome sequencing revealed that the metabolism of LG loaches was more vigorous than that of SG loaches. The HEGs of the large loach in this study were significantly enriched in "Thyroid hormone signalling pathway", "Citrate cycle (TCA cycle)", "Carbon metabolism", "Fatty acid metabolism", and "Cholesterol metabolism", which are all important regulators of growth, development and metabolism [23][24][25] . Consistent with our findings, the expression of lipid-metabolism-related genes was also up-regulated in fast-growing rainbow trout compared with slow-growing fish 26 . In addition, the growth-related genes were mainly involved in energy metabolism, carbohydrate and lipid metabolism, and cytoskeletal composition, a similar expression pattern to that observed in studies of salmonid liver and muscle gene expression [27][28][29] . Insulin-like growth factors (IGFs) are generally thought to be up-regulated in the tissues of rapidly growing individuals, such as Nile tilapia (Oreochromis niloticus) 30 , channel catfish (Ictalurus punctatus) 31 , and Arctic charr (Salvelinus alpinus) 32 . In this study, IGF1 and IGF2 expression differed between the two groups, but the difference was not significant (FDR > 0.05), similar to previous studies 11,33,34 . However, the expression of IGFBP1 (insulin-like growth factor binding protein-1) in brain and liver and of IGFBP7 in liver was higher in the SG group than in LG loaches (FDR < 0.05; Fig. 4D). In a recent study of rainbow trout, small fish likewise had higher hepatic expression of IGFBP1 than large fish 26 . Previous studies demonstrated that IGFBP1 can inhibit IGF binding to cell surface receptors and thereby inhibit IGF-mediated mitogenic and cell metabolic actions 35 . Likewise, overexpression of IGFBP1a/b in zebrafish retards embryonic development and growth 36,37 . Therefore, in addition to stronger metabolic activity, one of the internal factors underlying the fast growth of large fish may be the lower expression of IGFBP1 and IGFBP7. IGF binding proteins (IGFBPs) are known to inhibit cell growth and differentiation by specifically binding IGFs 38,39 , and their expression can be influenced by many factors. Previous studies have shown that fasting increases hepatic IGFBP levels, which drop back to normal after refeeding 11,40 . Whether in large-scale farming or in the natural environment, smaller individuals have lower social status as well as reduced mating and feeding rights 41,42 . We speculate that it may be more difficult for small loaches to obtain food in the paddy-field culture environment, further delaying their growth and development. Furthermore, the HEGs in SG fish were enriched in pathways related to environmental information processing, such as cell adhesion molecules (CAMs), ECM-receptor interaction and the Rap1 signalling pathway, and to cellular processes, such as tight junction, focal adhesion, phagosome and adherens junction.
(Figure legend recovered from intruded text: LB, LL, LM denote the brain, liver and muscle of the large group; SB, SL, SM the brain, liver and muscle of the small group. FDR, false discovery rate; fc, fold change.)
These functional pathways comprise a complex mixture of structural and functional macromolecules and play important roles in tissue and organ morphogenesis and in maintaining cell and tissue structure and function [43][44][45][46] . Thus, our results provide additional evidence that the development of the small fish lagged behind that of the large fish in the paddy cultivation system.
Conclusion
In this study, RNA-seq successfully identified differences in transcription levels between loaches of differential body size in integrated paddy field aquaculture. Compared with the slow-growing loach, the fast-growing loach had higher expression of metabolic genes. Furthermore, the transcription level of IGFBPs, which are known to inhibit cell growth and differentiation, was relatively low in the fast-growing loach. In addition, the growth of large-scale loaches may proceed mainly through the enlargement of muscle fibers.
Materials and methods
Fish culture in paddy fields. Before the experiment, the three paddy fields used were disinfected with quicklime to kill wild aquatic animals. In addition, the borders of the paddy fields were surrounded with nets to keep other aquatic animals, such as frogs, from entering, ensuring that only the experimental fish were present in the paddy fields. All experimental fish, with the same genetic background, came from a professional aquaculture farm in Neijiang, China (29°27′32.52″ N, 104°56′27.16″ E). The fish were reared in an indoor pond and transferred to the three paddy fields, each with an approximate area of 866.67 m2, when their total length was nearly 4 cm. The experiment was performed from July to September, and the water quality parameters were as follows: temperature, 12-24 °C; dissolved oxygen, 5.3-6.7 mg/L; pH, 7.5-8.1. The culture density was approximately twenty thousand fish per paddy field. The fish were fed twice a day at 09:00 and 17:00 (feeding rate of 2-4% of body weight) and were subjected to the same daily management.
Partition of large and small size fish. For the weekly samples, total length was measured in 50 random individuals from each paddy field, and normal distributions were fitted to the total lengths of the 150 individuals. The top 5% by total length were defined as the large group (LG), and the bottom 5% were defined as the small group (SG). At the twelfth week, 150 individuals from each paddy field were measured for total length and used to divide the LG and SG fish (LG and SG fish are shown in Fig. 6).
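The top/bottom 5% partition described above can be expressed as a simple percentile cutoff on the measured lengths. The sketch below is a minimal illustration; the simulated length distribution is invented for demonstration and does not reproduce the study's measurements.

```python
# Minimal sketch of the LG/SG partition: fish in the top 5% of total
# length form the large group, those in the bottom 5% the small group.
import numpy as np

rng = np.random.default_rng(0)
total_length = rng.normal(loc=8.0, scale=1.2, size=150)  # cm, 150 fish per field (illustrative)

upper = np.percentile(total_length, 95)
lower = np.percentile(total_length, 5)
lg = total_length[total_length >= upper]  # large group (LG)
sg = total_length[total_length <= lower]  # small group (SG)
print(f"LG cutoff >= {upper:.2f} cm ({lg.size} fish); "
      f"SG cutoff <= {lower:.2f} cm ({sg.size} fish)")
```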
Sample collection.
Based on the definition of LG and SG fish, 3 large fish and 3 small fish were sampled weekly, anaesthetized with MS-222 (tricaine methane sulfonate, 100 ppm) and fixed in fresh Bouin's solution for histological observation. For the first four weeks, the entire body of the fish from the dorsal fin to the caudal fin was preserved, while a piece of muscle (0.5 cm × 0.5 cm × 0.5 cm) was obtained from fish in subsequent weeks.
At the twelfth week, after anaesthesia with MS-222 (100 ppm), brain (B), liver (L) and muscle (M) samples were rapidly collected for RNA-seq from 3 large fish and 3 small fish from each paddy field. The samples were immediately frozen in liquid nitrogen and then stored in a −80 °C freezer for subsequent analysis.
Histological characteristics and observation. Fixed samples were wrapped in gauze, dehydrated in an ethanol series, infiltrated with xylene, and finally embedded in paraffin wax. Tissues were serially cut into 6 μm sections using a rotary microtome according to routine procedures. Muscle sections were stained with haematoxylin and eosin (HE) for histological analysis and examined on microscope slides. Digital images were captured using a Nikon Eclipse Ti-S (Nikon Instruments Inc, Japan) and measured with Image Pro Plus software (Media Cybernetics, USA). Muscle fiber density was defined as the number of muscle fibers per square millimetre: fibers were counted within a 100 μm × 100 μm square under the microscope after HE staining (200× digital images), and only fibers with more than half of their area inside the square were counted. Muscle fiber diameter was the geometric mean of the long and short diameters of 100 muscle fibers per fish (400× digital images; geometric mean formula: G = √(ab), where G is the geometric mean, a the long diameter and b the short diameter).
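The two fiber metrics just described reduce to short formulas: density is a count divided by the area of the counting square, and diameter is the geometric mean G = √(ab). The sketch below shows both; the counts and diameters are illustrative placeholders, not measured values.

```python
# Minimal sketch of the two muscle-fiber metrics described above.
import math

def fiber_density(count_in_square, square_side_um=100.0):
    """Fibers per square millimetre, counted in a square of the given side."""
    area_mm2 = (square_side_um / 1000.0) ** 2  # a 100 um square is 0.01 mm^2
    return count_in_square / area_mm2

def fiber_diameter(long_um, short_um):
    """Geometric mean of long and short diameters: G = sqrt(a * b)."""
    return math.sqrt(long_um * short_um)

print(fiber_density(18), "fibers/mm^2")              # 18 fibers counted (illustrative)
print(round(fiber_diameter(42.0, 30.0), 2), "um")    # one fiber's a and b (illustrative)
```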
Total RNA extraction and cDNA library construction. Three large fish or three small fish from each paddy field were pooled into one sequencing sample, yielding three LG samples and three SG samples for sequencing. Three tissues (B: brain, L: liver and M: muscle) were sequenced for each sample group. Total RNA was extracted using the total RNA extraction reagent RNAiso Plus (Takara Bio Inc, Japan) according to the manufacturer's protocol. After the quality of the RNA was confirmed by agarose gel electrophoresis and on a Nanodrop 2000 (Thermo Scientific, USA), RNA with an RNA integrity number (RIN) greater than 8 and OD260/280 > 1.80 was used for mRNA library construction 47 . The qualified RNA was sent to the Annoroad Gene Technology Corporation (Beijing, China) for library preparation and sequenced on an Illumina HiSeq 2000 using a paired-end strategy with a raw read length of 150 bp.
Sequencing data analysis. Sequencing reads that were low-quality, adaptor-polluted or contained a high proportion of unknown bases (N) were filtered out (reads containing adaptor sequences; reads with more than 10% unknown bases; reads in which more than 50% of bases had quality scores of Q5 or below). The Q30 of the clean data was calculated, and all downstream analyses were performed on the clean, high-quality data. Trinity (http://trinityrnaseq.sourceforge.net/, version v2.0.6) 48 was used to perform de novo assembly with the clean reads, and Tgicl 49 was then used to cluster transcripts into unigenes. After assembly, unigenes were used for functional annotation against the NT (nucleotide sequence database), NR (non-redundant protein sequence database), Uniprot (Universal Protein), COG (Cluster of Orthologous Groups of proteins), GO (Gene Ontology) and KEGG (Kyoto Encyclopedia of Genes and Genomes) databases. Differential expression analysis was performed using edgeR 50 in the OmicShare tools, an online platform for data analysis (www.omicshare.com/tools). The default parameters of edgeR were used, and differentially expressed genes (DEGs) were selected according to |log2(fold change)| ≥ 1 and p value < 0.05. GO and KEGG pathway analyses were then carried out on the differentially expressed genes of all three tissues 50 .
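The DEG selection rule at the end of the preceding paragraph is a simple threshold filter on the edgeR output. The sketch below illustrates it on a toy result table; the gene names, fold changes and p values are invented placeholders, not the study's edgeR results.

```python
# Minimal sketch of the DEG selection rule described above:
# |log2(fold change)| >= 1 and p < 0.05. Rows are illustrative.
import math

results = [
    # (gene, fold_change_LG_vs_SG, p_value) -- invented values
    ("geneA", 0.31, 0.003),   # lower in LG
    ("geneB", 2.40, 0.012),   # higher in LG
    ("geneC", 1.20, 0.400),   # fails the p-value threshold
]

degs = [(g, fc, p) for g, fc, p in results
        if abs(math.log2(fc)) >= 1 and p < 0.05]
for gene, fc, p in degs:
    print(f"{gene}: log2FC = {math.log2(fc):+.2f}, p = {p}")
```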
Validation of RNA-seq analysis by real-time PCR. Four related genes were randomly selected for quantitative real-time PCR (qRT-PCR) to verify the accuracy of the RNA-seq results. Three fish (three tissues: brain, liver, and muscle) were randomly selected from the large-group and small-group fish, respectively. β-actin was used as the reference gene. Total RNA was used to synthesize cDNA with the TianGen FastKing RT Kit (with gDNase). Emission intensity was detected on a StepOne real-time PCR system (Applied Biosystems) under the following conditions: an initial denaturation step at 95 °C for 20 s, followed by 40 cycles of 3 s at 95 °C and 30 s at 60 °C. The target gene qRT-PCR primers were designed with Primer 5.0 based on the sequence data of this study (the reference gene primers are shown in Table S3). All reactions were run in triplicate and included no-template controls for each gene, and the quantitative results were calculated using the 2^−ΔΔCT method 51 .
Statistical analysis. Statistical analysis was performed using SPSS 22.0 software (SPSS, Chicago, IL, USA).
Data were presented as the mean ± SEM, and significant differences (p < 0.05) were identified using one-way analysis of variance (ANOVA) 52 .
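For readers without SPSS, the one-way ANOVA comparison described above can be reproduced with standard scientific-computing libraries. The sketch below uses scipy as a stand-in for SPSS 22.0; the diameter values are illustrative placeholders, not the measured data.

```python
# Minimal sketch of the one-way ANOVA test (significance at p < 0.05)
# applied to two groups of measurements. Values are illustrative.
from scipy import stats

lg_diameters = [41.2, 38.7, 44.1]   # um, large-group fish (invented)
sg_diameters = [27.5, 30.1, 28.8]   # um, small-group fish (invented)

f_stat, p_value = stats.f_oneway(lg_diameters, sg_diameters)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```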
Data availability
Data has been deposited in the SRA under the study accession code PRJNA623189. | 2021-03-04T06:16:48.244Z | 2021-03-02T00:00:00.000 | {
"year": 2021,
"sha1": "60f7409272b114384e8206d352674e02d0a03629",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-84519-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82489d02b5770e6679a011040968ef7d21c1ccab",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11616244 | pes2o/s2orc | v3-fos-license | Characterization of Natural Antisense Transcript, Sclerotia Development and Secondary Metabolism by Strand-Specific RNA Sequencing of Aspergillus flavus
Aspergillus flavus has received much attention owing to the severe impact of aflatoxin contamination on agriculture and fermented products. Sclerotia morphogenesis is an important process related to A. flavus reproduction and aflatoxin biosynthesis. To obtain an extensive transcriptome profile of A. flavus and provide a comprehensive understanding of these physiological processes, the isolated mRNA of A. flavus CA43 cultures was subjected to high-throughput strand-specific RNA sequencing (ssRNA-seq). Our ssRNA-seq data profiled widespread transcription across the A. flavus genome, quantified a vast set of transcripts (73% of all genes) and annotated precise transcript structures, including untranslated regions, upstream open reading frames (ORFs), alternative splicing variants and novel transcripts. We propose that natural antisense transcripts in A. flavus regulate gene expression mainly at the post-transcriptional level. This regulation might be relevant to tuning biological processes such as aflatoxin biosynthesis and sclerotia development. Gene Ontology annotation of differentially expressed genes between the mycelia and sclerotia cultures indicated that sclerotia development is closely related to A. flavus reproduction. Additionally, we have established the transcriptional profile of aflatoxin biosynthesis and a model of its regulation, and we identified potential genes linking sclerotia development and aflatoxin biosynthesis. These genes could be used as targets for the controlled regulation of aflatoxigenic strains of A. flavus.
Introduction
Aspergillus flavus is a ubiquitous pathogenic fungus that infects plants and animals. Recently, studies of A. flavus have gained tremendous attention owing to the health impact of aflatoxin contamination on agricultural commodities and related fermented products. Initially, studies using A. flavus as a model organism focused mainly on the aflatoxin biosynthesis pathway and on the mechanisms regulating aflatoxin formation [1][2][3][4][5]. The highly negative impact of A. flavus infection of agriculturally relevant plants gradually expanded these studies into related fundamental areas of the biology of this fungus, including A. flavus secondary metabolism, sclerotia morphogenesis and propagation [2,4,[6][7][8][9]. The recently revealed genomic sequence of A. flavus NRRL3357 provides a powerful tool for detailed analysis of the biology of this fungus [10].
Sclerotia morphogenesis is a physiological process important for A. flavus propagation and involves various secondary metabolism pathways, including aflatoxin biosynthesis. Sclerotia are pigmented, specialized structures composed of compact A. flavus mycelia, which make A. flavus resistant to harsh environmental conditions. Sclerotia likely derive from cleistothecia and might represent a vestige of sexual ascospore production [6]. According to sclerotial size, A. flavus can be divided into two groups: L strains have sclerotia >400 μm in diameter and S strains have sclerotia <400 μm in diameter [11,12]. The S strains produce greater amounts of sclerotia and aflatoxin than the L strains under the same medium and cultivation conditions [13], suggesting sclerotia morphogenesis and aflatoxin biosynthesis are closely related. To interpret this correlation, Chang and colleagues proposed the ''substrate (acetate) competition'' hypothesis, in which increased production of aflatoxin results in a progressive decrease in sclerotial size, alterations in sclerotial shape and a weakening of the sclerotial structure [6]. Recent efforts have provided insights into the regulation of sclerotia morphogenesis. Ammonium, light, oxidative stress, temperature, organic acids and endogenous levels of cAMP might influence sclerotia formation and maintenance [14][15][16][17][18][19]. It was reported that A. flavus laeA and veA mutants are not able to form sclerotia [20,21]. Cary and colleagues used a DNA array to analyze sclerotia-related genes by comparing the A. flavus wild type strain and a veA mutant strain [22]; however, a detailed comprehensive description of sclerotia morphogenesis is lacking. Natural antisense transcripts (NATs), a subset of non-coding RNAs (ncRNAs), are endogenous RNA molecules transcribed from the opposite DNA strand that can be complementary to the sense RNA through base pairing [23][24][25]. Expressed sequence tag (EST) sequencing, tiling microarrays, SAGE libraries, asymmetric strand-specific analysis of gene expression and global run-on sequencing (GRO-seq) can be used to identify NATs [26]. NATs are involved in transcriptional and post-transcriptional gene regulation by RNA interference (RNAi) [27,28], chromatin-level gene silencing [23,26,[29][30][31], chromatin remodeling [28,32] and local chromatin modifications [33][34][35]. Global analyses of NATs have been done for mammals [23], insects [36], worms [37] and plants [38,39]. It was reported that a large portion of the mammalian genome can produce transcripts from both strands [23,40,41]. For instance, >70% of mouse transcripts contain NATs and, owing to their prevalence, NATs are treated as pervasive features of mammalian genomes [26]. Aspergillus spp. have been used as model organisms to investigate many molecular processes that govern the life of eukaryotic cells. Moreover, Aspergillus spp. have been used extensively to study fungal-specific pathways. NATs have been found in a divergent group of fungi, including the ascomycetes Saccharomyces cerevisiae, Candida albicans, A. flavus, Magnaporthe oryzae, Tuber melanosporum and Schizosaccharomyces pombe, and the basidiomycetes Cryptococcus neoformans, Ustilago maydis and Schizophyllum commune [28]. There are, however, two distinct drawbacks in the study of A. flavus NATs. First, only 352 NATs (2.8% of the total open reading frames (ORFs)) were found in A. flavus by analyzing approximately 23,000 A.
flavus cDNAs from cells grown under different nutritional conditions [25], while this proportion rises to 16.7-85.2% in comprehensively analyzed transcriptomes such as those of S. cerevisiae (16.7%), C. albicans (40.0%) and S. pombe (85.2%) [28]. Second, the biological function of NATs in A. flavus has remained elusive owing to the limited number of NATs detected [42].
Owing to the importance of transcriptional regulation in the development of fungi, transcriptome profiling can be a valuable tool for establishing an in-depth understanding of A. flavus biology. The transcriptome of A. flavus has been studied by several groups, but these studies did not focus on sclerotia development or secondary metabolism. ESTs and microarrays have been used to identify genes involved in aflatoxin production, using 7218 unique ESTs and microarrays containing more than 5000 unique A. flavus gene amplicons [1][2][3]. However, only 263 genes were differentially expressed and only 20 of the 29 aflatoxin pathway genes were identified, owing to the limitation of microarrays in detecting genes with low levels of expression. Even aflR, an important transcriptional factor in the aflatoxin biosynthesis pathway, was not detected [1]. Chang et al identified 22 features common to the aflatoxigenic/non-aflatoxigenic pairs by cross comparison and discussed the possible roles of these genes in the regulation of aflatoxin biosynthesis [3]. Yu et al profiled the transcriptome of A. flavus under various temperature conditions using RNA-seq technology to understand the effect of temperature on mycotoxin biosynthesis. However, only 23-29% of the total reads were mapped to genes, and >50% of the reads were mapped to rRNA genes and mitochondria and were therefore regarded as useless data [4].
We used strand-specific RNA sequencing (ssRNA-seq) to obtain an extensive transcriptome profile of A. flavus that might facilitate a comprehensive understanding of the physiological processes of this fungus. ssRNA-seq technology has all of the advantages of conventional RNA-seq technology, including low detection background, high dynamic detection range, high reproducibility and precise definition of transcript structure [43]. Furthermore, in contrast to conventional RNA-seq technology, ssRNA-seq data contain RNA polarity information and can decode a complex eukaryote transcriptome, including genome annotation, de novo transcriptome assembly and accurate digital gene expression analysis [44,45]. ssRNA-seq technology has been used to parse the transcriptomes of many organisms, including S. cerevisiae [46], Mycoplasma pneumoniae [47], Mus musculus [47] and Oryza sativa [48]. Using ssRNA-seq data of the A. flavus mycelia and sclerotia cultures, we profiled genome-scale transcription, annotated precise transcript structures and provided an in-depth depiction of A. flavus NATs. Analysis of differentially expressed genes might contribute to elucidating sclerotia development and aflatoxin biosynthesis. In summary, our ssRNA-seq-based annotation provides a more extensive depiction of the A. flavus transcriptome and might contribute to reducing the detrimental effects of A. flavus infection on agriculture.
Strain and Culture Conditions
A. flavus CA43 was kindly provided by Professor Perng-Kuang Chang (Southern Regional Research Center, Agricultural Research Service, U.S. Department of Agriculture, Washington, DC, USA). A. flavus CA43 belongs to the S strain isolates, which can produce numerous sclerotia and high amounts of aflatoxin [13].
Potato dextrose agar (PDA) medium (20 g of dextrose, 15 g of agar and the infusion from 200 g of potatoes per 1 L of medium, pH = 6.0) was used for A. flavus cultivation. For harvesting the A. flavus culture, a layer of cellophane was placed over the PDA medium plate (PDA-cellophane plate). A. flavus sclerotia (1.2×10^6) were inoculated onto each PDA-cellophane plate and cultivated at 30 °C in darkness. For RNA sequencing, the A. flavus mycelia were harvested after 48 h of cultivation (Aflavus_CA43_M) and sclerotia were collected after 7 days of cultivation (Aflavus_CA43_S).
Strand-specific RNA-seq Library and Sequencing
The total RNA of each sample was extracted using RNAiso TM Plus (TaKaRa, Japan), treated with RNase-free DNase I (TaKaRa, Japan) and purified with a NucleoSpin RNA Clean-up Kit (Macherey-Nagel, Germany). RNA integrity was analyzed on an Agilent Technologies 2100 Bioanalyzer. All samples had an RNA integrity number (RIN) >7.
The strand-specific RNA-seq library was prepared essentially as described but with some modifications [49]. Sera-mag magnetic oligo(dT) beads were used to isolate poly(A) mRNA from 20 μg of purified total RNA. mRNA was eluted with 10 mmol/L Tris buffer and fragmented into small pieces in the range of 100-500 bp by treatment with divalent cations. Taking these short fragments as templates, random hexamer primers were used to synthesize the first-strand cDNA using SuperScript II, RNaseH and DNA polymerase I. The first-strand cDNA was purified using a QIAquick PCR Purification Kit and used as the template for synthesizing dUTP-containing second-strand cDNA by adding buffer, dNTPs (with dTTP replaced by dUTP), RNaseH and DNA polymerase I. Double-stranded cDNA fragments were purified with a QIAquick PCR Purification Kit and eluted with elution buffer for end-repair, phosphorylation and 3′-adenylation. After that, the cDNA fragments were purified with a Qiagen MinElute PCR Purification Kit. Illumina sequencing adapters were ligated to the 3′-adenylated cDNA fragments, followed by purification using a MinElute PCR Purification Kit. Fragments of 100-500 bp were recovered with a QIAquick Gel Extraction Kit after the modified double-stranded cDNA fragments described above were separated by TAE-agarose gel electrophoresis (2% (w/v) agarose; Certified Low-Range Ultra Agarose, Bio-Rad). Uracil-DNA glycosylase (UNG) was added to digest the dUTP-containing second-strand cDNA in an alkalescent medium at high temperature. The remaining first-strand cDNA was purified using a MinElute PCR Purification Kit and enriched by 15 rounds of PCR amplification using primers P7 and P5, which were homologous to the Illumina sequencing adapters. After purification using a QIAquick Gel Extraction Kit, fragments of 100-500 bp (average 200 bp) were selected as the strand-specific RNA-seq cDNA library. Finally, after quantification on an Agilent Technologies 2100 Bioanalyzer, the cDNA library was sequenced on the Illumina HiSeq2000 platform using a 90 bp paired-end sequencing strategy (with primers P7 and P5). The raw Illumina sequencing data for the A. flavus CA43 transcriptome were deposited in the SRA (http://www.ncbi.nlm.nih.gov/sra/) with ID no. SRP018670.
Primary Splitting of Raw Strand-specific Sequencing Data
In the course of strand-specific paired-end (PE) sequencing, reads matching the reverse strand of a transcript were first obtained by primer P5-triggered sequencing (read1), and reads matching the forward strand of the same transcript were then obtained by primer P7-triggered sequencing (read2). Read1 and read2 form a PE read pair.
According to the strand-specific sequencing features, a pair in which read1 mapped to the forward strand of a transcript and read2 mapped in reverse to the forward strand of the transcript cannot reflect real transcription and was discarded. The remaining mapped reads were used to analyze gene expression and differentially expressed genes.
As the forward strand of a gene might be located on either the forward or the reverse strand of the genome, we classified pairs in which read1 mapped in reverse to the forward strand of the genome and read2 mapped to the forward strand of the genome into one group representing transcription from the forward strand of the genome. Pairs in which read1 mapped to the forward strand of the genome and read2 mapped in reverse to the forward strand of the genome were classified into a group representing transcription from the reverse strand of the genome. The mapping results against the genome sequence were used for the subsequent analyses, including novel transcript prediction, untranslated region (UTR) analysis and alternative splicing (AS) events. This splitting strategy was designed according to the BGI analysis pipeline (http://www.genomics.cn/index).
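The orientation rules just described reduce to a small decision function over the mapping orientations of the two mates. The sketch below is a minimal illustration of that logic; the boolean-flag interface is an assumption for clarity, since a real pipeline would read these orientations from SAM/BAM flags.

```python
# Minimal sketch of the strand-assignment rules described above for
# strand-specific paired-end reads (dUTP protocol: read1 matches the
# reverse strand of the transcript, read2 the forward strand).

def fragment_strand(read1_on_forward, read2_on_forward):
    """Return the genome strand the fragment's transcript comes from."""
    if (not read1_on_forward) and read2_on_forward:
        return "+"   # transcription from the forward genome strand
    if read1_on_forward and (not read2_on_forward):
        return "-"   # transcription from the reverse genome strand
    return None      # inconsistent pair, discarded

print(fragment_strand(read1_on_forward=False, read2_on_forward=True))   # +
print(fragment_strand(read1_on_forward=True,  read2_on_forward=False))  # -
print(fragment_strand(read1_on_forward=True,  read2_on_forward=True))   # None
```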
Read Mapping and Normalization of Gene Expression
The reference genome (A. flavus NRRL3357) was downloaded from GenBank (accession no. AAIH00000000.2) and the gene annotation data were downloaded from the AspGD website (http://www.aspergillusgenome.org/download/sequence/A_flavus_NRRL_3357/current/). After removing reads containing sequencing adapters and low-quality reads (reads containing >9 Ns), the remaining 90 bp clean reads were aligned to the A. flavus reference genome and annotated genes using SOAP2 software [50], allowing up to five base mismatches for genome alignment and three base mismatches for gene alignment. Reads that failed to map were first trimmed by 1 base from the 5′ end. If there was still no match, these reads were trimmed progressively by two bases at a time from the 3′ end and mapped to the genome until a match was found (unless the read was trimmed to <48 bases for genome alignment or <28 bases for gene alignment). The insert between PE reads was set to 1 bp-10 kb, allowing them to span introns or intergenic regions of various sizes in the genome. When PE reads were mapped to non-redundant genes, the insert was set to 1 bp-1 kb. The coverage per base in the genome and the read counts covering each transcript were calculated from the read mapping.
We used the popular RPKM method to normalize transcript levels, expressed as the number of reads per kilobase of exon region per million mapped reads (RPKM) [51,52]. The cutoff value for gene transcriptional activity was determined on the basis of the 95% confidence interval (CI) of all RPKM values. GO annotation of the A. flavus genes was done with Blast2GO (version 2.6.3) software [53] and visualized with WEGO software [54].
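The RPKM definition above translates directly into a one-line formula: raw read count scaled by exon length (in kilobases) and by the library size (in millions of mapped reads). The sketch below illustrates it; the counts and lengths are invented placeholders.

```python
# Minimal sketch of RPKM normalization as defined above:
# reads per kilobase of exon region per million mapped reads.

def rpkm(read_count, exon_length_bp, total_mapped_reads):
    return read_count / (exon_length_bp / 1e3) / (total_mapped_reads / 1e6)

total_mapped = 26_900_000  # about 26.9 M reads, as for Aflavus_CA43_M
print(rpkm(read_count=5400, exon_length_bp=1800,
           total_mapped_reads=total_mapped))  # illustrative gene
```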
UTR and Upstream Open Reading Frame Analysis
UTRs were defined as regions flanking a gene coding sequence, with contiguous expression of each base supported by at least two uniquely mapped reads. Transcripts whose ends overlapped with each other were discarded. The length of 5′- or 3′-UTRs was limited to a maximum of 1000 bp. Upstream ORFs (uORFs) were searched for in the 5′-UTRs of genes. The length of uORFs was set to 9-150 bp, and the distance between a uORF and the gene start codon had to be <500 bp.
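The uORF criteria above (an ORF of 9-150 bp starting less than 500 bp upstream of the annotated start codon) can be expressed as a simple scan over the 5′-UTR sequence. The sketch below is a simplified single-frame illustration, not the study's pipeline: it takes each ATG, reads codons until a stop, and applies the length and distance filters; the example UTR sequence is invented.

```python
# Minimal sketch of the uORF search described above: ORFs of 9-150 bp
# in the 5'-UTR, starting < 500 bp upstream of the gene start codon.
import re

def find_uorfs(utr5, min_len=9, max_len=150, max_dist_to_start=500):
    uorfs = []
    for m in re.finditer(r"ATG", utr5):
        for end in range(m.start() + 3, len(utr5) - 2, 3):
            codon = utr5[end:end + 3]
            if codon in ("TAA", "TAG", "TGA"):           # in-frame stop found
                length = end + 3 - m.start()             # uORF length in bp
                dist = len(utr5) - m.start()             # distance to gene start codon
                if min_len <= length <= max_len and dist < max_dist_to_start:
                    uorfs.append((m.start(), length))
                break
    return uorfs

print(find_uorfs("GCATGAAACCCTGAGGC" + "C" * 20))  # one 12 bp uORF at position 2
```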
Detection of Novel Transcripts
Novel transcriptionally active regions (TARs) were identified in the intergenic regions as stretches >35 bp with contiguous expression of each base supported by at least two uniquely mapped reads [43]. A novel TAR had to be separated by at least 200 bp from the upstream or downstream region of a known transcript. A novel transcript is composed of novel TARs connected by at least one paired read. Novel transcripts of length >150 bp were selected for further analysis.
Alternative Splicing Events in A. flavus
Potential junction sites were detected using the TopHat program [57]. Reads used to identify junction sites could not map to the genome directly but could be mapped after several terminal bases were trimmed off. An acknowledged junction site had to be supported by at least two mapped reads with different mapping positions within the junction site region, with a minimum of 5 bp mapping on each side of the junction site and a tolerance of 2 bp mismatch [43]. A novel junction site had to be supported by at least four such reads. Using the method described by Wang et al [58], seven types of AS events were analyzed in A. flavus CA43: skipped exons (SE), retained introns (RI), alternative 5′-splice sites (A5SS), alternative 3′-splice sites (A3SS), mutually exclusive exons (MXE), alternative first exons (AFE) and alternative last exons (ALE).
Natural Antisense Transcript
Putative antisense transcripts were detected for A. flavus annotated genes with RPKM >33. For a given annotated transcript, the corresponding antisense transcript was defined as a complementary overlapping region on the opposite strand, with contiguous expression of each base supported by at least one uniquely mapped read and a length >200 bp. To obtain reliable results, the detected antisense transcripts had to have an average coverage depth >2. To eliminate the influence of intronic regions, antisense transcripts were analyzed only in transcribed exons and UTRs.
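The NAT criteria above amount to an interval-overlap test plus a depth threshold. The sketch below illustrates that test in isolation; the coordinates and depth are invented placeholders, and a full implementation would additionally restrict the overlap to exons and UTRs as described.

```python
# Minimal sketch of the antisense-transcript test described above:
# an opposite-strand region is a NAT candidate if it overlaps the
# sense transcript by > 200 bp with average coverage depth > 2.

def is_nat_candidate(sense, antisense, antisense_mean_depth,
                     min_overlap=200, min_depth=2):
    start = max(sense[0], antisense[0])
    end = min(sense[1], antisense[1])
    overlap = max(0, end - start)            # overlap length in bp
    return overlap > min_overlap and antisense_mean_depth > min_depth

sense_tx = (10_000, 12_400)      # forward-strand gene (start, end), illustrative
antisense_tx = (11_900, 12_600)  # reverse-strand transcription block, illustrative
print(is_nat_candidate(sense_tx, antisense_tx, antisense_mean_depth=4.3))  # True
```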
Analysis of Differentially Expressed Genes between A. flavus Mycelia and Sclerotia States
The DEGseq R package was used to analyze differentially expressed genes (DEGs) from the read counts covering each gene, on the basis of the Random Sampling model [59]. Functional enrichment analysis of selected DEGs was performed with Blast2GO software using Fisher's exact test with a robust false discovery rate (FDR) correction of 0.05.
Sequencing Summary
To obtain an elaborate transcriptome profile of A. flavus, the isolated mRNA of A. flavus CA43 mycelia (Aflavus_CA43_M) and sclerotia (Aflavus_CA43_S) was subjected to high-throughput sequencing using the strand-specific paired-end strategy. In total, 26.9 and 27.7 million 90 bp Illumina reads were obtained for Aflavus_CA43_M and Aflavus_CA43_S, reaching an average genome coverage depth of 60-fold and 62-fold, respectively. About 85% of all reads mapped uniquely to the A. flavus genome with a tolerance of 5 bp mismatch (84.48% for Aflavus_CA43_M and 83.91% for Aflavus_CA43_S; Fig. 1A and Table S1). Approximately 55% of all reads mapped uniquely to annotated A. flavus genes (Table S1), a much greater proportion than in earlier A. flavus transcriptome data (23-29%) [4]. The roughly 30% of uniquely mapped reads that mapped to the genome but not to annotated genes might represent novel transcripts hidden in the intergenic regions. The massive number of non-redundant ESTs in the A. flavus genome (Table S2) suggests the presence of a large number of novel transcripts. For all A. flavus transcripts, 38.34 million reads mapped to the sense strand but only 0.65 million reads (1.7%) mapped to the opposite strand, confirming that our RNA-seq library is strand-specific. The reads mapped to the opposite strand might represent natural antisense transcripts, which occur normally in eukaryotes. Moreover, the low ratio of reads mapped to introns suggests that our RNA-seq data depict the transcription of A. flavus genomic loci precisely.
Extensive Depiction of A. flavus Transcriptome by ssRNA-seq Data
The global transcriptional profile of the A. flavus mycelia culture (Aflavus_CA43_M) is shown in Fig. S1. About 67.55% of the A. flavus genome (39.91 Mb) was covered by ssRNA-seq reads, quantifying the transcriptional abundance of 9871 A. flavus genes (73% of 13,487 genes) with a 95% CI (Table S2). In contrast, the EST-based annotation assembled only 3749 tentative consensus sequences [2]. The genome coverage was comparable to that of A. oryzae RIB40 (76.66%) [43], and >50% of the expressed genes had a sequencing coverage of >90% (Fig. 1C). GO annotation showed that 7652 of the 9871 transcribed genes were assigned to GO categories (Table S2), providing rich information for the investigation of gene function. The expression levels of exons were much higher than those of introns and intergenic regions (Fig. 1B), and only very few introns (1621 of the total 27,783) were covered by ssRNA-seq reads. Therefore, the ssRNA-seq-based annotation provides a more extensive depiction of the A. flavus transcriptome.
We analyzed the transcriptional activity of A. flavus transcription factors (TFs) to address the extensive transcription of the A. flavus CA43 genome. Fungal TF information was downloaded from the web (http://ftfd.snu.ac.kr/tf.php?a=summary0&o=o). The A. flavus genome contains 667 TFs, of which 617 were expressed as RNA-seq reads, while A. oryzae contains 603 TFs, of which 571 were transcribed (Table S3; the data for A. oryzae TFs were obtained from our earlier work [43]). GO analysis showed that A. flavus CA43 TFs were enriched mainly in the GO categories of binding, catalytic activity, biological regulation, nucleotide binding, cellular process and metabolic process (Fig. 2). In particular, A. flavus CA43 contains transcribed TFs involved in secondary metabolic process and sporulation.
Improved Annotation of A. flavus Gene Models
Besides the reads mapped to the 13,487 annotated A. flavus genes, 30% of the uniquely mapped reads were located in intergenic regions. Extensive mapping and clustering of these intergenic reads revealed 939 and 1196 previously unrecognized novel transcripts for the mycelia and sclerotia samples, respectively (Table S4). The expression levels of these novel transcripts were almost as high as those of exons (Fig. 1B), and 62.62% of the identified novel transcripts were longer than 500 bp (Table S4), a much higher ratio than in A. oryzae RIB40 [43]. Most of the identified novel transcripts were non-coding RNAs (ncRNAs), and only 25 (1.17%) novel transcripts were predicted to be potential protein-coding genes with an ORF length of ≥150 bp. The vast majority of the transcribed ncRNAs might act as NATs to regulate gene expression. One of the novel transcripts identified in the mycelia culture (TU134) is illustrated in Fig. 3A, with a length of 848 bp and an average sequencing depth of 11.61.
The transcribed genome sequence assembled from our ssRNA-seq reads could be used to define or extend the untranslated regions (UTRs) of genes, which have important roles in post-transcriptional regulation [62]. The 5′-UTRs of 5994 transcripts and the 3′-UTRs of 6407 transcripts were determined in this study (Table S5). Most (88.87%) of these identified UTRs were <500 bp, with a median length of 167 bp for 5′-UTRs and 206 bp for 3′-UTRs (Fig. 3B). The median UTR length in A. flavus CA43 was longer than in A. oryzae RIB40 (107 bp for 5′-UTRs and 156 bp for 3′-UTRs) [43] and Schizosaccharomyces pombe (152 bp for 5′-UTRs and 169 bp for 3′-UTRs) [63]. According to the study by Lackner et al, the most stable transcripts have short 5′-UTRs and the least stable transcripts have long 5′-UTRs [64]; A. flavus CA43 may therefore have less stable transcripts and a much higher RNA turnover rate than A. oryzae and S. pombe. It has been reported that ORFs in 5′-UTRs upstream of annotated start codons (uORFs) might constitute important regulatory factors for gene expression [65,66]. We predicted 2600 uORFs in A. flavus annotated genes (19.28%; Table S5), a much higher proportion than in A. oryzae RIB40 (11.14%) [43] and S. cerevisiae (6%) [67]. An example of a uORF and the corresponding 5′-UTR is shown in Fig. 3C. Genes containing uORFs are enriched specifically for the GO terms of cellular protein modification process, protein serine/threonine kinase activity, Ras GTPase activator activity and positive regulation of Ras GTPase activity (Table S5).
AS events contribute to producing multiple proteins from genes with two or more exons in fungi. This can enrich the proteomic diversity of fungi and provide the ability to survive in hazardous environments. AS events were analyzed in 9545 A. flavus multi-exon genes (70.77% of all A. flavus genes) using the method described by Wang et al [58]. A total of 1220 AS events took place in 941 A. flavus genes (Table S6 and Fig. 4A), including retained introns (RIs), skipped exons (SEs), alternative 5′-splice sites (A5SSs) and alternative 3′-splice sites (A3SSs). About 12.78% of the A. flavus multi-exon genes produced AS isoforms, similar to A. oryzae RIB40 (11.10%) [43] and many more than Pichia pastoris (4.78%) [68]. This is in agreement with the conclusion that the frequency of AS events is proportional to the ratio of multi-exon genes in a genome (76.98% for A. oryzae and 11.91% for P. pastoris) [69]. AS events might alter the amino acid composition and the structure of the target protein. For example, AFL2G_07666 (Fig. 4A) encodes sphingosine kinase (SphK), which participates in sphingosine 1-phosphate (S1P) metabolism. Together with S1P phosphatase (S1PP) and S1P lyase (SPL), AFL2G_07666 controls the intracellular S1P level and has important roles in the regulation of cell migration, survival, differentiation, angiogenesis and development through an extracellular signaling pathway mediated by a family of specific G protein-coupled receptors. We performed homology modeling using the SWISS-MODEL Workspace [70,71] and constructed 3D models of the AFL2G_07666 transcript and its AS variant (Fig. 4B). The skipped exon alters the 3D structure of AFL2G_07666 and might influence its biological function. To investigate the mechanism of AS events, we calculated the ratio of the number of RIs to cassette exons (CEs, including SE, AFE, ALE and MXE). The high RI/CE ratio (34.75) indicates that A. flavus might recognize splicing sites and produce AS events mainly by the intron definition (ID) mechanism, according to the study by McGuire et al [69].
Natural Antisense Transcript
The ''G-value paradox'', that the number of protein-coding genes does not correlate with the complexity of an organism, suggests that so-called junk DNA in a genome contains rich regulatory information exerted via transcribed natural antisense transcripts (NATs) [23,28]. So far, little is known about the role of NATs in Aspergillus spp. It has been suggested that NATs might have an important role in the differential expression of genes involved in secondary metabolism in A. oryzae [42]. According to an earlier study, the most prominent NAT type is the non-protein-coding antisense RNA partner of a protein-coding gene [26]. We searched for NATs in our data among all A. flavus protein-coding genes with RPKM >33 using stringent criteria. In all, 1124 and 839 NATs were identified in the AflavusCA43_M and AflavusCA43_S samples, respectively (Table S7). This is many more than the EST-based estimate of A. flavus NATs (352, 2.8%) and than Cryptococcus neoformans NATs (53, 0.8%) [25]. The number of RNA-seq-based A. flavus NATs is in the same range as in S. cerevisiae (1103, 16.7%) [46]; however, this frequency is much lower than in Candida albicans (2458, 40%) and M. oryzae (4215, 32.8%) [28]. More NATs were located inside coding regions (768, 39.14%) than in UTR regions (466, 23.76%; Fig. 5A), suggesting that NATs in A. flavus might regulate gene expression mainly at the post-transcriptional level, consistent with the conclusion reported by Donaldson et al [28]. The inside-biased NAT distribution in A. flavus, however, differs from the 3′-biased antisense transcription in A. nidulans [72] and from the mammalian NAT distribution, where NATs are usually enriched in the 250 bp upstream and 1.5 kb downstream regions [26]. The transcriptional level of A. flavus genes with NATs (average RPKM 253.59) was much higher than that of genes without NATs (average RPKM 54.50; Fig. 5B). This is consistent with the earlier study by Katayama et al [23], strongly refuting the simple hypothesis that NATs are just negative regulatory elements and artifacts of leaky bidirectional transcription [26,28,73].
Among the 352 EST-based NATs in A. flavus, only 19 were verified by our RNA-seq data (Table S7). One of the EST-based NATs is that of the aflatoxin biosynthetic regulator gene (aflR, AFL2G_07224), which was identified in our study when the RPKM cutoff for NAT analysis was set at 5 (Table S7). The existence of a NAT of the transcription factor aflR suggests the interaction between a NAT and its sense transcript can occur in the nucleus, consistent with conclusions from studies in mammals [23,26]. Another example is SdeA (AFL2G_00446), which is involved in the regulation of morphology under temperature change and in the production of multicellular developmental structures (conidiophores, cleistothecia and sclerotia) [25]. The changes in the expression of SdeA and its NAT between the mycelia and sclerotia states suggest that the NAT participates in the regulation of SdeA and thus of the biological processes controlled by this gene (Fig. 5C).
GO enrichment analysis demonstrated that genes with NATs were enriched specifically in the GO terms of protein complex, protein binding, RNA binding, translation, ribosome, intracellular protein transport, cellular amino acid metabolic process, vesicle-mediated transport, hexose catabolic process and biological regulation (FDR-adjusted p < 0.05; Fig. S3 and Table S7). Thus, these NAT-containing genes are closely related to protein expression, secretion and energy production in A. flavus. For example, the A. flavus Set1 gene (AFL2G_02936) has a NAT located inside its coding sequence (Table S7). Given the report that the corresponding NAT prevents Set1-mediated transcription initiation in S. cerevisiae [28], this NAT might influence Set1-mediated transcription initiation in A. flavus. The discussion of NAT function in fungi has focused mainly on NAT-mediated alteration of physiological processes in response to environmental nutrient starvation [28] and nitrogen metabolism [72]. Our analysis is the first global investigation of NAT function in A. flavus.
Sclerotia Development and Reproduction
Sclerotia are considered to derive from cleistothecia, the sexual reproductive organ in Aspergillus spp. [6]. Genes involved in sexual reproduction and in the balance between sexuality and asexuality of Aspergillus spp. are given in Table S8 [74][75][76]. To identify genes involved in A. flavus sclerotia development, differentially expressed genes (DEGs) between the A. flavus mycelia and sclerotia cultures were detected. Of the 13,487 A. flavus genes, 9871 (73.19%) were transcribed under both conditions. A total of 661 genes were expressed specifically in the mycelia state and 343 genes were specific to the sclerotia state. These genes might represent factors critical for the physiological development of A. flavus. There were 7609 DEGs (56.42%) between the A. flavus mycelia and sclerotia states, with 1821 up-regulated genes and 5788 down-regulated genes in the sclerotia state (p < 0.001; Table S9). The DEGs between the A. flavus mycelia and sclerotia states are much more abundant than those between A. flavus cultures grown at different temperatures (2709) [25], indicating that different developmental stages are accompanied by a high level of diversity in gene expression.
To identify genes closely related to the developmental stages, DEGs with a change >2-fold were selected for further analysis: 1149 up-regulated genes and 4492 down-regulated genes were detected for the sclerotia state. The WEGO illustration showed that the GO terms of reproductive cellular process, reproductive process, sexual reproduction and sporulation contained more up-regulated genes in the sclerotia state than in the mycelia state (Fig. 6A), indicating that sclerotia are closely related to A. flavus reproduction rather than being only a sexual vestige. Additionally, the abundance of residual mating process genes in A. flavus suggests it might be capable of sexual development.
The DEGs related to reproduction are shown in Fig. 6B, with 14 up-regulated genes and 37 down-regulated genes in the sclerotia state. The zinc finger protein-encoding gene brlA (AFL2G_00999) is the primary activator of asexual conidiation [77,78]. The down-regulation of brlA in the sclerotia state (Fig. 6D), together with that of a series of conidiation-related genes (abaA, wetA, flbB, flbC and flbE; Fig. 6B), suggests conidial development is repressed during sclerotia development. These findings might contribute to understanding the sexual and asexual balance of A. flavus.
Secondary Metabolism
The secondary metabolic (SM) pathways in fungi involve many genes encoding polyketide synthetases (PKS), fatty acid synthases, dehydrogenases, reductases, oxidoreductases, epoxide hydrolases, cytochrome P450 monooxygenases and methyltransferases [1]. However, it is difficult to determine whether such genes are involved in SM pathways, because vast numbers of genes are categorized as belonging to the gene families mentioned above. On the basis of the web tool SMURF [9], 55 putative SM pathways were identified in A. flavus, including 22 PKS and 27 non-ribosomal peptide synthetase (NRPS) pathways. Studying secondary metabolism in A. flavus is important because secondary metabolism has often been reported to be related to sporulation and sclerotia development [2,76]. In our RNA-seq data, the backbone genes of 38 SM pathways were transcribed in both the mycelia and sclerotia cultures (Table S10), including the aflatoxin (AF) biosynthetic pathway (cluster #54).
The AF biosynthesis pathway is the best-studied SM pathway in A. flavus, comprising at least 23 enzymatic reactions and 29 genes in a 75 kb cluster on chromosome III (Table S10) [79]. Only eight transcribed genes of the AF pathway had been scored by microarray technology [1]; our RNA-seq data provide precise information about AF biosynthesis. The transcription factor aflR (AFL2G_07224) was transcribed under both the mycelia and sclerotia conditions, and the NAT of aflR was transcribed under the sclerotia condition, suggesting AF biosynthesis might be down-regulated by NAT-mediated RNAi during sclerotia development (Table S10). This is consistent with the fact that the transcripts of all AF pathway genes were detected in the mycelia state, whereas three genes (aflD, aflCa and aflP) were not detected in the sclerotia state (Table S10). Additionally, the change in expression of the global SM transcription factor veA (AFL2G_07468) during sclerotia development suggests it has a role in linking sclerotia development with secondary metabolism (Fig. 6C and Table S8) [80]. These findings are in agreement with the transcriptional profile of AF biosynthesis (Fig. 7A) and the down-regulation of most AF biosynthetic structural genes in the sclerotia state (Fig. 7B).
Genes participating in the AF metabolic pathway include biosynthesis genes, signal transduction genes, regulatory genes and genes involved in the stress response [2]. Although most of the AF structural genes are in a single cluster, this cluster might be regulated by genes spread throughout the A. flavus genome [81]. Among the AF-related genes given in Table S10, 11 were down-regulated in the sclerotia state and two were up-regulated (Fig. 7B). According to our analysis of sclerotia development (Table S8 and Fig. 6B), these AF genes were also related to sclerotia development, suggesting they link sclerotia development and aflatoxin biosynthesis. Therefore, high-throughput RNA-seq data bring new insights into A. flavus mycotoxin production and other secondary metabolism pathways in this fungus. AF contamination of crops is a heavy economic burden on farmers; it is important, therefore, that the target genes identified from the RNA-seq data can be used in the biocontrol of aflatoxigenic strains by genetic manipulation.
Conclusions
The A. flavus transcriptome has been studied intensively by several research groups in recent years. Despite these efforts, however, the mechanisms underlying the regulation of A. flavus physiology remain unknown. Our data profiled the A. flavus transcriptome on a genomic scale and annotated transcript structures precisely. UTR annotation revealed that A. flavus might have a much higher RNA turnover rate than A. oryzae or S. pombe. Given the high RI/CE ratio (34.75), A. flavus might recognize splicing sites and produce AS events mainly by the intron definition (ID) mechanism. Among the novel transcripts we identified, the vast majority of transcribed ncRNAs might act as NATs to regulate gene expression at the post-transcriptional level. The transcriptional activity of A. flavus genes with NATs was much higher than that of genes without NATs, suggesting NATs are true transcripts rather than artifacts of leaky bidirectional transcription. Our analysis is the first global investigation of NAT function in A. flavus. As for DEGs, it is quite likely that the 14 up-regulated and 37 down-regulated reproduction-related genes in the sclerotia state link A. flavus reproduction and sclerotia development. In our ssRNA-seq data, the backbone genes of 38 SM pathways were transcribed in both the mycelia and sclerotia states, and we identified genes linking sclerotia development and aflatoxin biosynthesis. Our data could be used to develop strategies to control aflatoxin synthesis by aflatoxigenic strains. Therefore, the ssRNA-seq data provided in this study expand our understanding of A. flavus.
"year": 2014,
"sha1": "1f98c39d9de6dc79c168a99e00eb19ba601b69a7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0097814&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f98c39d9de6dc79c168a99e00eb19ba601b69a7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236548712 | pes2o/s2orc | v3-fos-license | Farmers’ Participation in Operational Groups to Foster Innovation in the Agricultural Sector: An Italian Case Study
Recently, the interpretation of the innovation process has changed significantly. The linear model has evolved into a dynamic, ongoing participatory approach in which cooperation oriented to generating co-ownership is the essence of co-producing knowledge among multiple actors. Farmers' direct participation in the process is widely accepted, since they contribute first-hand information, perceptions, field experiences, and feedback that are essential for the design and implementation of a project. The European Union encourages their participation through the European Rural Development Policy, which promotes competitiveness and sustainability in the agriculture and forestry sectors, building bridges among heterogeneous stakeholders that complement each other to find an innovative solution to a given problem. Thus far, despite the importance of participation, few details have been provided about producers' contributions within the process. Consequently, this paper explores the modus operandi of an Italian Operational Group to gain insights into farmers' participation and to identify the factors that could influence and foster the interactive innovation process. The results, based on participatory observation, key informant interviews, and theoretical reflection, revealed that farmers are active players in the design and implementation phases. Yet, their participation is not constant throughout the entire process. Empowering them to find solutions with different players is a complex challenge, as it requires motivation, commitment, trust, and open communication among different actors.
Introduction
Innovation is key to making agriculture provide healthy, safe, and nutritious food for a growing population while also preserving land, water, and biodiversity [1]. However, evidence from past experiences has shown that traditional innovation approaches, in which ideas are developed and tested by researchers and then implemented by farmers [2], did not promote producers' empowerment [3], were not built on local farming experiences [4], and did not deliver innovations tailored to the needs of farmers and society [5]. While traditional approaches adopted a 'linear' vision of innovation [6], 'circular' models have proposed new interaction patterns [7] based on participation and knowledge-sharing [8]. Hence, new methods have progressively emerged to address complex problems, and participatory approaches [9], in which innovation is co-produced through interactions among different stakeholders who create, adapt, and diffuse knowledge [10], have been proposed. This open and inclusive perspective requires the participation of a wide range of players [11] who learn together and stimulate innovation and knowledge in a collaborative way [12].
In the European Union context, the participatory approach attempts to consider farmers' objectives and constraints throughout the entire project life, addressing problems that are relevant to them and their circumstances [13]. This approach should enable learning for all involved. Although participation was originally focused on the so-called beneficiaries (marginalized groups that aim to improve their quality of life) [33], it has become part of the development jargon (a trendy buzzword that attracts) and the center of contemporary development discourse [34].
We regard participation as the involvement of individuals interested in a particular intervention to respond to their felt needs [31]. From an ethical point of view, participation gives individuals the possibility to express their views on a range of social, economic, political, or environmental issues [35]. Moreover, it is also recognized as a human right (giving people a meaningful role in decisions that affect them) [36]. Participation is a personal but also a social action (taking part in a process through action and interaction) [37]. Wenger (1998) used this term to describe the social experience of membership in communities and active involvement in social enterprises [38].
It has been stated that participation is crucial to a project's success and that it can transform development, empower the poor, etc. [26]. More recent work (2017) suggests that it can be constrained by lack of knowledge, communication, motivation, confidence, and time [35]. The concept of participation and its practical implementation have also been criticized, for example, by Cooke and Kothari (2001) and Hickey and Mohan (2005), who consider participation itself a form of power [39]. Others, such as Kesby (2007), have emphasized its potential to empower participants and change their lives through the production, exchange, and use of knowledge, which, according to Cristóvão et al. (2009), allows them to learn through shared experiences with actors from different backgrounds, brings farmers into the process, and strengthens their role in the innovation process [21]. Other qualitative studies suggest that, when adequately implemented, participatory projects can lead to innovation, empower participants, encourage learning, raise networks' awareness, strengthen social capital [40], build trust, co-create knowledge, and generate social awareness [36].
Many studies argue that, even within a participatory innovation process, farmers tend to be mere adopters of agricultural innovations: they participate more in implementing than in shaping the innovation process [41]. Cullen et al. (2014) showed that farmers participate in innovative initiatives as implementers, but not as designers, and that their participation, compared to other actors, is weak and unstable [42]. Oladele and Wakatsuki (2011), in turn, revealed that farmers may participate as testers of innovations [18]. Adekunle et al. (2012) stated that while farmers are recognized as sources of innovation, their participation and interaction with other actors is still often limited [43]. Other studies, however, provide evidence of farmers as promoters of innovative initiatives [44], and Spielman et al. (2009) have shown that farmers can be equally weighted sources of knowledge among diverse interacting players.
The present study conceptualizes participation as a complex interplay among different players in innovation projects in which producers are a fundamental component [32], as they contribute to understanding the complexity of farm-level constraints and to finding solutions that ensure their limitations are addressed [36]. Although empirical evidence has revealed that end-user participation does not always lead to innovation [6], and producers' cooperation depends on their level of engagement in the process, their social embeddedness, and their levels of social capital [40], the remaining challenge is to identify when farmers' inclusion would be beneficial [45], and when they are willing and interested in being part of a specific initiative [21].
Factors That Influence the Participatory Innovation Process
When working with farmers, and in general with a heterogeneous group of people, some specific factors can facilitate the innovation process (Figure 1) [46]. Although these factors are not enough to ensure a project's success, they might make the process more fluent and co-creative [33]. A literature review has allowed us to identify some of the factors that can stimulate and facilitate participation [47]. While the results of a participatory process are relatively well defined [35], the elements that contribute to it have received less attention. We are aware that factors other than the ones visible in Figure 1 can also influence participation, such as the size of the farm, the crop type, growers' age, income, etc. However, we have focused on those subjective factors, related to personal attitudes, that ceteris paribus influence participation. The choice reflects the fact that these types of factors were cited by the interviewed actors as the key determinants. In addition, while each factor encourages participation differently, they may also be a pre-condition for, or a result of, other factors. For instance, trust and interaction are drivers for networking. However, trust can also be a driver for interaction or knowledge co-creation. That is to say, one factor can be a driver for another, in a multi-directional relational pattern, while retaining its specificity. This complexity is simplified in Figure 1, where interactions among the six factors are not explicitly represented.
Motivation
Motivation originates from needs and desires [48] but can also be driven by curiosity, an urge to know, learn, and explore in search of answers [49]. Within a participatory process, to encourage motivation, group members must know and realize that their inputs are considered in the project activities, align their personal goals with those of the initiative, set clear and meaningful objectives, have sufficient resources (economic and social), and have helpful partners to do useful work [50].
Empirical examples prove that motivation is a decisive factor in the efficiency and performance of different agents within a project [49] because it allows them to accept changes, adapt to them, and get to work [8]. Greiners (2009) stated that motivation relates to personal or internal characteristics such as skills, abilities, emotions, and aspirations, as well as to external ones (opportunities, available resources) [51]. Therefore, within a group, not all members are willing to collaborate at the same level. Willingness depends on motivation, usually until some goal is reached [48]. The literature also demonstrates the influence of cultural norms, identity, social contexts, values, goals, and worldviews [52] or personal philosophy as motivational devices that affect people's attitudes and behaviors towards participation to obtain personal or collective goals [51].
Commitment
Commitment has received much attention in the social sciences [53]. It can be described as acting towards fulfilling mutual, self-imposed, or explicitly stated obligations [54]. It is influenced by attitudes, identification with the group [55] and the project objectives, as well as personal values, loyalty, and expected benefits and costs [56]. The literature provides evidence that individuals with higher levels of commitment to participate in a project are more likely to contribute towards the achievement of its objectives [57]. The intensity of members' participation depends on their commitment level-the higher it is, the greater the contribution towards achieving shared goals-which in turn impacts the project's success [55].
Interaction
As its name suggests, interaction is a dynamic sequence of social actions exercised reciprocally between two or more individuals [58] who modify their actions and reactions based on the actions of the person with whom they interact [59]. There is no participation without interaction [38]. The literature indicates that interacting with players from outside the agricultural sector prevents isolation [9], allows farmers to be aware of what is happening in their fields, and helps them build up their personal networks [60]. However, interaction requires time, especially when it comes to reaching cohesion, common understanding, and coordination [56].
Communication
Communication is the conscious action of exchanging information or opinions between two or more people to create understanding or convey a certain idea [47]. Within a participatory process, the communication flow should be dynamic (multi-directional), with multiple interacting sources of knowledge [61]. As an interactive project involves numerous goals, values, and interests from a group of actors [17], it needs organized communication in which players know about each other's opinions, preferences, priorities, and concerns [62]. Being communicative and using understandable language are considered crucial tools to facilitate the entire participatory innovation process [24], which requires a continual open and honest dialogue to share information, perceptions, experiences, and opinions and to explore and generate new knowledge among the parties involved [63]. Written evidence shows that straightforward and truthful communication is greatly valued by members of a project [64].
Networks
A network is a space where multiple stakeholders can interact and participate [65], get to know and listen to others' perspectives, be aware of new trends, and maintain a constructive long-term position to cooperate rather than compete [57]. The literature suggests that networks encourage learning, since interacting and participating with diverse agents permits working in an environment where continuous feedback loops are produced [66] over an extended period. Pittaway et al. (2004) stated that networks have become an important external source of innovation as they can expand access to knowledge and other resources [67]. Hence, it is assumed that diverse actors (within and outside the agricultural sector) could benefit from being part of them. Notwithstanding, the literature shows that farmers' participation in networks is often limited [20]. Small and medium producers experience difficulties in networking because they rarely interact, for instance, with universities or research organizations [68]. Based on this, being part of a participatory process could open doors for farmers to learn about the importance of building [53], expanding, or strengthening networks, and increase their willingness to share information and experiences with diverse agents and learn from them [69].
Trust
Trust is the belief a person has that another individual will be able and willing to act appropriately in a given situation [47]. It implies a suspension, at least temporarily, of uncertainty regarding others' actions [70]. In this sense, trust can be strengthened or weakened according to the actions of other people [54]. Luhmann (2001) defined it as the willingness to accept some risk and vulnerability towards others (relying on other people to display some competence or carry out certain tasks) [71]. Thus, trust simplifies social relationships, as it flourishes when there is a feeling of belonging to a group [72] and a space to share thoughts, experiences, and feelings [73].
As some authors argue, trust increases the opportunities for cooperating with others and for benefiting from that collaboration [74]. Furthermore, it can remove the incentive to check up on other people's activities, which makes participation less complicated (i.e., it makes cooperation possible, rather than merely easier) [75]. While the process of building trust is often slow and difficult, as it is an emotional as well as a logical act, being part of a participatory project can make its members eager to constitute a team with a shared purpose and increase or improve their willingness to trust each other [22].
A review of the literature suggests that a participatory innovation process facilitates relationship-building among diverse partners [76] that can lead to a continued exchange and cooperation [73]. Trust depends on the context, the group, the settled activities, etc. [77], and since each partnership is unique, its members influence the outcome of each initiative in a different way [72].
Outcomes of Participatory Innovation Processes
If we look at innovation as the obvious final outcome of an innovation process, learning should be considered a key outcome as well. Learning is the act of acquiring knowledge (more and more considered a lifelong process), which aids in obtaining critical thinking skills [78]. It explores different and challenging horizons where people discover new ways to perform tasks [79]. According to Neels et al. (2017), it is important that members of a group are open to fundamental changes in values, attitudes, and behavior so that learning becomes a continuous process among all actors [80]. Contrary to conventional learning theory, which conceives of learning as an individual, separable, hierarchical, and abstract undertaking [81], Lave and Wenger (1991) demonstrated that effective learning takes place through participation in social groups, as they encourage it through belonging, becoming, experiencing, and doing [82]. Nonaka and Takeuchi (2005) defined knowledge co-creation as the generation of new knowledge from interaction between different parties in a joint task [83], which can result in mutual learning [79]. Participatory projects are considered a relevant means for obtaining access to knowledge and other important resources for innovation [58]. As Moschitz et al. (2015) argued, interacting with different actors, interchanging information, collaborating within networks, and promoting peer-to-peer knowledge exchange is an opportunity to acquire ideas and knowledge that can contribute to developing solutions to certain problems [20].
Since innovation rests on learning as well as on discovery [19], a successful participatory process can lead to it [10]. Innovation may be technological but also organizational or social [84], and it can be based on new but also on traditional patterns [85]. The literature claims that innovation is made possible by a combination of knowledge from different sources [56]. While putting together actors with diverse skills also implies risks, there is no innovation without it [61]. According to Pittaway et al. (2004), successful innovation requires continuous integration of new knowledge and knowledge exchange [20], and it depends on some characteristics related to farms, environment, and producers (access to information, capabilities, preferences) [86].
Methodology
The methodology follows the objective of our research, which is to gain insights into the determinants of farmers' participation and to identify the factors that influence and foster the interactive innovation process. The case study method was selected as a suitable tool for the collection and presentation of detailed data. First, desk research was carried out to acquire information about the initiative. Then, a map of the participants was developed to identify its main players, their roles, and core activities. After that, a standard questionnaire was prepared with a set of predetermined, open-ended questions that sought to identify the OG's organization, including the decision-making and learning processes, with particular attention to the participatory approach followed during the two-year life of the project. Later, with the contribution of the innovation broker, the OG's partners were contacted by email and asked for an interview. The interviews followed a structured questionnaire. The scope was to gather information about partners' selection, their characteristics, their contributions, their motivations to be part of the initiative, their communication procedures and channels with external stakeholders, etc. (Appendix A).
The interviews were divided into two rounds. In the first one, three of the seven actors within the partnership who were willing to be interviewed were approached. The selection criteria focused on their sector of provenance, their contribution to the project, and their availability of time. The coordinator, the researcher, and the innovation broker were interviewed in the Italian Farmers Confederation-CIA facilities in Padua, under the same conditions (50-60 min, in Italian), in February 2020. Then, depending on the gathered data and to get more details about interaction, participation, and co-creation, additional actors within and outside the partnership were contacted (three farmers and the market player, plus an external stakeholder) for the second round of interviews. Once more, based on their time availability, these interviews were held on different dates during May 2020, under the same conditions, by Zoom (40-50 min).
Altogether, 13 actors were asked to participate. With 8 respondents in total, and based on the convergence of the provided information, no more interviews were conducted (Table 1). The answers of the respondents were recorded with their prior consent. Next, the data were analyzed by the research group. The responses were transcribed considering all details and pieces of information, and a different color was assigned to each respondent for identification purposes. When needed, clarifications were made through email. Once completed, responses were grouped according to their similarity or divergence, classified under categories, and then separated by theme. The classification was made according to the terms used by the interviewees, which served as the basis for manual coding to describe the participatory process followed in Farmers Lab. Then, patterns were identified to find connections and associations among the respondents' answers (Appendix B).
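The grouping and coding steps just described (transcribe, tag each excerpt with the interviewees' own terms, group by similarity, then look for patterns) can be sketched programmatically. The following is a minimal illustration only: the respondents, quotes, and code labels are invented for the example and do not reproduce the authors' actual coding scheme or data.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (respondent, quote, code). The respondent
# field stands in for the per-interviewee color-coding described above;
# codes reuse the interviewees' own terms, as in the manual coding step.
coded_excerpts = [
    ("farmer_1", "we work alone", "isolation"),
    ("farmer_2", "nobody was excluded", "interaction"),
    ("broker", "clear commands and guidelines were missing", "motivation"),
    ("coordinator", "partners reviewed the issue together", "interaction"),
]

# Group responses by shared code, then inspect the resulting themes for
# patterns and associations across respondents.
themes = defaultdict(list)
for respondent, quote, code in coded_excerpts:
    themes[code].append((respondent, quote))

for code, quotes in sorted(themes.items()):
    print(f"{code} ({len(quotes)} excerpts)")
    for respondent, quote in quotes:
        print(f"  [{respondent}] {quote}")
```

In practice, qualitative analysis software or a spreadsheet plays the same role; the point is simply that coded excerpts grouped under shared labels make cross-respondent patterns visible.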
Farmers Lab Background
Farmers Lab was born from small farms willing to address their difficulties in the fruit and vegetable sector in the province of Padua, a vibrant rural (and industrial) area with about 940,000 inhabitants, with many medium and small-sized farms. Its main agricultural products are corn, wheat, barley, tobacco, tomatoes, strawberries, green beans, apples, peaches, and grapes [87]. Short shelf life of raw materials, food loss, and low added value pushed them to start thinking about products' transformation to improve productivity and foster innovation throughout the agri-food chain, maintain wealth within the territory, increase sensitivity to the quality and seasonality of the products, and reduce the ecological footprint. The idea of having a common laboratory was born from the exchange of thoughts among small farmers (mainly producers of vegetables, apples, cherries, strawberries, sugar beets, forage, milk, grapes, and peaches) associated with the Italian Farmers Confederation (CIA)-a farmers' organization which represents a significant share of primary producers.
This OG was chosen as a case study since it follows a bottom-up approach and innovative ways of working. The project offered a promising solution for producers who were used to selling their agricultural raw products in local markets and directly to their neighbors. When sales did not go as planned and supply exceeded demand, food was used as animal feed or thrown away, resulting in an economic loss. Processing fruits and vegetables was a way to increase their added value, extend their shelf life, and facilitate efficient logistics, thus increasing farmers' revenues.
Once the idea of the project was clear, the coordinator, with the support of an innovation broker, identified partners external to the agricultural sector-based on their skills, background, and history of cooperation-to be part of Farmers Lab. At the beginning of the initiative, farmers did not have in mind to include external actors; nonetheless, according to the coordinator, one of the requirements of the funding call was to involve partners from outside the agricultural sector. Hence, the group was formed with seven members from heterogeneous fields, all of them inspired by the initial idea and willing to contribute their experience, complementary competences, and knowledge to design a common laboratory for transforming fruits and vegetables. The farmers who took part in the OG were not selected by any internal or external stakeholder, or under any specific criterion. Since the initiative was presented to the CIA members, those who felt motivated and found benefits in the activities to be carried out became part of the initiative on a voluntary basis. The innovative element, especially for the agricultural world, was to put them together to design collectively (Table 2). External stakeholders were also essential since, without economic resources, a location for the lab, and business guidance, it would not have been possible to realize the initiative. In a second phase, the same partnership is planning to begin the transformation of fruits-peaches, apricots, and strawberries-and vegetables-tomatoes, chicories, and pumpkins-to continue building the capacity of the innovation actors to self-organize, create, experiment, test, and make use of their own and others' skills and knowledge.
Factors That Influence the Participatory Innovation Process
As extensively argued in Section 2, the literature suggests that factors such as motivation, commitment, interaction, and communication, among others, can stimulate participation (both the intention and the actual action). From the data collected so far, it can be inferred that each of the named factors influences the project in a different way depending on its phase. Moreover, these factors are built on personal characteristics and the context in which the initiative is developed. Within this section, quotations in italics are taken from the interviews.
Motivation
The engine that encouraged producers to be part of the initiative was mainly to determine whether it was possible to run a shared laboratory with other farmers to increase their profitability by adding value to fruits and vegetables. Besides that, Farmers Lab was a perfect alternative to reduce food waste and offer a quality product directly to consumers. For the coordinator, linking the agricultural world with actors from different backgrounds was a challenge, but also an opportunity and a proof that being isolated is useless.
The farmers involved were often very enthusiastic about the project approach. They committed significant time to the initiative's activities. For most of them, physical proximity guaranteed their attendance at the scheduled meetings even if it meant leaving their workplace. While the motivations for being part of the project were important, they were not enough for some producers, who did not continue in the OG. Some farmers did have time constraints, but others were not interested in doing unremunerated work (in January 2018, there were around 25 farmers interested in the initiative who followed its activities; in December 2019, there were around six producers, most of them women, who faithfully believed that the common laboratory could become a reality).
On the other hand, the interviewed partners identified certain components that sometimes hindered their motivation level. They stated that the rigidity of the process, meaning that what was written had to be done in that way (even if there was an option of simplifying procedures), was one of the main difficulties for the project. For the innovation broker, clear commands and guidelines were missing, and this caused several actors to doubt the future of the OG. Additionally, different expertise, language (technical terms), working habits, and methodologies of the partners caused discomfort and confusion when the project started.
Commitment
The entire partnership was committed to a common cause; they were very enthusiastic about the OG approach and perceived the initiative as their own. However, producers were not always well represented in the assemblies, and even if they were present, not all of them were able to voice their views. Since they had to leave their lands to attend physical meetings ("lost day of work"), being part of the project required extra commitment and coordination for them. Producers argued that the most difficult task was to integrate themselves into the system and increase their availability and desire to participate. As it was a new and stimulating experience, they contributed by providing information and inputs that allowed them to evaluate their situation and identify what was missing. Each partner was committed to succeeding in the bottom-up initiative and to working with actors from different backgrounds.
According to the innovation broker, the commitment of each member enabled them to listen and know what was happening in other areas they were working in. For the interviewees, a good attitude, teamwork, open mind, motivation to cooperate, meticulous coordination, belief in the essence of the project, and partners' skills aligned with the initiative needs were the core ingredients to work within a committed team to achieve the common goal.
Interaction
For the interviewed producers, agricultural mindsets are still focused on technology transfer (there is someone who transfers and someone who is transferred to) with no co-planning, co-creation, co-design, or cooperation in the process. The majority of them emphasized the isolated and individualistic character of their activities: "we work alone", "everyone has their own ideas", "we do not discuss", and "when needed, we help each other, but when we have to work as a team, we get lost".
The relationship among partners within the OG was satisfactory (a balance between formal and informal relations). Contacts were not constant, but there were recurrent interactions, either in the meetings or by phone, to determine how to carry out the planned activities. The innovation broker was the facilitator of the group, who helped identify and complement different needs, interests, ideas, competencies, and actions to arrive at the commonly agreed objectives. Collaborative discussions were the space where actors expressed themselves openly and exchanged views and knowledge to reach agreements and contribute to solutions. Everyone had a voice ("nobody was excluded"). All the interviewees felt heard. They perceived that their views, experiences, opinions, and suggestions were considered to a great extent.
A Special Collective Mandate Agreement, with representation for the establishment of a Temporary Association of Purpose (ATS), was stipulated to carry out Farmers Lab activities through interaction and systematic comparison between all partners along the entire path of development, implementation, and dissemination of the innovation. Decisions related to the contents, management, and coordination of the initiative were defined in a collaborative way, based on a democratic discussion in which all the actors had the chance to comment and share their point of view without restrictions or discrimination. Occasionally, there were disagreements about some activities to be carried out. In those cases, according to the coordinator, the partners reviewed the issue together and collectively found the best possible solution for the project. After listening to each one's arguments, a decision was taken by consensus. For a farmer, "the clue was to be open minded, be focused on finding the solution, share what the limits were, and propose ways to address the inconvenient".
As already mentioned, external stakeholders were also involved in the process as observers and advisors. Their mission was to analyze and determine the project's importance for the region (technically feasible, economically and socio-culturally viable, and politically acceptable). The aim was to involve them to learn their perspectives, comments, and suggestions, to be aware of financing opportunities, and to solve some bureaucratic and administrative issues. Although they attended the project meetings, a producer stated that there were not enough opportunities to interact with them: "it would have been positive to find a space to meet them bilaterally to strengthen the relationship". These events did not open the chance to approach them and explore some topics about the initiative in more depth. Usually, time was the big constraint (attendees had busy agendas). Notwithstanding, farmers also admitted that they did not look for opportunities to interact with these players because at that time they did not consider it necessary.
Communication
Beyond reaching agreements, communication allowed the team to learn, develop esteem, empathy, trust, and friendship: "at the beginning, it seemed difficult to understand each other. We did not have a common language. The technical terms were hard to comprehend (design expressions), but partners were able to explain with simple words what they wanted to do and their proposals. We put ourselves in each other's shoes", said a farmer. For all the interviewees, the bases for taking decisions were direct and multidirectional communication (with multiple interacting sources of knowledge), cooperation, and motivation to achieve something different and recognizable for the sector. The entire project was an ongoing discussion that facilitated productive dialogue over time to resolve the partnership issues as a team (spaces for coordination).
Networks
Besides the provision of funding, the Veneto region allowed Farmers Lab to become known nationally and internationally. It contributed to dissemination activities and let the initiative make use of existing networks and platforms to gather information. There were events to which the representatives of the OGs were invited to be informed about how other projects were implemented. The coordinator and the innovation broker participated, representing Farmers Lab. Yet, they recognized that the interaction with other OGs was minimal. Even if these events had an environment of cordiality and partnership, once they were over, there was no stable communication, no cooperation, and no further relationships or interaction with other initiatives. "The objectives of each Group were very different, and everyone was focused on their own businesses", said the innovation broker.
Trust
Since the initiative of establishing a shared laboratory was born (2016), its creators worked with an innovation broker to identify possible funding sources and partners, determine what to fulfil for the application, and prepare a solid proposal. The project players were identified and selected by the coordinator, with the support of the innovation broker. Everyone carried out their activities, which were well defined right from the start, and this encouraged members to trust the different actors in the OG. For the researcher, there were no doubts about what to do and when ("everything was well organized"). Every participant had their own space to work, and farmers were engaged from the start of the project. Trust in the others' skills and expertise was particularly important to move forward with the agreed activities. For a producer, the mere fact of being part of Farmers Lab already meant trusting that the partners would collaborate to achieve the proposed objective. Based on the trust they built during the project, one of their goals is to continue cooperating in a second phase of the initiative.
Outcomes of Participatory Innovation Processes
The lead partner, with the support of the innovation broker, was in charge of ensuring the circulation of information among the participants. For the innovation broker, their constant involvement (opinions and remarks) in the activities, as well as the periodic sharing of progress, were the pillars of the initiative's learning process. Regularly, the coordinator verified that all the actors shared the technical, organizational, and operational choices adopted. When processes and results were not fully communicated or understood, discussions were promoted using available multimedia tools (e-mail, phone calls, project website, etc.). The main objective was to observe whether the tasks brought the desirable outcome and to reflect as a group on what went well and what could be improved.
In the view of the innovation broker, even though there was no formally established instrument to learn from each other, the written reports promoted the need for a space to exchange, intervene, discuss, and ask questions and clarifications. Open and democratic conversations were the basis for peer-to-peer learning (all the actors were well prepared in their fields). Therefore, for him, open cooperation, a collaborative attitude, motivation, and empathy among the participants added up to a better understanding of each partner's progress and achievements.
According to the respondents, all of them learned something from Farmers Lab. A producer, for instance, recognized the importance of some tools she had underestimated in her own laboratory, related to marketing and business planning: "by nature, I am not a merchant, therefore, I did not know how to value my product. With Farmers Lab, I realized that I produce high quality food and I have to sell it in this way". Another producer learned about the upstream steps needed for selling his products. He noticed the importance of knowing the market and contacting traders in advance to find out what their needs are: "I am aware that not everything I produce matches with the reality we live in. I have to keep my eyes open to go in the same direction as the demand does". The market actor, instead, learned how agriculture works in the field. In this way, a micro-system was created where knowledge circulated openly.
Discussion
In this paper, we explore the modus operandi of Farmers Lab to get insights about farmers' participation and identify the factors that influence and foster the interactive innovation process. Figure 2 summarizes the ways in which the six factors influence learning, knowledge co-creation, and innovation, as discussed in this section. Producers' involvement in the OG emerged as a response to economic opportunities, which is, in fact, one of the major incentives for farmers to be part of innovative projects. While producers can indeed be active participants in the co-creation of innovative solutions, there is still the need to learn from successful experiences to build farmers' capacities that should be strengthened with practice and time.
Motivation
When farmers are aware that they could have a chance to learn something from a collective activity, this encourages them to be part of innovative initiatives. However, in the case of Farmers Lab, the main motivations for being part of the project were mainly practical, such as reducing food waste and improving farm income, rather than sharing knowledge with other actors. Since one of the challenges in a participatory process is to keep motivation at a high level, it seems even more challenging to keep farmers motivated when they look for tangible and short-term goals, which may take a long time to achieve within a participatory process. If farmers do not see tangible positive results in the short term, they are very likely to leave the process, as happened in our case study. Although the OGs provide the necessary conditions for producers to be part of the process, there is a lack of strategies to motivate them to actively participate and work with other stakeholders when this dynamic is not part of their way of working. In fact, motivating producers requires concrete results, and the short time span of the projects leaves little room for this issue.
Being motivated to be part of something depends not only on individual attitudes and expectations, but also on external elements such as farmers' geographical location and transport facilities for joining the project activities, which in this case was a limitation. Although motivation directs action towards an end, in the case of participatory projects, producers, beyond being motivated to improve their economic activities, must be motivated to learn and to recognize the challenge that this implies. For actors who are not familiar with participatory processes, guidance from policy makers or support organizations can help to keep motivation high despite possible failures that may occur along the way.
Commitment
When producers are confident that their ideas can create a positive impact for them and the community, their level of commitment is strengthened. Providing farmers an opportunity to voice their opinions and develop innovative solutions could reinforce their commitment, which contributes to building a healthy team and working environment. In Farmers Lab, being part of the project required extra commitment and coordination for producers, who did not always have a clear vision of the pathway and were not fully aware that a participatory process requires constant actions and interactions with other actors.
A positive factor that encourages farmers' commitment is to define each partner's role and responsibilities since the beginning of the initiative. Hence, actors are clear about what is expected from them and they commit themselves to comply with what is agreed. Strengths and weaknesses of each actor must be known in advance to make realistic decisions considering possible impediments, opportunities and risks, and propose tailored solutions. When members commit to achieve a goal, prompt responses or advances are important to sustain their engagement and enthusiasm. While commitment levels may vary, the challenge is to make use of the opportunity and be open to catch up with the rhythm of a participatory process. The challenge with commitment, as well as with motivation, is to take early actions to promote the desire of learning and collaborating to reach common goals.
Interaction
Despite the fact that the European Union encourages the formation of innovative and interactive projects, for many farmers being part of an initiative still means that someone transfers knowledge to them. Nonetheless, Farmers Lab supports the finding that the participatory approach can lower the innovation barriers that agriculture faces. This project influenced the producers to interact with other individuals from different backgrounds, boosted their entrepreneurial behavior, and strengthened their adaptive capabilities. Through interaction, partnership members become aware of the importance of cooperation, recognize its role in the innovation process, and see the importance of being part of something to achieve change. Although not all members have equal discussion and communication skills, the project gatherings open the door to exchanging information and knowledge. The acquisition of interactive competences is essential for the optimal functioning of an OG, since active members contribute much more to shared goals than passive ones. This initiative shows that producers' participation must be conceived more broadly than simply in terms of farmers' experiments or farmers' presence. What is useful to remember is that innovation can emerge as a result of the exchange, use, and production of knowledge through interaction and cooperation. While interacting, farmers make others aware of their problems, thus expanding the opportunities to find solutions. In addition, mutual trust can be strengthened.
For some farmers, this project is the first experience of collective work. Hence, the role of the innovation broker is vital to encourage interaction and achieve a degree of confidence so that partners can express their ideas without fear of being judged. It is also important to create links to continue working as a partnership on future initiatives, which in turn could leave the door open to the creation of new networks. Interaction contributes to reach agreements, but it can also create conflicts or misunderstandings. The key is to recognize potential drawbacks and communicate openly to make consensus decisions. Before being part of a participatory process, all the actors should be aware of what actual interaction means in practice.
Communication
Farmers' field work is normally considered a solitary task. Although many farmers have learned to communicate with others to develop activities in favor of their farms, not all of them have done so. Therefore, they might not know the keys to good communication for interacting with other actors. Within the partnership, there is a significant number of farmers who are not able to communicate openly due to their shyness or lack of confidence. No matter how motivated they may be to be part of the team, if they are not able to communicate their ideas, opinions, issues, and limitations clearly, the interaction with other players becomes a bottleneck. Attention must also be paid to the terms that are used. In the ideal scenario, time allows partners to speak the same language, since practice contributes to developing a shared vocabulary.
Improving communication skills is not an easy task, even less when long-term changes are required. To achieve an active participation from producers, the focus should be on strengthening their capacities, so that they realize the importance of assertive communication.
Networks
Interaction takes place through internal and external networking. While being part of a network provides the possibility to get in touch with new issues, skills, solutions, and knowledge, unless farmers are conscious of these possible benefits, they do not participate actively. Farmers Lab was a good example of why being surrounded by different actors helps farmers stay informed of what is going on around them. The focus should be on quality rather than quantity. The objective of belonging to a network is to increase the ability to act collectively, encourage learning, and be a catalyst for development. For a producer, being linked to a network can be challenging because, once more, some guidance on how to be part of the process and relate to the group is often necessary, and sometimes not available to all farmers. Farmers Lab actors were not focused on creating or expanding their network. However, participation in the project gave them the opportunity to appreciate the benefits new networks might offer.
Trust
In the partnership, lack of trust diminishes the motivation to invest effort and energy in the process. Farmers, like other actors, have learned to trust other partners over time. Given that each player varies in personality, desires, and needs, each one develops trust in a different way. As seen in Farmers Lab, delivery of accurate information in a timely manner, and transparency about actions and intentions through clear communication, contribute to increasing mutual trust, which supports the group's cohesion and durability. In addition, defining the tasks to be fulfilled by each actor is a fundamental aspect of carrying out all activities on time and focusing on the common objective.
Involving all the partners from the beginning strengthens trust in the team's capabilities. While building trust may seem an easy process, it can be a long and laborious one if actors are used to working alone, as is often the case for farmers. Actors must be aware that trust makes people more willing to be part of a straightforward participatory process, understand its complexity, work together to pursue goals, and lay the groundwork for future collaborations.
Learning and Knowledge Co-Creation
This case study shows that heterogeneous actors can complement each other's knowledge. Farmers Lab partners assured us they had learned something during the project. This is indeed a positive aspect, since the first step is that partners realize that being part of an innovative initiative provides new knowledge and opens doors to learning about specific issues. Particularly for farmers, discussions are one of the key tools in their learning process. Nevertheless, if they do not continue developing their activities under a participatory process, they will most likely return to working in isolation as they did before. Therefore, it would be useful to monitor and follow up on the partnership activities even when the project is over. Yet, at the policy level, we are conscious that it is challenging to check all the initiatives that have been financed. Once again, time is the main constraint. However, if a learning, collaborative, and innovative society is desired, we must not only lay the foundations for it, but also support producers in the long-term process.
Innovation
Innovation is not just something achievable at the end of the project; rather, it is a progressive outcome, present throughout the participatory process. In Farmers Lab, although designing a laboratory for common use is something innovative, the principal novel aspect is working together and creating links with different stakeholders in a region where it is not normally done that way. Having partners with different skills design collectively is a new way of working for actors who are not used to developing their activities as a team. This experience is valuable for recognizing that innovation goes beyond a final product. It is related not only to technology development, but also to the dynamics of doing things, the way of organizing multiple tasks within the group, the way of reaching objectives, etc.
Limitations
Despite the contributions of this research, we identify two general limitations. The first is that a single case study was used for this investigation; therefore, general conclusions cannot be drawn based on a two-year project only. Nevertheless, it contributes to expanding our knowledge about farmers' participation in OGs, initiatives that are sometimes not considered in other research but are important for the insights they can provide. Second, the sample attained for interviews was small (N = 8), which could make it harder to substantiate our findings. In addition to the individual interviews, the initial plan was to conduct a focus group with farmers to validate and corroborate whether our interpretation was aligned with their approach and, if possible, gain more insights about the process followed. However, this was not feasible due to mobility constraints between regions within Italy. These limitations offer an interesting avenue for future research, and practitioners and scholars are encouraged to replicate these results with larger samples to increase the credibility and generalizability of our findings.
The study presented here analyzes farmers' participation in the first phase of Farmers Lab. The OG's plan is to continue with a second phase of the project to begin processing its products in the common laboratory. Further research on how producers and other actors participate in the continuation of this type of initiative would be advisable. While our findings contribute to expanding our knowledge about farmers' participation within the OG in Veneto, Italy, the question that remains is whether the observed organizational structure, dynamics, and approach can serve as a roadmap for other farmers who belong to other Groups or innovative projects.
Conclusions
The collected qualitative data allow us to conclude that farmers were active and critical players in the inspiration and implementation phases of Farmers Lab. However, motivation, communication, interaction, commitment, trust, and networks are fundamental elements that should be considered and triggered throughout all the different stages of a project so that farmers' participation allows them to learn from each other, learn with each other, co-create, and innovate. Table 3 provides an overview of the main elements highlighted by the different actors in relation to each factor. Moreover, these elements are all interconnected. Effective communication reinforces actors' commitment to networking and interaction, which in turn is a basis for developing mutual trust and realizing mutual learning in practice. Conversely, successful innovation outcomes reinforce attitudes towards interaction and knowledge sharing. Farmers need to be aware that innovation is highly social; thus, problems and opportunities should be addressed collectively. If farmers regard an OG as a credible option to access knowledge, trigger change, and create value, this strongly enhances their motivation, commitment, and trust.
When ideas are turned into practice and become functional, there is an opportunity to show farmers that bringing together people with the appropriate skills is the key to success. From the beginning of an initiative, farmers must be aware that despite all the effort put into developing a great idea, something could go wrong, as risk is inevitable. Nonetheless, what is a failure today may contribute to an important innovation tomorrow. If success is not possible, there is a lot to be learnt from the experience. The key is to define common interests, challenges, and clear goals to maintain a sense of social belonging to the group and ensure its future. If this dynamic is not well understood from the start, there is always the possibility of adjusting goals, approaches, rules, or procedures. Farmers' participation is a dynamic process that requires continuous reflection to encourage learning.
What was just argued raises the issue of time as a key aspect to consider in the setting-up of such projects. We refer both to the time to be spent in the project by people with busy agendas and to the project timeframe, especially the time to wait before benefits are visible (including the risk of failures). Policies (funding, support to organizations, promotion of best practices) can reduce the time-related obstacles in both these respects, in the awareness that the short timespan of a project is often a limitation. Policy support to these initiatives should be as flexible as possible, given political and administrative constraints. Flexibility means designing long-term cooperation (also beyond the timeframe of the project itself) when adequate to the challenge to be addressed and to the actors' attitudes, and focusing more on short-term goals when the scope is to tackle specific time-bound challenges.
Finally, farmers are not homogeneous: what works for one group of farmers may not work for another. Different actors, even within the farmers' community, have not only diverse agendas and priorities, but also different jargons and communication styles. To achieve a long-lasting participatory process and avoid the risk of solitary initiatives, programs are required that not only ensure long-term sustainability but also increase farmers' soft skills and competencies, minimize communication barriers (which entails sharing a common language with agreed rules), and develop clear procedures for the decision-making process. Additionally, it is important to provide physical or virtual spaces for sharing ideas, to support farmers in establishing a partnership where their concerns are addressed and they can feel and see the added value of cooperation.
These observations highlight the importance of tailoring the policy support and design of OGs, as well as of other similar participatory innovation processes, to the actual needs and capabilities of the actors involved. Dedicating some resources to the engagement of external facilitators and professional innovation brokers can effectively support these processes, helping the actors to share their respective knowledge and to identify the most adequate rules and procedures for an attractive and effective innovation pathway.
"year": 2021,
"sha1": "4b02f1f34ae105b6f83e3005ae851ad1579b8792",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/10/5605/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3bc1a31ff6f8d53225e37b0049305b2ebeb4fe1a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
Early Miocene island arc tholeiite in the Mineoka Belt: Implications for genetic relationship with the Izu-Bonin-Mariana (IBM) arc
The Mineoka Belt in central Japan is a Paleogene accretionary complex with various kinds of volcanic and plutonic rocks formed at multiple ages. A basaltic lapilli tuff derived from the Hota Group was collected from the Mineoka Belt, Boso Peninsula, to reveal the genetic relationship of the Mineoka Belt with the Izu–Bonin–Mariana (IBM) arc. The whole–rock composition of the basaltic lapilli tuff shows island arc tholeiite affinity, and the zircon U–Pb dating yields an age of about 18 Ma. The geochemical signatures of the basaltic lapilli tuff are similar to those of the Eocene plutonic rocks in the Mineoka Belt, but the zircon U–Pb age is much younger than those of the plutonic rocks. The idea that the arc–related rocks in the Belt are fragments of the IBM arc can explain the two different ages of the Mineoka arc–related rocks. The island arc plutonic rocks are derived from IBM middle to lower crust formed during the Eocene to Oligocene, whereas the Early Miocene basaltic rock is most likely a fragment of arc products formed in IBM volcanism at the end of the Miocene back–arc spreading.
INTRODUCTION
A Paleogene accretionary complex, the Mineoka Belt is distributed from the Boso Peninsula in Chiba Prefecture through the north of the Izu Peninsula to eastern Shizuoka Prefecture. This Belt discontinuously contains abundant fragments of ultramafic rocks (serpentinized peridotite), mafic rocks (gabbro, dolerite, and basalt), and pelagic and terrigenous sedimentary rocks, which are interpreted to form an ophiolitic mélange (Ishiwatari et al., 2016). The origin of these ophiolitic rocks remains controversial. Arai (1991) suggested that the ophiolitic rocks in the Mineoka Belt originate from the Shikoku Basin and Izu-Bonin-Mariana (IBM) arc in the Philippine Sea Plate. Takahashi et al. (2012) interpreted them as derived from the West Philippine Basin and IBM arc. On the other hand, Ogawa and Taniguchi (1987), Hirano et al. (2003), and Mori et al. (2011) proposed that the ophiolitic rocks came from the missing 'Mineoka Plate', which had completely subducted beneath the Japan arc in the Miocene. Recently, Ichiyama et al. (2017) postulated that the Mineoka plutonic rocks are fragments of the middle to lower crust of the IBM arc on the basis of their geochemistry and zircon U-Pb dating. Following this proposal, the other igneous rocks of the Mineoka Belt were probably also constituents of the ancient IBM arc. We performed zircon U-Pb dating of a basaltic sample from the Mineoka Belt in order to decipher the genetic relationship with the IBM arc. In this paper, we report the zircon U-Pb age and geochemistry of the Mineoka basaltic rock and discuss their implications for the igneous and tectonic evolution of the IBM arc.
GEOLOGICAL AND GEOCHRONOLOGICAL OVERVIEW
The Mineoka Belt is one of the Paleogene accretionary complexes in the Outer Zone of Southwest Japan. The Mineoka Belt is located in the southern part of the Boso Peninsula (Fig. 1b). In particular, basaltic blocks of various sizes (a few cm to km scale) are included in a serpentinite matrix and locally form geomorphological knocker features along the Mineoka Mountains (Ogawa et al., 2009). Most basaltic rocks in the Mineoka Belt show tholeiitic mid-ocean ridge basalt (MORB) affinity, but some show alkaline intra-plate basalt affinity (Ogawa and Taniguchi, 1987). Mohiuddin and Ogawa (1998) reported Early Miocene foraminifera fossils from limestone in the Shirataki Formation in the Mineoka Group. Suzuki et al. (1984) reported Late Eocene to Early Oligocene foraminifera fossils from the same formation. On the other hand, Ogawa and Sashida (2005) reported Middle Cretaceous radiolarian fossils from bedded chert at Yoka Beach in Kamogawa City. The Hota Group is composed of coarse-grained volcaniclastic sandstone and tuffaceous mudstone to sandstone. Early to Middle Miocene radiolarians (Saito, 1992) and silicoflagellates (Sawamura and Nakajima, 1980) have been reported from the Hota Group. The Hota Group is strongly affected by deformation and fragmentation and shows a complicated stratigraphy. Because the stratigraphic classification of the Hota Group differs among researchers, we collectively refer to the Early Miocene clastic formations as the Hota Group in this study.
SAMPLE DESCRIPTIONS
At Yoka Beach, the Mineoka ophiolitic constituents (serpentinite, basaltic rocks, gabbro, diorite, etc.) and a minor amount of tuff and lapilli tuff appear as float stones. These float stones were probably derived from the rocks distributed around the beach because there are no large rivers flowing into the beach (Fig. 1b). We collected a large boulder of basaltic lapilli tuff (about 40 cm in diameter) on the beach near Kamogawa Harbor for determining the zircon U-Pb age. The lapilli tuff is macroscopically similar to the Mineoka ophiolitic dolerite but differs in microscopic characteristics. The lapilli tuff consists of subangular crystalline basaltic fragments with a matrix of discrete crystals and altered glasses (Fig. 3a). The basaltic fragments are composed mainly of plagioclase (0.3-1.0 mm), clinopyroxene (around 0.3 mm), magnetite (0.3-0.5 mm), and quartz (0.3-2.0 mm) with a trace amount of orthopyroxene and hornblende (Fig. 3b). Smaller xenocrystic tonalite fragments (several mm) composed of plagioclase, quartz, and magnetite are included in the basaltic fragments (Fig. 3c). Honeycomb and dendritic structures are developed in plagioclase in the tonalitic xenoliths. The phenocrysts of quartz, magnetite, and part of the plagioclase in the basaltic fragments are probably xenocrysts originating from the tonalitic xenoliths.
ANALYTICAL METHOD
We used an X-ray fluorescence spectrometer (XRF: Rigaku ZSX Primus II) at the Earthquake Research Institute, The University of Tokyo, for the whole-rock analyses of major and trace elements. Standard samples JB-1a and JG-1a published by the Geological Survey of Japan were measured together to check the precision and accuracy of the data. For these analyses, glass beads were made from a mixture of rock powder and Li2B4O7 flux (in a proportion of 1:5) with 1 to 2 drops of LiBr solution diluted 50 times. The detailed procedure of the XRF analysis is described in Hokanishi et al. (2016). We determined trace elements including rare earth elements (REE) of the whole-rock sample by using a laser-ablation inductively coupled plasma mass spectrometer (LA-ICP-MS: Agilent 7500s coupled to a MicroLas GeoLas Q-plus 193 nm ArF excimer laser) at Kanazawa University. We used a 'fused whole rock glass' prepared by a direct fusion method using an Ir-strip heater without any flux. An ablation spot of 80 μm diameter and a 5 Hz repetition rate with an energy density of 8 J/cm² per pulse were used as the operating conditions. BCR-2G was used as an external calibration standard. The detailed procedure of LA-ICP-MS followed Tamura et al. (2015). The results of the whole-rock chemistry are listed in Table 1.
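As a rough illustration of what external calibration against BCR-2G involves, an element's concentration in the sample glass can be estimated from the ratio of its signal to the signal measured on the standard. The sketch below is a deliberate simplification with invented intensities and a placeholder reference value; actual LA-ICP-MS data reduction also applies internal-standard normalization and drift corrections that are omitted here.

```python
# Hypothetical intensities and a placeholder reference value; real
# reduction also applies internal-standard and drift corrections.
BCR2G_REF_PPM = 25.0       # assumed concentration of element X in BCR-2G
cps_standard = 48000.0     # signal (counts/s) measured on the BCR-2G glass
cps_sample = 31000.0       # signal measured on the fused whole-rock glass

sensitivity = cps_standard / BCR2G_REF_PPM   # counts per second per ppm
concentration = cps_sample / sensitivity
print(f"Estimated concentration: {concentration:.1f} ppm")
```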
Zircon U-Pb dating was carried out using LA-ICP-MS (Thermo Fisher Scientific ELEMENT XR coupled to a New Wave Research UP-213 Nd-YAG laser) at the Central Research Institute of Electric Power Industry. A beam diameter of 15 μm was employed. Plešovice zircon (337.13 ± 0.37 Ma; Sláma et al., 2008) was measured together as a standard sample (Table S1; available online from https://doi.org/10.2465/jmps.180118). The measurement procedure for the zircon U-Pb dating followed Ito (2014). The results of the zircon U-Pb analyses are listed in Table 2.
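The ages reported below follow from the standard U-Pb decay equation. As a minimal illustration (not part of this paper's actual workflow), the sketch below computes an age from a radiogenic daughter/parent ratio; the decay constants are the commonly used values of Jaffey et al. (1971), and the example ratio is an assumption chosen only so that the result lands near the ages reported in this study.

```python
import math

# Commonly used U decay constants (Jaffey et al., 1971), in yr^-1
LAMBDA_238U = 1.55125e-10  # 238U -> 206Pb
LAMBDA_235U = 9.8485e-10   # 235U -> 207Pb

def u_pb_age(daughter_parent_ratio: float, decay_constant: float) -> float:
    """Age in years from t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# Illustrative ratio only: a 206Pb/238U ratio of ~0.00283 gives ~18.2 Ma
age_ma = u_pb_age(0.00283, LAMBDA_238U) / 1.0e6
print(f"206Pb/238U age: {age_ma:.1f} Ma")
```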
RESULTS

Whole-rock Chemistry

The trace element pattern normalized to the N-MORB values of Hofmann (1988) exhibits distinct positive anomalies in the large-ion lithophile elements (LILE; Ba, Rb, Sr, and Pb) and negative anomalies in the high-field-strength elements (HFSE; Nb and Ta), which are indicative of island arc volcanic rocks (Fig. 5a). The REE pattern normalized to the chondrite values of McDonough and Sun (1995) shows depletion in the light REE (Fig. 5b). A slight positive Eu anomaly was possibly caused by the incorporation of the tonalitic xenoliths. This pattern closely resembles that of typical MORB or oceanic arc basalt, indicating that the parental basaltic magma was depleted to the same degree as MORB.
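For readers unfamiliar with normalized trace-element plots, the sketch below shows the arithmetic behind Figures 5a and 5b: each measured concentration is divided by a reference value (N-MORB or CI chondrite), and the Eu anomaly is quantified as Eu/Eu* = Eu_N / sqrt(Sm_N × Gd_N). The reference and sample values here are illustrative placeholders, not the Hofmann (1988) or McDonough and Sun (1995) tables nor the data of Table 1.

```python
import math

# Illustrative CI-chondrite abundances (ppm); use the published
# McDonough and Sun (1995) table for real work.
CHONDRITE = {"La": 0.237, "Sm": 0.148, "Eu": 0.056, "Gd": 0.20, "Yb": 0.161}

def normalize(sample_ppm, reference):
    """Divide each measured concentration by its reference abundance."""
    return {el: sample_ppm[el] / reference[el] for el in sample_ppm if el in reference}

# Hypothetical whole-rock REE concentrations (ppm) for a depleted arc basalt
sample = {"La": 1.2, "Sm": 1.5, "Eu": 0.65, "Gd": 2.2, "Yb": 2.0}
pattern = normalize(sample, CHONDRITE)

la_yb_n = pattern["La"] / pattern["Yb"]  # <1 indicates LREE depletion
eu_star = pattern["Eu"] / math.sqrt(pattern["Sm"] * pattern["Gd"])  # >1 indicates a positive Eu anomaly
print(f"(La/Yb)_N = {la_yb_n:.2f}, Eu/Eu* = {eu_star:.2f}")
```

With these placeholder numbers, (La/Yb)_N comes out below 1 and Eu/Eu* slightly above 1, mirroring the LREE depletion and slight positive Eu anomaly described in the text.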
Zircon U-Pb Dating
We analyzed 20 zircon grains for the U-Pb dating. The cathodoluminescence images of the analyzed zircon grains show clear oscillatory and sector zonings (Fig. 6a), indicating that these zircon grains formed by crystallization from magmas. The zircon 206Pb/238U ages statistically exhibit a single population (Fig. 6b). The weighted mean of the zircon 206Pb/238U ages (Fig. 6c) and the concordia age based on 206Pb/238U and 207Pb/235U (Fig. 6d) are 18.2 ± 0.8 Ma and 18.4 ± 0.9 Ma, respectively.
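The weighted mean quoted above is, in essence, an inverse-variance average of the single-grain ages. The sketch below shows that computation, together with the MSWD that is typically used to check the single-population assumption; the grain ages and errors are hypothetical, not the values in Table 2.

```python
def weighted_mean_age(ages_ma, errors_1sigma):
    """Inverse-variance weighted mean age, its 1-sigma error, and the MSWD."""
    weights = [1.0 / e ** 2 for e in errors_1sigma]
    total = sum(weights)
    mean = sum(w * a for w, a in zip(weights, ages_ma)) / total
    sigma = (1.0 / total) ** 0.5
    # MSWD near 1 is consistent with a single population scattered only by analytical error
    mswd = sum(w * (a - mean) ** 2 for w, a in zip(weights, ages_ma)) / (len(ages_ma) - 1)
    return mean, sigma, mswd

# Hypothetical single-grain 206Pb/238U ages (Ma) with 1-sigma errors (Ma)
ages = [18.6, 17.9, 18.3, 18.1, 18.5, 17.7, 18.8]
errs = [1.5, 1.8, 1.6, 1.7, 1.4, 1.9, 1.6]
mean, sigma, mswd = weighted_mean_age(ages, errs)
print(f"weighted mean = {mean:.1f} +/- {sigma:.1f} Ma, MSWD = {mswd:.2f}")
```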
DISCUSSION
Although it is unclear whether the zircon grains analyzed in this study are derived from the basaltic fragments or the tonalitic xenoliths, it is evident that the basaltic rocks formed after 18 Ma. Despite the presence of the tonalitic xenoliths, the geochemical characteristics can be regarded as those of the original basaltic magma, although the contents of SiO2 and Eu have been slightly affected by the tonalitic xenoliths. The whole-rock composition of the basaltic lapilli tuff exhibits island arc tholeiite affinity (Figs. 4 and 5), indicating formation in an oceanic island arc environment. Contemporaneous radiometric ages in the Mineoka Belt have been reported from alkali basalts with ocean island basalt affinity (Hirano et al., 2003; Mori et al., 2011), but their geochemical signatures are distinct from those of the basaltic lapilli tuff. The zircon U-Pb ages of the Mineoka plutonic rocks with island arc signatures are around 38 Ma, which is consistent with the ages of the Eocene to Oligocene tholeiitic and calc-alkaline arc magmatism in the IBM arc (Ichiyama et al., 2017). The geochemical signatures of the basaltic lapilli tuff are similar to those of the plutonic rocks, as seen in the trace element and REE patterns (Fig. 5), but the zircon U-Pb age of 18 Ma is much younger than that of the plutonic rocks. Therefore, the basaltic lapilli tuff was produced by a tectonic event different from that which produced the plutonic rocks.
Considering the lithology and zircon U-Pb age, the basaltic lapilli tuff probably originated from the Hota Group exposed behind the Yoka Beach. Younger island-arc-related andesitic rocks (whole-rock K-Ar ages of 6-16 Ma) are also reported from the Hota Group by Mori et al. (2011), although these may be secondarily modified ages due to degassing during alteration. The geochemistry and zircon U-Pb age of the basaltic lapilli tuff confirm that Miocene island arc volcanic rocks are actually present in the Mineoka Belt.
If the ophiolitic rocks in the Mineoka Belt are derived from the IBM arc as proposed by Ichiyama et al. (2017), this can account for the presence of the Miocene arc-related basaltic rock in the Mineoka Belt. After the Eocene to Oligocene tholeiitic and calc-alkaline magmatism in the IBM arc, IBM volcanism ceased during the back-arc spreading of the Shikoku and Parece Vela Basins. Subsequently, bimodal volcanism resumed at the end of the back-arc spreading (about 17 Ma) in the Izu-Bonin region (Tatsumi et al., 2016; Taylor, 1992). The basaltic lapilli tuff is most likely a fragment of the arc products formed by IBM volcanism at the end of the Miocene back-arc spreading. Comparing the trace element and REE patterns of the basaltic lapilli tuff with those of current frontal IBM basaltic lavas, for example from Aogashima Island (Fig. 1a), the geochemical characteristics, such as LREE depletion, negative HFSE anomalies, and positive LILE anomalies, are consistent with each other (Fig. 5). Therefore, the Eocene and Miocene island arc igneous rocks in the Mineoka Belt can be explained by regarding the Mineoka igneous rocks as fragments of the IBM arc. Saito et al. (1992) reported the petrological characteristics of the andesitic to rhyolitic volcaniclastics in the Hota Group. They concluded that the volcaniclastics are derived from the Miocene volcanism of the Northeast Japan arc and that there was no supply of clasts from the IBM arc. However, the presence of IBM-derived material in the Hota Group, as suggested in this study, implies a supply of clasts from the IBM arc to the Hota Group. The Mineoka Belt consists of rocks that are unique in terms of their wide variety of formation ages and petrological characteristics. Further investigation of this belt will help us to understand the genesis and evolution of intra-oceanic arcs, especially the IBM arc.
CONCLUSION
We obtained a zircon U-Pb age of about 18 Ma for the basaltic lapilli tuff supplied from the Hota Group in the Mineoka Belt. The geochemical signatures of the basaltic rock show island arc tholeiite affinity and are similar to those of the Eocene plutonic rocks in the Mineoka Belt. The Eocene plutonic rocks are interpreted as fragments of the IBM middle to lower crust, whereas the basaltic rock is most likely a fragment of the arc products formed through IBM volcanism at the end of the Miocene back-arc spreading.
ACKNOWLEDGMENTS
We acknowledge Prof. S. Arai and Dr. A. Tamura at Kanazawa University and N. Hokanishi at the Earthquake Research Institute, The University of Tokyo, for their support of the instrumental analyses. This paper was improved by the constructive comments from Dr. N. Hirano and an anonymous reviewer. This study is part of the graduation thesis of H. E. at Chiba University. We would like to thank Prof. A. Inoue, Prof. M. Tsukui, and Dr. N. Furukawa for their valuable comments on this study. This study was supported by JSPS KAKENHI (No. JP90625469). Table S1 and a color version of Figure 3 are available online from https://doi.org/10.2465/jmps.180118.

[Figure caption (likely Fig. 4): The definition line between tholeiite and calc-alkaline is after Miyashiro (1974). The compositional fields of MORB (mid-ocean ridge basalt), IAT (island arc tholeiite), and OIT (ocean island tholeiite) are after Basaltic Volcanism Study Project (1981). For comparison, the data of the Mineoka ophiolitic basalt (Ogawa et al., 2009) and the Hota volcaniclastic rocks are also shown.]
"year": 2018,
"sha1": "2f57bac470b6c15f7d6badf05804707e42210de6",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jmps/113/4/113_180118/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d576b39da7fd474c0b7c952cd1efc54654630cf2",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
Cosmetic Surgery: Regulatory Challenges in a Global Beauty Market
The market for cosmetic surgery tourism is growing, with an increase in people travelling abroad for cosmetic surgery. While the reasons for seeking cosmetic surgery abroad may vary, the most common reason is financial, but does cheaper surgery abroad carry greater risks? We explore the risks of poorly regulated cosmetic surgery to society generally before discussing how harm might be magnified in the context of cosmetic tourism, where the demand for cheaper surgery drives the market and makes surgery accessible for increasing numbers of people. This contributes to the normalisation of surgical enhancement, creating unhealthy cultural pressure to undergo invasive and risky procedures in the name of beauty. In addressing the harms of poorly regulated surgery, a number of organisations purport to provide a register of safe and ethical plastic surgeons, yet this arguably achieves little, and in the absence of improved regulation the risks are likely to grow as the global market expands to meet demand. While the evidence suggests that global regulation is needed, the paper concludes that since a global regulatory response is unlikely, more robust domestic regulation may be the best approach. While domestic regulation may increase the drive towards foreign providers, it may also have a symbolic effect which will reduce this drive by making people more aware of the dangers of surgery, both to society and to individual physical wellbeing.
Introduction
Travelling abroad for medical reasons is not a new phenomenon. Victorian travellers who travelled to 'take the waters' or to breathe fresh sea or mountain air for health purposes helped to lay the foundations of modern tourism [25]. More recently, however, a new type of medical tourist has emerged. Cosmetic surgery tourism is a fast-growing market, and in a global society the reasons for seeking cosmetic surgery abroad may vary significantly. Elite consumers may seek the services of the very best cosmetic surgeons in the world, while those seeking more extreme or even risky procedures may simply be unable to get what they want in the UK. The most common reason for travelling abroad for cosmetic surgery rather than remaining in one's home country to access domestic services, however, is financial. Cosmetic surgery can be considerably less expensive outside the UK, with a recent report suggesting that popular procedures such as 'nose jobs' (rhinoplasty) and breast augmentation may be approximately £2000 cheaper in the Czech Republic and Poland than in the UK. 1 Consequently, the increasing appetite for affordable cosmetic surgery has led to a growth in such surgical holidays. And although the services that this new breed of traveller is seeking are indeed medical (hopefully involving qualified surgeons working within private hospitals), the consequences of travelling abroad for cosmetic surgery, within a highly commercial and poorly regulated industry, may be far from medically beneficial.
In this paper we explore some of the concerns over the rapidly expanding global market in cosmetic surgery before considering the challenges of attempting to regulate the cosmetic surgery market. Data available on cosmetic surgery is patchy; there are small-scale sociological studies, surveys conducted by certain interested parties including professional organisations of plastic surgeons and medical defence organisations who represent surgeons in legal proceedings, and there is some data kept by the NHS. Such lack of evidence means that we do not have truly accurate data on the incidence of procedures in the UK and elsewhere or the resulting harm. Yet there is a lot we can glean from the data that does exist, although in using the data in this paper we do not claim our conclusions are representative.
We begin by briefly reviewing the evidence and recent developments pertaining to cosmetic surgery in the UK, considering what is driving the increase in such surgery, what cultural and physical harms are resulting, and how the law might respond. We then situate the domestic evidence within the global phenomenon of cosmetic surgery. We assess the evidence and apparent risks of cosmetic surgery tourism and the implications for the UK, and the NHS in particular. We ask whether such foreign surgery should necessarily, and always, be regarded as more dangerous, or whether the domestic market is raising concerns in order to protect itself. Either way, we argue that the global trends are harmful, but while a domestic response is crucial, its power to reduce harm is restricted by the drive for cheaper surgery in certain foreign countries.
In the context of global dominant beauty norms and the cultural and individual physical harms that they bring, we conclude by asking what the role of the law should be in this context. Is a global regulatory response possible? If the potential for global regulation is extremely limited we should consider how else to address the harms at stake and begin by addressing the inadequacy of domestic regulation. In response to the concern that tighter domestic regulation will encourage cosmetic tourism we agree with McHale that 'it is too easy to assume that globalisation means that resistance in the form of regulation is futile' [21]. Consequently we argue that a domestic regulatory response that seeks to make cosmetic surgery provision subject to increased governance is necessary to combat the dual harms (cultural and physical) we identify. Firstly, tighter regulation would make commercial provision safer for the consumer. Secondly, it would send a clear message about the potential dangers of cosmetic surgery in order to influence societal perceptions and resist the increasing normalisation of surgery as beauty treatment.
The Growth and Normalisation of Cosmetic Surgery Within the UK: Difference and Sameness
Plastic surgery is surgery undertaken for the purposes of altering the appearance of a patient. There are two subfields within the broader medical practice of plastic surgery. Reconstructive surgery is defined as work that seeks to 'repair catastrophic, congenital, or cancer-damaged deformities' and is seen as restorative of a somehow damaged appearance, whereas cosmetic surgery is defined as 'entirely elective' work that is seen purely as enhancement of appearance [3]. And while all surgery carries risks, the risks of cosmetic surgery should be more carefully weighed because they cannot be justified on health grounds; rather, the serious risks are undertaken for purely aesthetic reasons. This aesthetic rather than therapeutic purpose, as we have argued elsewhere [12], changes the nature of the risk/benefit analysis and, compared to medically necessary surgery, the risk/benefit analysis of cosmetic surgery should necessitate a more precautionary approach.
Over the past few decades, societal attitudes to cosmetic surgery have evolved quite dramatically. Undergoing surgery as a beauty-enhancing treatment has become a lifestyle choice for increasing numbers of people, with a significant increase in people electing to undergo such procedures. According to the British Association of Aesthetic Plastic Surgeons (BAAPS), 50,122 cosmetic procedures were performed in 2013, a rise of 17% from 2012 [5]. The cosmetic surgery industry was worth £750 m in the UK in 2005, £2.3bn in 2010, and is forecast to reach £3.6bn by 2015 [10]. As Sir Bruce Keogh's review of the industry recently reported, rising demand for cosmetic enhancement has been driven by a number of socio-economic and technological factors, leading to the normalisation of serious and potentially harmful cosmetic interventions [17]. Keogh's report and other evidence show that while surgery was once undertaken discreetly, now many more people will admit to it and even celebrate it. The media, social media, celebrity endorsement and advertising have been central to this normalisation [21].
Looking at gender issues within this phenomenon, we can see that while men are opting for surgery in greater numbers than ever before, it is still women who predominantly undergo such treatments [17]. The early feminist position was that women who have cosmetic surgery are victims of a patriarchal culture and beauty industry that pressurises them into making themselves more sexually desirable to men. Within this initial feminist response, women undergoing surgery are viewed as passive victims of a patriarchal system who are capitulating to their own sexual objectification. Subsequently this view was challenged by other feminists who emphasised questions of choice, autonomy and self-determination. Most famous in this respect is Kathy Davis's work, which portrays women as active agents, carefully negotiating and controlling their surgeries rather than being mere puppets of patriarchy. However, this account has been critiqued for overemphasising women's agency. Bordo, for example, states that Davis presents the self as an 'authentic and personal reference point untouched by external values and demands or relations with others' [4]. The third position that has evolved, and which accords more with our position, regards the motivation for cosmetic surgery as neither fully internal nor external but rather an intersubjective and embodied process that takes place in a consumerist environment [22].
Placing aesthetic non-therapeutic surgery within an intersubjective and consumerist context shifts the focus away from the disembodied individual cosmetic surgery patient that has dominated much previous research (which portrays her as either the victim of patriarchy or a free autonomous agent), and allows us to take into account agency and choice within a constrained cultural context. Bodies are active and reflexive but also heavily influenced by other bodies and by gendered and racialised norms [26]. In this position, cosmetic surgery is approached as 'a purchase, characterised by the rhetorics of fashion, consumption and self-presentation rather than medical or psychological necessity' [22]. Here, then, for us, 'consumption of cosmetic surgery is a strategic act on the part of individuals who are rational and intelligent but who reside in a structural context where class, gender and race determine action' [22]. Latham's work has also been informative here [19]. Following her review of the feminist literature, Latham advocated a 'third way', which promotes a relational approach to autonomy and which seeks to address the conflicting feminist concerns through more precautionary yet pragmatic regulation. Also significant is the fact that when we talk about consumers of surgery (both nationally and globally), we are talking about aged, classed and raced women. While surgery has become normalised, and global norms can be discerned, women are differently placed in relation to this normalisation in the UK and globally according to the cultural and social environment in which they reside. For example, in relation to social class, as Jacqueline Sanchez Taylor has recently noted, 'we are not all the same kind of makeover citizens, nor do we all experience the same pressure to conform to the same patriarchal ideals of feminine beauty.' Some women, middle-class academics for example, can (usually) do their gender without undergoing invasive procedures; they will not be dishonoured by their lack of attention to the kind of beauty and fashion regimes which are important to some other women and, in fact, the reverse is often true, within an environment where low-key performances of gender are typically more highly valued and respected [24].
In an ESRC study, 'Sun, Sea, Sand and Silicone', women from different countries who were undergoing foreign cosmetic surgery were tracked [14]. Cultural differences in the procedures the women wanted undertaken, and why they were undertaking them, were stark. For example, 44% of the British sample and 66% of the Australian sample were travelling abroad for breast augmentations. This procedure was absent from the sample of Chinese women who were travelling to South Korea for surgery, for whom eyelid, jawbone and nose jobs made up 88% of the total. The UK women were defined as mostly working-class women, while the Chinese patients were mostly middle-class women who sought better-quality surgery than that available in China.
Sanchez Taylor has observed that: [W]e may all live as engendered beings in a patriarchal society(ies), and may all metaphorically be cosmetic surgery recipients, we also have different class, age, sexual and racialized identities (as well as being differentiated along the lines of able-bodiness/disability) and so stand in different relations to outward displays of gender and are positioned differently in relation to contemporary discourses and practices of cosmetic surgery [24].
Recognising such difference is thus essential when we look at cosmetic surgery because not all women stand in the same relationship to beauty norms and the pressures to have surgery. While certain global beauty norms may be evident, such as the desire to look young, there are still key differences. Despite such differences, however, it is true to say that generally women share many of the pressures and for women who do turn to cosmetic surgery, similar risks may be apparent. Indeed, when we consider the potential risks and real harm that such surgery involves, both physically and arguably also culturally, we can see that many of the concerns are universal.
Harm and Risk in UK Cosmetic Surgery
There are, firstly, the risks of what Jean McHale has described as 'normalising perfection' as well as pathologising imperfection [21]. Cosmetic surgery reinforces and heightens concern with body image and culturally prescribed standards of beauty, contributing to a youth culture that disdains ageing and the elderly and upholds culturally specific standards of beauty. It also promotes inequality between those who have the resources to purchase an enhanced appearance and those who don't. As McHale has asked, will the 'cosmetically unenhanced become an effectively unemployable underclass?' [21]. While women are differently placed in relation to having to conform to standards of beauty and surgical enhancement, none are untouched by being placed in relation to these standards, and even those women who do not feel beholden to explicit gender performances (or even surgery for economic or cultural capital) may be discredited in wider society for not doing so. The physical risks of cosmetic surgery have most starkly and recently been illuminated by the PIP (Poly Implant Prothèse) breast implant scandal, which resulted in global outrage after the French implant company, PIP, was found to have used industrial-grade silicone in its product. 2 More generally, data on harm in cosmetic procedures is scarce, but some information is available from medical negligence claims. According to a major analysis of cosmetic surgery claims carried out by the Medical Defence Union (MDU), growing numbers of patients are suing cosmetic surgeons over mistakes during operations designed to improve their appearance. 3 Data from this analysis shows that negligence claims concerning breast surgery, facelifts, eyelid operations, nose reductions, and weight-loss procedures account for 80% of claims stemming from cosmetic surgery, and damages of more than £500,000 were paid out over a five-year period. The MDU state that cosmetic surgery negligence claims are successful in 45% of cases, compared with 30% of medical negligence claims in general. This success rate would suggest that when harm occurs from cosmetic surgery, the presence of negligent behaviour is more often clearer and easier to prove than when harm occurs from medically necessary surgery. The reasons for this are unknown; however, we suggest that this might be at least partly because the surgery is not carried out for therapeutic reasons, and so the resulting harm, to a victim who was (presumably) previously in good health, is an obvious sign that mistakes have been made. Additionally, the lack of regulation has meant that, within the private market for cosmetic surgery, too often poorly qualified surgeons are undertaking procedures for which there has been an inadequate consent process coupled with inadequate consideration of the patient's health and wellbeing, which further heightens the usual risks associated with invasive surgery [17]. For this reason, as we have argued elsewhere [12], the unquestioning assumption that non-therapeutic cosmetic surgery is justified under the medical exception to the (English) criminal law as 'proper medical treatment' has led to a complacent approach to regulation that requires urgent attention.
In response to the Keogh Review, there have been some developments, but not enough to fully protect the cosmetic surgery consumer. Consequently, the president of BAAPS, Rajiv Grover, has commented that: 'It's business as usual in the Wild West and the message from the government is clear: roll up and feel free to have a stab' [18]. Some improvements, however, have been made. The Royal College of Surgeons in October 2016 launched new guidance for patients on cosmetic surgery to protect them from 'aggressive marketing' and 'ruthless' sales tactics, and it is expected to create a register of certified surgeons who are appropriately qualified to provide particular procedures. 4 The General Medical Council also issued new guidance which sets out the standards it expects from doctors who provide cosmetic interventions, including stipulations to market their services responsibly, seek a patient's consent themselves rather than delegate this to somebody else, and consider patients' vulnerabilities and psychological needs when making decisions with them about treatment options. 5 While this presumably will not do anything to prevent poorly qualified doctors from offering their services as cosmetic surgeons, it will, at least, allow prospective consumers of cosmetic surgery services to ascertain whether the surgeon in question is appropriately qualified. Hopefully, therefore, we may anticipate a marginally better-regulated collection of cosmetic surgery providers in the UK. Yet in spite of the expected improvement, we question whether these small measures go far enough. Moreover, even if safety is eventually improved through the creation of the cosmetic surgeons register, this would do little to address the cultural harms that cosmetic surgery perpetuates, particularly for women.
Reforming the Regulation of Cosmetic Surgery in the UK?
Currently, cosmetic surgery in the UK is regulated in exactly the same way as medically necessary surgery. As we have mentioned above, cosmetic surgery is defined as acceptable and legitimate medical practice ('proper medical treatment'), alongside medically necessary or beneficial surgery. As a matter of public policy, the criminal law prohibits consensual harmful activities unless they can be justified because they are medically necessary, or carried out in pursuit of legitimate sporting activity. 6 The only time that a doctor might fall under the scrutiny of the criminal law for harming a patient will be if that patient dies. If death occurs as a result of a medical blunder (and assuming the doctor did not intentionally kill the patient, for that would be murder), a charge of gross negligence manslaughter could follow. Essentially, therefore, the reckless cosmetic surgeon who harms (but does not kill) a patient through performing ill-considered, inadvisable and negligent surgery will not be troubled by the criminal law. Where surgery is concerned, provided it is in the best interests of the patient and is carried out by a qualified healthcare professional, there is no question that it falls within the medical exception to the criminal law and is thus lawful. But having considered the risks, we might ask when, if ever, is non-therapeutic and potentially harmful surgery in a person's best interests?
We have argued elsewhere that when patients suffer at the hands of cosmetic surgeons who, driven by commercial profit, recklessly undertake risky and non-therapeutic surgery, the usual public policy justification for the medical exception is absent [12]. For this reason we have suggested that there should be a more significant role for the criminal law, through the use of the Offences Against the Person Act 1861, when serious harm is inflicted. Detailed consideration of the criminal law is beyond the scope of this article, other than to say that the main obstacle to a greater role for the criminal law is the medical exception, which surgeons rely upon to legitimise what might otherwise be harmful criminal conduct. The medical exception rests on assumptions that surgery is performed in the best interests of the patient because it is therapeutic, or that it is in the interests of another (for example, when donating a kidney). 7 The difficulty here is that there is sometimes a fine line between plastic surgery for therapeutic reasons and non-therapeutic cosmetic surgery. For example, consider breast augmentation surgery for reconstruction after a mastectomy due to cancer, which is evidently therapeutic, compared with breast enhancement to treat psychological issues with self-image. The latter may also be therapeutic in a sense, but it does nothing to treat the possible psychological causes of the lack of self-worth, and it harms physical health via the surgery. While sometimes no clear line can be drawn, in our previous paper we defined all non-therapeutic cosmetic surgery along the same lines as the NHS [12].
We have also suggested that even when the surgery goes well, the professional ethics of the surgeon in normalising such invasive interventions for cosmetic purposes have a harmful societal effect and so, for this reason also, cosmetic surgery should not be included within the definition of 'proper medical treatment' as a means for justifying it. Instead, much tighter regulation and sufficiently informed consent for all non-therapeutic cosmetic surgery should be required in order to legitimise its performance rather than by recourse to the medical exception. This could look like the model in France where, following the enactment of the Kouchner law 2002, regulation is much stricter, consent procedures are far more detailed, and additional safeguards regulating advertising and requiring a 'cooling-off' period, to allow the consumer to reflect on the decision, have been brought in [9,20]. Moreover, when things do go wrong in French cosmetic surgery, as generally in French medicine, there is a much more significant role for the criminal law [16]. Latham was hopeful that following the Keogh Review, the British government might look to the French approach to inform legal change in the UK but unfortunately this now seems unlikely [20]. While the GMC have recently (April 2016) issued ethical 'Guidance for doctors who offer cosmetic interventions', which is a welcome development, in the absence of substantive legal reform we might expect little if any improvement within commercial provision.
For others, even improved regulation might not go far enough. Dennis Baker has argued, like us, that cosmetic surgery is harmful in a direct sense because it causes physical harm, and in an indirect sense because it reinforces artificial celebrity or racist appearances as the preferred social norm [1]. For Baker, better regulation would not significantly reduce either of those harms, and so he has argued that all significantly invasive cosmetic surgery should be regarded as a criminal offence and thus prohibited because it is inherently harmful. Baker also argues that consent cannot be used to justify such harms, commenting that: 'The medical profession has hidden the criminal harm in unnecessary cosmetic surgery by dressing it up as genuine medicine' [1]. For Baker, it should be criminalised because it involves wrongful harm. While we agree that the current approach is highly problematic, we do not agree that competent adults should be prevented from seeking lawful cosmetic surgery within the UK. A ban would simply drive such surgery underground and overseas, where the dangers may be greater. Furthermore, as we have discussed above, although we share concerns that some women seeking cosmetic surgery may be viewed as victims of patriarchy, an informed choice to surgically enhance oneself may be viewed as a rational and positive life choice within the context of that individual's circumstances.
Like Baker, McHale has focused on the law as it applies to the choices that children can take in respect of their bodies and cosmetic surgery. McHale asks whether regulation and/or outright criminalisation of certain cosmetic procedures concerning children and adolescents should now be considered, and she pinpoints Queensland in Australia as an interesting and instructive example where such sanctions apply. However, as McHale notes, a key weakness within arguments concerning tighter regulation and criminalisation is that domestic law is constrained by what is now a global market in healthcare, which has become particularly significant in relation to cosmetic surgery. McHale asks: 'would the adolescent and parent denied surgery in the UK simply hop on Eurostar or EasyJet and receive treatment elsewhere?' [21]. Clearly it is highly likely that this question would be answered in the affirmative. Thus, in a global cosmetic surgery market, domestic regulation may be irrelevant to the growing tide of tourists who elect to seek cosmetic surgery services abroad. Domestic regulation could also drive an increase in such tourism if surgery becomes too expensive and difficult to access in the UK. This would evidently be an unwelcome development if it brought with it additional risks and harms, especially harms that would also be potentially costly to the NHS.
Cosmetic Surgery Tourism: Risks and Harms
In order to consider the regulatory challenges within a global cosmetic surgery market, it is important to explore the evidence and risks regarding cosmetic surgery tourism. Such tourism, which might be defined as the movement of patients from one location to another to undertake aesthetic procedures, is a significant and growing area of medical tourism [2]. The UK's annual International Passenger Survey shows that approximately 100,000 UK citizens go abroad each year for medical treatment, a figure projected to rise by about 20% per year. Evidence from other jurisdictions is also illuminating. For example, cosmetic surgery tourists make up about 85% of Australian medical tourists [2]. In the UK, figures are harder to find, but a survey conducted by Treatment Abroad found that, including dental treatment and obesity surgery (which may have cosmetic purposes), cosmetic procedures account for 60-70% of all medical tourism (42% excluding dental and obesity).
With respect to the top destinations for UK cosmetic surgery tourists, we see that Poland (40%), followed by Spain, India, Tunisia and the Czech Republic, are the most significant markets. 8 Some early studies have presented surgery tourists as highly mobile, wealthy elites [8]. Other recent empirical research has in fact found that these consumers are far from wealthy, most often being lower-middle-class and working-class women [14]. Confirming this, as mentioned earlier, it has been found in many studies that cost is the motivating factor influencing decisions to travel abroad (with the exception of China, where travel was to seek out better-quality surgery).
When a person elects to travel abroad in order to undergo cosmetic surgery, there are a number of reasons why the usual risks of surgery may be magnified, yet there is little clear evidence to suggest that cosmetic surgery abroad is necessarily dangerous. In order to explore the risks, we have identified the main concerns as follows: a primary concern is that it will often be more difficult to check that the clinic/hospital in a foreign destination is safe and reputable. In the UK the Care Quality Commission (CQC) regulates all such surgical providers and while there may be similar regulatory organisations and mechanisms in some countries this will vary significantly. Moreover, even where such regulatory agencies exist, the language barrier may make it difficult to access any relevant information.
Secondly and similarly, checking that a surgeon working outside the UK is appropriately qualified will often be more challenging. In the UK, although we await the register of certified cosmetic surgeons, it is at least possible to check a doctor's registration with the General Medical Council (GMC). Whether any similar system operates abroad will depend upon the jurisdiction and, once again, the customer's ability to access and understand any relevant information. Linked to these regulatory issues is the fact that UK providers and surgeons delivering private cosmetic surgery services must be appropriately insured in case of malpractice so that the patient may be compensated. Again, there may be a corresponding requirement in certain other countries but this is by no means universal and if something does go wrong, pursuing damages within a foreign legal system will invariably present greater challenges. While insurance may be available to safeguard such consumers and to enable them to pay legal costs should it be necessary, this will inflate the cost of seeking surgery abroad and thus some travellers will not obtain adequate insurance.
A further concern is that patients are usually required to pay for a package deal prior to travel. Informed consent, if it occurs at all, and the initial consultation may be superseded by agreeing to the treatment and, crucially, paying for both travel and treatment. Consequently, people who have paid for all or part of the treatment (and travel) and then travelled abroad to the destination will naturally feel reluctant to cancel the planned procedure in the event that the consultation and consent procedure cause them to reconsider the decision to have surgery. This also raises issues for aftercare in the case of complications. If the procedure has been paid for in advance, the package may not include aftercare or, if it does, such aftercare may not be accessible if the patient has flown back to their home country.
Even if the surgery goes well, the very concept of a 'cosmetic surgery holiday' carries dangers because traditional holiday activities, such as lying on a beach, swimming, sight-seeing and drinking alcohol, are potentially risky following surgery. Finally, and assuming one flies to the chosen destination, both surgery and air travel intensify the risk of deep vein thrombosis (DVT), and so flying home shortly after the surgery should be avoided. The British Association of Plastic, Reconstructive and Aesthetic Surgeons (BAPRAS) suggest that people should wait five to seven days after procedures such as breast augmentation or liposuction, and seven to ten days after facial or abdominal surgery, before flying home. 9 Considering that most people elect to have cosmetic surgery abroad to minimise cost, the additional expense of an extended stay abroad will often compel people to fly home before they should, thus increasing their chances of suffering DVT. We can therefore see that the combination of all these additional risks, together with the possible language barrier and cultural differences within a healthcare context, makes for a potentially dangerous experience.
But do these concerns translate into actual harm? And if so, is this costly to the NHS? Regarding cosmetic surgery, Jeevan and Armstrong conducted a survey for the British Association of Plastic, Reconstructive and Aesthetic Surgeons [15]. Of the 325 surgeons contacted, 203 responded, and of these, 76 (37%) had seen patients in the NHS with complications arising from overseas cosmetic surgery. In an audit of the pan-Thames region, 35 out of 65 consultants replied to requests about cosmetic surgery impacts [15]. Sixty per cent of those replying had seen complications, and the majority of these cases (66%) were emergencies that required inpatient admission.
It is important to note that although the very real risks outlined above present compelling reasons to urge caution, other evidence of actual harm (especially beyond that of cosmetic surgery in the UK) paints a different picture. A study by Hanefeld et al. [13] of the costs and benefits of medical tourism to the NHS presented more positive figures, with few admissions. Holliday's research found that only 17% of their participants had complications, and of those only 2% were serious [14]. The research showed that 97% of those participating in the study were happy with the outcome of the surgery and would recommend their surgeon to a friend [14]. The high levels of satisfaction are perhaps surprising considering that 17% of this group experienced complications following the surgery, with 9% requiring further treatment on their return home [14]. Clues to this response, however, can be elucidated from other information in the study, which suggests that those questioned did not undertake the surgery on a whim or with unrealistically optimistic expectations. Rather, they were ordinary people on modest incomes who took a long time to reach the decision to access cosmetic surgery abroad. This indicates that they were conscious of the inherent risks and, even where the road to recovery included complications, provided the ultimate result was satisfactory, the risks were regarded as worth taking. Yet there are some problems with drawing too many positive conclusions from this data. There may also be cases where the NHS does not cover complications, and thus patients do not present in the above studies. In addition, any levels of satisfaction clearly do not mitigate the costs to the NHS. Finally, even 9% requiring NHS treatment is a significant financial burden to a stretched health service in terms of patient beds, delayed procedures and public health risks, such as increased antimicrobial resistance stemming from the likely use of antibiotics with these patients and the potential for introducing hospital infections from their stay in another hospital.
We might also consider how fears regarding services abroad may be inflamed by the rhetoric of national professional bodies naturally motivated to protect the national market. Note that the study by Jeevan and Armstrong was conducted for the British Association of Plastic, Reconstructive and Aesthetic Surgeons, who arguably have a vested interest in persuading prospective patients to seek treatment within the UK. Gimlin has argued that narrative strategies that discredit foreign providers are employed by organisations seeking to invoke fear in the consumer [11]. Gimlin provides an account of her own very positive experience of accessing health services abroad (in Costa Rica), before exploring the way in which cosmetic surgeons' professional organisations seek to influence perceptions about services abroad [11]. Gimlin notes that while the warnings presented on the websites of these organisations are framed as 'educational', they also portray association members' services as 'better, safer and more public spirited than those of foreign practitioners' [11]. In her study she found that British organisations often drew on constructions of foreign providers as deceitful, unhygienic and primitive.
Notwithstanding the risks involved in seeking such surgery (even if these are exaggerated by domestic cosmetic surgeons), the other side of the argument is that the availability of affordable cosmetic surgery abroad enables more widespread access to services that have long been available to the wealthier few in society. Thus, the benefits of cosmetic surgery (improved appearance and self-image leading to the alleviation of psychological anxiety related to the physical body and greater happiness) are now available to more people. Accordingly, in a society where previously only the wealthy, or perhaps those sufficiently desperate and/or vulnerable and willing to endure severe financial hardship, were able to utilise the services of cosmetic surgeons, the phenomenon of affordable cosmetic surgery abroad might be seen to be egalitarian. If we recall arguments about cosmetic surgery 'normalising perfection', which may eventually result in an underclass of the cosmetically unenhanced, then it might be argued that cosmetic surgery tourism provides new hope to those previously unable to afford such surgery.
Yet the counter argument is that by making cosmetic surgery even more readily available to wider groups of women, we are perpetuating and heightening the harmful cultural normalisation of enhanced beauty. So while such surgical tourism might democratise access to this form of enhancement, we would argue that this is not a positive development. Resisting the inequity of the normalisation of perfection by making it more accessible will only create more pressure to conform. The only way is to resist such normalisation is arguably to restrict such surgery, to deter surgeons from bad practice and to reduce the demand for it, but is this is possible?
Regulating (Harmful) Cosmetic Surgery in a Global Context
How do we deal with the harms (both physical and cultural) that stem from domestic and foreign surgery? From the available evidence we might predict that tightening regulation, or even criminalising certain cosmetic surgery in the UK, would fuel demand for surgery abroad. The motivating factor for such travel is currently financial, but it is probably accurate to forecast that if stricter regulation meant that accessing surgery in the UK became more difficult, ease of access to certain foreign services would become important. How much harm will stem from this depends upon the nature and scope of regulatory approaches in other jurisdictions. Many countries have a regulatory approach that is equal, or superior, to that of the UK. France, as we have mentioned, now takes a much more precautionary approach than the UK, with recent legislation which has tightened up practices in order to safeguard patients [20]. Yet the most popular venues seem to have a more relaxed approach to regulation. With respect to Europe-wide regulation, the European Committee for Standardisation (CEN) has very recently produced a European Standard for aesthetic surgery services within the 33 member countries. 10 Speaking about the new standard, the chair of the group, Dr Johann Umschaden, an Austrian specialist surgeon, stated: 'Even if there are specific regulations in some EU Member States on aesthetic surgery, some of them are lacking in terms of hygienic, technical issues, or they don't include a risk analysis. Recent reports on incidents in the context of aesthetic surgery emphasize the importance of this comprehensive European Standard which was developed through an open, inclusive, multidisciplinary and evidence based process' [6].
The standard is, of course, voluntary and so does not compel providers to improve the quality and safety of their services. Prospective consumers can, however, select only those providers who sign up to the standard, thus improving their chances of having a safer experience. We may therefore view the CEN Standard as a step in the right direction towards a more uniform and better regulatory approach, at least in Europe, though only time will tell us whether the anticipated improvements occur.
However, the CEN Standard does not cover all of the countries that are significant cosmetic surgery destinations for UK women seeking surgery, and so perhaps a global regulatory response is necessary. Because the problems of normalisation and the unwelcome societal implications reach far beyond Europe, it seems that global regulation would serve an important function; however, there are obvious difficulties in constructing any global response. First, there are wide differences in attitudes towards acceptable levels of risk and gaps in commercial medical services. In addition, global regulation will compete against drives to encourage and open new markets and the free flow of services. Moreover, in such a global market, cost will ultimately be the most significant driver and, as we know, regulatory measures, however soft, are invariably expensive, which would render them unattractive to many jurisdictions.
In view of the clear evidence of the risks in cosmetic surgery and regulatory inadequacy both domestically and globally, are we asking too much of the law in this context? We see merit in Sharon Cowan's argument that we perhaps need to take a break from the legal in order to transform the social, or at least use the social and the legal together, to address the harms that cosmetic surgery arguably perpetuates [7]. Thus, rather than focusing solely on using a regulatory response to discourage harmful surgery, we should also be considering how we might challenge the culture that positions women in relation to normalised representations of beauty. But the legal is not redundant. Tighter regulation in the UK, including using the criminal law against surgeons who cause harm when they proceed with risky surgeries, will not prevent people from seeking services abroad. However, a domestic response would hopefully send out a symbolic message that such surgery is potentially dangerous and should therefore be treated with great caution. Here the legal and social could work together in the way Cowan describes. While the impact of the law may not be direct or quick to change social perceptions of cosmetic surgery, in the long run, knowing that cosmetic surgery in the UK is restrictively regulated may alert people to the dangers of the practice they are about to undertake. In light of the constraints on applying effective legislation on a global level, this domestic response could be the best hope for combating the harms of the global cosmetic surgery industry. Our approach would prioritise changes in domestic regulation but accepts that, as Riles argues, we must resist relying solely on law to mould the social world around us and must use it in combination with other strategies of transformation [23].
Conclusion
We have considered the complex and sometimes conflicting evidence regarding domestic cosmetic surgery and the risks of seeking services abroad. Our research suggests that the very real physical and cultural risks of cosmetic surgery, wherever it is performed, coupled with the normalisation of the surgically enhanced female, mean that stricter control via regulation is desirable. A global regulatory response is unlikely, especially for the UK during a time in which (post-Brexit) we are retreating from cross-border regulation. Thus we should pin our hopes on domestic regulation. Considering the precise forms of any such stricter regulation is beyond the scope of this paper, yet as a starting point we have suggested that the French approach, which regulates advertising, marketing and informed consent, and which necessitates a cooling-off period, would better safeguard consumers. Additionally, the assumption that all cosmetic surgery is medically justified (and so beyond the reach of the criminal law) simply because qualified doctors are performing it should be reviewed in order to adopt a more nuanced approach. This, we have argued, should include recourse to the criminal law when patients are harmed in certain circumstances.
Tightening regulation may not prevent people seeking such treatment elsewhere but by changing the law and thus sending a clear message about the harms of such surgery, we suggest that it would alert at least some potential consumers to the dangers and make some people reconsider the wisdom of cosmetic surgery. Questioning the medical ethics of performing harmful, highly invasive surgery for purely aesthetic purposes and subsequently tightening the regulatory approach would deny cosmetic surgery the credibility and legitimation it currently receives. While such changes in domestic regulation may drive an increase in people seeking out foreign providers it may also have certain other indirect effects that would deter such surgery and thwart normalisation by delegitimising it. This would, we argue, make women think twice about seeking cosmetic surgery at home or abroad.
"year": 2017,
"sha1": "167ba1f91c91e5e410a7f32f2da6907eb09e8b13",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10728-017-0339-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d7f5ac15672fe85f2d945134acfc9a18d29a5bf",
"s2fieldsofstudy": [
"Business",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Lactobacillus reuteri DSM 17938 in the prevention of antibiotic-associated diarrhoea in children: protocol of a randomised controlled trial
Introduction: Administration of some probiotics appears to reduce the risk of antibiotic-associated diarrhoea (AAD). The effects of probiotics are strain-specific; thus, the efficacy and safety of each probiotic strain should be established separately. We aim to assess the effects of Lactobacillus reuteri DSM 17938 administration for the prevention of diarrhoea and AAD in children.

Methods and analysis: A total of 250 children younger than 18 years treated with antibiotics will be enrolled in a double-blind, randomised, placebo-controlled trial in which they will additionally receive L. reuteri DSM 17938 at a dose of 10⁸ colony-forming units or an identically appearing placebo, orally, twice daily, for the entire duration of antibiotic treatment. The primary outcome measures will be the frequencies of diarrhoea and AAD. Diarrhoea will be defined according to 1 of 3 definitions: (1) ≥3 loose or watery stools per day for a minimum of 48 hours during antibiotic treatment; (2) ≥3 loose or watery stools per day for a minimum of 24 hours during antibiotic treatment; or (3) ≥2 loose or watery stools per day for a minimum of 24 hours during antibiotic treatment. AAD will be diagnosed in cases of diarrhoea, defined clinically as above, caused by Clostridium difficile, or for otherwise unexplained diarrhoea (ie, negative laboratory stool tests for infectious agents).

Ethics and dissemination: The Bioethics Committee approved the study protocol. The findings of this trial will be submitted to a peer-reviewed paediatric journal. Abstracts will be submitted to relevant national and international conferences.

Trial registration number: NCT02871908.
INTRODUCTION
Antibiotic-associated diarrhoea (AAD) is defined as unexplained diarrhoea that occurs in association with antibiotic therapy. 1 The prevalence of AAD varies depending on the criteria used to diagnose it; however, it is estimated at 5-30%. 2 3 AAD may occur just a few hours after antibiotic administration or up to several months after its discontinuation, 4 and it is associated with increased costs and hospital length of stay. 5 One of the potential mechanisms by which antibiotics cause diarrhoea is a direct effect of the antibiotics on the intestinal mucosa. As a consequence, alterations in the gut microbiota composition and overgrowth of pathogens, primarily Clostridium difficile, but also Staphylococcus, Candida, Enterobacteriaceae and Klebsiella, may occur. 6 However, often the mechanism(s) by which antibiotics cause diarrhoea remain unclear. The clinical presentation of AAD varies from mild diarrhoea to colitis or fulminant pseudomembranous colitis. 7 Preventive measures to reduce the risk of AAD include the use of probiotics. 8 Probiotics are defined as 'live microorganisms that, when administered in adequate amounts, confer a health benefit on the host'. 9 The rationale for the use of probiotics is based on the assumption that AAD results from the disruption of the commensal gut microbiota caused by antibiotic therapy. 10 Available evidence documents that the administration of some probiotics significantly reduces the risk of AAD. 8 Examples of probiotics with proven efficacy include Lactobacillus rhamnosus GG and Saccharomyces boulardii. 11 12 However, in line with the position of the Working Group on Probiotics of the European Society for Paediatric Gastroenterology, Hepatology and Nutrition, the effects of probiotics are strain-specific; thus, the efficacy and safety of each probiotic strain should be established separately. 8 L. reuteri DSM 17938 is a Gram-positive bacterium that naturally inhabits the gut of mammals. First described in the early 1980s, it has been safely used in infants and adults. 13 One randomised controlled trial (RCT) evaluated the efficacy of L. reuteri DSM 17938 at a dose of 10⁸ colony-forming units (CFU) for the prevention of AAD (defined as at least three loose or watery stools per day in a 48-hour period that occurred during or up to 21 days after cessation of antibiotic treatment) in 97 hospitalised children. 14 No significant difference in the risk of AAD was found between the placebo group and the group receiving L. reuteri DSM 17938. However, the overall frequency of diarrhoea was surprisingly low (one case in each study group). Thus, the efficacy of L. reuteri DSM 17938 for preventing AAD remains unclear.

Strengths and limitations of this study
▪ The study design (randomised controlled trial, RCT) is the gold standard research design to assess the effectiveness of healthcare interventions.
▪ A precise clinical question has been posed to fill a gap in knowledge as to whether administration of Lactobacillus reuteri DSM 17938 is effective in the prevention of antibiotic-associated diarrhoea (AAD) in children.
▪ The findings of this RCT, whether positive or negative, will contribute to the formulation of recommendations on the use of L. reuteri DSM 17938 during antibiotic treatment.
▪ The frequency of AAD may be lower than expected.
▪ There is no single, generally accepted definition of AAD.
Trial objectives and hypothesis
We aim to assess the effectiveness and safety of L. reuteri DSM 17938 administration for the prevention of diarrhoea and AAD in children. We hypothesise that children who receive L. reuteri DSM 17938 during antibiotic therapy will have a lower risk of AAD than children receiving a placebo.
METHODS AND ANALYSIS
The trial is registered at ClinicalTrials.gov (NCT02871908), and any important changes to the protocol will be recorded there.
Trial design
This study is designed as a randomised, double-blind, placebo-controlled trial with a 1:1 allocation ratio.
Settings and participants
The recruitment will take place in two hospitals in Poland (a paediatric academic hospital in Warsaw and a community hospital in Łuków). We aim to recruit hospitalised children in general paediatric wards. However, inclusion of outpatients and involvement of other recruiting wards and/or sites are under consideration, provided that the personnel are adequately trained and competent in conducting clinical trials. Recruitment is planned to start in December 2016 and should be completed within the following 2 years.
Eligibility criteria
Children eligible for the trial must fulfil all of the following criteria: age younger than 18 years; oral or intravenous antibiotic therapy which started within 24 hours of enrolment; signed informed consent.
Children will be excluded for the following reasons: pre-existing acute or chronic diarrhoea, history of chronic gastrointestinal disease (eg, inflammatory bowel disease, cystic fibrosis, coeliac disease, food allergy) or other severe chronic disease (eg, neoplastic diseases), immunodeficiency, use of probiotics within 2 weeks prior to enrolment, use of antibiotics within 4 weeks prior to enrolment, prematurity, and exclusive breast feeding.
Interventions
The intervention under investigation will be administration of L. reuteri DSM 17938. The placebo drops consist of a mixture of pharmaceutical-grade medium-chain triglycerides and sunflower oil together with pharmaceutical-grade silicon dioxide to give the product the correct rheological properties. The formulation is identical to the active product but without L. reuteri DSM 17938. In our trial, we chose to use a placebo as the comparator, as it is widely regarded as the gold standard for testing the efficacy of new treatments. 15 The study products (L. reuteri DSM 17938 and placebo) will be manufactured and supplied by BioGaia (Lund, Sweden) free of charge. The manufacturer will have no role in the conception, protocol development, design or conduct of the study, or in the analysis or interpretation of the data.
Study procedure
Caregivers will receive oral and written information regarding the study. Written informed consent will be obtained by the physicians involved in the study. Participants will be randomised after admission to the hospital and initiation of antibiotic treatment. Eligible patients will receive either L. reuteri DSM 17938 at a dose of 10⁸ CFU or placebo, orally, twice daily, in drops (ie, 2×5 drops), during the entire period of antibiotic treatment. Throughout the study period, healthcare providers and/or caregivers will record the number and consistency of stools in a standard stool diary. To record stool consistency, in children younger than 1 year, the Amsterdam Infant Stool Scale (AISS) will be used, and loose or watery stools will correspond to A-consistency. 16 In children older than 1 year, the Bristol Stool Form (BSF) scale will be used, and loose or watery stools will correspond to scores of 5-7. 17 In the case of missing or incomplete data, data from hospital charts will be obtained. At any time, caregivers will have the right to withdraw the participating child from the study; they will not be obliged to give reasons for this decision, and there will be no effect on subsequent physician and/or institutional medical care.
In the event of loose or watery stools, the presence of viral or bacterial pathogens in the stool samples will be investigated. The presence of viral pathogens will be checked by using a standard rapid, qualitative, chromatographic immunoassay that simultaneously detects rotaviruses, adenoviruses and noroviruses. Standard microbiological techniques will be used to isolate and identify bacterial pathogens (Salmonella spp, Shigella spp, Campylobacter spp and Yersinia spp). C. difficile toxins A and B will be identified by standard enzyme immunoassay.
Follow-up
All study participants will be followed up for the duration of the intervention (antibiotic treatment) and then for up to 1 week after the intervention.
Compliance
For inpatients who are discharged before the end of antibiotic therapy, and for outpatients, the caregivers will be asked to bring the remaining study product and diary to the study site at the end of the intervention period. Compliance with the study protocol will be assessed by direct interview with the patient and/or caregiver and by measuring the amount of the fluid left in the bottle, assuming that 1 mL equals 20 drops. Based on previously published trials, it seems appropriate to consider participants receiving <75% of the recommended doses as non-compliant.
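To make this rule concrete, a minimal Python sketch of the compliance calculation is given below; it assumes the protocol's conversion of 1 mL to 20 drops and the 2×5-drop daily dosing, and the function and variable names are illustrative only.

DROPS_PER_ML = 20  # per protocol: 1 mL equals 20 drops

def compliance_fraction(ml_dispensed, ml_returned, days_on_antibiotics,
                        drops_per_dose=5, doses_per_day=2):
    """Fraction of prescribed drops actually taken; <0.75 counts as non-compliant."""
    drops_taken = (ml_dispensed - ml_returned) * DROPS_PER_ML
    drops_prescribed = days_on_antibiotics * doses_per_day * drops_per_dose
    return drops_taken / drops_prescribed

# Hypothetical example: 10 mL dispensed, 7 mL returned, 10 days of antibiotics
print(compliance_fraction(10, 7, 10))  # 0.6, so non-compliant under the 75% cut-off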
Concomitant medications
If needed, discontinuation or modification of the treatment may be considered at the discretion of the physician.
Outcome measures
As in previous studies carried out in our setting, the primary outcome measures will be the frequencies of diarrhoea and AAD. 18 19 Three different definitions of diarrhoea will be used, as the definitions of diarrhoea/AAD in published studies vary. These will include diarrhoea defined as: (1) ≥3 loose or watery stools per day for a minimum of 48 hours during antibiotic treatment; (2) ≥3 loose or watery stools per day for a minimum of 24 hours during antibiotic treatment; and (3) ≥2 loose or watery stools per day for a minimum of 24 hours during antibiotic treatment. AAD will be diagnosed in cases of diarrhoea, defined clinically as above, caused by C. difficile, or in cases of otherwise unexplained diarrhoea (ie, negative laboratory stool tests for infectious agents). In all cases, loose or watery stools will correspond to scores of 5-7 on the BSF scale or A-consistency on the AISS.
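The three definitions differ only in the daily stool count and the required duration, so they can be evaluated mechanically against a stool diary. The following Python sketch illustrates this; it assumes the diary is summarised as a list of daily counts of loose or watery stools during antibiotic treatment, and it interprets 'a minimum of 48 hours' as two consecutive diary days, which is an assumption since the protocol does not spell out the mapping from hours to diary days.

def meets_definition(daily_loose_stools, min_stools, min_days):
    """True if >= min_stools loose/watery stools occur on min_days consecutive days."""
    streak = 0
    for count in daily_loose_stools:
        streak = streak + 1 if count >= min_stools else 0
        if streak >= min_days:
            return True
    return False

def classify_diarrhoea(daily_loose_stools):
    return {
        "definition_1 (>=3/day, 48 h)": meets_definition(daily_loose_stools, 3, 2),
        "definition_2 (>=3/day, 24 h)": meets_definition(daily_loose_stools, 3, 1),
        "definition_3 (>=2/day, 24 h)": meets_definition(daily_loose_stools, 2, 1),
    }

# Example diary over 7 days of antibiotic treatment (counts are invented)
print(classify_diarrhoea([0, 2, 3, 3, 1, 0, 0]))  # all three definitions met here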
The secondary outcome measures will be as follows: infectious diarrhoea (rotavirus, adenovirus, norovirus, Salmonella, Shigella, Campylobacter, Yersinia and C. difficile), the need for discontinuation of the antibiotic treatment, the need for hospitalisation to manage the diarrhoea (in outpatients), the need for intravenous rehydration in any of the study groups, and adverse events.
Participant timeline
The time schedule for enrolment, interventions, assessments and visits for the participants is presented in table 1.
Sample size
The primary outcome of the study is the frequency of diarrhoea. Based on the data from studies previously conducted at Warsaw Medical University, 18 we assumed the frequency of AAD to be 23%. To detect a 15% difference between groups with a power of 80% and a significance level of 5%, and allowing for 20% of patients being lost to follow-up, we calculated that a total of 250 children will be needed. However, the frequency of AAD in earlier trials varied, depending on the definition of AAD used in the study. [19][20][21] Table 2 summarises sample size calculations depending on the definition used.
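As an illustration of the calculation, the following Python sketch applies the classic pooled two-proportion normal approximation; the protocol does not state the exact formula used, so the result is only in the same range as the planned 250 (continuity corrections and rounding conventions push the figure higher).

from scipy.stats import norm

p1, p2 = 0.23, 0.08              # assumed AAD frequency vs a 15% absolute reduction
alpha, power, dropout = 0.05, 0.80, 0.20

z_a = norm.ppf(1 - alpha / 2)    # about 1.96 for a two-sided 5% test
z_b = norm.ppf(power)            # about 0.84 for 80% power
p_bar = (p1 + p2) / 2            # pooled proportion under the null hypothesis

n_per_group = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
               / (p1 - p2) ** 2)                 # about 90 children per group

n_total = 2 * n_per_group / (1 - dropout)        # inflate for 20% loss to follow-up
print(round(n_total))                            # about 225, in the range of the 250 planned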
Recruitment
The recruitment rates will be monitored every month. In the case of poor or slow recruitment, the reasons at various levels, such as the patient, the recruiting clinician, the centre and the trial design, will be evaluated.
Sequence generation
A computer-generated randomisation list prepared by a person unrelated to the trial will be used to allocate participants to the study groups in variable blocks of eight. Consecutive randomisation numbers will be given to participants at enrolment. This procedure will be performed by a physician not involved in the study. The study products will be labelled with consecutive numbers according to the randomisation list.
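For illustration, the Python sketch below generates such a list assuming randomly permuted blocks of eight with 1:1 allocation; the seed and group labels are hypothetical, and in practice the list would be produced and held by a person unrelated to the trial.

import random

def make_allocation_list(n_participants=250, block_size=8, seed=17938):
    """1:1 permuted-block allocation list; block size of eight per the protocol."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = ["L. reuteri"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)  # balance is guaranteed within every block
        allocations.extend(block)
    return allocations[:n_participants]

print(make_allocation_list()[:8])  # first block: a shuffled mix of four and four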
Allocation concealment
An independent person will dispense the numbered study products according to a computer-generated randomisation list. To ensure allocation concealment, allocation will be performed after obtaining informed consent and registering the basic demographic data in the case report form (CRF).
Blinding
The active product and placebo will be packaged in identical bottles. Contents will look and taste the same. Researchers, caregivers, outcome assessors and a person responsible for the statistical analysis will be blinded to the intervention until the completion of the study. The information on intervention assignments will be stored in a sealed envelope in a safe in the administrative part of the department.
Data collection and management
All study participants will be assigned a study identification number. CRFs will be completed on paper forms. Data will then be entered and stored in a password-protected electronic database. The original paper copies of CRFs and all study data will be stored in a locker within the study site, accessible to the involved researchers only.
Statistical analysis
All analyses will be conducted on an intention-to-treat (ITT) basis, including all participants in the groups to which they are randomised for whom outcomes are available (including dropouts and withdrawals). Additionally, a per-protocol analysis will be performed, including all participants from the ITT analysis who complete the study without major protocol violations.
Descriptive statistics will be used to summarise baseline characteristics. The Student's t-test will be used to compare mean values of continuous variables approximating a normal distribution. For non-normally distributed variables, the Mann-Whitney U test will be used. The χ² test or Fisher's exact test will be used, as appropriate, to compare percentages. For continuous outcomes, differences in means or differences in medians (depending on the distribution of the data), and for dichotomous outcomes, the relative risk (RR) and number needed to treat, all with a 95% CI, will be calculated. The difference between study groups will be considered significant when the p value is <0.05, when the 95% CI for RR does not include 1.0 or when the 95% CI for mean difference does not include 0. All statistical tests will be two-tailed and performed at the 5% level of significance.
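As a sketch of the dichotomous-outcome analysis described above, the Python function below computes the relative risk with a 95% CI using the standard log-RR normal approximation; the counts in the example are invented for illustration.

from math import exp, log, sqrt

def relative_risk(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Relative risk and 95% CI via the log-RR (Katz) normal approximation."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se_log_rr = sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    ci = (exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr))
    return rr, ci

# Hypothetical example: 12/100 AAD cases on probiotic vs 23/100 on placebo
rr, (lo, hi) = relative_risk(12, 100, 23, 100)
print(rr, lo, hi)  # RR about 0.52, 95% CI about (0.27, 0.99); excluding 1.0 indicates significance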
Monitoring
The study will be carried out in accordance with the approved protocol. L. reuteri DSM 17938 is being safely used worldwide for a number of indications, and the Food and Drug Administration has granted it Generally Recognized as Safe (GRAS) status. 22 Still, an independent Data and Safety Monitoring Board (DSMB) will be set up prior to the start of the study. The DSMB will review data after recruitment of 25%, 50% and 75% of participants to review the study progress and all adverse events.
Harms
Although the occurrence of adverse events as a result of participation in the current trial is not expected, data on adverse events will be collected. All serious adverse events will be immediately reported to the project leader, who will be responsible for notifying the Ethics Committee, all participating investigators and the manufacturer of the study products.
Auditing
The Ethics Committee did not require auditing for this study.
ETHICS AND DISSEMINATION
Verbal and written information regarding informed consent will be presented to the caregivers. Any modifications to the protocol that may affect the conduct of the study will be presented to the Committee. The full protocol will be available freely due to open access publication. The findings of this RCT will be submitted to a peer-reviewed journal. Abstracts will be submitted to relevant national and international conferences. The standards from the guidelines of the Consolidated Standards of Reporting Trials (CONSORT) will be followed for this RCT.
Contributors HS conceptualised the study. MK developed the first draft of the manuscript. Both authors contributed to the development of the study protocol and approved the final draft of the manuscript. HS is the guarantor.
Funding This trial will be funded by the Medical University of Warsaw.
Competing interests HS served as a speaker for BioGaia, the manufacturer of Lactobacillus reuteri DSM 17938.
Ethics approval The Ethics Committee of the Medical University of Warsaw approved the study before recruitment started.
Provenance and peer review Not commissioned; externally peer reviewed.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
"year": 2017,
"sha1": "ee8e8075580785dea330fcf0903488dbc103c386",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1136/bmjopen-2016-013928",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee8e8075580785dea330fcf0903488dbc103c386",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
What is the added value of incorporating pleasure in sexual health interventions? A systematic review and meta-analysis
Despite billions of dollars invested into Sexual and Reproductive Health and Rights (SRHR) efforts, the effect of incorporating sexual pleasure, a key driver of why people have sex, in sexual health interventions is currently unclear. We carried out a systematic review and meta-analysis following PRISMA guidelines across 7 databases for relevant articles published between 1 January 2005 and 1 June 2020. We included 33 unique interventions in our systematic review. Eight interventions reporting condom use outcomes were meta-analyzed together with a random-effects model. Quality appraisal was carried out through the Cochrane Collaboration's RoB2 tool. This study was pre-registered on Prospero (ID: CRD42020201822). We identified 33 unique interventions (18886 participants at baseline) that incorporate pleasure. All included interventions targeted HIV/STI risk reduction; none occurred in the context of pregnancy prevention or family planning. We find that the majority of interventions targeted populations that authors classified as high-risk. We were able to meta-analyze 8 studies (6634 participants at baseline) reporting condom use as an outcome and found an overall moderate, positive, and significant effect of Cohen's d = 0·37 (95% CI 0·20–0·54, p < 0·001; I² = 48%; τ² = 0·043, p = 0·06). Incorporating sexual pleasure within SRHR interventions can improve sexual health outcomes. Our meta-analysis provides evidence about the positive impact of pleasure-incorporating interventions on condom use, which has direct implications for reductions in HIV and STIs. Qualitatively, we find evidence that pleasure can have positive effects across different informational and knowledge-based attitudes as well. Future work is needed to further elucidate the impacts of pleasure within SRHR and across different outcomes and populations. Taking all the available evidence into account, we recommend that agencies responsible for sexual and reproductive health consider incorporating sexual pleasure considerations within their programming.
Background
To date, despite the billions of dollars in domestic and donor funding spent each year to advance SRHR services and programming, and considerable advances in global policy commitments (e.g. through the International Conferences on Population and Development [1], the Commission on the Status of Women, the Millennium Development Goals and subsequently the Sustainable Development Goals), sexual pleasure has been insufficiently addressed. The Guttmacher-Lancet Commission on SRHR in 2018 acknowledged that certain aspects of sexual health, including sexual pleasure, were "largely absent from organised SRHR programmes and their links to reproductive health . . . understudied".
An earlier evidence synthesis in 2006, focusing specifically on condom eroticization, found that such interventions could lead to improved risk-preventative attitudes. In a meta-analysis of five studies reporting unspecified condom use, condom-eroticizing interventions had a positive effect (d = 0·25, 95% CI 0·09–0·42).
Present study
To our knowledge, this is the first systematic review of interventions incorporating pleasure beyond condom eroticization. Our systematic review and meta-analysis provide evidence for the effectiveness of different interventions incorporating pleasure for a variety of outcomes related to behavior, attitudes, and knowledge in the context of sexual health. Our meta-analysis focuses on condom use as an outcome exclusively and aggregates eight interventions whose control group is either standard care or a matched control group isolating the role of pleasure. Thus, we are able to assess the value added of incorporating pleasure above and beyond standard care interventions. We further provide a quality assessment of the existing evidence base and report on how interventions vary in the degree to which they incorporate pleasure.
Introduction
Decades of global commitments, investments, research, advocacy, and innovation (such as the International Conference on Population and Development in Cairo [1], the Fourth World Conference on Women in Beijing [2], the Millennium Development Goals, and most recently, the Sustainable Development Goals [3] and the Generation Equality Forum [4]) have defined modern-day sexual and reproductive health and rights (SRHR), including HIV, programming and services. Despite this, sexual pleasure, a key reason why people have sex, remains insufficiently addressed in most areas of the world [5][6][7][8]. This gap has also been acknowledged by the Guttmacher-Lancet Commission on SRHR in 2018 [9], where aspects of sexual health, including sexual pleasure, were considered "largely absent from organised SRHR programmes and their links to reproductive health . . . understudied". The Commission's recent conceptualization of sexual health also included sexual pleasure as a core component [9]. However, education and programming around sexual and reproductive health (including HIV) often defaults to ill-health prevention. For example, programs highlight the dangers and risks of unprotected sex or so-called 'sexually-risky behavior', messages warn of the disease burden from HIV/STIs, and both reflect an assumption that sexual decision-making is driven by rational health considerations. By contrast, concerns about sexual pleasure or perceived reductions in libido are often cited as reasons for not using condoms [10] or discontinuing contraception [11].
Assessing the potential value added from incorporating pleasure within SRHR is long overdue but has possibly never been more relevant in the public health discourse. In the months following the start of the ongoing COVID-19 pandemic, many national and subnational health authorities issued guidance on COVID-safe sexual activity. Recommendations addressed multiple types of partners (Netherlands, Ministry of Health, Welfare and Sports [12], Ireland, Sexual Health and Crisis Pregnancy Programme [13]); use of sex toys and cybersex activity (Colombia, Ministry of Health and Social Protection [14]); masturbation [13]; consensual cybersex activity [14]; and non-reproductive sexual practices with a partner that were more (or less) COVID-safe (Spain, Ministry of Health [15]; Argentina, Ministry of Health [16]). In the face of an international public health crisis, these pandemic-issued recommendations by necessity addressed the reality and common questions around pandemic-safe sexual activity, and in doing so acknowledged links between sexual activity and intimate connection, sexual desire, and sexual pleasure.
Exploring the potential of sexual pleasure considerations to make SRHR programming more effective is also particularly timely, considering that fewer than ten years remain to achieve the 2030 target set by the United Nations' Sustainable Development Goals (specifically SDGs 3·7 and 5·6). At present, the most comprehensive evidence available regarding the role of pleasure within SRHR interventions comes from an earlier evidence synthesis [17]. While this review was narrow in focus and assessed the impact of condom eroticization exclusively, it provided preliminary evidence that such forms of pleasure incorporation could have positive effects on certain health-related outcomes. In particular, in a meta-analysis of five studies reporting unspecified condom use, condom-eroticizing interventions had a significant moderate positive effect (d = 0·25, 95% CI 0·09–0·42). Overall, we consider that at present the effects of incorporating pleasure, including forms of condom eroticization, within SRHR programming are understudied, inconclusive, and warrant further elucidation. As such, our systematic review aims to evaluate the potential impact of incorporating pleasure components within SRHR programming.
Search strategy and selection criteria
We followed PRISMA guidelines [18] in carrying out this systematic review and meta-analysis. We searched 7 databases (PubMed, CINAHL, Sociological Abstracts, PsycINFO, EMBASE, Global Health, and Child Development and Adolescent Studies) for relevant articles published between 1 January 2005 and 1 June 2020. As a secondary search strategy, we contacted known researchers in the field via email, searched Google Scholar and carried out reference tracking. S1 Appendix reports our search strategy. This was a hybrid approach incorporating both subject headings and keywords. We did not explicitly search for sexual pleasure related terms, as this would potentially exclude interventions that have not been indexed appropriately or do not report sexual pleasure in their abstract, title or keywords. Instead, we opted for an expansive approach where we would first identify sexual health interventions with appropriate design through abstract and then full-text screening. Interventions incorporating pleasure would be a subset of all sexual health interventions and would thus be captured by our search. We added an additional full-text screening step to specifically identify interventions incorporating pleasure, ensuring robustness and high agreement in the process (see Fig 1). We included SRHR interventions that had test arms with at least one component that incorporated considerations of sexual pleasure. We viewed sexual pleasure in line with the definitions of the World Association for Sexual Health [19], that "sexual pleasure is the physical and/or psychological satisfaction and enjoyment derived from shared or solitary erotic experiences, including thoughts, fantasies, dreams, emotions, and feelings", and the Global Advisory Board for Sexual Health and Wellbeing [20], according to which "sexual pleasure is the physical and/or psychological satisfaction and enjoyment derived from solitary or shared erotic experiences, including thoughts, dreams and autoeroticism".
We included randomized controlled trials and quasi-experimental studies with both pre- and post-intervention measures and a control group published in peer-reviewed journals. For comparison arms, we accepted either no treatment or a non-pleasure-incorporating intervention (in SRHR or another area of health) for our systematic review. In order to quantitatively
assess the value added from incorporating pleasure, we further restricted the types of eligible comparison arms for the meta-analysis. For this, we accepted only standard SRHR control groups or matched control SRHR groups, as long as they did not include any component(s) with pleasure. For a control group to be considered matched, it had to provide an SRHR intervention similar in duration and implementation form to the test arm. We considered various outcomes including behavioral measures (use of condoms, prevention services, risky behavior, etc.), attitudes and knowledge (about contraception use, STI/HIV incidence, etc.). We did not exclude on the grounds of participant characteristics or language of publication.
The search strategy was carried out by MZ. MZ screened all abstracts and full texts. AP, AS, GL, LG served as independent raters. LG independently screened a random sample of 25% of abstracts and full texts. AP, AS, GL all independently screened a random sample of 10% of abstracts and full texts. Inter-rater reliability was consistently high between MZ and all raters across both stages (ranging from Cohen's kappa κ = 0·74 to κ = 0·85, all p < 0·001).
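For readers unfamiliar with the statistic, Cohen's kappa corrects the raw agreement rate for the agreement expected by chance. A minimal Python sketch, using invented include/exclude decisions for two raters, is:

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    categories = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)                           # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical screening decisions (1 = include, 0 = exclude)
a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]
b = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
print(cohens_kappa(a, b))  # about 0.58 here; the review reports 0.74-0.85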
Data analysis
We extracted data from the publications (and where needed, intervention manuals and curricula) via a customized spreadsheet recording the pre-registered study characteristics and outcomes of interest. Where data was missing, we contacted authors. Data extraction and double-checking was carried out independently by three reviewers (MZ, AS, LG).
We anticipated considerable heterogeneity amongst studies and planned to carry out an inverse-variance random-effects meta-analysis (R, version 3.6.1) on the quantitative data for each outcome type for which we could extract an effect size estimate (Cohen's d) and 95% confidence interval from a minimum of 3 studies. Study variability was assessed via the I² estimate of heterogeneity. Two raters (MZ, AP) independently used the Cochrane Collaboration's RoB2 tool [21] to examine possible sources of risk of bias. We further narratively synthesized all included studies, with particular consideration for differences in populations, interventions, and ways of incorporating pleasure.
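Although the analysis itself was run in R, the inverse-variance random-effects logic is compact enough to sketch. The Python function below implements DerSimonian-Laird pooling, a common default for this design, though the paper does not name the specific between-study variance estimator; the example inputs are invented.

import numpy as np

def random_effects_pool(d, se, z=1.96):
    """DerSimonian-Laird pooling of per-study Cohen's d values with SEs."""
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = 1.0 / se**2                                  # fixed-effect inverse-variance weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)               # Cochran's Q
    df = len(d) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    d_pool = np.sum(w_star * d) / np.sum(w_star)
    se_pool = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return d_pool, (d_pool - z * se_pool, d_pool + z * se_pool), tau2, i2

# Hypothetical per-study effects and standard errors
print(random_effects_pool([0.2, 0.5, 0.4], [0.10, 0.15, 0.12]))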
Results
We screened 7825 abstracts for suitable outcomes and appropriate design and retained 1208 articles for full-text screening. After full-text screening, we retained 41 articles and held an additional consensus meeting where all team members had to agree on whether interventions incorporated pleasure or not. Our supplementary search provided three additional suitable interventions. We found a total of 37 relevant papers (approximately 0·5% of our total screened abstracts), of which 33 reported unique interventions (18886 participants at baseline; see flowchart in Fig 1). Papers reporting duplicate interventions were retained for the purposes of examining the different ways pleasure can be described (see appendix). However, we discarded four duplicate interventions for the systematic review and meta-analysis, retaining only the paper reporting the highest total sample size. S1 Table reports study characteristics. All interventions were implemented in a risk-reduction context and targeted HIV/STI-related outcomes. Twenty-five interventions were delivered in the USA, with only eight delivered elsewhere: two in South Africa [22,23], and one each in Brazil [24], Spain [25], the United Kingdom [26], Singapore [27], Nigeria [28], and Mexico [29].
We were able to meta-analyze eight studies (6634 participants at baseline) that examined condom use as an outcome and had standard care or a matched control group that would allow us to isolate the role of pleasure. Our random effects model (see forest plot in Fig 2) indicated that interventions incorporating pleasure significantly improved condom use compared to non-pleasure incorporating interventions (Cohen's d = 0·37, 95% CI 0·20–0·54, p < 0·001, I² = 48%, τ² = 0·043, p = 0·06).
On the basis of the Cochrane Collaboration's RoB2 tool, we found that the methodological quality of the studies varied (see appendix). The most common source of bias was due to outcome measurement (12 out of 33 interventions with "Some concerns"), followed by bias from the randomization process (4 interventions with "High risk" and 7 with "Some concerns") and bias from deviations from the intended intervention (11 interventions with "Some concerns"). Nevertheless, the most common assessment for each type of bias was "low risk". Examination of the funnel plot (Figs 3 and 4) suggests that some asymmetry is present, and Egger's test supports this (p < 0·05). Duval and Tweedie's trim-and-fill procedure imputes four studies, and pooling the effect sizes after this procedure yields an overall effect that remains positive, moderate, and significant (Cohen's d = 0·22, 95% CI 0·04–0·40, p < 0·05).
We narratively synthesized all studies focusing on differences in populations, types of interventions, and ways of incorporating pleasure. Eight interventions (2633 participants at baseline) exclusively targeted men who have sex with men (MSM). All pleasure-incorporating interventions with MSM were implemented in the USA, and a significant share of them focused on young MSM and/or Black or ethnic minority MSM, living with or without HIV. In general, interventions with MSM achieved successful randomization and had high participant retention rates over multiple follow-up time points. Pleasure was not a focus in these interventions but rather tended to be discussed in the context of behavioral skills, such as how condom use could become fun and pleasurable, often considered within a negative and risk-centered context including how alcohol and drugs could affect condom use [30]. A notable exception was the Focus on Future intervention [31], whose primary purpose was to promote condom use in order to enhance sexual pleasure. By and large, these interventions were able to improve some self-reported behavioral outcomes and increase condom use for anal sex [30][31][32].
Within the stream of interventions targeting MSM, four online RCT interventions (1298 participants at baseline) targeted HIV prevention. Benefits of online approaches can include the non-judgmental and non-clinical context, as well as the ability to view and interact with content at one's leisure. These interventions, MINTS-II [33], HINTS [34], Guy2Guy [35], myDEx [36], provided informational and motivational content and typically discussed pleasure in the context of behavioral skills (e.g., lubrication, safer sex practices). Participants in myDEx's interventional arm were less likely to have engaged in condomless receptive anal sex than controls, and intervention participants in MINTS-II also reported a marginally significant decrease in the number of men with whom they engaged in unprotected anal intercourse, while Guy2Guy did not find behavioral differences between the arms. For sexually inexperienced youth in the Guy2Guy trial, motivation increased compared to controls which may highlight the importance of online interventions particularly for younger and sexually inexperienced youth.
Nine interventions (7473 participants at baseline, delivered in the USA) focused on racial and ethnic minorities rather than sexual partner preferences or sexual behaviors. One intervention was provided to adolescent women from any minority ethnic background who have suffered abuse; the remaining eight interventions focused on African American participants. Within this general categorization, participants included men newly diagnosed with STIs [37], young males attending STI clinics [38], young people and adolescents [39][40][41], heterosexual men and women [42], and women [43], including women in primary care settings [44]. Beyond the considerable heterogeneity in participant population, interventions differed significantly in terms of duration (e.g., brief or single interventions [37,38,40,43] vs multi-session [30,45,46]) and context (e.g., clinical [37,44] or not [40][41][42][43]). Pleasure was incorporated in a range of ways, including addressing negative beliefs about condom use and pleasure, discussing ways to make condom use more pleasurable, and eroticization of safe sex. The noted methodological heterogeneity and variability in target population characteristics pose a challenge for extrapolating an overall trend or conclusions about the effectiveness of this stream of research.
A further five RCT interventions [29,44,[47][48][49] (1988 participants at baseline) targeted people who use or have used drugs. These participants often faced multidimensional and intersectional risks due to their ethnicity, work (sex workers [29]), health status (living with HIV
[50]), socioeconomic status [49], or were otherwise classified as high risk and already enrolled in treatment [47]. Pleasure was often discussed in sessions regarding negotiating harm reduction and how safer sexual practices can be eroticized [29,47,50] but could also include a more empowering and rights-based approach such as assessing personal sexual rights, positive sexual choices [50], increasing confidence and pride in one's body, affirming one's right to pleasure [48]. Despite working with high-risk groups who often faced additional stigma and adversities, these interventions were effective in increasing informational outcomes such as knowledge about HIV/STI and self-efficacy. Behavioral outcomes were not assessed in all interventions [48] and were likely affected by response bias [47], although notably amongst participants in a pleasure-inclusive interactive version of one intervention [29] there was a 50% decrease in HIV/STI incidence compared to a control didactic sex intervention. Pleasure-inclusive interventions can have positive effects for high-risk populations, but further empowerment and appropriate interactive interventional programming may be required to adequately support individuals at risk.
Finally, several programs targeted young people and adolescents in schools or educational contexts. Notably, two large-scale quasi-experimental evaluations (3393 participants at baseline) of sexual education programs in Spain [25] and Brazil [24] presented sexuality as a positive human value and source of pleasure and included various forms of empowering and interactive activities and were both able to increase condom use in intervention schools compared to controls. In another school-based RCT [26], teenagers were provided with a condom use promotion leaflet. This brief 20-minute intervention found positive effects in a variety of cognitive domains, such as self-reported attitudes towards condom use, self-efficacy, and intention to use condoms. Changes in behavioral outcomes were not observed at a one-month follow-up, likely due to the short and informational nature of the intervention. Positive intervention effects on cognitive and some behavioral domains were also found in brief educational curricula [40,41]. In interventions targeting adolescents, populations considered at higher risk by the authors were still prevalent, including ethnic minority girls who have suffered sexual abuse [45], adolescent girls attending Planned Parenthood [51], and incarcerated youth [52]. While this latter stream of interventions generally tended to have positive effects on informational, motivation, and behavioral outcomes (such as fewer infections), they were also marked by attrition problems. Limited evidence for university students is available from three interventions [22,53,54]. Condom promotion video materials that discussed how condom use could be erotic [53,54] improved condom use in one intervention [55] and improved self-efficacy but had no effect on consistent condom use in another [53]. Further, an eight-session module targeting HIV risk reduction had significant positive effects on frequency of condom use, self-efficacy, and HIV-related knowledge [22].
Discussion
We provide qualitative and quantitative evidence that SRHR interventions incorporating pleasure can have positive effects on a variety of behavioral and informational outcomes. Our meta-analysis indicates that interventions incorporating pleasure have a moderate, positive, and significant effect on condom use (Cohen's d = 0·37, 95% CI 0·20–0·54, p < 0·001) above and beyond standard care. Affirming human sexuality and the reasons why people have sex could be an important way to ensure sexual health interventions are effective.
Notably, we found large heterogeneity in terms of intervention designs, participant risk profile, as well as time and emphasis placed on pleasure. Hence, it is difficult to extrapolate a conclusive causal mechanism through which pleasure can affect the different health-related outcomes, pertaining to domains such as behavior, attitudes, knowledge, motivation. Relatedly, due to heterogeneity, it is also difficult to isolate enough comparable studies to allow a robust estimation of an effect size. While we have managed to do so for condom use, future work is required in order to better understand the impacts of adding pleasure components to SRHR interventions on further outcomes.
In our narrative synthesis, we find some qualitative evidence that significant positive effects relate to how pleasure was positioned in the interventions. Specifically, the majority of included interventions discuss pleasure in the context of behavioral skills (e.g., making condom use fun or sexy, using lubrication to enhance sexual pleasure), and here, it seems, behavioral outcome changes were most prevalent. Notably, one intervention [26] provided informational content only and found improvements in cognitive and informational domains. Importantly, most research captured under our systematic review focused on HIV and STI reduction. We find an evidence gap for interventions targeting pregnancy prevention and contraception that warrants future attention.
A further consideration in better understanding the effects of pleasure pertains to examining participant populations. We found two educational programs delivered in schools to general population students who were not considered at risk but, due to their age, could be considered vulnerable. All remaining programming for adolescents was, by and large, targeted towards those classified as 'at risk' or having a high incidence of STIs. Currently, there is an evidence gap for the impacts of interventions incorporating pleasure at the level of the general population, including heterosexual individuals and couples, with a particularly pronounced gap for women of reproductive age as well as older women. Overall, we find that implementing pleasure within SRHR interventions occurs in a largely risk-reduction context with a strong targeting of groups deemed 'high-risk'. We provide further detail on the spectrum of pleasure, examining considerations such as the overall aims of interventions, how much time is dedicated to pleasure, and in what contexts, in S2 Appendix.
In terms of limitations, it is possible that we may have missed interventions incorporating pleasure because their write-ups did not indicate pleasure was a part of the intervention's components or delivery. The way interventions are described is often guided by considerations linked to funding, publication, cultural and other biases. We acknowledge that such contextual and financial factors may also limit which interventions can be implemented in the first place. We discuss these issues, as they relate to challenges for carrying out a systematic review and understanding effectiveness, elsewhere [55]. In so far as intervention descriptions may affect inclusion, we have tried to mitigate the potential impact of this as much as possible by contacting authors and reading intervention manuals. Equally, we anticipate that some interventions might have been delivered in ways that included pleasure beyond what can be captured in intervention manuals. As a further limitation, we are aware of work involving pleasure not published in peer-reviewed journals, as well as interventions whose designs are not eligible for inclusion per our criteria (e.g., cross-sectional or non-controlled work). There are already advocacy and civil society-led efforts to document and connect organizations leading in this type of programming [56]. For this review, we instead chose to prioritize a conservative approach with strict cut-offs in study design in order to maximize the chance of isolating the value added of incorporating pleasure components as compared with SRHR interventions which do not include pleasure considerations. Notably, in this review, we also examined pleasure through a focused definition, excluding other intervention framings, such as those grounded in gender-transformative, or personal and/or economic empowerment approaches. This should not be perceived as non-recognition of their importance: good SRHR should include the potential for respectful, equitable, consensual, as well as pleasurable interactions.
Based on the current findings, we are able to point to areas where future work is needed. As a first step, clear description of work that incorporates pleasure, such as through key words and direct in-text descriptions, could facilitate accurate identification and later systematic assessments. Precise definitions of how much emphasis and time have been allocated to pleasure components will additionally allow for a consideration of dose-response effects. Moreover, addressing the existing gaps, both in populations (e.g., heterosexual individuals, women) and in types of programming (i.e., family planning, pregnancy prevention), is another direct implication. Importantly, considering that the majority of identified outcomes are self-reported, future research could also provide value by considering different measures of impact, including biological markers such as STI incidence.
The current review is also timely in the context of the United Nations' Sustainable Development Goals, particularly goals 3·7 and 5·6, which target universal access to sexual and reproductive health and reproductive rights. Continued high levels of mortality and morbidity attributed to SRH-related outcomes indicate that revisiting how to most effectively design SRH programs and education is warranted. This may involve a fundamental rethink of how programs are oriented. Our review indicates that programs and education that capture a full working understanding of sexual health, one that acknowledges that sexual experiences can be 'pleasurable', have been demonstrated to improve not only knowledge and attitudes around sexual health, but also safer sex practices. With fewer than ten years to go, and many countries not on track to meet these SRHR goals [57], interventions that incorporate pleasure may prove an important strategy to ensure not only that positive outcomes are obtained, but that they go beyond the effects normally anticipated by standard care programming. Continuing to avoid pleasure-inclusive sexual health and education risks further misdirecting or inefficiently utilizing the much-needed resources to reach the SDGs.
Supporting information S1
"year": 2022,
"sha1": "93b37aa2d5f2ff28381e47f2c415fe887ffb395c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0261034&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "910a1e3f42eea6f75e7e75000e79dade776bea53",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Effect of Brand Activation on Brand Awareness (Survey of Visitors to Teras Komuji Coffee Shop Bandung)
This study aimed to determine the effect of brand activation on brand awareness at Teras Komuji. In this study, the independent variable (X) is brand activation, consisting of several sub-dimensions: identity, employee, product and service, and communication. The dependent variable (Y) is brand awareness. The type of research used is descriptive and verificative, with an explanatory survey method and a cross-sectional study approach. The study used primary data from a sample of 220 respondents, namely customers who had visited and had activities at Teras Komuji. The data analysis technique used was multiple regression, carried out with the assistance of a statistical software package. The results showed that, together and separately, the brand activation variables significantly affected brand awareness at Teras Komuji.
INTRODUCTION
Marketing activities in the new normal era of Covid-19 require all resources to adapt to new circumstances and find solutions to the problems that arise. It is widely recognized that the Covid-19 outbreak is an urgent concern for the whole world; WHO (World Health Organization) declared it a PHEIC (Public Health Emergency of International Concern) in late January 2020 (Kumar 2020). Indonesia is one of the countries massively exposed to Covid-19. Active cases of the Covid-19 virus in Indonesia have reached 4,254,443 people (November 4, 2021). That number puts Indonesia in the fourteenth position among countries with the highest number of active Covid-19 cases in the world. Table 1 lists the countries with the largest numbers of Covid-19 cases in the world. With this situation and a significant decrease in visitors every year, Teras Komuji experienced a significant decrease in revenue as well. The decline in revenue of Teras Komuji can be seen in Table 2, which describes the financial statements of Teras Komuji. This significant decline in income has pushed Teras Komuji to create a marketing strategy suited to the current pandemic situation. One of the activities that needs to be done is awareness-raising (HAQ, 2016) because, according to Witrie, the higher the level of awareness of a brand in the minds of consumers, the more likely the brand is to be chosen.
Brand awareness is one of the essential requirements in the marketing communication process before other marketing strategies are carried out. With awareness, marketing communication strategies are easier to carry out. For a visitor who intends to buy a product, awareness of that product's brand must first be built. The tendency of buyers to choose products of a certain brand will only arise when they are aware of the chosen brand, so that, ultimately, it will create a buying and selling transaction (Macdonald & Sharp, 2003). Given the importance of understanding brand awareness, there is an overview of the impact of poor or low brand awareness, such as decreased purchase interest, weaker selling power, and a lack of revisit intention. Especially in the pandemic era, revisit intention and purchase decisions are important for the sustainability of an industry, and both can be built through brand awareness (Campbell et al., 2014; Putri, 2021). The problem of low brand awareness in food and beverage has been demonstrated by several studies and surveys conducted by researchers in the tourism industry.
Building brand awareness, especially in the current pandemic, requires a planned promotional strategy to create awareness of a product or brand in the minds of visitors and, as much as possible, make it top of mind. Brand promotion is needed that brings the brand closer to its users and builds interactions through attention-grabbing activities, in order to leave a deep impression of the product image in the minds of consumers (Pebrianti et al., 2020).
These activities are required to form a platform that creates direct two-way communication between brands and consumers so as to create emotional closeness with visitors; in other words, they are categorized as brand activation (Saeed et al., 2015). Brand activation is a form of brand promotion that interacts more closely with users through various experiential activities of a brand that attract their attention. A strong emotional connection with consumers is needed to create a successful brand activation strategy. The stronger the relationship, the more successful the quality of the interaction will be. It can also make consumers buy the product more often (Saeed et al., 2015).
In addition, from the perspective of building brand awareness, brand activation has many opportunities to succeed because promotional strategies are created in attractive packaging in the pandemic era, a time when people are more receptive to the persuasive messages brand owners convey. Consumers experience using the product first-hand, and education is built in a fun way to increase public awareness of the brand. Brand activation, especially in the realm of coffee shops, can be done through various activities, both online and offline: online through community activation on social media, and offline through on-site brand activation events. In both online and offline activities, the conditions for success are the same: successfully creating brand awareness, increasing consumer interest in trying products, and building relationships with visitors (Prameswari, 2019; Putri, 2020).
In response to the problems at Teras Komuji, a strategy is needed to increase brand awareness by assessing the influence of brand activation in the coffee shop, so it is important to conduct a study on "The Effect of Brand Activation on Brand Awareness" (survey of Teras Komuji Bandung coffee shop visitors). The research problem can be formulated as follows:
1. How is brand activation at Teras Komuji in the new normal era?
2. How is brand awareness at Teras Komuji in the new normal era?
3. How does brand activation affect brand awareness at Teras Komuji Bandung?
Literature Review
The World Health Organization (WHO) explains that coronaviruses (CoV) are viruses that infect the respiratory system; infection with the novel coronavirus is called Covid-19. Coronaviruses cause illnesses ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS-CoV) and Severe Acute Respiratory Syndrome (SARS-CoV). The Covid-19 virus is zoonotic, meaning it is transmitted between animals and humans. According to the Indonesian Ministry of Health, the Covid-19 cases in Wuhan began on December 30, 2019, when the Wuhan Municipal Health Committee issued an "urgent notice on the treatment of pneumonia of unknown cause" (Deb et al., 2022). The spread of the Covid-19 virus is very fast, even across countries. To date, 213 countries have confirmed being affected by the Covid-19 virus. The spread of the Covid-19 virus to various parts of the world has impacted the Indonesian economy in terms of trade, investment, and tourism (Hanoatubun, 2020).
Tourism-support sectors such as restaurants, cafes, and coffee shops have also been affected by the Covid-19 virus. The lack of visitors decreases business income dramatically. These conditions require F&B business actors to survive, and strategies are needed to cope so that they can endure despite difficult conditions (Herdiana, 2020; Supriatna, 2020).
Brand activation can be a strategy for surviving in this new normal era. In a situation that requires visitors to be more selective in choosing the brands they consume, a brand can communicate innovatively and creatively so that visitors continue to be aware of it (Dissanayake & Gunawardane, 2018; Prameswari, 2019).
Brand activation is a marketing relationship created between brands and visitors to make visitors understand the brand better and consider it a part of their lives. Brand activation is activating visitors by creatively combining all available communication sources (Saeed et al., 2015).
Marketers use brand activation and event marketing to build relationships with consumers, increase brand equity, and strengthen ties with the trade; an event's success highly depends on the fit between the brand and the target market (Chen et al., 2020). Brand activation has many opportunities to succeed because events are organized to create a relaxed and happy mood. In addition, brand activation is the answer to the modern promotion concept because it can increase brand awareness and lift a brand's image. This strategy effectively builds a brand because brand activation is a form of promotion that brings the brand closer to, and builds interaction with, its users (Saeed et al., 2015). Coffee shops utilize brand activation to attract consumers by organizing events in the new normal period to create awareness among visitors (Boentoro & Paramita, 2020). According to (Dissanayake & Gunawardane, 2018), four aspects can help a brand direct and innovate its company. The four aspects are:
Identity
A brand must have a strong identity. A strong identity affects the relationship between the brand and visitors. A strong identity involves functional benefits, emotional benefits, and self-expressive benefits to produce an image of a brand in the minds of visitors.
Human Resources (employees)
Human resources or employees can be a potential factor in activating a brand. Visitor loyalty can be formed from the services provided by employees. In addition to being required to work well according to their respective duties, employees can become ambassadors for the company's brand.
The company communicates the aspects that build the brand to its employees so as to form behavior in line with the brand's mission in employees' minds.
Product & Service
Products and services are an important part of marketing. Products are the goods resulting from the production process, while services are what is rendered to visitors. These two things can influence visitor decisions because products and services are offered to meet visitor needs.
Communication
In implementing brand activation, communication plays an important role in supporting the aspects described previously: products and services, human resources/employees, and brand identity. Brand activation will not run perfectly without good communication, and the message will not be conveyed as it should. Brand activation carries call-to-action communication messages. Based on this theoretical description, the indicators are applied as dimensions in this study, as shown in the research paradigm of brand activation on brand awareness.
(Goos & Meintrup, 2016), in the book Statistics with JMP: Hypothesis Test, ANOVA, and Regression, state that a hypothesis is a statement that a population or process is true or false. For this reason, it is necessary to build the hypothesis on relevant research. According to (McLeod, 2018), a hypothesis is a precise and testable statement of what researchers predict will be the study's outcome. According to (Sekaran & Bougie, 2016), a hypothesis is a predictive statement that researchers can test against empirical data.
Based on the description of the definition of the hypothesis above, in compiling the hypothesis the author is supported by the following premises:
1) In a journal entitled "The Effect of Advertising, Brand Activation, and Sales Promotion on Brand Awareness of "Zee" Milk," the researchers show that brand activation variables have a significant positive effect on brand awareness (Nurvita & Budiarti, 2019).
2) In a journal entitled "Brand Activation Strategy to Increase Brand Awareness," the researchers show that brand activation variables have a significant positive effect on brand awareness (Prameswari, 2019).
3) In a journal entitled "Designing Excelso Jemursari Brand Activation during the Covid-19 Pandemic," the researchers show that brand activation variables have a significant positive effect on brand awareness (Fransisca et al., 2020).
Based on the premises above, the hypothesis described by the researcher is that there is a relationship between variable (x) and variable (y), namely, "there is an effect of brand activation on brand awareness."
METHODS
This research uses a quantitative approach. Ten indicators of brand activation and six indicators of brand awareness were developed. The sample obtained was 220 respondents, drawn using probability and nonprobability sampling (Sugiyono, 2017). The data come from primary and secondary sources and were collected with offline and online questionnaires. Offline distribution was done by handing questionnaires to visitors of the Teras Komuji Bandung coffee shop during the new normal period, and online distribution was done via Google Forms, distributing questionnaires to visitors who had visited the Teras Komuji Bandung coffee shop, to determine the sample. Respondents in this study were visitors to Teras Komuji during the new normal period.
RESULTS AND DISCUSSION
According to the recapitulation of visitors' responses to brand activation at Teras Komuji Bandung, communication is the dimension with the highest assessment, at 26.85%. Teras Komuji Bandung delivers information about ongoing activities well, owing to word of mouth from the Komuji community itself, so that information about ongoing activities spreads rapidly. Employees, by contrast, are the dimension with the smallest assessment, at 19.74%; this reflects the small number of employees at Teras Komuji Bandung, which makes the process of serving visitors take more time.
The recapitulation of visitors' responses to brand awareness at Teras Komuji Bandung yields a total score of 5,088. The highest-scoring question item is the level of Teras Komuji Bandung's ability to remain embedded in the minds of visitors, with a total score of 906 and a percentage of 17.80%. The lowest-scoring indicator is the level of Teras Komuji Bandung's ability to be the brand that is always the first choice, with a score of 803 and a percentage of 15.78%. This is due to the large variety of coffee shops in Bandung, which gives visitors many choices when visiting a coffee shop.

The correlation test and coefficient of determination show that the correlation (R) between brand activation and brand awareness at Teras Komuji Bandung is 0.740, which falls in the strong category. The coefficient of determination (adjusted R square) of 0.547 shows that the dimensions of brand activation (X) together explain 54.7% of the variation in the brand awareness variable (Y); the remaining 45.3% is explained by other factors not examined in this study.

The F-test results show that the significance value is 0.000, which is smaller than 0.05. Under the F-test decision rule, if the Sig. value < 0.05, the independent variable (X) has a significant effect on the dependent variable (Y); if the Sig. value > 0.05, it does not. Therefore, the set of independent brand activation variables (X), consisting of identity, employee, product and service, and communication, has a significant simultaneous effect on the dependent variable brand awareness (Y).

The t-test results (Table 5) show the partial influence of each brand activation dimension on brand awareness, obtained by comparing the significance level and the calculated t-value (t-count) with the table t-value (t-table) at the appropriate degrees of freedom and α = 5%, using a two-tailed test. The results can be explained as follows:
1. The identity dimension has a significant effect on brand awareness, because the significance value is 0.000 < 0.05 and the t-count of 4.713 > the t-table of 1.971059. Therefore, H0 is rejected and H1 is accepted.
2. The employee dimension has a significant effect on brand awareness, because the significance value is 0.001 < 0.05 and the t-count of 3.389 > the t-table of 1.971059. Therefore, H0 is rejected and H1 is accepted.
3. The product and service dimension has no significant effect on brand awareness, because the significance value is 0.066 > 0.05 and the t-count of 1.848 < the t-table of 1.971059. Therefore, H0 is accepted and H1 is rejected.
4. The communication dimension has a significant effect on brand awareness, because the significance value is 0.000 < 0.05 and the t-count of 4.180 > the t-table of 1.971059. Therefore, H0 is rejected and H1 is accepted.
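As a concrete illustration of the analysis pipeline above, the following minimal Python sketch fits a multiple linear regression of brand awareness on the four brand activation dimensions and reports the adjusted R square, the F test (simultaneous effect), and the t tests (partial effects). The data, effect sizes, and variable names below are simulated assumptions for demonstration only, not the study's actual questionnaire data.

```python
# Hypothetical illustration of the paper's analysis: multiple linear
# regression of brand awareness on four brand activation dimensions,
# followed by the F test (simultaneous effect) and t tests (partial
# effects). All data below are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 220  # sample size used in the study

# Simulated questionnaire scores for the four dimensions (1-5 scale).
X = rng.uniform(1, 5, size=(n, 4))
identity, employee, product_service, communication = X.T

# Simulated brand awareness score; product & service is given a weak
# effect, mirroring the paper's non-significant t test for that dimension.
y = (0.6 * identity + 0.4 * employee + 0.1 * product_service
     + 0.5 * communication + rng.normal(0, 1.2, n))

X_design = sm.add_constant(X)  # adds the intercept column
model = sm.OLS(y, X_design).fit()

print("Adjusted R^2:", round(model.rsquared_adj, 3))  # cf. 0.547 in the paper
print("F-test p-value:", model.f_pvalue)              # simultaneous effect
print(model.summary(xname=["const", "identity", "employee",
                           "product_service", "communication"]))
```

Running the sketch prints a regression summary with one t statistic per dimension, analogous to the t-test results reported in Table 5.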
CONCLUSIONS
Based on the results of research conducted by distributing 220 questionnaires to visitors of Teras Komuji and by calculating with multiple regression techniques to determine the effect of brand activation on brand awareness at Teras Komuji, the following conclusions can be drawn:
1. Respondents' responses to brand activation, consisting of identity, employees, product and service, and communication, receive a fairly high assessment, which means that visitors feel that Teras Komuji has become a good coffee shop, especially in implementing brand activation. The dimension with the highest percentage value is communication. Communication is crucial in determining whether an individual wants to visit Teras Komuji or choose another place. Building communication can produce stronger emotional bonds with visitors, so that visitors continue to make Teras Komuji their first choice when they want to visit a coffee shop. The lowest value is in the employee dimension. The number of employees at Teras Komuji is arguably small, so the process of serving visitors takes more time; therefore, the employee dimension gets the lowest percentage.
2. Respondents' responses regarding brand awareness received a good assessment. The highest assessment was obtained for the question item on the level of Teras Komuji's ability to remain embedded in the minds of its visitors. This is because Teras Komuji has unique features, namely discussion rooms, music, and a variety of literacy readings, so that it stays embedded in the minds of its visitors. Meanwhile, the lowest assessment is the level of Teras Komuji's ability as the brand that is always the first choice, due to the large variety of coffee shops in Bandung, which gives visitors many choices.
3. Based on the results of simultaneous and partial data processing on brand activation, the four dimensions of identity, employee, product and service, and communication jointly have a significant influence on brand awareness. This means that the value of brand activation received by visitors to Teras Komuji has an impact on brand awareness among its visitors.
Recommendations
Based on the findings generated from this research, the authors recommend several things regarding the implementation and influence of brand activation on brand awareness at Teras Komuji, as follows: 1. Brand activation is one way the manager of Teras Komuji can increase visitors' brand awareness. In the brand activation variable, the employee sub-variable gets the lowest score compared to the other sub-variables, especially in the ability to communicate Teras Komuji's products. This is due to the decline in promotions carried out by Teras Komuji during the Covid-19 period and the lack of employees, so there is no dedicated section to do the job. We recommend that Teras Komuji open more job vacancies, especially in this field, to increase the effectiveness of Teras Komuji in its services and promotional activities. 2. Based on the results of research on brand activation at Teras Komuji, the lowest brand awareness assessment is on the indicator of Teras Komuji's ability as the brand that is always the first choice. This shows that several things must be improved by Teras Komuji, especially in shaping the mindset of visitors to always choose Teras Komuji. The authors recommend carrying out activities that bring the brand into direct contact with visitors, so that visitors feel involved in and bound to the activities taking place at Teras Komuji. | 2023-07-12T06:36:17.927Z | 2023-05-18T00:00:00.000 | {
"year": 2023,
"sha1": "0ef0f1f5cd15a910262c3da6167bf3107f3722d6",
"oa_license": "CCBYNCSA",
"oa_url": "https://jurnal.ppsuniyap.ac.id/index.php/joer/article/download/39/27",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "24e0cb9e78168971c3dcfc2914c97452ac67bc81",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
226350570 | pes2o/s2orc | v3-fos-license | Academic Survival Skills for Urban Students
This article discusses specific attitudes, behaviors, and skills used by some urban students to greatly enhance their chances of experiencing academic success in school. Both qualitative and quantitative analyses demonstrate that regardless of students’ socio-economic background or ethnicity and race, high achievement can become an expectation. The research found the most commonly shared attribute among academically successful urban students was their willingness to assume greater personal responsibility for their educational outcomes. This research supports the position that urban schools should incorporate the teaching and practice of these attitudes, behaviors, and skills into the daily curriculum as a mechanism for meaningful student achievement and personal growth.
The Coronavirus, also known as COVID-19, caused nearly 100% of public schools across the United States to be shut down in March of 2020. Although there was early hope of reopening, the schools remained closed for the duration of the academic year. This crisis led schools to scramble to find ways to keep students engaged in relevant learning activities and course completion without the benefit of being in physical classrooms.
While there is no real substitute for in-class and hands-on teaching, schools that had made early investments in distance learning technology and training for both teachers and students fared far better during this time than those who had not. Many suburban schools had already incorporated and funded models such as "one-to-one technology," where each student is provided a personal laptop or similar device for use at both school and home. These districts had also established partnerships with local cable services to ensure that virtually every home had Internet access, even for free if necessary. In these cases, students were well-positioned to remain in continuous contact with their teachers and to be remotely provided instruction, materials, and feedback, resulting in much less interrupted learning.
During this same period many urban districts, especially those primarily composed of African-American and Hispanic students saw meaningful instruction greatly reduced, and in some instances come to a screeching halt. The mass integration and utilization of instructional technology in these schools could at best be described as spotty. While urban educators fight valiantly to teach and fulfill the myriad needs of their students at school on a day-to-day basis, many of these districts found themselves ill-prepared or completely at a loss as to providing meaningful instruction to students from a distance.
As urban districts hurriedly shut down, a few schools rushed to make paper copies of worksheets to be distributed to students. Although the worksheets could provide continued practice in multiple subject areas, there was little expectation or requirement that the tasks be completed or that they would even be returned or graded. In other instances, there were hastily made, but short-lived, plans for parents to come to school on a daily basis to pick up homework for their children. Oftentimes, when efforts were made to provide students with laptops, weeks had already passed and there was no guarantee of Internet access in the home or that students were knowledgeable of relevant instructional applications. Teachers were commonly left directionless in terms of expectations for providing instruction to students, other than to be available for two hours per day. More often than not, the Coronavirus shutdown resulted in nearly three months of unfulfilled instructional time for urban students.
Students of color who attend schools in large metropolitan areas have struggled academically in even the best of times. Whether due to social, economic, political, or familial conditions, African-American and Hispanic students, in general, have always trailed their suburban and rural counterparts on all indicators of school success. Unfortunately, the Coronavirus shutdown has stalled any academic gains that may have been on the horizon.
By 2015, DeArmond et al. from The Center for Reinventing Public Education had already painted a stark picture of schooling in some of the nation's cities, particularly for low-income students and students of color. The study takes into account nine indicators of the health of public education across all public schools in the cities, without separating traditional district schools from charter schools. Among the major findings were:
Less than a third of the cities examined made gains in math or reading proficiency over the three-year study span relative to their state's performance.
One in 4 students in 9th grade in 2009 did not graduate from high school in four years.
Forty percent of schools across the cities that were in the bottom 5 percent in their state stayed there for three years.
Less than 10 percent of urban high school students enrolled in advanced-math classes each year in 29 of the 50 cities.
Less than 15 percent of urban high school students took the ACT/SAT in 30 of the 50 cities.
Low-income students and students of color were less likely to enroll in high-scoring elementary and middle schools than those who were more affluent or were white.
On average, 8 percent of students in the study cities were enrolled in "beat the odds" schools -those that got better results than demographically similar schools in the state.
About a 14 percentage-point achievement gap existed between students who were eligible for free and reduced-price meals and those who were not.
Black students were almost twice as likely to receive an out-of-school suspension as White students.
There is no single issue or cause that has led to the devastatingly poor academic outcomes of urban students over the years. But there are seemingly endless and insurmountable conditions and circumstances that contribute to their failure. Students and teachers in urban schools have greater challenges to overcome in many areas compared to their suburban and rural counterparts. Studies by Hudley (2013) pointed out that:
Urban schools have larger enrollments, on average, than suburban or rural schools at both the elementary and secondary levels.
Urban teachers have fewer resources available to them and less control over their curriculum than teachers in other locations, as do teachers in urban high poverty schools compared with those in rural high poverty schools.
Administrators of urban high poverty schools have more difficulty hiring teachers than their counterparts in most other schools.
Teacher absenteeism, an indicator of morale, was more of a problem in urban schools than in suburban or rural schools, and in urban high poverty schools compared with rural high poverty schools.
Student behavior problems were more common in urban schools than in other schools, particularly in the areas of student absenteeism, classroom discipline, weapons possession, and student pregnancy.
Hudley went on to state that the general education urban students receive in public schools is demonstrably insufficient to make them competitive with their more advantaged, middle, and upper-income peers. Specific examples include the emphasis placed on the importance of STEM careers for the future of our youth and our country. Mathematics classes in high-poverty high schools are twice as likely to be taught by a teacher with a credential other than mathematics as are mathematics classes at low-poverty high schools. Similarly, for science classes at high-poverty high schools, teachers are three times as likely to be credentialed in areas other than science as those who teach science at low-poverty high schools.
Which Urban Students Demonstrate Academic Survival Skills
Even the most casually interested observer can broadly describe the reasons so many urban youths experience academic failure. Their explanations often include circumstances such as generational poverty, the devaluation of education, fragmented family structures, high unemployment rates, physically deteriorating schools, poorly funded districts, high rates of teen pregnancy, and neighborhoods with high rates of drug use and crime. Each of those conditions, if not collectively then individually, has likely inhibited the academic success of many urban youths. And, it is inarguable that those circumstances have existed for multiple generations. From a statistical perspective, the odds of any urban k-12 student receiving a fundamentally sound education that well prepares him or her with post-high school career options for upward social and economic mobility are relatively low.
Educators generally understand the rationale and conditions leading to urban school failure. But, we have yet to capitalize on and learn from the lives and stories of those urban students who overcome their environmental conditions to experience stellar academic success. Every year, some urban students survive devastating family, community, and school obstacles, then go on to graduate with honors and are well prepared to pursue intellectually challenging career options. A few of those students receive full academic scholarships to attend prestigious universities. Others acquire grade point averages (GPAs) and Scholastic Achievement Test (SAT) scores for traditional acceptance into colleges and universities. And still, some qualify for admittance into community colleges, the armed forces, and to pursue high paying skilled labor apprenticeships.
Almost everyone can think of a friend or an acquaintance that overcame tremendously difficult circumstances and became successful in a specific field. We have all heard of individuals who were born into poverty, attended failing schools, were raised by a single parent, and survived tough city streets. Yet those individuals went on to become some of our most admired citizens. How is this possible? What distinguishes one student from another when neither has familial, economic, or social advantages? Why does one inner-city student succeed in school while another fails, yet both are demographically similar?
Answers to those questions are highly complex and cannot be condensed into one easily digestible response. However, it is clear that urban youth who defy the odds of school failure and excel in their environment have mastered and use an array of academic survival techniques. Those techniques being the acquisition and application of specific attitudes, behaviors, and skills which greatly enhance the odds of school success. Certainly, every student regardless of race needs parental support, effective teachers, instructional materials and equipment, and district leadership to experience optimal academic achievement. But, since urban youth have more challenges in finding consistent access to those necessities, the self-reliance on academic survival skills often becomes the difference between school success and failure.
Over the decades, there are countless examples of urban youth who not only succeeded in school but soared into the outer stratosphere of academic success. Oftentimes, those students give credit to specific teachers or administrators who motivated and inspired them toward self-efficacy. There are untold numbers of urban youth who succeeded in school solely because their parents and grandparents refused to accept anything less than academic excellence from them. Unfortunately, an examination of achievement data over the past 30 years reveals those scenarios to be the exceptions rather than the rules.
Notwithstanding the lack of advantages currently available to urban students, those who do excel and distinguish themselves from their peers have learned to take advantage of all the supports and services that can be found in these environments. More importantly, they are either innately or overtly aware that they too must make great contributions to their own school success. It is important to note that the students who exhibit the highest academic performance in these environments are not necessarily those with the highest IQ's, but are the students who demonstrate that they are committed to learning and are willing to work for accomplishment.
Identifying the Academic Survival Skills of Urban Students
Research in this area began with a qualitative study entitled "The Seven Secrets of Successful Inner-City Students" (Hampton, 2008). That study examined the lives of five academically successful seniors from an inner-city high school in Cleveland, Ohio. The search for those students was conducted with the caveat that they had come from impoverished homes, were raised by only one parent, and that parent had no formal education beyond high school. Each of those students overcame tremendously difficult personal and family circumstances. None of the students had been involved with their fathers, all were on public assistance, one of the students had faced homelessness, and some others had been in the care of various extended family members. Some of the students had witnessed the effects of drugs on their remaining parent, at least one of the students had an imprisoned parent, and all lived in neighborhoods with high drug, crime, and gang activity.
Given those conditions, however, all five of those students demonstrated high academic achievement and had mapped out promising post-high school careers. All of those students had GPAs ranging from 3.3 to 4.0. One student had received a full academic scholarship to attend a southern historically black college (HBCU), and another had received a partial academic scholarship to attend a different HBCU. Although the remaining three qualified for general college admission, two deferred entrance for military service and governmental funding for college, which was their long-term goal. The final student intended to enroll in community college to begin a nursing program.
After six months of weekly recorded interviews with the students and their teachers, and volumes of anecdotal records and details, seven themes or common characteristics among the students emerged which helped to explain their academic success. Although in varying degrees, they all shared specific characteristics in attitudes, behaviors, and skills related to successful learning and achievement. While none of the five students particularly regarded the others as friends, those seven characteristics emerged as the most common denominators defining their paths to academic achievement. Those attitudes, behaviors, and skills collectively became known as their Academic Survival Skills or Successful Learner Characteristics and were defined as follows:
1. Self-respect - The extent to which the student demonstrates a high regard for him or herself.
2. Command of Standard English - The extent to which the student demonstrates the desire and ability to routinely construct grammatically correct sentences and to pronounce words correctly.
3. Goal-setting ability - The extent to which the student demonstrates the ability to identify relevant short and long-term objectives leading to a desired outcome.
4. Self-motivation - The extent to which the student demonstrates the ability to push him or herself toward the accomplishment of relevant short and long-term objectives and goals.
5. Time management skills - The extent to which the student demonstrates the ability to plan, organize, schedule, and work on relevant tasks.
6. Consequence awareness - The extent to which the student demonstrates a concern for the outcomes of his or her actions (to usually think before they act).
7. Respect for others - The extent to which the student demonstrates regard for the worth, rights, property, and feelings of others.
That qualitative study identified the broad attitudes, behaviors, and skills that were thematically associated with the academic survival of urban students as they traversed the difficult terrains of their schools and community. It is important to point out that there was no detailed analysis that precisely described how, when, or where these students acquired those characteristics. It should also be noted that those survival characteristics were viewed as relative, meaning only in comparison to their student peers. It is unknown how well developed the specific characteristics of those five urban students would measure in comparison to same age and grade peers from suburban and rural school districts.
Analysis of the Academic Survival Skills
To confirm and validate the findings of the 2005 inquiry into urban student academic survival skills, a follow-up quantitative study was completed in 2011. The second study included approximately 160 African-American students in grades 4-10 from a broad array of urban schools in northeast Ohio. To conduct this study, teachers completed a two-part survey that was designed to measure the correlation between the demonstration of academic survival skills and the grades students earned (Hampton, 2014).
Part one of the survey was completed by teachers near the end of the first grading period. The first weeks of school gave the teachers time to become familiar with each student, develop daily classroom relationships, learn about their home environments, and observe their personal and academic behaviors. At the end of six weeks, the teachers were asked to rate their respective students in the areas of 1) Self-respect, 2) Command of Standard English, 3) Goal-setting ability, 4) Self-motivation, 5) Time management skills, 6) Consequence awareness, and 7) Respect for others. Each item received a rating of high, average, or low in terms of observed frequency.
Part two of the survey was completed by teachers after the first term final grades had been recorded. Although teachers had not been previously informed, they then were asked to provide the grade each respective student earned in the observed class. Through the statistical analysis, correlations between demonstrated academic survival skills and earned grades became quite clear. Students who were reported to display more of the academic survival skills during the observation period earned significantly higher grades than students who displayed fewer of those skills. For example, students who routinely demonstrated 4-7 of the skills typically earned grades of A-B, students who routinely demonstrated 3-5 of the skills typically earned grades of B-C, and students who routinely demonstrated 3 or fewer skills typically earned grades of C-F.
Pearson's Correlation Coefficient was used to model the linear correlation (dependence) between variables X (grades) and Y (academic survival skills). Linear Regression was used to model the relationship between a scalar dependent variable X (grade) and one or more of the explanatory variables Y (academic survival skills). Through multiple correlation analysis, fully two thirds (R² = .67) of the variance in grades was explained by the academic survival skills. This was an unusually high level of predictability and rivals the traditional educational predictors such as socio-economic status (SES) and prior achievement.
The academic survival skills variables were moderately to strongly inter-correlated, ranging from a low of .48 (the correlation between "goal-setting" and "self-respect") to a high of .84 ("goal-setting" and "self-motivation"). The academic survival skills were also moderately to strongly correlated with grades, with "time management" having the highest correlation (.81) and "command of standard English" the lowest (.50) (Hampton, 2014).
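To make the computations above concrete, the minimal Python sketch below derives Pearson correlations between each rated survival skill and grades and fits a multiple regression to estimate the variance in grades explained by the skills. The data, the rating and grade scales, and the variable names are simulated assumptions for illustration; they will not reproduce the study's exact coefficients.

```python
# Minimal sketch of the follow-up quantitative analysis on simulated
# data: Pearson's r between each academic survival skill and grades,
# plus a multiple regression estimating the grade variance explained
# by the skills (the paper reports R^2 = .67). Numbers are hypothetical.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 160  # approximate sample size in the study

skills = ["self_respect", "standard_english", "goal_setting",
          "self_motivation", "time_management",
          "consequence_awareness", "respect_for_others"]

# Teacher ratings on a 1 (low) to 3 (high) scale, loosely inter-correlated
# through a shared latent factor.
latent = rng.normal(0, 1, n)
ratings = np.clip(
    np.round(2 + 0.6 * latent[:, None] + rng.normal(0, 0.5, (n, len(skills)))),
    1, 3)

# Simulated course grades on a 0 (F) to 4 (A) scale driven by the ratings.
grades = np.clip(ratings.mean(axis=1) + rng.normal(0, 0.4, n), 0, 4)

for name, col in zip(skills, ratings.T):
    r, p = stats.pearsonr(col, grades)
    print(f"{name}: r = {r:.2f} (p = {p:.3g})")

fit = sm.OLS(grades, sm.add_constant(ratings)).fit()
print("Variance in grades explained, R^2 =", round(fit.rsquared, 2))
```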
This study demonstrated significance far beyond the correlations between achievement and the display of academic survival skills. More important were the implications for a corollary curriculum in urban schools. Since the survival skills are strongly predictive of student achievement, then student achievement must also be strongly predictive of survival skills. Far too often, educators have attempted to improve urban student achievement simply by improving standardized test scores as evidence of growth. Frequently, when standardized test scores are improved, they are the result of test-taking practice as opposed to improved learner traits, e.g., increased study time, greater homework completion, improved school attendance, and better classroom behavior. Any trustworthy documentation of significant improvements in urban student test-taking must also address the questions of "why" and "how" the students have changed. Urban schools should not first seek to improve standardized test scores, but instead focus on improving the student traits that will result in higher standardized test scores.
Incorporating Academic Survival Skills Into the Curriculum
Over the past 30 years, much research has been conducted and many papers have been written addressing the so-called "achievement gap." All of these works have generated important philosophies, practices, and programs that, if implemented with fidelity, could potentially lead to gains in urban student achievement. After all these years, however, and although there are instances of promise, one is still hard-pressed to find reliable data illustrating significant nationwide growth and improvement in educational outcomes among African-American and Hispanic students. Sadly, there seems to be no current strategy or initiative capable of erasing the decades of problems that have plagued and stunted urban student growth.
Notwithstanding the need for increased school funding, better teacher preparation programs, and improved training for school principals, an equally urgent need should be to capitalize on what we have learned from students who have already found success in urban schools. Many urban students have demonstrated that there is less of an "achievement gap" and more of a gap in attitudes, behaviors, and skills that lead to academic achievement. Moreover, gaps in these areas could be immediately addressed because their development is not contingent upon school funding, additional faculty, or student socio-economic status.
The development of enhanced academic survival skills within the broad population of urban students would lead to more meaningful and long-lasting achievement, as well as the acquisition of tools for life-long learning and self-reliance. These attitudes, behaviors, and skills could be taught through the daily incorporation of their practice and held as expectations woven into the fabric and culture of all urban school activities. Through thoughtful planning from the earliest grades, guiding students to learn and develop the academic survival skills need not be viewed by teachers as a new curriculum or additional work, but rather as a more meaningful and creative way of providing instruction and to help students reach their full potential.
Urban educators must be aware that the acquisition and development of academic survival skills may not come quickly to all students. These skills will need to be encouraged and practiced every day, in every possible scenario, and under every possible set of circumstances. There is no shortcut to the internalization of attitudes, behaviors, and skills that are necessary for the long-term academic growth of African-American and Hispanic students.
Some broad examples of how to teach and incorporate the development of academic survival skills into the urban school curriculum might include:
Begin with highlighting the importance of self-respect and respect for others as necessary behaviors for the overall health of the learning environment. In every subject area and class, in every extra-curricular activity and club, in every assembly and social setting, there are natural opportunities for students to discuss and practice the importance of these behaviors. Such instances will help students begin to internalize how their personal behaviors either promote or infringe upon the opportunities of others to learn. Most importantly, however, the faculty and administrators must always demonstrate respect to the students.
To strengthen the command of Standard English, focus on the personal interests of students, then incorporate those areas into required reading and assignments. Increased reading is the most natural and efficient route to the mastery of language and grammar. Even if students choose not to use Standard English throughout the school day, the command should be at their disposal for use at their discretion.
To develop goal-setting abilities, have students establish and write personal and academic short-term goals for each day. The goals could include ideas such as getting to school on time, getting a better grade on the next quiz, seeking additional help in weak subjects, completing homework before social activities, or receiving no discipline referrals during the week. Students should also establish and write personal and academic long-term goals such as having perfect attendance during the semester and obtaining higher final grades in specific subjects. Regardless of grade levels, all urban students should have written long-term post-high school goals to share with teachers and parents. Both short and long-term goals should be frequently reviewed with teachers and within small groups of friends to see if personal behaviors are aligned with the desired outcomes.
Regarding student self-motivation, students will need to practice engaging in beneficial activities without continuous prompts to do so. Although self-motivation is based on an intrinsic system of rewards, teachers and administrators should also establish an external reward system for students who report the completion of important tasks without prompts. As students develop in this area, external rewards should be replaced with sincere praise and public or private recognition for their accomplishments.
Teachers can help to sharpen students' time management skills by frequently referring to the short and long-term goals they established for themselves. The accomplishment of each goal requires a plan of action, and each plan of action must be positioned within a specific time frame. If students have set goals for daily homework completion, improved grades in specific subjects, no absences during the semester, or advanced placement status, then time and planning must be practiced and prioritized at the expense of other afterschool enjoyment.
Teachers and administrators have always encouraged students to be aware of the consequences of their actions. The emphasis at this point should highlight how negative or unthoughtful behaviors will derail students from accomplishing their goals. Even students with the best academic survival skills will exhibit normal youthful behaviors which may not always be academically productive. The goal is not to produce students with perfect behavior, but students who consider the impact of their actions on their personal and academic goals.
Summary
There is a broad consensus that urban schools have been in decline for decades. As a result, we must accept that there will be no "silver bullets" or "magic pills" to quickly change the trajectory of their students. However, the good news is that change is possible. Economically disadvantaged students are not predestined to academic failure, and race is not indicative of school success. High achievement can become an expectation of all urban students if provided adequate opportunities, conditions, and support in developing self-reliance characteristics.
Previous endeavors to improve urban student achievement have rarely capitalized on the most critical factor: the students themselves. For most urban students, school success is not determined by innate intelligence or ability, but rather is the result of a strong desire to learn combined with appropriate attitudes, skills, and behaviors that allow for learning to take place. Academic achievement is not something that can be inserted into students, accomplished by punitive threats, or purchased for them. However, many urban students are certain to find academic success when they share more responsibility for their educational outcomes. The unfortunate truth is that students who are born | 2020-10-28T19:19:37.010Z | 2020-10-20T00:00:00.000 | {
"year": 2020,
"sha1": "75583eedc7eab3780fca229add6330a6f516fe15",
"oa_license": null,
"oa_url": "http://www.sciedupress.com/journal/index.php/irhe/article/download/18461/11797",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8ea5a63c6e70ceb02f33d6051e9096a41ed940f1",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
254354409 | pes2o/s2orc | v3-fos-license | Clinical Characteristics of 606 Patients with Community-Acquired Pyogenic Liver Abscess: A Six-Year Research in Yantai
Objective This study was designed to analyze the clinical characteristics, etiological characteristics, drug resistance, and empirical use of antibiotics for community-acquired pyogenic liver abscess (PLA) to provide a basis for rational and effective empirical treatment of PLA in the local area. Methods The clinical data, etiological characteristics, drug resistance, and empirical anti-infective therapy schemes of 606 patients with PLA were collected and analyzed retrospectively. Results The included patients were mainly males, with a male-to-female ratio of 1.3:1. The average age of the patients was 60.3 ± 14.1 years. The main underlying diseases were diabetes and biliary tract disease, accounting for 38.7% and 22.3%, respectively. The main clinical manifestations were fever (92.9%), abdominal pain (44.7%), and nausea (33.3%). Imaging findings: the proportion of patients with a single lesion was 74.4%, and 67% of the patients had involvement of the right lobe of the liver. The main pathogen was Klebsiella pneumoniae, which accounted for 74.9% of blood cultures and 84.1% of pus cultures; most isolates were negative for extended-spectrum β-lactamases (ESBLs). Of the 272 ESBL-negative strains, 100% were resistant to ampicillin and less than 50% were sensitive to nitrofurantoin. The 36 ESBL-positive strains showed higher than 80% sensitivity only to carbapenems, β-lactamase inhibitor compounds, and amikacin. Patients treated with different treatment methods showed significantly different average lengths of hospital stay (14 [9–21] vs 13 [8–18] days). Empirical anti-infective therapy: β-lactamase inhibitor compounds, carbapenems, cephalosporins, and quinolones were used in 228 (37.6%), 180 (29.7%), 180 (29.7%), and 147 (24.3%) patients, respectively. Conclusion Patients with community-acquired PLA in this area are mainly males, and the underlying diseases are mainly diabetes and hepatobiliary system disease. The main clinical manifestation is fever, so a possible liver abscess should be considered in patients with fever of unknown cause. Judged against the drug sensitivity tests, the empirical use of antibiotics is somewhat unreasonable.
Introduction
Pyogenic liver abscess (PLA) is an infectious disease caused by pyogenic bacteria invading the liver through various routes, accounting for 13% of abdominal infectious diseases and 48% of visceral abscesses. 1,2 In recent years, the overall incidence rate of PLA has been increasing gradually, and the incidence rate is higher in Asian countries. [3][4][5] In the past 20 years, the infection rate of Klebsiella pneumoniae (K. pneumoniae) in the Asian population has gradually increased, and it has now become the main pathogen causing PLA. [6][7][8] Patients with PLA may have nonspecific manifestations such as chills, fever, shivers, and pain in the liver area. When the infection is poorly controlled, PLAs, especially those caused by K. pneumoniae, can lead to sepsis, multiple organ dysfunction, and death, with a mortality rate of 2%-31%. 9 However, because some patients present with nonspecific manifestations, misdiagnoses and missed diagnoses occur in clinical practice. To improve the diagnosis rate of PLA in the local area and the prognosis of patients, effective empirical treatment must begin as soon as possible. The selection of antimicrobial drugs must be informed by the local pathogen profile and clinical manifestations of PLA. In this retrospective study, we aimed to analyze the clinical characteristics, etiological characteristics, drug resistance, and empirical use of antibiotics for community-acquired pyogenic liver abscess, and thereby to provide a basis for rational and effective empirical treatment of PLA in the local area.
Data and Methods Subjects
All included cases of PLA met the diagnostic criteria proposed by Foo et al 10 in 2010. Exclusion criteria were: (1) mixed infection combined with fungal or parasitic infection; (2) inconsistent pathogenic culture results from the same patient; (3) culture results considered to represent contaminating bacteria. The study was approved by the ethics committee of the hospital.
Research Methods
A total of 606 patients with PLA, diagnosed in Yantai Yuhuangding Hospital from 2015 to 2020, were analyzed retrospectively. The basic data, laboratory results, microbiological results, and treatment methods (eg, empirical antiinfective therapy scheme and drainage) were collected.
Result Interpretation Criteria
The curative effect was evaluated after two weeks of hospitalization. (1) Effective: the symptoms were relieved, and the abscess shrank after treatment. (2) Ineffective: the symptoms were not alleviated or were worsened after treatment, the size of the pus cavity did not change or increased upon imaging examination (abdominal ultrasound, computed tomography [CT], or magnetic resonance imaging), or the patient died.
Culture of Drainage Fluid and Blood
The patient's ultrasound-guided percutaneous drainage fluid was immediately sent to the bacteriology laboratory for culture. The sample was inoculated into nutrient broth to enrich the bacteria. After 24 h, the culture was transferred onto a blood agar plate and a MacConkey agar plate at the same time and incubated at 35°C for 24 h. Blood culture was carried out in strict accordance with the relevant methods in the BACTEC9050 blood culture user manual. After the positive alarm on the LED display screen of the instrument, smears and subcultures were performed at the same time; the subculture method was the same as that of the ordinary culture. For the pus culture, the pus was directly inoculated onto a blood agar plate and a MacConkey agar plate and incubated at 35°C for 24 h.
Pathogen Identification and Drug Sensitivity Test
The separated bacteria were suspended evenly in the identification culture medium. A turbidimeter was used to adjust the turbidity to a 0.5 McFarland standard, and 25 μL of the suspension was transferred from the identification medium into the drug sensitivity inoculation medium and placed in the corresponding well of the positive or negative plate. A fully automatic microbial analyzer, with its own bacterial identification/drug sensitivity system, was used for pathogen identification and drug sensitivity testing.
Statistical Analysis
The original patient data were collected and organized in Excel and statistically analyzed using SPSS 22.0. Normally distributed and approximately normally distributed measurement data were expressed as the mean ± standard deviation and compared between groups using t-tests. Non-normally distributed measurement data were expressed as the median and interquartile range and compared between groups using non-parametric tests. Count data were expressed as numbers (proportions) and compared using two-independent-sample Chi-square tests or Fisher's exact probability method. P < 0.05 was considered statistically significant.
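As a rough illustration of the comparisons described above, the following Python sketch applies a non-parametric rank test to simulated length-of-stay data and a chi-square test to the 2×2 efficacy table. The stay values are invented assumptions; only the efficacy counts, which follow the figures reported later in the Results, reflect the paper.

```python
# Illustrative sketch of the statistical comparisons: a non-parametric
# test for length of stay (reported as median [IQR] in the paper) and a
# chi-square test for efficacy between the two treatment groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical, skewed hospital-stay data (days); hence the rank test.
stay_combined = rng.gamma(shape=4, scale=3.2, size=452)    # antibiotics + drainage
stay_antibiotic = rng.gamma(shape=4, scale=3.8, size=154)  # antibiotics alone
u, p = stats.mannwhitneyu(stay_combined, stay_antibiotic)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3g}")

# Efficacy: 2x2 table of effective/ineffective outcomes from the Results.
table = np.array([[423, 29],    # combined treatment
                  [143, 11]])   # antibiotics alone
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```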
Basic Data of Patients with PLA
Of the 606 patients with PLA, 345 were males (56.7%), with a male-to-female ratio of 1.3:1. The age range of these patients was 18-97 years, with an average age of 60.3 ± 14.1 years. The main underlying diseases were diabetes, cerebrovascular disease, and benign biliary disease, accounting for 38.7%, 27.9%, and 22.3%, respectively (Table 1).
Clinical and Laboratory Data of PLA Clinical Manifestations
Fever (563 patients, 92.9%) was the most common clinical manifestation, followed by chills and shivers (74.7%). Of all patients, 323 had a peak body temperature of >39°C, accounting for 57.3% of the patients with fever. Abdominal pain was a clinical manifestation in 44.7% of patients, 33.3% had vomiting or nausea, 18.2% had fatigue and poor appetite, and 6.3% had jaundice. Some patients had other atypical clinical symptoms (Table 1).
Ultrasonic or CT Results
Of these 606 patients, 451 patients (74.4%) had a single abscess. In 406 patients (67%), the abscess was in the right lobe.
Empirical Anti-Infective Therapy
Of all patients, 228 were empirically treated with a β-lactamase inhibitor compound for anti-infective therapy, the highest proportion (37.6%). A total of 180 patients (29.7%) were empirically treated with cephalosporins, and 180 patients (29.7%) were empirically treated with carbapenems. In addition, 147 patients (24.3%) were empirically treated with quinolones, and 135 patients (22.3%) were treated with a single type of antibiotic. The remaining patients received combined anti-infective therapy (cephalosporins combined with quinolones, aminoglycosides, or nitroimidazoles; or cefoperazone/sulbactam or piperacillin/sulbactam combined with nitroimidazoles). In 75 patients (12.3%), the anti-infective treatment was effective, and these patients underwent step-down treatment. A total of 105 patients (17.3%) had their treatment upgraded to carbapenem anti-infective therapy. In 60 (9.9%) of these patients, the treatment was upgraded because of a serious condition, empirically, or because of a poor anti-infective effect according to the drug sensitivity results (Figure 1).
Efficacy Evaluation of Different Treatment Methods
Of all patients, 154 were treated with antibiotics alone and 452 were treated with antibiotics combined with puncture and drainage. Of the 154 patients treated with antibiotics alone, 143 patients (92.9%) achieved effective outcomes, and 11 patients (7.1%) achieved ineffective outcomes. In the combined treatment group, 423 patients (93.6%) achieved effective outcomes, and 29 patients (6.4%) achieved ineffective outcomes. A comparison revealed that there was no significant difference in the efficacy between the two treatment methods. However, the average length of the hospital stays of the patients who received combined treatment was significantly shorter than that of the patients who were treated with antibiotics alone (Table 4).
Discussion
Pyogenic liver abscess is a serious infectious disease that threatens human health. It can develop into a life-threatening severe infection in patients with immunodeficiencies and in older patients with underlying diseases. This study revealed that PLA usually occurs in males (the male-to-female ratio was 1.3:1). The average age was 60.3 ± 14.1 years. These results are consistent with the results of previous studies. The main underlying diseases were diabetes and biliary tract disease, similar to those of previous studies. This is mainly because the liver has two sets of blood supply, namely the hepatic artery and the portal vein. The portal vein is connected to the gastrointestinal tract, so patients with underlying diseases of the biliary tract and intestinal tract have an increased risk of developing a liver abscess. [11][12][13][14][15][16] In this study, fever (92.9%) was the most common clinical manifestation. This is basically consistent with that reported in studies both locally and globally. [17][18][19] Therefore, in patients with a fever of an unknown cause, PLA should be carefully considered, especially for patients with a high fever. Other clinical symptoms of PLA can be jaundice, fatigue, poor appetite, and fading consciousness, suggesting that the clinical manifestations of PLA are not typical, so clinicians should be vigilant to avoid misdiagnoses and missed diagnoses that prolong the condition and affect the prognosis. This study revealed that patients with a single bacterial infection accounted for 93.0% of the patients. Among them, the detection rate of K. pneumoniae was 73.1%, slightly higher than that reported in the literature (64.0%). 20 The reason for this may be related to the patients' underlying diseases. 21 The literature reported that diabetes mellitus was an independent risk factor for K. pneumoniae liver abscess. 22 In this study, the proportion of diabetes among the underlying diseases was the highest. 23 In our study, empirical anti-infective therapy consisted mainly of compound β-lactamase inhibitors (36.0%), carbapenems (31.5%), and cephalosporins (31.3%). Singapore scholars studied patients with PLA with similar baseline data who were treated with oral ciprofloxacin or intravenous levofloxacin for 28 days; the results revealed no difference in the treatment effects between the two groups. 24 Scholars in China's Taiwan region 25 revealed that there was no significant difference in the treatment effect between quinolones and compound β-lactamase inhibitors in patients with PLA, even with severe infection. Quinolones can also shorten the duration of intravenous antibiotic administration and the hospital stay. The reason may be that the main pathogen of PLA in China's Taiwan region and Singapore is K. pneumoniae.
The limitation of this study should also be acknowledged. In terms of the selection of empirical antibiotics, there was a lack of evaluation of the efficacy of different antibiotics in patients with the same basic data. In future research, pairing research will be conducted to provide the basis for a more reasonable and effective laboratory selection of antibiotics.
Conclusion
The present study revealed that empirical treatment of patients with community-acquired liver abscess without risk factors of ESBL infection should also adopt compound β-lactamase inhibitors and even carbapenems. These findings may provide a basis for rational and effective empirical treatment of PLA in this region.
Data Sharing Statement
All data generated or analysed during this study are included in this article. Further enquiries can be directed to the corresponding author.
Ethics Approval and Consent to Participate
The study was conducted in accordance with the Declaration of Helsinki (as was revised in 2013). The study was approved by Ethics Committee of the Yuhuangding Hospital. Written informed consent was obtained from all participants.
Acknowledgments
We are particularly grateful to all the people who have given us help on our article.
| 2022-12-07T17:51:16.537Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "12fb9eed8957c9b10b6dd41f4a2a31e7f86e1534",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dcc41e23703cd2e9b8a4ad707b07ff5a054c1cc5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
43933878 | pes2o/s2orc | v3-fos-license | Cervical small cell carcinoma frequently presented in multiple high risk HPV infection and often associated with other type of epithelial tumors
Background Small cell carcinoma of the uterine cervix is a rare and highly malignant tumor, and its etiopathogenesis is strongly related to high-risk HPV infections. Methods The clinicopathological data of 30 cases of cervical primary small cell carcinoma were retrospectively analyzed. In situ hybridization, polymerase chain reaction, and reverse dot-blot hybridization were employed to detect HPV DNA in both the small cell carcinoma and other coexisting epithelial tumors. Immunohistochemistry was used to detect the protein expression of p16 and p53. Results Amongst the 30 patients with cervical primary small cell carcinoma, 15 simultaneously exhibited other types of epithelial tumors, including squamous cell carcinoma, adenocarcinoma, squamous cell carcinoma in situ, and adenocarcinoma in situ. In most HPV-infected tumor cells, in situ hybridization showed an integrated pattern in the nuclei. HPV DNA was detected in every small cell carcinoma case (100%) by polymerase chain reaction and reverse dot-blot hybridization: 27 cases (90%) harbored type 18, and 15 (50%) displayed multiple infections with HPV 18 and 16. The prevalence of HPV 18 infection in small cell carcinoma was higher than in cervical squamous and glandular epithelial neoplasms (P = 0.002), whereas similar infection rates of HPV 16 were detected in both tumor groups (P = 0.383). Both the small cell carcinomas and the other types of epithelial tumors exhibited strong nuclear and cytoplasmic staining for p16 in all cases. In three of the 15 composite tumors, the small cell carcinoma revealed completely negative p53 immunohistochemical expression, which suggested a TP53 nonsense mutation pattern. Pure small cell carcinomas of the uterine cervix had a similar TP53 mutation or wild-type pattern compared with the composite tumors (P = 0.224). Conclusions Cervical small cell carcinomas are often associated with squamous or glandular epithelial tumors, which might result from multiple HPV infections, especially HPV 16 infection. Multiple HPV infections were not correlated with tumor stage, size, lymphovascular invasion, lymph node metastasis, or prognosis. Furthermore, careful observation of specimens is very important for detecting a small proportion of small cell carcinoma within composite lesions, especially in cervical biopsy specimens, in order to avoid a missed diagnosis of small cell carcinoma.
Background
Numerous clinical and experimental studies have demonstrated a close etiopathogenetic relationship between the development of cervical cancers (including squamous cell carcinoma, adenocarcinoma, and small cell carcinoma) and high-risk human papillomavirus (HPV) infection [1]. It has been suggested that squamous cell carcinoma and adenocarcinoma are correlated with HPV 16 infection, whilst cervical small cell carcinoma is linked to HPV 18 [2,3]. As an uncommon and highly malignant tumor, small cell carcinoma of the uterine cervix has morphological features similar to those of tumors arising in the lung. The clinicopathological features of cervical small cell carcinoma and its relationship with HPV infection have been widely studied, including in case series and larger retrospective population-based reports [2,4]. Although some studies report that cervical small cell carcinoma is often associated with other cervical cancers or intraepithelial neoplasia [5], the clinicopathological features of cervical composite tumors that include small cell carcinoma have rarely been characterized. In these studies, HPV infection was detected using a variety of methods [3,6], such as immunohistochemistry, polymerase chain reaction (PCR), in situ hybridization (ISH), reverse dot-blot hybridization (RDDH), and Southern blot hybridization. The inconsistent results have hampered the understanding of the relationship between HPV infection and cervical small cell carcinoma. Many studies have revealed the preponderant infection of HPV 18 in cervical small cell carcinoma [2], but multiple infections have rarely been reported.
In this report, the clinicopathological features of 30 patients were analyzed, including 15 patients with pure cervical small cell carcinoma and 15 with composite cervical tumors composed of small cell carcinoma and other types of epithelial tumors. HPV infection in these tumors was detected by ISH and PCR-RDDH, and the expression of p16 and p53 proteins was examined using immunohistochemistry. Our results demonstrate that cervical small cell carcinoma is often associated with squamous or glandular epithelial tumors, and these tumor cells usually exhibit overexpression of p16. In three of the 15 composite tumors of the uterine cervix, both the small cell carcinoma and the adenocarcinoma or squamous cell carcinoma in situ showed completely negative p53 immunohistochemical expression. To the best of our knowledge, this is the first study to report that multiple infections with HPV 18 and 16 developed in half of cervical small cell carcinomas. Furthermore, this is the first report to make the distinction that HPV 18 infection is closely correlated with the occurrence of cervical small cell carcinoma, while HPV 16 infection is involved in the cervical squamous cell carcinoma or adenocarcinoma in the coexisting tumor cases. Multiple infections with HPV subtypes were not related to tumor stage, size, lymphovascular invasion, lymph node metastasis, or prognosis. In addition, our study also confirmed for the first time that patients with composite tumors had similar HPV infection subtypes, clinicopathological features, and prognosis compared to patients with pure small cell carcinoma.
Case selection
Thirty cases of cervical primary small cell carcinoma were retrieved from the 2009-2017 surgical pathology archives of the Xijing Hospital. These included 23 hysterectomies, 1 conization, and 6 biopsies. To rule out the possibility of metastatic small cell carcinoma from the lung or other organs, the clinical data were carefully reviewed. Hematoxylin and eosin-stained slides of primary cervical neoplasm were re-examined to confirm the original diagnosis. Immunohistochemical staining for neuroendocrine markers, including Leu-19, synaptophysin, chromogranin, and neuron-specific enolase, was performed, and more than half of small cell carcinoma cells showed positive expression for two or more markers.
The clinical records and follow-up of these patients were examined. This study received approval from the Ethics Committee of the Xijing Hospital.
ISH analysis
Three-micron sections containing small cell carcinoma and other types of epithelial tumors were examined for HPV DNA by ISH staining with the INFORM® HPV III Family 16 and Family 6 Probes (Ventana Medical Systems, Inc., Tucson, AZ, USA) for high-risk HPV types (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 66) and low-risk HPV types (6 and 11), respectively. The ISH assay was performed according to the manufacturer's protocol using the Ventana BenchMark XT system (Ventana Medical Systems, Inc.). Labeling was detected with the ISH iVIEW™ Blue Plus Detection Kit (Ventana Medical Systems, Inc.). Positive and negative controls were run using HPV quality control slides provided by Ventana Medical Systems, Inc. HPV signals were detected in the nuclei of tumor cells, and the signal patterns were categorized as either an episomal staining pattern or a punctate staining pattern. The former presents as a homogeneous globular navy-blue to black signal throughout the nuclei of tumor cells, and the latter shows single or multiple sparsely distributed, dot-like navy-blue puncta in the nuclei of the tumor cells.
DNA extraction and HPV detection using PCR-RDDH
Eight-micron sections were prepared from formalin-fixed, paraffin-embedded (FFPE) tissues of cervical tumor samples. Tumor tissues were dissected from the slides using sterile blades and collected into 1.5 ml Eppendorf tubes. Among the 15 specimens with composite carcinomas, small cell carcinoma and the other types of epithelial tumors were successfully isolated in seven specimens. In total, 37 tumor cell samples were collected, including 22 of small cell carcinoma, 8 of mixed tumors, 2 of squamous cell carcinoma, 2 of adenocarcinoma, 2 of squamous cell carcinoma in situ, and 1 of adenocarcinoma in situ. DNA extraction was carried out using the QIAamp® DNA FFPE Tissue Kit (Cat No. 56404, QIAGEN GmbH, Hilden, Germany) following the manufacturer's guidelines. The samples were digested with proteinase K in a volume of 200 μL at 56°C overnight, and 20 μL of DNA aliquot was finally obtained. The quality of the extracted genomic DNA was tested by agarose gel electrophoresis with ethidium bromide staining. For the PCR, 2 μL of DNA aliquot was used, and broad-spectrum HPV DNA amplification was performed using the GP5+/6+ primers (GP5+: 5'-TTTGTTACTGTGGTAGATACTAC-3′; GP6+: 5'-GAAAAATAAACTGTAAATCATATTC-3′) in a total reaction volume of 50 μL, with an initial incubation at 50°C for 15 min. PCR amplification products were boiled for 10 min to obtain single-stranded DNA. Samples were then added to low-density gene chips carrying a total of 23 gene probes for different HPV subtypes, and hybridization was performed at 51°C for 3 h. The gene chips were incubated in peroxidase solution for 30 min and then developed in color reagent (19 ml of 0.1 mol/L sodium citrate, 1 ml of tetramethylbenzidine, and 2 μL of 30% H2O2) for 60 min in the dark. HPV subtypes were determined by the positive spots on the HPV genotype profile on the membrane. HPV-positive and negative controls were included in every experiment.
Immunohistochemistry
Immunohistochemistry was performed on 3 μm thick tissue sections using the Ventana BenchMark XT system (Ventana Medical Systems, Inc.). P16 (clone: 6H12) and p53 (clone: DO-7) antibodies were purchased from Maxin Corp. (Fuzhou, China) and DAKO Corp. (Carpinteria, CA), respectively. Immunohistochemical staining was conducted using the Roche Ultraview DAB Detection Kit (Ventana Medical Systems, Inc.) following the manufacturer's instructions. Positive and negative control slides were run simultaneously. Nuclear/cytoplasmic staining was considered positive for p16, whereas p53 showed nuclear expression. A tumor was recorded as positive for p16 if more than 50% of the tumor cells showed immunoreactivity. IHC results for p53 were classified as a mutation pattern or a wild-type pattern: strong and diffuse nuclear staining, or completely negative staining with a valid internal control, was considered a mutation pattern, while focal and weak staining corresponded to a wild-type pattern of p53.
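To make the scoring rules above concrete, the following sketch encodes them as simple functions. This is an illustrative aid only: the function names and input encodings are hypothetical, while the thresholds and pattern definitions follow the text.

```python
# Hypothetical helpers encoding the IHC scoring rules described above.

def score_p16(percent_positive_cells: float) -> str:
    # A tumor is recorded p16-positive if more than 50% of tumor cells
    # show (nuclear/cytoplasmic) immunoreactivity.
    return "positive" if percent_positive_cells > 50 else "negative"

def score_p53(staining: str) -> str:
    # Strong and diffuse nuclear staining, or completely negative staining
    # with a valid internal control, is read as a TP53 mutation pattern;
    # focal, weak staining is read as a wild-type pattern.
    if staining in ("strong_diffuse", "negative_with_internal_control"):
        return "mutation pattern"
    return "wild-type pattern"

print(score_p16(80))                                 # -> positive
print(score_p53("negative_with_internal_control"))   # -> mutation pattern
```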
Statistics
Statistical analyses were performed with SPSS 17.0 (SPSS, Inc., Chicago, IL, USA). Pearson's χ² test or Fisher's exact test was used for correlation analysis of categorical data. The Wilcoxon rank-sum test was used to compare patients' age and tumor size. P < 0.05 was considered statistically significant.
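As a reading aid, the tests named above can be reproduced with SciPy as sketched below. The counts and measurements are hypothetical placeholders, not the study data; the calls simply mirror the tests described in the text.

```python
from scipy import stats

# Fisher's exact test on a 2x2 contingency table, e.g., HPV 18 status
# (rows) by tumor group (columns); counts are hypothetical.
odds_ratio, p_fisher = stats.fisher_exact([[27, 3], [2, 5]])
print(f"Fisher's exact test: p = {p_fisher:.3f}")

# Pearson's chi-square test for a contingency table; counts hypothetical.
chi2, p_chi2, dof, expected = stats.chi2_contingency([[16, 8], [10, 14]])
print(f"Chi-square test: p = {p_chi2:.3f}")

# Wilcoxon rank-sum test comparing, e.g., ages or tumor sizes between
# two groups; values hypothetical.
group_a = [41, 45, 38, 52, 47]
group_b = [44, 50, 39, 61, 43]
stat, p_rank = stats.ranksums(group_a, group_b)
print(f"Wilcoxon rank-sum test: p = {p_rank:.3f}")
```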
Clinicopathological features
Among the 30 patients with cervical primary small cell carcinoma included in this study, 15 were diagnosed with pure cervical small cell carcinoma. Of the remaining 15 cases of composite tumors, 7 also exhibited invasive cervical squamous cell carcinoma, 4 cervical squamous cell carcinoma in situ, 3 cervical adenocarcinoma, and one cervical adenocarcinoma in situ. The age of all 30 patients at diagnosis ranged from 31 to 74 years (mean 46.4 years, median 41 years). The mean and median age of the 15 patients with cervical composite tumors was 46.4 and 45 years, respectively. Twenty-two patients presented with abnormal vaginal bleeding or contact bleeding. Two patients showed no clinical symptoms, their tumors being found on physical examination. Most of the tumors displayed exophytic growth with or without cervical erosion. The tumor sizes in the 24 hysterectomy and cervical conization specimens ranged from 1 to 6 cm (mean 3.0 cm, median 2.5 cm). Sixteen tumors (66.7%) were FIGO stage IB, 3 stage IIA, 2 stage IA, 1 stage IIB, 1 stage IIIA, and 1 stage IVB; accurate staging information was not available for the 6 cervical biopsy cases. Twenty-one patients underwent radical hysterectomy with bilateral or partial adnexectomy and pelvic lymph node dissection, two patients received radical hysterectomy with pelvic lymph node dissection, and one underwent cervical conization owing to the superficial invasion depth of the small cell carcinoma. Accurate surgical details for the 6 outpatients were not obtained in this study. Fifteen patients underwent postoperative chemotherapy combined with radiotherapy, and 6 patients received postoperative chemotherapy alone; postoperative treatment strategies were not available for the other 9 patients. Follow-up data were obtained for 16 cases: 7 patients died, with survival times ranging from 5 to 24 months (median survival time: 7 months). Survival time exceeded two years in 4 of the 10 confirmed surviving patients, and two patients free of disease had been followed up for 67 and 88 months.
Histologically, cervical primary small cell carcinomas showed morphological features similar to those of their pulmonary counterpart. Densely packed small tumor cells often formed a sheet-like, diffuse growth pattern; neuroendocrine growth patterns, such as orderly tubular, trabecular, organoid, and nuclear palisading patterns, were less commonly seen. Tumor cells showed round, ovoid, or spindled nuclei and scant cytoplasm. Nuclear chromatin was finely granular, and nucleoli were absent or inconspicuous. Numerous mitotic figures and extensive necrosis were commonly observed. Furthermore, squamous and glandular epithelial neoplasms were observed in 15 cases. Most squamous cell carcinomas developed in the superficial parts of the small cell carcinomas (Fig. 1a); however, in two cases, the squamous cell carcinoma intermingled with the small cell carcinoma (Fig. 1b). Of the three cases of cervical small cell carcinoma associated with adenocarcinoma, one showed mixed small cell carcinoma and adenocarcinoma (Fig. 1c), and two showed adenocarcinoma at the periphery of the small cell carcinoma (Fig. 1d). All five cases of cervical small cell carcinoma associated with carcinoma in situ showed adjacent relationships between the small cell carcinoma and the squamous cell carcinoma in situ or adenocarcinoma in situ. Although more composite tumors were found in hysterectomy specimens than in biopsy specimens in this study, the discovery of composite lesions was not related to specimen type (P = 0.651). There was no statistical difference in patients' age, FIGO stage, lymph node metastasis, lymphovascular invasion, or prognosis between pure small cell carcinoma and composite tumors (Table 1, Fig. 2). Patients with cervical composite tumors had a similar prognosis to patients with pure small cell carcinoma (P = 0.716); the prognosis of these patients was determined by the small cell carcinoma component. Six cases (26.1%) had regional nodal metastasis, including 5 cases of metastatic small cell carcinoma and one of metastatic squamous cell carcinoma. One patient displayed small cell carcinoma involving the vaginal stump, and ovarian metastatic small cell carcinoma was found in another patient. In the 24 patients who underwent hysterectomy or cervical conization, lymphovascular invasion by small cell carcinoma was detected within or around the tumor in 21 cases, of which two patients had simultaneous lymphovascular invasion by squamous cell carcinoma. Only one of the 6 cervical biopsy specimens showed lymphovascular invasion by small cell carcinoma. Lymphovascular invasion was more likely to be found in hysterectomy or conization specimens than in biopsy specimens (P = 0.002).
HPV DNA detection and typing
ISH of tissue sections revealed that most specimens were positive for the INFORM HPV III Family 16 Probe in both small cell carcinomas and the other types of epithelial tumors. Twenty-nine of the 30 small cell carcinomas showed positive staining, including 21 cases with only one navy-blue to black punctum (Fig. 3a), 6 with multiple small blue puncta inside the nuclei (Fig. 3b), and 2 with staining ranging from multiple puncta to diffuse small particles (Fig. 3c). These staining signals represented viral integrative patterns in 27 cases. In some cases, the nuclear viral integrative pattern was so discrete that microscopic examination at higher magnification (40× objective) was required to confirm nuclear epithelial cell localization, representing a very low viral copy number. Two cases of small cell carcinoma showed both episomal and integrative staining patterns. Of the 7 cases of cervical squamous cell carcinoma, 5 showed punctate signals in the nuclei of tumor cells (Fig. 3d) and 2 displayed episomal staining patterns. Two of the four cases of squamous cell carcinoma in situ revealed episomal staining patterns (Fig. 3e), and all tumor cells of the cervical adenocarcinomas and adenocarcinoma in situ showed punctate staining (Fig. 3f). Interestingly, one case displayed positive punctate staining in the small cell carcinoma but was negative in the squamous cell carcinoma, and another was positive, with punctate staining, in the cervical adenocarcinoma in situ but negative in the small cell carcinoma. The INFORM HPV III Family 6 Probe was employed to detect low-risk HPV genotypes, and no positive tumor cells were observed in either the small cell carcinomas or the other types of epithelial tumors.
Thirty-seven specimens of cervical small cell carcinoma and other types of epithelial tumors were tested for HPV DNA by PCR-RDDH, and all were positive (Fig. 4). HPV 18 was detected in 27 cases (90%) of small cell carcinoma, and multiple infections with HPV 18 and 16 were found in 15 cases (50%); the prevalence of HPV 18 in small cell carcinoma was higher than in the squamous and glandular epithelial neoplasms (P = 0.002), whereas the infection rates of HPV 16 were similar in both (P = 0.383) (Table 2). In addition, multiple infection with HPV subtypes was not related to tumor stage, size, lymphovascular invasion, lymph node metastasis, or prognosis (P = 0.187, 1.00, 1.00, 0.179, and 0.498, respectively).
Immunohistochemical expression of p16 and p53
Strong and diffuse nuclear and cytoplasmic staining for p16 protein was noted in all cases of cervical small cell carcinoma (Fig. 5a), squamous cell carcinoma (Fig. 5b), squamous cell carcinoma in situ, adenocarcinoma, and adenocarcinoma in situ. In contrast, the non-neoplastic squamous and glandular epithelia were either negative or showed only focal and weak cytoplasmic positivity. Strong and diffuse nuclear staining for p53 protein was observed in one case of cervical squamous cell carcinoma (Fig. 5c) and in one case of squamous cell carcinoma in situ, indicating a TP53 missense mutation. In other cases, scattered tumor cells showed weak or moderate positivity for p53, at rates ranging from 1 to 60% (Fig. 5d), consistent with a wild-type pattern of TP53. Three cases of composite tumors showed negative staining in both the small cell carcinoma and the adenocarcinoma or squamous cell carcinoma in situ, suggesting a TP53 nonsense mutation. Pure small cell carcinoma of the uterine cervix showed a similar TP53 mutation or wild-type pattern compared with composite tumors (P = 0.224), and there was no difference between TP53 mutations in small cell carcinoma and those in the other epithelial neoplasms of the uterine cervix (P = 0.682).
Discussion
Small cell carcinoma of the uterine cervix is a rare and highly malignant tumor. The etiopathogenetic association between cervical small cell carcinoma and high-risk HPV infections has been well documented in several studies [1,2,7]. Our study extends these findings by demonstrating that all 30 cases of cervical small cell carcinoma were related to high-risk HPV types 18 and 16; however, the prevalence of HPV 18 in cervical small cell carcinoma differed from that in squamous or glandular epithelial neoplasms. The mean age at diagnosis for women with cervical small cell carcinoma has been reported as between 45 and 50 years [2], which is consistent with that of cervical squamous cell carcinoma. In this study, the mean age of the patients with cervical small cell carcinoma was 46.4 years. The patients had no specific clinical manifestations; most presented with abnormal vaginal bleeding or contact bleeding, and the exophytic growth did not differ from that of other uterine cervical carcinomas. Of note, most of the patients in our study were in an early stage, which differs from previous studies [2,4]. However, high rates of lymph node metastasis and poor prognosis were still observed in these stage I and II patients.
Primary small cell carcinomas of the uterine cervix often coexist with squamous cell carcinomas or adenocarcinomas. Wang et al. reported that of 22 cases of primary cervical small cell carcinoma, two exhibited concurrent high-grade squamous intraepithelial neoplasia and adenocarcinoma in situ [8]. In the study by Abeler et al., 12 of the 26 patients with cervical small cell carcinoma had associated other forms of carcinoma, including squamous cell carcinoma (n = 6), adenocarcinoma (n = 5), and adenocarcinoma in situ (n = 1) [9]. Ishida et al. reported 10 cases of cervical small cell carcinoma, 7 of which were mixed with adenocarcinoma and/or squamous cell carcinoma, or cervical intraepithelial neoplasia [10]. Emerson et al. reported that of 19 cases of cervical small cell carcinoma, 6 were associated with adenocarcinoma, and three patients also had adenosquamous carcinoma, squamous cell carcinoma in situ, or adenocarcinoma [5]. In this study, 15 patients also displayed squamous or glandular epithelial neoplasms, including squamous cell carcinoma (n = 7), adenocarcinoma (n = 3), squamous cell carcinoma in situ (n = 4), and adenocarcinoma in situ (n = 1). Our results demonstrate a high frequency of cervical small cell carcinomas associated with squamous or glandular epithelial tumors. These patients had a similar age distribution and clinical manifestations to patients with a single cervical tumor. Chan et al. found that a pure, rather than mixed, histological pattern was a poor prognostic factor for survival [11]. In our study, however, patients with cervical small cell carcinoma with and without other types of epithelial neoplasms had a similar prognosis, which was significantly worse than that of cervical squamous cell carcinoma and adenocarcinoma [12]. Therefore, the prognosis of patients with composite cervical cancer was determined by the small cell carcinoma component. In addition, in biopsy specimens suspected of cervical epithelial disease, careful observation is needed to avoid missing small cell carcinoma. In this study, more lymphovascular invasion was observed in hysterectomy and cervical conization specimens. Therefore, obtaining as much specimen material as possible improves diagnostic accuracy and allows a more accurate estimate of prognosis.
High-risk HPV has been implicated in the carcinogenesis of cervical small cell carcinoma [1,7]. A meta-analysis including more than 30,000 invasive cervical cancers revealed that HPV 16 (59%), 18 (13%), 58 (5%), 33 (5%), and 45 (4%) were the most prevalent subtypes in cervical squamous cell carcinomas, while HPV 18 (37%), 16 (36%), 45 (5%), 31 (2%), and 33 (2%) were the most prevalent in cervical adenocarcinomas [13]. Many studies have established that the prevalence of different high-risk HPV types in cervical small cell carcinoma ranges from 50 to 100%, and that HPV 18 may be the most prevalent type [2]. Wang et al. reported that HPV 18 and 16 were detected in 77.3% and 18.2% of cases of cervical small cell carcinoma, respectively, and one case displayed HPV 18 and 16 co-infection [8]. Research by Abeler et al. demonstrated that HPV 18 is predominant in pure small cell carcinomas and in tumors with adenocarcinomatous areas, and that HPV 16 is found in pure small cell carcinomas and in tumors with areas of squamous cell differentiation [9]. In the study by Ishida et al., HPV 18 was detected in both the small cell carcinoma and adenocarcinomatous components, and no other HPV types were detected [10]. In our study, preponderant infection with HPV 18 was found in 27 of the 30 cases of cervical small cell carcinoma. In the 15 cases of composite tumors, HPV 18 infection was more common in the small cell carcinoma than in any other type of epithelial tumor. These findings are in line with the notion that HPV 18 infection is involved in the development of cervical small cell carcinoma and that HPV 16 infection promotes the occurrence of cervical squamous cell carcinoma and adenocarcinoma.
Multiple infections were observed in more than half of the cervical small cell carcinomas in our study, whereas only one case of multiple infection was found among the 7 cases of squamous and glandular epithelial neoplasms. Multiple infection with HPV 18 and 16, rather than pure HPV 18 infection, might therefore play an important role in the pathogenesis of cervical small cell carcinoma. In previous studies, very few cases of multiple infection were reported [8]. One explanation might lie in the different detection methods used or in population differences. Zhou et al., using HPV DNA detection, reported a comparable prevalence of multiple HPV infections [14], and a similar multiple-infection prevalence rate was measured in 9012 women who attended cervical cancer screening in the Taihu River Basin, China [15]. In addition, reported HPV infection rates are closely related to the detection methods used.
For example, in our study, ISH assessment of HPV infection with low copy numbers was very difficult, reflecting the lower sensitivity of ISH compared to PCR. Similar results were reported by Masumoto et al. using ISH and direct sequencing of PCR products [16]. The prevalence of infection by multiple HPV genotypes has been reported as 20% in patients with cervical squamous cell carcinoma and adenocarcinoma [17]. In our study, one case of multiple HPV infection was confirmed among the seven cases of squamous or glandular epithelial neoplasms. Furthermore, the prevalence of multiple infections in small cell carcinoma was not significantly different between pure small cell carcinoma and composite tumors, and multiple HPV infections did not affect the prognosis of these patients.
Overexpression of p16 has been well documented in high-risk HPV-related cervical squamous cell carcinomas and adenocarcinomas, as well as in their precursor lesions [18,19]. Several recent studies have shown that cervical small cell carcinoma also overexpresses p16 protein [20,21]. In this study, we detected simultaneous overexpression of p16 in both small cell carcinoma and the other types of epithelial tumors by immunohistochemistry. Although the overexpression of p16 was correlated with HPV 18 and 16 infections in both pure and composite tumors, this does not prove that high-risk HPV infections cause the overexpression of p16; indeed, small cell carcinomas negative for HPV DNA in the lung, colorectum, bladder, and ovaries also overexpress p16 [8,20]. In our study, the mutation or wild-type pattern of TP53 in small cell carcinoma was not significantly different between pure and composite tumors. Three patients with composite tumors showed completely negative p53 protein expression in both the small cell carcinoma and the other epithelial tumors. Furthermore, two cases revealed strong and diffuse positive expression of p53 only in the squamous cell carcinoma or squamous cell carcinoma in situ, while the small cell carcinoma components of the same patients showed a wild-type pattern. These observations indicate that TP53 mutation might be involved in the occurrence of small cell carcinoma, as in squamous cell carcinoma and adenocarcinoma of the uterine cervix, but that the pathogenesis of small cell carcinoma is not completely identical to that of squamous cell carcinoma or adenocarcinoma.
Since this was a single-center retrospective case series with a limited number of cases, the study should be repeated on a larger scale to confirm our findings. Furthermore, some patients did not complete their follow-up, so comprehensive follow-up data could not be obtained.
Conclusions
In summary, our study demonstrated that cervical small cell carcinomas closely correlate with HPV 18 and HPV 16 infections. In patients with cervical small cell carcinoma, multiple infections with high-risk HPV may also promote the development of squamous or glandular epithelial neoplasms. In patients with composite cervical neoplasms, multiple infections with HPV 18 and 16 were involved in the development of cervical small cell carcinoma, while the occurrence of cervical squamous cell carcinoma or adenocarcinoma was closely related to HPV 16 infection. Multiple high-risk HPV infection was not related to tumor stage, size, lymphovascular invasion, lymph node metastasis, or prognosis in cervical small cell carcinoma. Similar HPV DNA genotypes, clinicopathological features, and prognosis were observed in patients with pure small cell carcinomas and in those with composite cervical tumors. The small cell carcinoma component determined the prognosis of patients with cervical composite tumors. Because of the frequent presence of co-existing tumors, it is important to examine cervical biopsy specimens carefully in surgical pathological examination.
"year": 2018,
"sha1": "765820ef1099ed90375c8ec125035f9f429f1e5a",
"oa_license": "CCBY",
"oa_url": "https://diagnosticpathology.biomedcentral.com/track/pdf/10.1186/s13000-018-0709-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "765820ef1099ed90375c8ec125035f9f429f1e5a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The effect of three years of TNF alpha blocking therapy on markers of bone turnover and their predictive value for treatment discontinuation in patients with ankylosing spondylitis: a prospective longitudinal observational cohort study
Introduction: The aim of this study was to investigate the effect of three years of tumor necrosis factor-alpha (TNF-α) blocking therapy on bone turnover, as well as to analyze the predictive value of early changes in bone turnover markers (BTM) for treatment discontinuation in patients with ankylosing spondylitis (AS).
Methods: This is a prospective cohort study of 111 consecutive AS outpatients who started TNF-α blocking therapy. Clinical assessments and BTM were assessed at baseline, three and six months, as well as at one, two, and three years. Z-scores of BTM were calculated to correct for age and gender. Bone mineral density (BMD) was assessed yearly.
Results: After three years, 72 patients (65%) were still using their first TNF-α blocking agent. In these patients, TNF-α blocking therapy resulted in significantly increased bone-specific alkaline phosphatase, a marker of bone formation; decreased serum type I collagen C-telopeptide (sCTX), a marker of bone resorption; and increased lumbar spine and hip BMD compared to baseline. Baseline to three months decreases in sCTX Z-score (HR: 0.394, 95% CI: 0.263 to 0.591), AS disease activity score (ASDAS; HR: 0.488, 95% CI: 0.317 to 0.752), and physician's global disease activity (HR: 0.739, 95% CI: 0.600 to 0.909) were independent, inversely related predictors of time to treatment discontinuation because of inefficacy or intolerance. Early decrease in sCTX Z-score correlated significantly with good long-term response regarding disease activity, physical function, and quality of life.
Conclusions: Three years of TNF-α blocking therapy results in a bone turnover balance that favors bone formation, especially mineralization, in combination with continuous improvement of lumbar spine BMD. Early change in sCTX can serve as an objective measure in the evaluation of TNF-α blocking therapy in AS, in addition to the currently used more subjective measures.
Introduction
Ankylosing spondylitis (AS) is a chronic inflammatory disease that mainly affects the axial skeleton. Bone formation and bone loss are both present in AS. New bone formation can lead to the formation of syndesmophytes, ankylosis of the spine and sacroiliac joints, and bone formation at entheseal sites [1,2], whereas bone loss can result in osteoporosis and vertebral fractures [3][4][5].
Randomized controlled trials (RCTs) have shown that the tumor necrosis factor-alpha (TNF-α) blocking agents infliximab, etanercept and adalimumab are effective in controlling inflammation and improving clinical assessments in AS [6][7][8]. Previous studies could not demonstrate a significant effect of two years of TNF-α blocking therapy on radiographic progression in AS [9][10][11].
Although the majority of patients responds very well, a significant proportion of patients has to withdraw from TNF-α blocking therapy due to inefficacy or adverse events [12][13][14]. Currently, subjective measures of disease activity, such as the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) or the global opinion of the physician, are most important in the evaluation of TNF-α blocking therapy in AS. The recently developed Ankylosing Spondylitis Disease Activity Score (ASDAS) captures both subjective (patient-reported measures) and objective (acute phase reactant) aspects of disease activity [14][15][16][17]. However, it would be useful to also include a purely objective measure in this evaluation process.
The early effect of TNF-α blocking therapy on bone turnover may be helpful in predicting treatment outcome. Bone turnover can be monitored using bone turnover markers (BTM) [18]. The bone formation markers bone-specific alkaline phosphatase (BALP) and osteocalcin (OC) were reported to be increased after 2 to 52 weeks and 2 to 22 weeks of TNF-α blocking therapy, respectively [19][20][21]. On the other hand, the bone resorption markers serum type I collagen N-telopeptide and C-telopeptide (sNTX and sCTX) remained unchanged up to 46 weeks of TNF-α blocking treatment [19,21,22].
Visvanathan et al. showed that an early increase in BALP was associated with significant increases in bone mineral density (BMD) of the spine and hip after two years of TNF-α blocking therapy [23].
The first aim of the present study was to investigate the effect of three years of TNF-α blocking therapy on bone turnover. The second aim was to investigate whether the early effect of TNF-α blocking therapy on BTM can serve as an objective predictor of treatment discontinuation in patients with AS.
Patients
Between November 2004 and December 2007, 111 consecutive Dutch outpatients with AS who started TNF-α blocking therapy at the University Medical Center Groningen (UMCG; n = 28) and the Medical Center Leeuwarden (MCL; n = 83) were included in this longitudinal analysis. All patients participated in the Groningen Leeuwarden Ankylosing Spondylitis (GLAS) study, a prospective longitudinal observational cohort study with follow-up visits according to a fixed protocol. For the present analysis, patients with recent fractures and/or use of bisphosphonates were excluded because of their major influence on bone metabolism. All patients were over 18 years of age and fulfilled the modified New York criteria for AS (n = 109) [24] or the Assessments in Ankylosing Spondylitis (ASAS) criteria for axial spondyloarthritis including MRI (n = 2) [25]. The patients started treatment with the TNF-α blocking agents infliximab (n = 22), etanercept (n = 71), or adalimumab (n = 18) because of active disease (BASDAI ≥ 4 and/or expert opinion), according to the ASAS consensus statement [26]. As described previously [14], infliximab (5 mg/kg) was given intravenously at zero, two and six weeks and then every eight weeks. In case of inadequate response, the frequency of infliximab treatment was raised to every six weeks. Etanercept was administered as a subcutaneous injection once (50 mg) or twice (25 mg) a week. Adalimumab (40 mg) was administered as a subcutaneous injection on alternate weeks. In 2004 and 2005, patients started treatment with either infliximab or etanercept, as adalimumab was only registered in The Netherlands as of 2006. The choice of TNF-α blocking agent was based on the judgment of the treating rheumatologist and/or the specific preference of the patient. Continuation of treatment was based on a decrease in the BASDAI of at least 50% or two units compared with baseline and/or expert opinion in favor of treatment continuation. Reasons for discontinuation of TNF-α blocking therapy were classified into the categories intolerance due to adverse events, inefficacy, or other reasons. Patients were allowed to receive concomitant medication as usual in daily clinical practice. The GLAS study was approved by the local ethics committees of the UMCG and the MCL. All patients provided written informed consent according to the Declaration of Helsinki.
Clinical and laboratory assessments
Patients were evaluated at baseline and after three months (mean 3.3 months, SD ± 0.5), six months (mean 6.4 months, SD ± 0.8), one year (mean 1.0 years, SD ± 0.1), two years (mean 2.1 years, SD ± 0.1), and three years (mean 3.1 years, SD ± 0.1) of TNF-α blocking therapy. Disease activity was assessed using the BASDAI (on a scale of 0 to 10), physician's and patient's global assessment of disease activity (GDA; on a scale of 0 to 10), erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and ASDAS-CRP (calculated from BASDAI questions 2, 3 and 6, patient's GDA, and CRP) [15,16]. Increased ESR and CRP levels were defined based on local standardized values. Physical function was assessed using the Bath Ankylosing Spondylitis Functional Index (BASFI; on a scale of 0 to 10). Spinal mobility assessments included chest expansion, the modified Schober test, occiput-to-wall distance, and lateral lumbar flexion (left and right). Quality of life was assessed using the Ankylosing Spondylitis Quality of Life questionnaire (ASQoL; on a scale of 0 to 18). Peripheral arthritis was defined as at least one swollen joint (excluding the hip) at baseline.
Z-scores of BTM were used to correct for the normal influence that age and gender have on bone turnover. Z-scores, the number of standard deviations (SD) from the normal mean corrected for age and gender, were calculated using matched 10-year cohorts of a Dutch reference group (200 men or 350 women), checked for serum 25-hydroxyvitamin D levels > 50 nmol/liter as well as for the absence of osteoporosis (BMD T-score > -2.5) after 50 years of age. Z-scores were calculated as follows: (BTM value of individual patient - mean BTM value of matched 10-year cohort of reference group) / SD of matched reference cohort.
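The Z-score formula above is simple enough to state in a few lines of code. The sketch below is illustrative only; the reference mean and SD values are hypothetical placeholders standing in for the matched 10-year cohort of the Dutch reference group.

```python
def btm_z_score(patient_value: float, ref_mean: float, ref_sd: float) -> float:
    # Z = (BTM value of individual patient
    #      - mean BTM value of matched 10-year reference cohort)
    #     / SD of matched reference cohort
    return (patient_value - ref_mean) / ref_sd

# e.g., an sCTX value of 0.62 ng/mL against a hypothetical matched cohort
# with mean 0.45 ng/mL and SD 0.15 ng/mL:
print(round(btm_z_score(0.62, ref_mean=0.45, ref_sd=0.15), 2))  # -> 1.13
```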
BMD measurement
BMD of the lumbar spine (anterior-posterior projection at L1 to L4) and hip (total proximal femur) was assessed at baseline and after one year (mean 1.1 yr, SD ± 0.1), two years (mean 2.2 yr, SD ± 0.2), and three years (mean 3.2 yr, SD ± 0.2) of TNF-α blocking therapy. BMD was measured using DXA (Hologic QDR Discovery (UMCG) or Hologic QDR Delphi (MCL), Waltham, MA, USA). T-scores, the number of SD from the normal mean obtained from young healthy adults, and Z-scores, the number of SD from the normal mean corrected for age and gender, were calculated using the NHANES reference database. According to the World Health Organization (WHO) classification, osteopenia was defined as a T-score between -1 and -2.5 and osteoporosis as a T-score ≤ -2.5 [27].
Statistical analysis
Statistical analysis was performed with PASW Statistics 18 (SPSS, Chicago, IL, USA). Results were expressed as mean ± SD or median (range) for normally and non-normally distributed data, respectively. Predictor analysis of time to discontinuation of TNF-α blocking therapy (yes/no) was performed using forward conditional Cox regression of variables with a P-value ≤ 0.3 in univariate Cox regression; the probability of F for entry was 0.05. Receiver operating characteristic (ROC) analysis was performed to determine the accuracy of early change in BTM for predicting discontinuation of TNF-α blocking therapy during the first three years. An area under the curve (AUC) < 0.70 was interpreted as poor accuracy, 0.70 < AUC < 0.90 as moderate accuracy, and AUC > 0.90 as high accuracy [28]. Pearson's and Spearman's correlation coefficients were used as appropriate to analyze the relation between early change in BTM and clinical assessments. Generalized estimating equations (GEE) were used to analyze clinical assessments, BTM, and BMD over time within subjects. Pairwise contrasts were used to compare baseline and follow-up visits. P-values < 0.05 were considered statistically significant.
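The ROC analysis and the AUC cutoffs described above can be illustrated as follows. The outcome labels and marker changes are hypothetical placeholders, and scikit-learn is used here only as a stand-in for the PASW/SPSS analysis actually performed.

```python
from sklearn.metrics import roc_auc_score

def interpret_auc(auc: float) -> str:
    # Cutoffs as stated in the text: AUC < 0.70 poor accuracy,
    # 0.70 < AUC < 0.90 moderate accuracy, AUC > 0.90 high accuracy.
    if auc < 0.70:
        return "poor accuracy"
    if auc < 0.90:
        return "moderate accuracy"
    return "high accuracy"

# 1 = discontinued TNF-alpha blocking therapy, 0 = continued (hypothetical)
discontinued = [1, 0, 1, 0, 0, 1, 0, 0]
# Baseline-to-3-months change in a BTM Z-score (hypothetical); a smaller
# decrease (higher value) is expected in patients who discontinue.
delta_btm = [0.4, -1.2, 0.1, -0.9, -0.6, 0.3, -1.0, -0.4]

auc = roc_auc_score(discontinued, delta_btm)
print(auc, "->", interpret_auc(auc))
```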
Results
The mean age of the 111 AS patients was 42.2 years (SD ± 10.3), 70% were male, and median disease duration was 16 years (range 1 to 49). All baseline characteristics are shown in Table 1.
After three years, 72 patients (65%) continued to use their first TNF-α blocking agent. In these patients, all assessments of disease activity (Table 2), physical function, spinal mobility and quality of life (data not shown) improved after three months and remained significantly better compared to baseline up to three years of TNF-α blocking therapy.
Effect of TNF-a blocking therapy on bone turnover
Data of the 72 AS patients who were still using their first TNF-α blocking agent after three years were analyzed to investigate the effect of TNF-α blocking therapy on bone turnover (Table 2). TNF-α blocking therapy resulted in a significant increase in the bone formation marker BALP Z-score after three months (P < 0.001), and the BALP Z-score remained at a higher level up to three years. The bone formation marker PINP Z-score was significantly increased only after three and 24 months of TNF-α blocking therapy (P < 0.05). The bone resorption marker sCTX Z-score decreased significantly after three months (P < 0.001) and remained decreased during three years of treatment (Figure 1). The course of the absolute BTM values, analyzed separately for male and female patients because of gender differences in BTM, was in line with the results for the BTM Z-scores (data not shown).
Lumbar spine and hip BMD Z-scores improved significantly after one year of TNF-α blocking therapy (P < 0.001). Subsequently, the lumbar spine BMD Z-score increased further after two and three years (P < 0.05 and P < 0.01, respectively). The hip BMD Z-score tended to increase further after two years (P = 0.050), but remained stable from two to three years of treatment (P = 0.780) (Figure 2).
Predictive value of early change in bone turnover
Data of 105 AS patients were analyzed to investigate the predictive value of early change (0 to 3 months) in BTM for treatment discontinuation: 72 patients who continued versus 33 patients who discontinued their first TNF-α blocking agent because of inefficacy and/or adverse events. Patients who discontinued treatment due to other reasons (n = 6) were excluded from this analysis. Baseline to three months decreases in BASDAI, ASDAS-CRP, patient's GDA, physician's GDA, and sCTX Z-score were inversely associated with time to discontinuation of TNF-α blocking therapy in univariate Cox regression. Multivariate analysis showed that baseline to three months decreases in sCTX Z-score (HR: 0.394, 95% CI: 0.263 to 0.591), ASDAS-CRP (HR: 0.488, 95% CI: 0.317 to 0.752), and physician's GDA (HR: 0.739, 95% CI: 0.600 to 0.909) were independent, inversely related predictors of time to discontinuation of TNF-α blocking therapy (Table 3).
Since the number of female patients was relatively small, multivariate analysis using absolute BTM values was performed only in male patients. Baseline to three months decreases in sCTX (HR: 0.986, 95% CI: 0.979 to 0.993), BASDAI (HR: 0.707, 95% CI: 0.569 to 0.878) or, alternatively, ASDAS-CRP, and physician's GDA (HR: 0.685, 95% CI: 0.522 to 0.898) were identified as independent, inversely related predictors of time to treatment discontinuation.
The accuracy of baseline to three months change in sCTX Z-score to discriminate between patients who continued and discontinued TNF-α blocking therapy during the first three years was moderate, with an AUC of 0.741 (95% CI: 0.640 to 0.841), and comparable to the accuracy of early change in ASDAS-CRP (AUC: 0.790, 95% CI: 0.699 to 0.880) or in physician's GDA (AUC: 0.730, 95% CI: 0.624 to 0.837).
In addition, baseline to three months change in sCTX Z-score was significantly associated with disease activity (BASDAI, ASDAS-CRP, physician's and patient's GDA, ESR, and CRP), physical function (BASFI), spinal mobility (chest expansion), and quality of life (ASQoL) at last follow-up, defined as at three years of TNF-α blocking therapy or at the moment of treatment discontinuation (Table 4).
Discussion
This is the first study to investigate the predictive value of early changes in bone turnover with regard to discontinuation of TNF-α blocking therapy in AS. Currently, in clinical practice, continuation of TNF-α blocking therapy is mainly based on subjective measures, such as the BASDAI and the global opinion of the patient and the physician. Recent studies showed the usefulness of the ASDAS as a more objective measure of disease activity [14][15][16][17]21]. However, a purely objective measure is still lacking in the evaluation process of TNF-α blocking therapy. The present analysis shows that a baseline to three months' decrease in sCTX Z-score was inversely related to time to discontinuation of TNF-α blocking therapy. Interestingly, the sCTX Z-score remained a significant predictor of treatment discontinuation in the presence of ASDAS and physician's GDA, which underlines the value of sCTX in addition to the currently used measures. The accuracy of the decrease in sCTX Z-score from baseline to three months in predicting treatment continuation was comparable to the moderate accuracy of an early decrease in ASDAS or physician's GDA. Furthermore, an early decrease in sCTX Z-score was significantly associated with good long-term response regarding disease activity, physical function, spinal mobility, and quality of life. Based on these results, early change in sCTX can be useful as an objective biomarker in the evaluation of TNF-α blocking therapy in patients with AS. A major advantage of BTM is that they can easily be measured in the blood of patients at different time points at relatively low cost. sCTX is widely available on automated immunoassay analysers or as an ELISA. However, it is important to standardize serum sample collection to reduce variability within and between patients [18].
Until now, several studies have investigated the influence of TNF-α blocking therapy on bone formation and bone resorption up to one year of treatment [19][20][21][22][23]. The present study shows that three years of TNF-α blocking therapy resulted in a significant increase in the bone formation marker BALP, which plays a central role in the mineralization of bone, at all time points compared to baseline, while the effect on the bone formation marker PINP, a product of collagen formation, was less evident. Furthermore, a significant decrease in the bone resorption marker sCTX, a product of collagen degradation, was found during three years of TNF-α blocking therapy. The significant increase in bone formation is in line with previous findings after one year of TNF-α blocking therapy [19][20][21]. Until now, no clear effect of TNF-α blocking therapy on bone resorption had been reported [19,[21][22][23]. In the present study, we had the unique availability of a healthy reference cohort for BTM, which allowed us to correct the BTM levels of an individual AS patient for age and gender (using Z-scores). In this way, the rate of bone turnover can be studied without the confounding influence of age and gender, similar to the methodology used for interpreting BMD. Nevertheless, our findings using the absolute BTM values (analysis split by gender) were in line with the results for the BTM Z-scores (data not shown).
The changes in BTM over time found in this study cannot be specifically attributed to TNF-α blocking therapy because a placebo group is lacking. Visvanathan et al. showed no significant changes in BALP or sCTX during 24 weeks of placebo treatment [23], which indicates that the present significant changes in BTM compared to baseline are the result of TNF-α blocking therapy. How these results fit into the pathogenesis of AS remains to be studied.
Importantly, the present results regarding BTM should not be extrapolated to any possible effect of TNF-α blocking therapy on radiographic progression in AS since no imaging method was used to measure new bone formation resulting in the formation of syndesmophytes and joint ankylosis. Furthermore, long-term observation is needed in order to see any effect of TNF-α blocking therapy on new bone formation in patients with AS.
Interestingly, both lumbar spine and hip BMD improved significantly during three years of TNF-α blocking therapy, which can be explained by the increase in mineralization and decrease in bone resorption. Alternatively, the increase in lumbar spine BMD may in part be confounded by the progression of formation of ligament calcifications and fusion of facet joints [5,29,30]. However, we expect that excessive bone formation will only have minor influence on the increase in lumbar spine BMD found in the present study, since previous studies reported a radiological progression of approximately one point in Modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS; on a scale of 0 to 72) after two years of TNF-α blocking therapy [31][32][33]. Moreover, the improvement in lumbar spine and hip BMD after TNF-α blocking therapy is in line with previous findings [23,34].
Conclusions
This prospective longitudinal observational cohort study shows that three years of TNF-α blocking therapy results in a bone turnover balance that favors bone formation (especially mineralization), in combination with continuous improvement of lumbar spine BMD. Furthermore, a baseline to three months' decrease in sCTX Z-score was identified as a significant, inversely related predictor of time to treatment discontinuation, independent of ASDAS and physician's GDA. Based on these results, early change in the bone resorption marker sCTX seems useful as a purely objective biomarker in the evaluation of TNF-α blocking therapy in AS, in addition to the currently used more subjective measures.
Abbreviations: MCL: Medical Center Leeuwarden; MRI: magnetic resonance imaging; OC: osteocalcin; PINP: procollagen type 1 N-terminal peptide; RCT: randomized controlled trial; RIA: radioimmunoassay; ROC: receiver operating characteristic; sCTX: serum type I collagen C-telopeptide; SD: standard deviation; sNTX: serum type I collagen N-telopeptide; TNF-α: tumor necrosis factor alpha; UMCG: University Medical Center Groningen; WHO: World Health Organization.
"year": 2012,
"sha1": "35295cd083ec0424d931ba3bec5ba9b3216ecffc",
"oa_license": "CCBY",
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar3823",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f110faa9f446eb3cdf09e22ccd7c4b21a0e6c87c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparison of post operative pain in tonsillectomy: a three year prospective study
Background: Tonsillectomy is the oldest surgery performed by otorhinolaryngologists worldwide. Through the ages, different techniques have been tried to improve the post-surgical outcome and reduce morbidity among patients. The aim of the current study was to compare post-operative pain among patients undergoing tonsillectomy by cold dissection, bipolar cautery dissection, and coblation dissection.
Methods: 142 patients undergoing tonsillectomy in the ENT department of TMMC during a period of 3 years were included in the study. Patients were randomly distributed to undergo the different techniques of tonsillectomy. Post-operative pain was assessed using the pre-standardized visual analogue pain scale, and the results were analyzed.
Results: No statistically significant difference was found among the groups undergoing tonsillectomy by cold dissection, bipolar dissection, and coblator dissection (p>0.05). Immediate post-operative pain was found to be slightly higher in the group undergoing tonsillectomy by coblator dissection, and the analgesic dose needed in the post-operative period remained the same for patients of all three groups.
Conclusions: No statistically significant difference was found in the post-operative pain scores of patients undergoing tonsillectomy by CD, BD and CBD techniques.
INTRODUCTION
Tonsillectomy is among the commonest surgeries performed by otorhinolaryngologists worldwide. The earliest tonsillectomy in the literature dates back to 1000 BC, when it was performed by Sushruta in India. [1] The practice of tonsillectomy as a safe procedure started with Celsus, a Roman aristocrat (25 AD to 50 AD), who described a technique using his finger for dissection. [2] Over the following centuries the surgery evolved, with different techniques being employed to make it a safer procedure and improve the post-operative outcome. The various indications for tonsillectomy range from chronic tonsillitis, peritonsillar abscess, and suspicion of malignancy to tonsillar hypertrophy causing obstructive sleep apnea, and styloid process excision. The different techniques used for tonsillectomy include cold dissection, bipolar cautery dissection, guillotine tonsillectomy, ultrasonic dissection, coblator dissection, laser tonsillectomy, and microdebrider tonsillectomy, while no single technique can be accepted as the best, with outcomes superior to the others. [3][4][5][6][7][8][9][10] Coblation makes use of radiofrequency energy. [11] When this radiofrequency energy passes through a conductive medium like normal saline, it breaks the saline into sodium and chloride ions. These highly energized ions form a plasma field and break intercellular bonds within soft tissue, causing its dissolution. The temperature achieved during this procedure is between 60-70°C, in comparison to electrocautery, where temperatures reach 400-600°C.
Post-operative pain is one of the major morbidity factors in tonsillectomy patients. [12] Some studies show that coblation gives better results in terms of post-operative pain, while other studies remain equivocal. [13][14][15] The current study mainly aims to compare and analyze whether there is any statistically significant difference in post-operative pain in patients undergoing tonsillectomy by CD, BD and CBD techniques.
Study setting
Patients with indications for elective tonsillectomy in the ENT department of TMMC and Research Center, Moradabad, were enrolled.
Study design and duration
The current study is a randomized prospective observational study, conducted over three years, from December 2016 to November 2019.
Sample size
The sample size for the present study was 142; all patients who underwent tonsillectomy during the above period were included in the study.
Inclusion criteria
Inclusion criteria for the current study were: patients aged between 5 and 50 years; patients with complaints of recurrent tonsillar infections, i.e. chronic tonsillitis (7 or more episodes/year, or 5 or more episodes/year for 2 years, or 3 or more episodes/year for 3 years); and patients with complaints of obstructive symptoms related to tonsillar hypertrophy.
Exclusion criteria
Exclusion criteria for the current study were: need for any concurrent surgery other than tonsillectomy, such as adenoidectomy or myringotomy with ventilation tube insertion; acute tonsillitis; impaired mental status; any bleeding disorder; and hypersensitivity/allergy to the drugs involved in the procedure.
Procedure
Tonsillectomies by all three techniques were done under general anaesthesia with endotracheal intubation. Patients were placed in the supine position with extension at the neck using a shoulder roll (Rose position). A Boyle-Davis mouth gag was used to keep the mouth open, fixed using Draffin's metallic bipod stand. In the CD technique, a tonsil-holding forceps was used to pull the tonsil medially, and an incision was made at the superior pole of the tonsil using toothed forceps. The tonsil was dissected from its underlying bed by gauze dissection down to the inferior pole, which was then snared using Eve's tonsillar snare. The tonsillar bed was inspected for bleeding, and any bleeding point found was ligated using 3-0 vicryl sutures.
In bipolar dissection, after general anaesthesia and similar positioning of the patient, bipolar cautery was used to make the incision at the superior pole of the tonsil and to dissect the tonsil from its bed. Snaring of the inferior pole was done using Eve's tonsillar snare, and any bleeding points found were cauterized using bipolar cautery.
The CBD tonsillectomy was done using a coblator under the microscope. Dissection was started at the inferior pole of the tonsil and continued to the superior pole. Irrigation with normal saline and simultaneous suctioning were performed during the surgery, followed by coagulation of bleeding points using the coblator. Patients were discharged 48 hours after surgery and were given a combination of ibuprofen and paracetamol syrup, dosed by body weight, three times a day for 5 days; intravenous diclofenac, dosed by body weight, was given as rescue therapy in case of more severe pain. Post-tonsillectomy pain was assessed on days 1, 3, 7 and 14 using the standard visual analogue pain scale (Figure 1).
Result analysis
Post-operative pain and demographic data were compared among the three tonsillectomy groups. The standardized VAS was used to assess post-operative pain. Statistical differences among the variables were tested using the chi-square test. Results were analyzed using a 95% confidence interval, with p<0.05 considered significant.
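For illustration, a chi-square comparison of the kind described can be run as below. The pain categories and patient counts are hypothetical placeholders, not the study data.

```python
from scipy.stats import chi2_contingency

# Rows: CD, BD, CBD groups; columns: counts of patients reporting
# mild / moderate / severe pain on a given day (hypothetical).
observed = [
    [20, 18, 10],  # cold dissection (CD)
    [19, 16, 9],   # bipolar dissection (BD)
    [8, 18, 24],   # coblator dissection (CBD)
]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p < 0.05 indicates a significant difference
```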
Age and sex
142 patients were included in the study, of whom 86 (61%) were male and 56 (39%) were female. The mean age of the patients was 28.4±3.3 years (range 5-50 years). 48 patients with a mean age of 26.1±5.8 years underwent tonsillectomy by the cold dissection method, 44 patients with a mean age of 28.6±2.3 years underwent tonsillectomy by the bipolar dissection method, and 50 patients with a mean age of 27.4±7.1 years underwent tonsillectomy by coblator dissection.
Post-operative pain
Post-operative pain scores were assessed using the VAS. The mean VAS scores for post-operative pain in patients who underwent tonsillectomy by the cold dissection method were 4.7 on day 1, 6.14 on day 3, 4.89 on day 7, and 1.26 on day 14. Patients who underwent tonsillectomy by the bipolar dissection method had mean VAS scores of 4.28 on day 1, 6.26 on day 3, 5.10 on day 7, and 1.4 on day 14. The mean VAS scores for patients who underwent tonsillectomy by coblator dissection were 6.01 on day 1, 6.33 on day 3, 5.12 on day 7, and 1.38 on day 14. A statistically significant difference was found among the three groups on post-operative day 1 (p=0.01), while the mean pain scores on days 3, 7 and 14 showed no significant differences among the groups (Table 1).
DISCUSSION
For tonsillectomy patients, post-operative pain remains one of the major morbidity factors. Over the years, different techniques have evolved to reduce morbidity due to tonsillectomy. Researchers have done many studies, but controversy regarding the best technique for tonsillectomy still persists.
The study of Silveira et al suggests that there is more pain in patients undergoing bipolar electrodissection tonsillectomy than cold dissection. Similar findings were reported in the studies of Adoga and Mofatteh et al. [16][17][18] Mofatteh et al recorded pain at 4 and 24 hours after the operation. In our study, we did not find any significant difference in pain intensity between these two groups when scaling pain intensity on days 1, 3, 7 and 14 after the operation. A study done by Pang suggests no significant difference in pain between bipolar electrodissection tonsillectomy and cold dissection, which is more consistent with our finding. 5 As the coblator uses radiofrequency energy and produces a temperature of 60-70°C during the procedure, it has been hypothesized to cause less post-operative pain. 15 The study of Burton et al was not able to find adequate evidence in support of the superiority of coblation tonsillectomy with respect to post-operative pain. Parker et al, in a randomized controlled trial in children aged 4 to 16 years, found that coblation tonsillectomy is not superior to cold dissection with bipolar haemostasis in terms of post-operative pain, yet the amount of analgesics required was less in the coblation tonsillectomy group. 19 Another study reported less pain in the coblation tonsillectomy group than in the other groups (electrocautery and ultrasonic scalpel), while pain in the electrocautery and ultrasonic scalpel groups did not differ significantly. 22 Polites et al compared post-tonsillectomy pain in 20 adult patients treated by the coblation and dissection methods and found coblation to be significantly less painful during the first 3 days, while no significant difference was shown on days 4 to 10. 23 In the current study, we found statistically significantly higher pain scores on the first post-operative day in patients who underwent coblation tonsillectomy, while the pain scores on post-operative days 3, 7 and 14 were similar in all groups, with no significant differences between them.
CONCLUSION
Statistically significant differences in pain scores were found on post-operative day 1 (p<0.05) between the three techniques, with higher pain scores in patients who underwent coblation tonsillectomy. The pain scores on post-operative days 3, 7 and 14 were similar, with no significant differences between them. However, studies with larger sample sizes are needed to prove conclusively whether any difference exists between the various techniques of tonsillectomy.
"year": 2020,
"sha1": "80431fff587efb5be17708c6fe87ee2e8fa53026",
"oa_license": null,
"oa_url": "https://www.ijorl.com/index.php/ijorl/article/download/2676/1491",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "15f4a7eb99f76e62d52f0709839b9f3df92db51a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219881386 | pes2o/s2orc | v3-fos-license | Breaking the Linguistic Minority Complex through Creative Writing and Self-Translation
Generally speaking, a minority language is "one spoken by less than 50 percent of a population in a given region, state or country" (Grenoble and Singerman, 2017, n.p.). In this article, I propose a more contextualized definition that applies to the realm of literary writing and (self-)translation. Thus, I define a minority language as any language which a bilingual or plurilingual writer perceives as not being the dominant one in the sociocultural and linguistic context in which s/he is active as an author or as a (self-)translator. Assuming this alternative definition as a point of departure, I discuss the creative and self-translational practice of the Canadian writer Antonio D'Alfonso. D'Alfonso is one of those rare plurilingual writers who feel linguistically defamiliarized, claiming that instead of having a proper mother tongue he has a mixed baggage of native Molisano dialect, French, English and Italian. Thus, he tends to write, think and (self-)translate immersed in a kind of 3D- (or even 4D-) linguistic landscape (Pivato, 2002). D'Alfonso's self-translations from French into English and/or vice versa are testimony to the author's experimental way of challenging the "crude subjugation" (Whyte, 2002, p. 69) of a language over another and of overcoming any minority-language complex he might have developed on his path to becoming a linguistically uprooted writer.
Introduction
In this article, I discuss the way in which, more or less consciously, the polyglot and translingual Canadian writer Antonio D'Alfonso uses creative writing and self-translation to overcome his minority-language complex and to symbolically minorize-that is, decentralize-the main dominant languages (English and French) in which he happens to be creatively active. Methodologically, I mainly draw upon an interview with D'Alfonso. Conceptually, I assume a transcultural perspective which, at the macro-level, looks beyond the divides between languages and cultures in search of their commonalities, entanglements, and amalgamations (Welsch, 2010), and at the micro-level pays attention to the individual's multiple, cross-cutting cultural interconnections and identity formations (Epstein, 2009).
Major and Minor Languages in the Global Ecumene of Letters
In its common understanding, a minority language is "one spoken by less than 50 per cent of a population in a given region, state or country" (Grenoble and Singerman, 2017, n.p.). However, if we look at the complex web of power dynamics at play in the "global galaxy" of languages (De Swaan, 1993, 2001) and in the "World Republic of Letters" (Casanova, 2004), we are confronted with a range of asymmetric relations that further complicates the perception of what is minor and major, central and peripheral, dominant and subordinate, canonized and non-canonized. Drawing upon Abram De Swaan's (1993) model of the "galaxy of languages," Pascale Casanova (2004, 2009) attaches different levels of symbolic capital to each and every literary national system. National literatures compete for dominance in the global ecumene of letters, instituting a system of centre and periphery. Thus, according to Casanova, the major literary power brokers are still to be found in the West-London, Paris (more particularly), and New York. Consequently, the same inequalities, hierarchies, and power struggles that are at play within and between nations-where we currently witness a manifest predominance of the Anglo-American cultural sphere-also determine the fate and the global standing of national/ethnic languages and their related literatures within and outside national borders (Grutman, 2015). As Rainier Grutman reminds us,

Significa que hay desgraciadamente diferencias apreciables entre las lenguas del mundo en términos de mercado y de valor de intercambio (¡que es otra cosa muy distinta del valor intrínseco!) (2011, p. 79)

[It means that unfortunately there are considerable differences between the world's languages in terms of market and exchange value (which is something quite different from intrinsic value!)] (my translation)

However, it is worth taking into consideration that in an increasingly pluricentric linguistic world even the prestige, qualitative prominence, and weight of national languages tend to vary dramatically depending on the context and the literary stature of writers and (self-)translators. According to Eva Hoffman,

there is no one geographic center pulling the world together and glowing with the allure of the real thing […] in a decentered world we are always simultaneously in the center and on the periphery […] every competing center makes us marginal. (1989, pp. 274-275) 1

Having said that, at this moment in time we may undoubtedly consider English as the global dominant language, that is, the idiom with the highest symbolic and real capital in the so-called global "stock exchange of languages" (Calvet, 2006, p. 4; see also Grutman, 2011, 2015). At an international level, English is the established language of communication in any major field of human activity: diplomacy, business, finance, science, technology, publishing, travel, tourism, research, and academia (Phillipson, 2009; Galloway and Rose, 2015). Any other language-including major ones such as French, Spanish, Arabic, and Chinese-needs to be evaluated against the benchmark set by English, and thus provided with its specific "power differential." 2 We might thus establish a basic rule: the less linguistic capital a language has in the world stock exchange of idioms, the higher its power differential is. Hence, if Spanish has a low power differential towards English, and French an even lower one, Tagalog has a high power differential, and Gikuyu an even higher one.
In very specific and extremely localized contexts, the power differential between English and another major language can be more or less artificially kept to a minimum, or heavily constrained. It is the case, for example, with English in the Quebec context, where language laws are notably meant to mitigate the linguistic power divide and "increase the status of French relative to English in the Province" (Bourhis, 2001, p. 101).
In this article, however, I would like to propose a more contextualized definition of minority language applied to the realm of literary writing and (self-)translation. A minority language is any language which a bilingual or plurilingual writer perceives as not being the dominant one in the socio-cultural and linguistic context in which s/he is creatively active (either by choice, life's circumstances or outer forces) as an author and/or as a (self-)translator.
Let's take the example of Tim Parks, a British writer who has been living in Italy for most of his adult life. For many years, Parks wrote and published in English, his mother tongue (and, as we said, the accepted global dominant language). At a certain stage of his literary career, he felt compelled to write and self-translate his work into Italian, since Italian is the idiom in which he is culturally immersed for most of the time. Yet, Parks confesses to have miserably failed at both-writing in or self-translating into Italian. He found the tasks too impervious, almost impossible, and the results not good enough for his taste, lacking the necessary depth:

[It is] my own growing conviction that a very great deal of literature, poetry, and prose can only be truly exciting and efficacious in its original language, a conviction that goes hand in hand with my decision not to write any more in Italian, never to translate into Italian, and never to translate except for the purposes of elucidation. This is a personal decision, I should stress, not a prescription. (Parks, 2000, n.p.)

2. The way I use the expression power differential is similar to the way Castro, Meiner and Page talk about the power differential between two languages (2017, p. 2).
Despite his admitted failures, Parks's attempts at bilingual writing and at self-translation show that even authors writing in a major, dominant language may perceive it as minor if that language is not shared, acknowledged, or regularly practiced by the literary community of their adopted country.
A Taxonomy of Self-translation in Relation to Symbolic Capital
Anton Popovič initially defined self-translation as "the translation of an original work into another language by the author himself" (1976, p. 19). In this article, I adopt Grutman's extended definition of self-translation as both a process and a product: "the act of translating one's own writings into another language and the result of such an undertaking" (2009, p. 257).
In the world's linguistic stock exchange described by De Swaan and Casanova, and further explored by Grutman (2015), translations (and thus also self-translations) can be either horizontal or vertical, depending on the value given to the languages involved. They are horizontal when they happen between national languages that have the same linguistic capital, that is, when the languages involved are equally juxtaposed, autonomous, dominant, and belong to well-established national literary systems. They are vertical when they "mettent en présence des langues de statut trop inégal pour que le transfert puisse ressembler à un échange (mot qui implique une forme de réciprocité)" [they involve languages whose status is too unequal for the transfer to resemble a veritable "exchange" (a word that implies a form of reciprocity)] (Grutman, 2015, p. 21; my trans.).
Vertical translations may further be qualified as "supraductions" if the text is translated with an ascending movement (uphill), from a minor language into a dominant and more central one, or, in other words, from the periphery to the centre; or they can be "infraductions" if the text is translated with a descending movement (downhill), from a major and more widespread language into a minor and marginalized one, that is, from the centre to the periphery (ibid.). The same distinction applies to self-translation, which can be thought of as "infraautotraducción" (infra-self-translation) or "supraautotraducción" (supra-self-translation; Grutman, 2011, p. 81). 3
Bilingualism and Self-translation
The reasons why nowadays writers decide to self-translate their work are manifold (economic, psychological, sociological, aesthetic, or cultural) 4 and often linked to growing migratory flows, which create new generations of bilingual or semi-bilingual writers. In this article, the category "semi-bilingual" mainly refers to writers who were not raised as bilingual since their birth but who, due to a series of circumstances, happened (more or less forcefully, more or less willingly) to become bilingual in their youth or later in life. In her book Nord Perdu, the bilingual (French/English) writer Nancy Huston makes a similar distinction between "les vrais bilingues" and "les faux bilingues" (1999, p. 53).
In an attempt at summarizing-accepting the risk of simplifying-for the purposes of study, we may thus say that writers decide to self-translate in order to: 5

• Sell their book: that is, to find interested publishers in their country of adoption. 6 This happens especially with aspiring authors who are in the process of becoming bilingual, and often avail themselves of the help of native speakers. I call these writers the Sellers and the product of their self-translation the sellable. 7

• Widen their readership or expose their work to a wider international market: that is, acquire recognition-and, possibly, financial gain-in the dominant language. 8 This happens especially with emerging writers or mid-career writers who are also in the process of becoming fully bilingual and who are keen to give their work "an afterlife" in their adopted language (Grutman, 2013, p. 71). I call them the Wideners (or the Exposers) and the product of their self-translation the widened (or the exposed).
• Maintain a degree of ownership, autonomy, and/or authoriality. This especially happens with mid-career writers or with writers who belong to linguistic/ethnic minorities. These writers can be either semi- or fully bilingual. 9 While some authors are particularly interested in the politics of promoting a minority language against the dominance of a major language, others confess to self-translate in order to "rescue" their work from mistranslation or to "avoid inaccuracy." 10 I call them the Owners (or the Authorialists) and the product of their self-translation the owned (or the authorialised). Indeed, irrespective of their actual qualities, self-translations are often considered superior to non-authorial translations. This is due to the fact that, as Brian T. Fitch remarks, "the writer-translator is no doubt felt to have been in a better position to recapture the intentions of the author of the original than any ordinary translator" (1988, p. 125). Moreover, self-translators have the authority to allow themselves shifts in the translation that might not be "allowed" by another translator.
• Reflect their bilingual identity and bicultural intermediation. This happens especially with mid-career or established writers who are at least semi-bilingual and in the process of becoming fully bilingual. 11 As an unidentified writer in Corinna Krause's study states: "I like seeing the same idea expressed in the other language; getting a bilingual perspective on what I'm actually trying to say […]. I […] like the challenge of making it work in the second language" (2007, p. 110). I call them the Bireflectors (or the Intermediaries) and the product of their self-translation the bireflected (or the intermediated). 12

• Explore or exploit self-translation as a creative device that enables them to rewrite, reshape, alter, or reword their originals. This especially happens with well-established writers and fully bilingual writers. I call them the Explorers (or the Exploiters) and the product of their self-translation the explored (or the exploited). It is particularly in this case that writers have the unique opportunity of accessing a sphere of "added creativity."

9. See Gentes (2016) and Grutman and Van Bolderen (2014). Suffice it to think of the self-translation practice of Samuel Beckett, Nancy Huston, or André Brink. Brink once remarked: "it depends very much on the mind-set and on the way that you want to approach it [self-translation], whether it is going to be disrupting or creative, whether it is going to add something to you or take something away from you" (cited in Recuendo Peñalver, 2015).
• Majorize or decentralize a language. A writer may decide either to give relevance to a minor language by self-translating his/her work into that language from a major one (majorization), or to decentralize and diminish the self-importance of two equally dominant languages by self-translating one into the other (decentralization). Depending on the case, I respectively call this kind of self-translators the Majorizers (and the product of their self-translation the majorized), and the Decentralizers (and the product of their self-translation the decentralized). This approach is not dissimilar to the one described by Grutman in regard to a certain way of using "supraself-translation"-and, even more so, "infraself-translation"-according to which:

el escritor mantiene visibles ambas lenguas en una producción total, que es, por lo tanto, bilingüe. No sacrifica el escritor su habla nativa en el altar de la "gran" lengua supuestamente "universal," sino que mantiene el contacto con el público de su comunidad de origen, cuyo idioma quizás se considere menos prestigioso pero que le da un sello de autenticidad. (2011, p. 84)

[The writer keeps both languages visible in a total production, which is therefore bilingual. The writer does not sacrifice his native speech on the altar of the supposedly "great," "universal" language, but maintains contact with the public of his community of origin, whose language may be considered less prestigious but which gives it a stamp of authenticity.] (my trans.)

Gilles Deleuze and Félix Guattari (1975) have explored and revealed the power of "minor literatures" for destabilizing dominant social codes. In the same way, and to a certain extent, minor languages in self-translation may act as tools to subvert, escape from and, possibly, decentralize the dominant role of any language perceived as "major" in a particular cultural context. As Christian Lagarde explains:

La subjectivité, liée à une survalorisation symbolique, tend alors à inverser dans le texte [...] le rapport de force réel, de la même manière que, selon Deleuze et Guattari (1975), une "langue mineure" forgée par l'écrivain parvient à déstabiliser de l'intérieur (à faire boiter) quelque langue que ce soit. (2015, p. 6)

[Subjectivity, tied to a symbolic overvaluation, then tends to invert in the text [...] the real power relation, in the same way that, according to Deleuze and Guattari (1975), a "minor language" forged by the writer manages to destabilize from within (to make limp) any language whatsoever.]

This artificial and rather generic categorization of self-translators according to their aspirations, aims, and self-perceived level of bilingualism provides us with a useful interpretive frame regarding the reasons that may lead writers to self-translate and the kind of self-translation they produce. However, we must keep in mind that these categories are neither fixed nor impervious: they tend instead to overlap, intersect or conflate into each other. Moreover, once they embark on the process of self-translation, writers tend to jump from one box to the other over the course of time-and, sometimes, even within the same book-depending on their publishing status, cultural manifestations, identity issues, or exploratory/creative drives.
Majorizing a Minor Language through Writing and Self-translation
As we have seen, despite the unequal power relations between languages in the global scene (Casanova, 2009; Grutman, 2015), a perceived minority language can-through a subversive creative or translational act-be majorized, thus disrupting the binary ideological framework of what is minor and what is major, what is dominant and what is subaltern, what is relevant and what is irrelevant. There are several reasons that lead an author to majorize a language and several ways to do it, whether consciously or subconsciously, defiantly or placidly.
Let's take the case of Jhumpa Lahiri. After having acquired fame and literary success by writing in English and publishing with major American publishing houses, Lahiri decided to abandon English and write in her newly adopted language, Italian. Echoing Beckett's famous explanation for his turn to French ("parce qu'en français c'est plus facile d'écrire sans style"-because in French it's easier to write without style), Lahiri declared: "In italiano scrivo senza stile, in modo primitivo" ("In Italian I write without style, in a primitive way") (2015, pp. 52-53). Lahiri chose Italian to find a freer way of writing, without having to keep up with the expectations created by her nuanced English style (see Kellman, 2017). She published her first book in Italian-a memoir-with an Italian publisher. She was so adamant about wanting to revel in the new expressive freedom provided by Italian (a third language painstakingly acquired in her adult age) that she even refused to self-translate her work into English, leaving the translation task to a professional translator, Ann Goldstein. The translation appeared in a bilingual edition, with Lahiri's Italian on one side and Goldstein's English on the other, thus placing the two languages on an equal footing even from a visual point of view. By choosing to write and publish in Italian, Lahiri found a creative way to dislocate the centrality of English. Not surprisingly, the book was an instant success. One wonders, however, whether it would have met the same enthusiastic reception if, instead of Italian, Lahiri had chosen a language less appealing and with a lesser symbolic capital, such as Finnish or Yoruba.
The case of Ngugi Wa Thiong'o is even more radical in this regard and, more particularly, in the context of self-translation. After writing several successful novels in English, Wa Thiong'o decided to go back to his native Gikuyu (Baker, 2017). Now, every time he writes a play or a novel in his Kenyan mother tongue, he self-translates it into English, making sure that on the cover of the book it is written: "Translated from Gikuyu by the Author." In this act of creative and translational resistance, Wa Thiong'o not only honors and ennobles the linguistic tradition in which he is active but also tries to reduce the power differential between Gikuyu and the (post)colonial language, English. Indeed, as Homi Bhabha reminds us, "[c]ultural translation desacralizes the transparent assumptions of cultural supremacy" (1994, p. 228).
Although in a more veiled and subtle way, Francesca Duranti also decentralizes the English of her self-translated novel Left-Handed Dreams by infusing it with a certain ethnic flavor-"a scent of basil," as she calls it (Dagnino and Duranti, 2017). She does so by making sure that certain turns of phrase, linguistic quirks, and neologisms in her English prose act as a reminiscence of her native Italian. For example, she insisted on keeping the neologism "to de-southernize" (demeridionalizzare) in relation to accents and dialects, and on keeping the Italian word "naturalezza" (naturalness)-another way of referring to the concept of "sprezzatura" (ibid.).
The examples provided here deal with language combinations that, according to the global linguistic stock exchange, are considered asymmetrical-English certainly having more symbolic capital than Italian or Gikuyu. At any rate, it is difficult to deny that major writers, that is, writers endowed with enough symbolic capital at an international-or at least national-level, have more chances to successfully majorize a minor language. Success, in this case, might mean to be acknowledged for one's writing attempts by the English-based media establishment or by the international academic community. Thus, if Jhumpa Lahiri has been lauded for her creative endeavors in Italian by no less than The New Yorker, 13 Ngugi Wa Thiong'o has, for his part, become the champion of postcolonial literati. As Susan Bassnett notes, among many others,

[t]he case of a writer like Ngugi Wa Thiong'o reveals a shift of language as a means of asserting the status of a minority language. Ngugi chose to make a political statement by rejecting English, the global language, by preferring to write in Gikuyu. (2013, p. 18)

Undoubtedly, writers who are successful according to the parameters adopted, or imposed, either by the global market or by a consecrated literary community are more willingly allowed those poetic licenses that, in other cases, might be easily disapproved of, rejected, or simply not registered-obliterated behind a wall of silence. As we shall see in the case of Antonio D'Alfonso, minor writers, that is, writers (self-)perceived as literary outsiders, have a much harder time when they, more or less consciously, more or less subtly, try to decentralize a major language through self-translation. Most of the time their attempts are perceived as linguistic faults, blunders, naïvetés, improprieties, betrayals, or awkward efforts to get free of a particular national literary mold.
Antonio D'Alfonso: A Writer without a (Dominant) Tongue
Together with Alberto Manguel, Antonio D'Alfonso is one of those rather rare cases of plurilingual writers who constantly feel linguistically destabilized or defamiliarized (Dagnino and D'Alfonso, 2017). 14 Put simply, D'Alfonso claims that, instead of having a proper mother tongue, he has a mixed baggage of native Molisano dialect (literally, his mother's tongue), French (the language of his youth in Quebec), English (the language of his schooling), and Italian (the language of his doctorate studies):

I do not feel at home in any language. All languages are foreign to me. A language is a tight suit I have to wear… I have no language, and so I feel awkward with them all, and I am extremely clumsy when dealing with language. (ibid.)

Not only is D'Alfonso a writer without a dominant tongue, he also considers himself a writer without a voice, due to the fact that he feels he never broke through the literary scene, whichever this may be: the English Canadian, the French Canadian or the Italian. In our interview, he stated: […] And if major publishers don't notice you-D'Alfonso implies-you can still try to have a voice in another language, in another literary realm, in another official culture-in an attempt to break the wall of silence of your "minority" status.

13. See Leyshon (2018).

14. The Argentinian-Canadian and polyglot writer Alberto Manguel considers not Spanish but rather English-which his mother did not speak-his mother tongue. "I was looked after by a nanny who was a German-speaking Czech; she taught me English and German and those were my languages until the age of seven; in fact, I didn't speak with my parents till that age: they spoke Spanish and French, so I got to know them when we returned to Argentina and I learned Spanish" (Dagnino and Manguel, in Dagnino, 2015, p. 75).
D'Alfonso's Self-translation: A Case in Linguistic Dislocation
Born in Montreal, the son of Italian parents, in his long quest for a linguistic and cultural home D'Alfonso has been creatively writing in English, French and Italian (Pivato, 2002). Due to his self-perceived and self-confessed "linguistic atheism"-as he calls it in our interview-he tends to write, think, and (self-)translate immersed, or lost, in a kind of 3D- (or even 4D-) linguistic landscape (Dagnino and D'Alfonso, 2017). There seems to be no dominant, ur-language he can go back to or rely upon (ibid.; see also Shafiq, 2006) but rather a constant shift from one linguistic realm to another and back. As Joseph Pivato observes, commenting on D'Alfonso's sense of linguistic and cultural homelessness emerging from his poetic self-translations, "poem after poem the sense of loss is traced in three dimensions with geometric precision" (2002, p. 247). In our in-depth interview (Dagnino and D'Alfonso, 2017), D'Alfonso repeatedly remarks how, at any given moment of his life, he felt out of tune with the majority language of the local community around him. It was the case with his mother's Molisano dialect when he was a toddler in Montreal, with the English of his schooling while he was living in Quebec, with his fluent French-Québécois when he moved to English-speaking Toronto, with his heavily dialectized Italian when he first visited Italy and tried to break through the Italian literary scene. It is exactly due to this feeling of linguistic displacement that D'Alfonso's self-translations end up acting as a reaction against the persisting domination of a language over another. 15 In this regard, his multilingual poem "Babel"-a mix of Italian, French, English, and even Spanish-published in the collection The Other Shore works as a poetic manifesto, capturing and at the same time celebrating the condition of the multilingual individual in multicultural Canada (Pivato, 2002, p. 248). The poem reads as follows:

Nativo di Montreal
élevé comme Québecois
forced to learn the tongue of power
vivi en Mexico como alternativa
figlio del sole e della campagna
par les franc-parleurs aimé
finding thousands like me suffering
me casé y divorcié en tierra fria
nipote di Guglionesi
parlant politique malgré moi
steeled in the school of Old Aquinas
queriendo luchar con mis amigos latinos
Dio where shall I will be demain (trop vif)
qué puedo saber yo
spero che la terra be mine
(D'Alfonso, 1988, p. 57)

Seen in this light, D'Alfonso's published and unpublished self-translations from French into English and/or vice versa are testimony to his experimental way of counteracting the "crude subjugation" (Whyte, 2002, p. 69) of a language over another and of opposing any kind of minority-language complex he might have developed on his path to becoming a linguistically uprooted writer. In his creative hands and in his readers' reception, self-translation thus seems to act as a dislocating device able to decentralize-in that constant switching from one language to another-even two dominant idioms like English and French. As Grutman reminds us,

L'inclusione di una lingua implica l'esclusione di un'altra, di modo che la scelta "positiva" [di una lingua] ha anche un lato "negativo", che ne inverte i valori tonali (come fa l'immagine negativa di una fotografia). (2016, p. 11)

[The inclusion of a language implies the exclusion of another, so that the "positive" choice [of a language] also has a "negative" side, which inverts its tonal values (like the negative image of a photograph does).] (my trans.)

15. See Whyte (2002).
If, according to D'Alfonso, every language is a nationalistic tool and thus "irrefutably centralized," then translation may be used as a subversive "act of decentralization." This is how he puts it in our interview:

Self-translating in French or in English does not imply I possess a French or an English spirit. I come from elsewhere, and it is this "elsewhere" and how this affects the "here" and "now" that must be stressed. Not being from Britain or France liberates me from the canons of those specific traditions. I belong to no canon, and I make sure that this displacement is present in all my self-translations. (Dagnino and D'Alfonso, 2017)

D'Alfonso has been self-translating his works (poems, essays, novels, plays) from French to English (or vice versa) since the 1970s. Among others, he translated from French into English his novel Avril ou l'Anti-passion (1990). […]

Not to expect anything anymore in literature […]. If it were not for foreign scholars who have noticed some literary and cultural merit in what I have produced, I can say in all frankness that in English Canada-and, for that matter, French Canada-I do not exist as a poet, novelist, or essayist, and this despite the thousands of pages that have appeared in almost 50 years of career. I mention this not to divulge any bitterness on my part. I am simply listing facts. (Dagnino and D'Alfonso, 2017)

Obviously, this condition of feeling almost "invisible" to the local or wider national literary scene-whether it is within a French or English Canadian context-is not peculiar to D'Alfonso alone. Many other bilingual/plurilingual writers belonging to minority ethnic communities in Canada share the same destiny (see, for example, Pivato, 2002). What is most striking in D'Alfonso's case, though, is his life-long attempt to build bridges and fill the ongoing chasm between the two major and official languages of Canada from a third-party linguistic position (Italian/Molisano dialect) that would end up constantly challenging Canada's linguistic status quo.
It is not within the scope of this article to analyze and compare D'Alfonso's self-translations, which in most cases are produced through a creative process of rewriting, as the author is keen to underline:

The versions in the two languages are quite different from one another.
[…] I do not believe that self-translation is translation at all, except as a librarian's category. Antigone in French is not the same as Antigone in English. There are subtleties that distinguish the two versions. The source is the same but the disclosures are different. (Dagnino and D'Alfonso, 2017)

In any case, other scholars have already aptly performed this task. Lucia Canton, among others, has revealed the highly creative nature of D'Alfonso's self-translations. About Avril ou l'anti-passion, which he translated as Fabrizio's Passion, she writes:

the author chose to publish an English adaptation of his original French novel instead of a straightforward translation. Essentially, the texts present the same narrative: Fabrizio Notte's quest for identity in a trilingual and tricultural environment. However, the specific language and cultural contexts that each inhabits makes the novel a different narrative experience in the writing, in the telling and in the reading. (Canton, 2015, n.p.)

What I am most interested in here is to understand the motivations, objectives, (self-)perception, and reception related to such translational practices. In this regard, we might even agree with D'Alfonso's provocation when he says he finds the whole exercise of comparing source text and target text useless, "self-referential" and a "tad arrogant," while we should rather focus on how the work is received/interpreted in a new linguistic context:

I never read Samuel Beckett by comparing his self-translations. The exercise seems to me scholarly and not at all literary. The fact that I write in two or three languages is not one's business except my own. It seems rather indecent to make a show of it, in fact. It is as though I were exposed for having multiple lovers. The serious issue about self-translation is not about egotism, about the self. It is about breaking down borders, revealing to the world that there are no dissimilarities between language A and language B. Every language has to follow its codes, and what a writer does in one code might not work in another, so why bother to imitate the inimitable? Ticks are individual and, at times, though brilliant in one context, they might be insulting in another context. This over-emphasis on the lexis is a symptom of post-war literary practices that will in the future appear to be outdated mannerisms. I was never a believer of deconstructionism, and I believe that comparing original texts with their translations is very much the by-product of deconstructionism. (Dagnino and D'Alfonso, 2017)

In transcultural terms, D'Alfonso seems to be experiencing firsthand-and be revelling in-a cultural, literary, and linguistic state of "unbelonging" in his writing and (self-)translating (Dagnino, 2016, p. 7). In other words, he seems to have found his creative and linguistic raison d'être in an in-between and neutral cultural space of positive or wise estrangement in which even a felt sense of exclusion can be used in one's favor as a point of strength instead of as a weakness. In analyzing his unconventional modus operandi in his process of writing, translating and publishing his texts Avril ou l'anti-passion and Fabrizio's Passion, Canton mentions the author's way of writing passages or even whole chapters in three languages (Italian, English and French), thus showing how the process of translation had already started prior to the publication of the "original" French text.
In doing so, she remarks: D'Alfonso "is illustrating more strongly what it means to be an artist who lives simultaneously in three languages, three cultures, three environments that cohabit one territory" (Canton, 2015, n.p.).
This state of unbelonging allows D'Alfonso a privileged, perhaps already transcultural, intellectual, and psychological dimension where he is able to feel quietly in place, rather than constantly out of place. As he states in our interview,

To be a bilingual-bicultural person is somewhat a lie. There is not a single day when a person is not bombarded by a variety of noise, music, voices, images. A writer should capture this influx of dimensions and include them in his work. I have never written simply in two languages. I have written always in as many languages as my mind can understand. (Dagnino and D'Alfonso, 2017)

I open here a small parenthesis to elucidate my way of using the term unbelonging. Dubravka Ugresic has adopted this term in her book Europe in Sepia when talking about "the intoxication of belonging (to a home, a homeland, a country, a faith) and the trauma of unbelonging" (2014, p. 204). In a personal email exchange, the writer Inez Baranay has provided a different nuance to the concept of unbelonging, one which adheres to a transcultural viewpoint: "the transcultural is a theoretical arena, in which the company is fine with a sense of belonging among the unbelonging." This outlook is not dissimilar to D'Alfonso's:

My culture is global, and my viewpoint cosmopolitical. I am therefore an apatride writer, without a culture, without a country, without a language. I am a soul that moves from one body politic into another […]. My culture is weak, dead, and there is none I wish to embrace. I might sound rather arrogant or, worse, an idiot. However, this imperfect position will be in the future the only viable position in a world of centrified slaves. (Dagnino and D'Alfonso, 2017)

In Place of a Conclusion

As we have seen, bilingualism and self-translation may be used to question or redefine one's cultural identity 16 and to dislocate and decentralize contextual dominant idioms. I stress the word contextual because idioms become dominant depending on the context in which they are actively practiced and pursued. D'Alfonso's Italian, for instance, is perceived as a minority language in the Canadian context, but it is definitely lived as a dominant idiom by migrant or foreign writers trying to find their way into the Italian literary system. In this regard, D'Alfonso, like Parks, is considered an outsider whose Italian is not sufficiently refined or literary enough in the eyes of the local/national intelligentsia. As the Italian writer Francesca Marciano comments, "Italians haven't yet got rid of a certain elitist and pretentious view of literary style. They still have to undergo the stylistic revolution that the English went through with its Hemingways, Carvers and Faulkners" (Dagnino and Marciano, 2017).
By his own admission, D'Alfonso started off self-translating with the aim of expanding his readership and acquiring literary recognition outside the stifling cultural and linguistic borders of French Quebec:

Most of my essays written in French have never been published in French. All my essays I had to translate and publish in English. My anti-nationalism […] is clearly not appreciated by my French-language publishers […]. (Dagnino and D'Alfonso, 2017)

D'Alfonso thus started off as a Widener, willing to expose his work to a wider, English-reading audience. In the process, though, he understood that he could also use self-translation as a tool to call into question the centrality of two of the most influential languages (and their related literary cultures) on the global scene-namely, English and French. Consequently, he assumed the role of a Decentralizer:

Translations are required to demonstratively promote the nation's agenda. This is why in many cases, it is the translator who is applauded and not the author of the original text. When critics speak of one translation being better than another, it is often because the translator has elaborated something that is uniquely national. We experience this reservation whenever we have to negotiate the French translation of an English-language writer: does the publisher hire a translator from Canada or one from France? This proves that language is irrefutably centralized. Whenever translation is decentralized, it is ignored. (Dagnino and D'Alfonso, 2017)

That is why D'Alfonso's self-translations may also be read-quoting him-as "subversive acts, perhaps the most subversive acts in the world today" (ibid.). We should not forget that, indeed, we are dealing with a global literary scene in which, if we just look at the United States, the biggest publishing market on earth, only an infinitesimal part of published books are translations: "The sad statistics indicate that in the United States and the United Kingdom, for example, only two to three percent of books published each year are literary translations" (Grossman, 2011, n.p.). 17 A closer look reveals an even worse state of affairs, as the two to three percent figure is considerably bolstered by technical manuals and other non-fiction texts. For literary fiction and poetry, the figure is actually closer to 0.7%. 18

17. That is as opposed to 35%-ish in Europe and Latin America, and who-could-say what percentage in Romania or Lebanon.

D'Alfonso's task of acting as a language dislocator through self-translation is tremendously ambitious and perhaps defiantly hopeless, as he admits:

(Self-)Translation means leaving your windows open for the passers-by… [But] who are we to want to pretend to have something new to offer to cultures that have shut tight the gates of national imagination?
[…] If one considers that translations are rarely read and never reviewed, a translation is a waste of time for any writer who is content on reading himself and his buddies. Why read an author who introduces a worldview and works in a style totally foreign to yours? To do so would demonstrate an openness of spirit that is, in fact, atypical. (Dagnino and D'Alfonso, 2017) | 2020-06-04T09:05:52.812Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "105b19fc04caed1f7ab03257e2c213c94eab2c01",
"oa_license": null,
"oa_url": "http://www.erudit.org/fr/revues/ttr/2019-v32-n2-ttr05252/1068905ar.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a0b2b147f7b61fafa7c8836965c4f8ce060b321e",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Sociology"
]
} |
21660953 | pes2o/s2orc | v3-fos-license | Exact solutions for the collaborative pickup and delivery problem
In this study we investigate the decision problem of a central authority in pickup and delivery carrier collaborations. Customer requests are to be redistributed among participants such that the total cost is minimized. We formulate the problem as a multi-depot traveling salesman problem with pickups and deliveries. We apply three well-established exact solution approaches and compare their performance in terms of computational time. To avoid unrealistic solutions with unevenly distributed workload, we extend the problem by introducing minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach. The obtained results can be used as benchmarks for decentralized mechanisms in collaborative pickup and delivery problems.
Introduction
Horizontal collaboration is a relatively recent phenomenon, where companies at the same level of the supply chain establish partnerships. An example of such a type of collaboration in logistics are carriers who exchange transportation requests in order to increase vehicle fill rates or reduce transportation costs as well as emissions of harmful substances (Beliën et al. 2017). Not surprisingly, collaborative vehicle routing is an active research area of high practical importance (Gansterer and Hartl 2017).
If collaborative decisions are made by a central authority having full information, this is referred to as centralized collaborative planning. An example of such a central authority might be an online platform providing services for collaborative decision making (Dai and Chen 2012). In our study, we focus on such a centralized decision-making problem occurring in the less than truckload pickup and delivery market, where customer requests have specified origins and destinations. In this branch of the transportation industry, collaborative planning is of particular importance since shipments from different customers can be moved on the same vehicle. This gives carriers much flexibility to share customer requests among each other (Archetti et al. 2014). In Fig. 1 we illustrate the investigated setting with three carriers.
We assume a central authority having full information, aiming at an efficient distribution of customer requests to carriers. The problem has been introduced by Berger and Bierwirth (2010), but no efficient solution techniques have been presented so far. Furthermore, a natural assumption is that carriers are not willing to share all their customers. In real-world applications a reasonably even distribution of workload among carriers is a minimum requirement to make collaborative solutions acceptable for competing carriers. Thus, we extend the problem by constraints ensuring that each carrier is assigned a minimum number of customers. We refer to this problem as the multi-depot traveling salesman problem with pickups and deliveries (MDTSPPD). For both problem variants (with and without minimum workload constraints), we apply three well-known exact solution methods and compare their performance against a commercial solver. Our computational study shows that Benders decomposition is the method of choice for the original problem formulation. However, if minimum workload constraints are considered, column generation clearly dominates all other solution techniques.
The remainder of the paper is organized as follows. Section 2 provides a literature review. Mathematical models are presented in Sect. 3. We discuss the applied solution methods in Sect. 4. Details on the computational study are presented in Sect. 5. Conclusions and further research are summed up in Sect. 6.
Literature review
The first studies to systematically assess the potentials of collaborative vehicle routing were presented by Krajewska and Kopfer (2006) and Cruijssen et al. (2007).
A real-world setting of a local courier service of a multi-national logistics company is investigated by Lin (2008). Joint route planning of cooperative carriers is researched by, e.g., Dai and Chen (2012), Buijs et al. (2016), and Liu et al. (2010), while Adenso-Díaz et al. (2014), Ergun et al. (2007), and Kuyzu (2017) focus on shippers, who want to merge full truckload lanes. The full truckload multi-depot capacitated vehicle routing problem (VRP) in carrier collaboration is presented in Liu et al. (2010).

[Fig. 1 a-c: The upper part shows the pre-collaborative setting. An efficient redistribution of customer requests to carriers is shown in the lower part (Gansterer and Hartl 2017).]

Several recent studies focus on ecological aspects, like reduced road congestion, noise pollution, and emissions of harmful substances (Montoya-Torres et al. 2016; Pérez-Bernabeu et al. 2015; Sanchez et al. 2016).
In order to approximate optimal solutions even for large real-world collaboration problems, many authors propose decomposition strategies. Dai and Chen (2012) use such an approach for a collaborative less than truckload transportation planning problem of carriers with pickups and deliveries. Their method consists of two steps. First, a mixed integer programming model, which is a generalization of the lane covering problem, is proposed. Secondly, a set of feasible vehicle tours is constructed. Nadarajah and Bookbinder (2013) also present a two-stage framework for less than truckload carrier collaborations. The first stage refers to collaboration between multiple carriers at the entrance to a city, which can be formulated as a VRP with time windows. The second stage involves collaboration between carriers at transshipment facilities. Buijs et al. (2016) study the collaboration between two business units of Fritom, a Dutch logistics service provider, and propose alternatives to improve its collaborative transport planning. They introduce the generalized pickup and delivery problem (PDP), which relaxes the common constraint that a load must be transported from its origin to its destination using one vehicle within a single planning period. The authors show different decomposition approaches, which are necessary to solve the real-world instances. Wang et al. (2014) present a combination of horizontal and vertical carrier collaboration, where both subcontracting and collaborative request exchange are taken into account. Literature reviews on collaborative vehicle routing are presented by Verdonck et al. (2013) and Gansterer and Hartl (2017). Berger and Bierwirth (2010) introduce the collaborative carrier routing problem, which assigns transportation requests to carriers. The authors come up with two decentralized solution approaches, but no efficient solution method for the centralized problem is presented. Gansterer and Hartl (2016) show that these decentralized mechanisms are particularly powerful if carriers select requests based on geographical information. The multiple vehicle pickup and delivery problem in a non-collaborative setting is researched by Lu and Dessouky (2004); however, the authors do not consider workload constraints.
Multi-depot VRPs in general are investigated by, e.g., Polacek et al. (2008), Dondo and Cerdá (2007), and Currie and Salhi (2003), while multiple depots and backhaul customers are researched in Salhi and Nagy (1999) and Min et al. (1992). PDPs with heterogeneous vehicles are tackled by Irnich (2000). Nagy and Salhi (2005) look at a multi-depot VRP with mixed backhauls and simultaneous pickups and deliveries, where a customer can both receive and send goods at the same time. A multi-depot heterogeneous PDP with soft time windows is presented by Bettinelli et al. (2014). Detti et al. (2017) present a multi-depot dial-a-ride problem with heterogeneous vehicles. A survey and typology on multi-depot PDPs in multiple regions is provided by Dragomir et al. (2017).
Surveys on pickup and delivery problems are presented by Parragh et al. (2008), Berbeglia et al. (2007), and Berbeglia et al. (2010). Various exact solution methods have been applied to this problem class. A branch-and-cut-and-price algorithm for the PDP with shuttle routes is developed by Masson et al. (2014). An extended branch-and-bound algorithm is presented by Kalantari et al. (1985). The PDP with time windows is solved with a column generation scheme by Dumas et al. (1991), a branch-and-cut algorithm by Ropke et al. (2007), and a branch-and-cut-and-price approach by Ropke and Cordeau (2009) and Baldacci et al. (2011). Cherkesly et al. (2016) extend the problem by multiple stacks and solve it using branch-and-price-and-cut, while Cordeau et al. (2010) solve the problem with loading constraints using branch-and-cut. Branch-and-cut as well as branch-and-price are applied by Xue et al. (2016) to the PDP with loading cost. The multiple vehicle PDP is solved using a branch-and-cut algorithm by Lu and Dessouky (2004).
To the best of our knowledge, we are the first to compare different exact solution techniques for the MDTSPPD, and to extend it by the realistic assumption of minimum workload constraints.
Problem description
The MDTSPPD can be formulated as a routing problem with multiple depots, each used by a single vehicle, i.e. a carrier. The customer requests are paired pickup and delivery requests, meaning that each request is associated with a prespecified origin and destination. The problem belongs to the class of traveling salesman problems with precedence constraints (TSPPC). At each depot, we face the single vehicle case of the VRP with pickups and deliveries (SPDP), which is, according to Parragh et al. (2008), a subclass of VRPs with pickups and deliveries (VRPPD). It is classified by Berbeglia et al. (2007) as a one-to-one pickup and delivery problem. For the mathematical model we use a Hamiltonian tour formulation as suggested by Lu and Dessouky (2004), where the destination depot of one vehicle is the departure depot of the next vehicle. The model is based on formulations presented in Lu and Dessouky (2004) and Gansterer and Hartl (2016). The objective function (1) minimizes the total travel cost. Each vertex has to be entered and left exactly once. This is ensured by constraints (2) and (3). In constraints (4)-(6) we copy the values of the routing decision variables to the precedence decision variables (Lu and Dessouky 2004). Precedences among depots and customers are met by (7)-(14), where constraints (7)-(10) ensure that each pickup node is visited before its associated delivery node, and that customers assigned to the same depot are served by the same vehicle. In constraint (11) we ensure that no node is visited prior to the first depot. The sequence of depots is determined by constraints (12) and (13). Constraint (14) ensures that depot (2n + m + 1) is the last node in the Hamiltonian tour. While it is necessary to ensure that the routing decision variables x_ij are binary, Lu and Dessouky (2004) show that constraint (16) can be relaxed. Subtours are implicitly eliminated by constraints (4)-(6).
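As orientation for the constraint references above, a minimal sketch of the backbone of such a Hamiltonian tour formulation (the notation is illustrative and not necessarily the paper's exact model, which additionally contains the precedence machinery (4)-(14)) reads:

\[
\min \sum_{i \in N} \sum_{j \in N} c_{ij}\, x_{ij} \tag{1}
\]
\[
\sum_{j \in N} x_{ij} = 1 \quad \forall i \ne 2n+m+1, \qquad \sum_{i \in N} x_{ij} = 1 \quad \forall j \ne 0, \tag{2-3}
\]

where \(x_{ij} = 1\) if node \(j\) is visited immediately after node \(i\), node \(0\) denotes the first depot, node \(2n+m+1\) the last, and \(c_{ij}\) the travel cost between nodes \(i\) and \(j\).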
Workload constraints
In order to use the model in the setting of collaborative carriers, it is necessary to include workload limitations. These limitations might be loading quantities or the number of customers visited along a tour. Otherwise, in a feasible solution, all requests might be assigned to one single carrier, which will probably not be accepted by the competitors. Thus, we introduce additional sets of parameters and decision variables:

ℓ_i  workload available at customer i (ℓ_i > 0 at pickup nodes, and ℓ_i < 0 at delivery nodes)
R̄_i  maximum workload for tours at depot i
R̲_i  minimum workload for tours at depot i
q_i  workload when arriving at customer i

Constraints (17) and (18) are required to determine the workload along the route. In the following two constraints, we ensure that the maximum (19) or minimum (20) workload is not violated. If the workload constraint refers to a number of customers that have to be assigned to a depot, ℓ_i is set to 1 for all pickup nodes i ∈ P. Nonnegativity of the decision variables q_i is defined by (21).
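A plausible big-M rendering of these constraints, consistent with the description above (a sketch in illustrative notation, not necessarily the paper's exact formulation), is:

\[
q_j \ge q_i + \ell_j - M(1 - x_{ij}), \qquad q_j \le q_i + \ell_j + M(1 - x_{ij}) \qquad \forall i, j \in N, \tag{17-18}
\]
\[
q_i \le \bar{R}_{d(i)} \quad \forall i \in N, \qquad q_d \ge \underline{R}_d \quad \text{at each tour's final depot } d, \tag{19-20}
\]

where \(d(i)\) denotes the depot of the tour serving node \(i\) and \(M\) is a sufficiently large constant. With \(\ell_i = 1\) at pickup nodes, \(q\) accumulates the number of customers served, so the bounds at the tour's final depot cap and enforce the number of customers assigned to each carrier.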
Solution methods
In this study, we assess the performance of three different exact approaches applied to the MDTSPPD: (i) branch-and-cut, (ii) Benders decomposition, and (iii) column generation. Benchmark results are generated using standard optimization software (CPLEX Optimizer 12.7). 1
Branch-and-cut
The branch-and-cut algorithm is an extension of the well-known branch-and-bound approach. The main difference is the way solutions at nodes of the search tree are processed. In a branch-and-cut algorithm, additional constraints are used to strengthen the linear programming relaxation if required. These cuts do not exist in the original problem definition. A similar mechanism is the use of lazy constraints. Lazy constraints are part of the problem definition, but in the branch-and-cut search, they are only added if required. If a candidate solution is found, the algorithm checks whether it is feasible with respect to the lazy constraints. A violated constraint is added, and the solution gets re-evaluated. We use the procedure proposed by Lu and Dessouky (2004), including the following cuts (Lu and Dessouky 2004; Ropke and Cordeau 2009):

Transfer constraints: The following valid inequalities hold for an arbitrary collection of nodes (h_1, ..., h_k) ∈ N \ {i, 2n + m + 1}, 1 ≤ k ≤ |N| − 2 (Lu and Dessouky 2004):

Adjacent constraints: These valid inequalities strengthen the precedence constraints by checking the requirements for pairs of directly connected nodes (Lu and Dessouky 2004). Whenever a pickup node i is visited before some node k, the corresponding delivery node (i + n) has to be visited after k.
Pairing constraints. A delivery node has to have more preceding nodes than its associated pickup node (Lu and Dessouky 2004).

Demand constraints. In paired PDPs it can be assumed that the demand at the delivery node is equal to the supply at the pickup node. Making use of this characteristic, Lu and Dessouky (2004) present a cut that can be used even if the real demand values are not known. The idea is that only very few combinations will sum up to zero (which is always the case for a specific pickup and delivery pair). Thus, if demands are not known, an artificial demand is assigned to each of the customers. This demand has to be unique and benefits from not being constructible from other demands. The easiest way to determine such sets of demands is to use prime numbers. For cutting, we sum up all demands served by a given depot, where $d_k$ is the demand of customer $k$ (it is assumed that $d_k > 0$ if $k$ is a pickup node, and $d_k < 0$ if $k$ is a delivery node).
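To make the artificial-demand idea concrete, here is a small sketch, our own illustration rather than the authors' code, that assigns prime-number demands to pickup nodes, mirrors them at delivery nodes, and enumerates which subsets of demand values can cancel out:

```python
from itertools import combinations

def first_primes(n):
    """Return the first n prime numbers (simple trial division)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def artificial_demands(n_requests):
    """Pickup node k gets prime demand p_k; its delivery node gets -p_k."""
    primes = first_primes(n_requests)
    pickups = {k: primes[k] for k in range(n_requests)}
    deliveries = {k: -primes[k] for k in range(n_requests)}
    return pickups, deliveries

def zero_sum_subsets(demand_values):
    """All subsets of demand values summing to zero; with prime-based
    demands, only complete pickup/delivery pairs (and unions of such
    pairs) can cancel, which is exactly what the cut exploits."""
    values = list(demand_values)
    return [s for r in range(1, len(values) + 1)
            for s in combinations(values, r) if sum(s) == 0]
```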
In the proposed branch-and-cut approach, precedence constraints (4)-(6) are used as lazy constraints. Since in the problem formulation the precedence constraints are implicitly used to eliminate subtours, it seems beneficial to use additional subtour elimination constraints as lazy constraints ($S \subseteq N$, $\emptyset \ne S \ne N$), where $E(S) := \{e \in E : |e \cap S| = 2\}$. These lazy constraints enforce that the number of edges within each subset of nodes $S$ is smaller than the size of the subset.
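A minimal sketch of the separation step for these lazy constraints follows. The data structure (an arc list describing a candidate solution) is our assumption, not taken from the paper:

```python
def find_subtours(arcs, nodes):
    """Decompose a candidate solution (list of (i, j) arcs with x_ij = 1)
    into its cycles; any cycle not covering all nodes is a subtour."""
    successor = dict(arcs)
    unvisited = set(nodes)
    cycles = []
    while unvisited:
        start = next(iter(unvisited))
        cycle, node = [], start
        while node in unvisited:
            unvisited.discard(node)
            cycle.append(node)
            node = successor[node]
        cycles.append(cycle)
    return [c for c in cycles if len(c) < len(nodes)]

# For each subtour S found, one would add the lazy cut
#   sum(x[e] for e in E(S)) <= |S| - 1
# and re-evaluate the candidate solution.
```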
Benders decomposition
As a second approach, we embed Benders Decomposition (Benders 1962) into the branch-and-cut procedure. This approach is described in, e.g., Sridhar and Park (2000).
In each node of the branch-and-cut tree, Benders decomposition is applied to the linear relaxations. The general concept is to decompose the problem into smaller subproblems, called stages. Each stage contains a set of variables and constraints of the original problem. The stages are solved iteratively. Once a solution of the first stage is determined, the second stage is solved using the solution of the first stage. As long as the first-stage solution leads to an infeasible second-stage solution, new constraints are added to the first-stage master problem. These iteratively added constraints are called Benders feasibility cuts. The optimal solution is found when the first-stage solution leads to a valid second-stage solution and no further cuts are required. The original problem (see Sect. 3) has two types of decision variables: one for the routing decisions and one for the precedence relations. Since the latter restrict the routing decisions, it seems reasonable to use them for the second-stage problem, while the first stage generates candidate solutions. However, these routes do not incorporate the precedence constraints (6)-(14), since these are in the second stage, and may therefore contain subtours. Therefore, the second stage ensures that solutions that contain subtours or violate precedence constraints are withdrawn from the solution space (cf. Sexton and Bodin 1985a, b; Contardo and Martinelli 2014). The constraints of the proposed model extensions (see Sect. 3) are added to the first stage.
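The iterative logic can be summarized in a short skeleton. The `first_stage` and `second_stage` objects and their methods are hypothetical placeholders for whatever solver interface is used; this is a sketch of the control flow, not the authors' implementation:

```python
def benders(first_stage, second_stage, max_iterations=10_000):
    """Alternate between a routing master problem and a precedence/subtour
    feasibility check, adding Benders feasibility cuts until a candidate
    first-stage solution is feasible for the second stage."""
    cuts = []
    for _ in range(max_iterations):
        candidate = first_stage.solve(extra_cuts=cuts)   # routing decisions
        feasible, cut = second_stage.check(candidate)    # precedence relations
        if feasible:
            return candidate                             # optimal solution
        cuts.append(cut)                                 # Benders feasibility cut
    raise RuntimeError("iteration limit reached before convergence")
```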
Column generation
In column generation, the decision problem is decomposed into a master problem and a subproblem. While the number of constraints is fixed, the number of decision variables (columns) increases over time. For the MDTSPPD, we separate the routing decision (subproblem) from the route selection (master problem), which is a linear relaxation of the following set partitioning problem. The objective function (28) minimizes the total routing costs, while constraint (29) ensures that each request is serviced by a route. Each depot has to be assigned to at least one route (30). The binary property of the decision variables is defined in (31).
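Equations (28)-(31) themselves were lost in extraction. A generic set partitioning master of this kind, with route set $\Omega$, request-coverage coefficients $a_{ir}$, and depot-usage coefficients $b_{dr}$ as our assumed notation, reads:

```latex
& \min \sum_{r \in \Omega} c_r\, \lambda_r                                          && \text{(28) total routing costs} \\
& \sum_{r \in \Omega} a_{ir}\, \lambda_r = 1 \quad \forall \text{ requests } i      && \text{(29) each request serviced} \\
& \sum_{r \in \Omega} b_{dr}\, \lambda_r \ge 1 \quad \forall \text{ depots } d      && \text{(30) each depot used} \\
& \lambda_r \in \{0, 1\} \quad \forall r \in \Omega                                 && \text{(31) binary route selection}
```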
As part of an iterative process, the proposed set partitioning formulation (master problem) will serve two purposes: (i) selecting a set of routes that ensure that every request is serviced, and (ii) updating the dual costs of each request and depot. A shortest path problem based on the properties of the vehicle flow formulation (Sect. 3) generates new promising routes based on the updated dual information provided by the master problem. This iterative process is performed until the subproblem is not able to identify additional routes that could reduce the objective function of the master problem. Dominance rules can be used to decrease the number of routes in the subproblem.
For the subproblem, we apply a labeling algorithm, where we start from a depot and gradually extend the route by a new node or customer. Each time we do so, we check the feasibility (i.e. capacity restrictions) of the newly created route. Each time a route is extended, the new and the existing routes are compared based on the dominance rules (see below), and dominated routes are discarded. In our algorithm, we deal with all depots at once: by introducing an artificial depot with a distance of zero to each of the customers, the shortest path problem with pickup and delivery can be used (e.g. Desrosiers and Dumas 1988). Every route that gets extended to a depot has to be extended to each of the depots and, after being checked for dominance, added to the master problem.
We apply dominance rules proposed by Ropke and Cordeau (2009) and Contardo and Martinelli (2014) for the multi-depot VRP:

x.costs ≤ y.costs (32)
x.nodesVisited ⊇ y.nodesVisited (33)
x.openRequests ⊆ y.openRequests (34)
x.lastNodeOfTour = y.lastNodeOfTour (35)
x.depotUsed = y.depotUsed (36)

A route x dominates a route y if it is cheaper and visits at least the same nodes. Furthermore, we use open requests, i.e. requests where the pickup but not the delivery node has been visited, as an additional dominance criterion. All criteria have to be met for a pair of routes having the same starting and ending node.
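Criteria (32)-(36) translate directly into a comparison routine. The label representation below (a small dataclass with cost, visited set, open-request set, last node, and depot) is our own assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class Label:
    cost: float
    nodes_visited: frozenset
    open_requests: frozenset   # pickups done, deliveries still pending
    last_node: int
    depot_used: int

def dominates(x: Label, y: Label) -> bool:
    """Route x dominates route y if all criteria (32)-(36) hold."""
    return (x.cost <= y.cost                              # (32)
            and x.nodes_visited >= y.nodes_visited        # (33) superset
            and x.open_requests <= y.open_requests        # (34) subset
            and x.last_node == y.last_node                # (35)
            and x.depot_used == y.depot_used)             # (36)
```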
Computational results
For our computational study we use data developed by Berger and Bierwirth (2010). The authors present three instance sets which refer to different degrees of competition between the carriers: (i) adjacent, (ii) overlapping, and (iii) identical customer regions. For each scenario (A, O, and I), there are 30 instances, with 3 depots and 9 transportation requests. For all computational experiments, we limit the runtime to 30 min. The algorithms are run single-threaded on an Intel Xeon CPU at 2.50 GHz. It should be noted that Berger and Bierwirth (2010) set their time limit to 120 min on a Pentium 4 PC at 2800 MHz.
Original problem
In the first part of our computational study, we assess the solution methods applied to the original problem, i.e. without the additional constraints on minimum workload (see Sect. 3).
In Sect. 4, we presented three variants of the branch-and-cut algorithm: (i) with lazy constraints (LC), (ii) with lazy constraints and subtour elimination (LC&SE), and (iii) without lazy constraints (woLC). As a first step, we want to investigate the necessity of including the proposed lazy constraints. The results are presented in Table 1.
From the average runtimes we see that variant woLC outperforms the other two. For all three scenarios, this method requires the minimum average runtime and is able to solve the maximum number of instances. The reason for this is that the lazy constraints are rarely binding. Thus, for all remaining tests, we only use the woLC configuration for the branch-and-cut algorithm. Not surprisingly, instances of set O require the longest runtimes, since the solution space increases with the degree of competition. (In Table 1, the last line reports the number of instances that could not be solved within the given time limit of 30 min.)
In Table 2 we compare Branch-and-cut, Benders decomposition, and column generation against CPLEX.
The results show that Benders decomposition outperforms all other approaches. For each of the test scenarios, this method needs the lowest computational time to reach the optimal solution, and there are only 4 instances (out of 90) that could not be solved within the given time limit of 30 min. It should be noted that the proposed column generation approach cannot solve any of the instances within this time limit. This can be explained by the loosely constrained solution space, which is disadvantageous for column generation-based methods. In the second part of our computational study (see Sect. 5.2), we see that additional constraints give a considerable boost to the column generation approach.
Workload constraints
In the second part of our computational study, we use the extended model presented in Sect. 3, where each carrier requires a minimum workload. In Table 3 we show the increase in total cost, depending on the degree of required workload.
The results show that the inclusion of a minimum workload constraint of 1 customer increases the total costs on average by 18.32%. If each carrier has to keep at least 2 of its initial customers, the average cost increase is more than 30%. This is in line with the literature on collaborative vehicle routing, where several studies show that centralized solutions yield up to 30% higher collaboration profits than decentralized solutions (Gansterer and Hartl 2017; Cruijssen et al. 2007; Montoya-Torres et al. 2016; Lin 2008). (In Table 3, for cust1 and cust2 we report the percentage increase compared to the setting with no workload constraint; the lower part lists the number of instances that could not be solved within the given time limit of 30 min.)
In Table 4 we present the average runtimes needed to solve the problem with workload constraints.
In the case of low or no workload limits, Benders decomposition is still the method of choice. However, if carriers have to keep more than one of their customers, column generation finds the optimal solutions much faster. The number of instances that could not be solved within the given time limit of 30 min also clearly depends on the workload constraint. If there is a strong restriction on the number of customers each carrier has to keep, column generation finds all optimal solutions within a very short amount of time, while Benders decomposition fails on 43 (out of 90) instances. In Table 5 we provide more detailed results for the setting where carriers have to keep 2 of their customers (2cust).
We see that column generation shows a very strong performance for instance sets with a high degree of competition (I and O). For these instances, this method finds the optimal solutions about 3 times as fast as the second-best method. This is not very surprising, since it is well known that column generation takes advantage of solution spaces with few valid solutions. However, it is remarkable that even instance set O can be solved without any loss in performance. This makes column generation a very powerful method for problems with a high degree of competition. Hence, we can conclude that for the original problem proposed by Berger and Bierwirth (2010), Benders decomposition is the method of choice, while for the newly introduced setting, column generation should be preferred.
Conclusion
In this study we investigated a decision problem faced by a centralized decision maker in carrier collaborations. Pickup and delivery requests are to be redistributed among participants such that the total cost is minimized. This problem was formulated as the MDTSPPD. Three well-established exact solution approaches were compared in terms of their computational performance.
To avoid unrealistic solutions with unevenly distributed workload, we extended the problem by minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach.
We showed that the proposed minimum workload constraints have a surprisingly strong impact on the total costs. If carriers want to keep a minimum workload of at least 30% of their initial one, the total costs increase on average by 18.32%. If 60% of the initial customers have to remain unchanged, the average costs go up by more than 30%.
The results of the computational study can be used as benchmarks for decentralized mechanisms in collaborative PDP problems. The insights on the performance of the investigated methods are useful for generating results for similar test cases.
"year": 2017,
"sha1": "78b7223a5ed89a61ae65096108ef69e21297b22a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10100-017-0503-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "78b7223a5ed89a61ae65096108ef69e21297b22a",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
New drugs in psychiatry: focus on new pharmacological targets
The approval of psychotropic drugs with novel mechanisms of action has been rare in recent years. To address this issue, further analysis of the pathophysiology of neuropsychiatric disorders is essential for identifying new pharmacological targets for psychotropic medications. In this report, we detail drug candidates being examined as treatments for psychiatric disorders. Particular emphasis is placed on agents with novel mechanisms of action that are being tested as therapies for depression, schizophrenia, or Alzheimer’s disease. All of the compounds considered were recently approved for human use or are in advanced clinical trials. Drugs included here are new antipsychotic medications endowed with a preferential affinity for the dopamine D3 receptor (cariprazine) or for glutamatergic or cannabinoid receptors, as well as vortioxetine, a drug approved for managing the cognitive deficits associated with major depression. New mechanistic approaches for the treatment of depression include intravenous ketamine or esketamine and intranasal esketamine. As for Alzheimer’s disease, the possible value of passive immunotherapy with agents such as aducanumab is considered; such agents represent a potential disease-modifying approach that could slow or halt the progressive decline associated with this devastating disorder.
Introduction
Epidemiological data indicate neuropsychiatric disorders as being some of the most prevalent, devastating, and yet poorly treated illnesses. As the approval of central nervous system drugs with novel mechanisms of action has been rare in recent years, there is a critical need to enhance drug discovery in neuropsychopharmacology 1 . To achieve this goal, it is essential to focus on developing drugs that target the pathophysiology underlying the disease, which increases the likelihood of identifying efficacious agents rather than symptomatic treatments. The pathology-to-drug discovery approach specifies that an understanding of the pathophysiology of neuropsychiatric disorders is the required initial step in identifying disease pathways and for validating new pharmacological targets 1 . A more complete understanding of the disease pathways will facilitate both the selection of therapeutic targets and the development of relevant models for screening drug candidates 2 . The pathology-to-drug discovery approach inspired the creation of NEWMEDS (Novel Methods Leading to New Medications in Depression and Schizophrenia), a European project designed to identify specific brain circuits, particularly those involving the prefrontal cortex, that are involved in the pathophysiology and treatment of major depression and schizophrenia 3 . On the other hand, a better knowledge of the mechanism of action of older drugs (the so-called reverse translation approach) has permitted, in part, a better understanding of the pathophysiology of neuropsychiatric diseases, enabling drug design according to the pathology-to-drug discovery approach.
As the neuropsychopharmacology community has only recently adopted the pathology-to-drug discovery approach, it remains unknown whether agents discovered in this way are currently available for the clinical management of psychiatric disorders and whether such drugs display significant advantages over conventional therapies with respect to efficacy and tolerability.
Summarized below are some of the more exciting and relevant advances in the field of neuropsychopharmacology as they pertain to the design and development of novel psychotherapeutics. Highlighted are molecules displaying novel mechanisms of action that were recently approved for human use or that are now undergoing phase II/III clinical trials. Particular emphasis is placed on the identification of new drugs and drug candidates for the treatment of schizophrenia and major depression. Finally, a specific section is dedicated to neurodegenerative disorders such as Alzheimer's disease (AD), where pharmacological strategies can significantly differ from the approaches currently adopted in psychotic and affective disorders.
Schizophrenia
At a mechanistic level, drug treatments for schizophrenia are presently based on the dopamine hypothesis concerning the symptoms of this disorder 4 . The development of second-generation antipsychotics that began 25 years ago has yielded some advances in terms of efficacy, with some modest improvement in addressing the negative symptoms of this condition, and in tolerability, particularly with regard to extrapyramidal side effects 5 . However, no antipsychotics display robust effects on the cognitive deficits or impaired social processing that are important components of this disorder 4 . For years, the limited efficacy of conventional and second-generation agents has led to speculation about whether the manipulation of brain targets other than, or in addition to, the dopamine D2 receptor (D2R) may be necessary for treating this disorder and for significantly improving safety and tolerability.
In recent years, the N-methyl-D-aspartate (NMDA) receptor hypothesis of schizophrenia has been validated in preclinical animal models and humans 6 . According to this theory, the excessive dopamine release in the mesolimbic pathway and the decrease in dopamine release from the mesocortical pathway in the prefrontal cortex, which are responsible for some of the symptoms of schizophrenia, are secondary to a decrease in NMDA receptor control of inhibitory GABAergic neurons. No drugs capable of selectively enhancing NMDA receptor activity in this key brain region have yet been approved for human use. Studies indicate that prodromal and early-in-disease schizophrenic patients have elevated brain glutamate levels compared to healthy controls and ultra-high-risk patients who do not become psychotic 7 . Alterations in NMDA receptor-mediated excitation of GABAergic neurons indicate that schizophrenia is associated with dysfunctional glutamatergic systems in the prefrontal cortex and limbic regions of the brain. One approach has been to target the glutamatergic system using pomaglumetad methionil, a potent and highly selective orthosteric metabotropic glutamate receptor (mGluR) 2/3 agonist 8 , but alternative approaches directed at group III mGluRs are also currently studied in preclinical models of schizophrenia 9 . Preclinical research suggests that by reducing glutamate release, pomaglumetad methionil normalizes heightened glutamate activity in cortical pyramidal neurons 8 . While pomaglumetad methionil was reported to display beneficial effects on both the positive and the negative symptoms of schizophrenia in initial clinical trials, positive results were not obtained in phase III studies. Analysis of the clinical data suggests that pomaglumetad methionil is most efficacious in early-in-disease schizophrenics (<3 years' duration) with a known hyperactive glutamatergic pathophysiology 8 . It is anticipated, therefore, that new antipsychotics acting on the mGluR2/3 will be developed to treat schizophrenic patients in an early phase of the illness with the hope of slowing disease progression and improving prognosis.
The dopamine D3 receptor (D3R) is another pharmacological target that appears to play a prominent role in the pathogenesis of schizophrenia. Unlike the D2R, which has been extensively studied with respect to the symptoms of schizophrenia, little is known about the extent to which changes in D3R and dopamine D4 receptor (D4R) activity contribute to the symptoms of this disorder. It is known that the molecular structure of D3R is very similar to that of D2R and D4R and that in a variety of animal species D2R and D3R share high homology 10 and identity in the transmembrane regions, including the binding site 11 . Such structural similarities make it difficult to design ligands that selectively interact with D3R. This, in turn, has compromised the ability to fully define the localization and functions of this site. A number of approved drugs that were believed to act primarily at the D2R recognition site have now been found to interact with the D3R as well (see 12 for review). Cariprazine is the first example of a D3R partial agonist that occupies this receptor at doses that produce antipsychotic-like effects in preclinical animal models 13,14 . Cariprazine displays a higher affinity at D3R than at D2R, along with similar affinities at D2 and serotonin 2B (5-HT 2B ) receptors 13 . In rodents, cariprazine reverses the deficit in novel object recognition that is caused by neonatal administration of phencyclidine 14 or by subchronic phencyclidine exposure in adults 15,16 . Preclinical and clinical data suggest that by selectively activating D3R, cariprazine may have a positive impact on the cognitive symptoms of schizophrenia. Cariprazine was approved in 2015 by the U.S. Food and Drug Administration for the treatment of schizophrenia 17 . Further clinical studies with cariprazine and other antipsychotics selectively targeting D3R will be essential to validate the role of D3R as a new pharmacological target for the treatment of cognitive symptoms in schizophrenia.
Another new approach proposed for the treatment of schizophrenia is the development of disease-modifying agents to prevent the full onset of schizophrenia and/or to slow its progression 4 . Many questions remain as to whether it will be possible to identify drugs that will directly affect the underlying disease process in a way that would delay or prevent disease progression in schizophrenia.
There are also many challenges to consider for designing clinical trials to prove that a given drug candidate can modify the course of this disorder over time. A first step might be to identify young individuals who are at high risk for psychosis. Particular emphasis could be placed on those who are also heavy users of cannabis, as there is an increased risk for schizophrenia in those who abuse highly potent preparations of cannabis (so-called "skunk" variants) containing a high percentage of delta-9-tetrahydrocannabinol (THC) (about 15%) with a scarcity of cannabidiol (CBD 0-1%) 4,18 . As a negative allosteric modulator of the cannabinoid 1 (CB1) receptor, CBD ameliorates the psychotogenic effect of THC and may possess antipsychotic properties 19 . Given these findings, the CB1 receptor has been proposed as a new pharmacological target for the treatment of schizophrenia 6,19 . Indeed, there have been reports that 800 to 1,000 mg of cannabidiol per day safely reduces the signs and symptoms of schizophrenia 19 . However, uncertainty remains about the mechanism of CBD action. Recently, Seeman reported that CBD, like aripiprazole, is a partial agonist at the D2R 20 . Thus, CBD could be the first molecule of a class of antipsychotics that interact with both the CB1 and the D2R. Ideal candidates for clinical trials with such agents would be high-risk individuals to assess whether CBD dampens the acute psychotic symptoms and cognitive deficits associated with schizophrenia. Three different phase II clinical trials are underway to assess the clinical efficacy of cannabidiol monotherapy in newly diagnosed schizophrenic patients and as adjunctive therapy with conventional second-generation antipsychotics (NCT02088060 and NCT02504151).
Depression
While currently available antidepressants, such as selective serotonin reuptake inhibitors (SSRIs) and serotonin and noradrenaline reuptake inhibitors (SNRIs), are effective for most patients, approximately 30% of those with major depressive disorder (MDD) fail to respond to these agents 2 . Cognitive dysfunction represents a distinct biological and clinical dimension in MDD 21 , with evidence suggesting that the presence of cognitive symptoms in depressed patients can predict a low rate of response to antidepressants and reduced remission rates 22,23 . Multimodal drugs, such as vortioxetine, represent a new class of antidepressants. These agents display multiple molecular mechanisms of action in addition to inhibition of the serotonin transporter 24,25 .
Vortioxetine is a multimodal antidepressant that is thought to act by inhibition of transmitter reuptake and interactions with various 5-HT receptor sites. This pharmacodynamic profile is well described in the Neuroscience-Based Nomenclature, a new pharmacologically driven classification of psychotropic drugs that reflects current knowledge about the underlying neurobiological characteristics of the disorder, an understanding of the neurotransmitter/molecule/system being modified ("pharmacological domain"), and the mode/mechanism of drug action 26 . Vortioxetine is not the only example of a multimodal antidepressant, with others, such as vilazodone (a serotonin transport inhibitor and a 5-HT 1A receptor partial agonist), having been approved for the treatment of major depression 27,28 . Other psychotropic drugs developed in the last 30 years can possess a multimodal pharmacodynamic profile (see e.g. trazodone), but the novelty of this approach is to combine multiple pharmacological actions affecting both monoamine targets and other non-monoaminergic systems (e.g. the glutamatergic system) 24 . The multimodal strategy seems well suited to targeting the different biological and clinical dimensions of MDD. Current evidence does not suggest a globally greater efficacy of multimodal antidepressants compared to SSRIs or SNRIs, but rather an improved efficacy in specific clinical domains where SSRIs or SNRIs are less effective, as observed with vortioxetine, which displays a specific clinical efficacy in the treatment of cognitive deficits associated with MDD 25 .
Vortioxetine is a 5-HT 3 , 5-HT 7 , and 5-HT 1D receptor antagonist, 5-HT 1B receptor partial agonist, 5-HT 1A receptor agonist, and an inhibitor of the serotonin transporter 29 . It is thought to activate the glutamatergic system in rat frontal cortex by blockade of 5-HT 3 and 5-HT 7 receptors. It has been reported that, compared to fluoxetine, vortioxetine displays superior efficacy in aged mice in improving visuospatial memory and reducing depression-like behavior 30 . This pharmacodynamic profile is consistent with the results of clinical studies indicating that vortioxetine has antidepressant properties as well as positive effects on cognitive function (e.g. memory and executive functioning) 25 . The most common side effects associated with this drug are nausea, vomiting, and constipation. Vortioxetine is currently being studied as a potential therapeutic alternative for patients who fail to respond to SSRIs or SNRIs 31 .
Drugs, such as ketamine, that are known to affect descending glutamatergic systems represent a new approach for managing treatment-resistant depression (TRD) 32 . Several well-controlled clinical studies indicate that a single ketamine infusion (0.5 mg/kg) induces a rapid, generally transient, antidepressant effect in addition to small, but significant, increases in psychotomimetic and dissociative symptoms 33 . The ketamine-induced antidepressant effect occurs within 1 to 2 hours following its intravenous administration and may be sustained for up to 2 weeks 34 . This rapid onset of action has stimulated studies to explore the possibility that ketamine may represent a life-saving drug for TRD patients at imminent risk of suicide 35 . Twice- and thrice-weekly administration of ketamine at 0.5 mg/kg maintains antidepressant efficacy for over 2 weeks with no signs of tolerance 36 .
It is believed that by blocking NMDA receptors on GABAergic interneurons, ketamine causes a rapid, but transient, increase in extracellular glutamate in the prefrontal cortex. At the molecular level, the ketamine-induced blockade of NMDA receptors results in inhibition of elongation factor 2 (eF2) kinase, dephosphorylation of eF2, and a consequent augmentation of brain-derived neurotrophic factor (BDNF) synthesis 32 . Preclinical and clinical studies with ketamine have resulted in the identification of new pharmacological targets related to NMDA receptor activation or inhibition, such as the NR2B receptor subunit, and the mammalian target of rapamycin (mTOR), a signaling system that controls synaptic plasticity and appears to be a key element in the antidepressant response to ketamine 32 . Recent studies also suggest that the pharmacological profile of ketamine is more complex than being a simple NMDA receptor antagonist because this drug also shows a significant affinity for D2R and opioid receptors as well as for monoamine transporters 37 . Further studies are needed to identify all the molecular mechanisms underlying the rapid-acting antidepressant effects of ketamine. Efforts are now directed towards the development of new antidepressants displaying the efficacy and rapid onset of action of ketamine but lacking its psychotomimetic effects 34 .
Attempts are also underway to exploit the established clinical value of ketamine and its derivatives, such as S(+) ketamine (esketamine), while reducing their side effect potential by administering them intramuscularly or intranasally at doses (e.g. 0.2 mg/kg) lower than those used for the intravenous studies 38 . Phase III clinical trials are ongoing to assess the safety and clinical efficacy of intranasal esketamine in patients with TRD (NCT02782104, NCT02133001, and NCT02497287).
An alternative approach to target the glutamatergic system in MDD and mimic ketamine's effects might be the use of mGluR5-selective antagonists or negative allosteric modulators 39 . This novel approach stems from the evidence that mGluR5s are functionally involved in the mild modulation of NMDA receptor activity and mGluR5 antagonists exert significant antidepressant effects in animal models of depression by acting as mild NMDA receptor negative modulators 39 . Therefore, mGluR5s have recently been considered a new target for novel antidepressants, and basimglurant, a negative allosteric modulator of mGluR5, is now in clinical development for the treatment of MDD (NCT00809562 and NCT01437657).
Alzheimer's disease
AD is a neurodegenerative disorder characterized by memory loss, cognitive decline, and neuropsychiatric symptoms that interfere with normal daily activities 40 . This disorder is associated with the presence of senile plaques containing amyloid β (Aβ), intracellular aggregates of tau protein in neurofibrillary tangles, and progressive neuronal loss. The amyloid cascade hypothesis 41 posits that overproduction of Aβ, or failure to clear this peptide, causes AD because of the aggregation of monomeric Aβ species into higher-molecular-weight Aβ oligomers that results in neuronal cell loss 42 .
Current drug therapies for AD treat only the symptoms, such as memory loss, while having no effect on the progression of the disease. Drugs included in this group are cholinesterase inhibitors (donepezil, rivastigmine, and galantamine), which are approved for treating mild to moderate AD, and memantine, an NMDA receptor antagonist, which is used to treat patients with moderate to severe AD.
Much effort has been expended in developing disease-modifying drugs for the treatment of this condition. Such agents would be able to slow the progression of the pathological changes and their effects would persist even after terminating treatment 42 . To date, this effort has failed to yield any clinically effective drugs. One of the difficulties in testing such compounds has been the lack of reliable biomarkers identifying patients in the early stage of the disease. For this reason, most clinical trials were conducted in patients in advanced stages of the disease after irreversible damage to the brain had already occurred. However, the criteria for the diagnosis of AD, revised in 2011 by the National Institute on Aging and the Alzheimer's Association workgroup 43 , now include biomarkers for identifying AD patients earlier. Such individuals are much better candidates for treatment with disease-modifying drugs 44 . The development of these biomarkers increases the likelihood of identifying that cohort of patients that is most likely to respond to disease-modifying drugs.
Immunotherapy directed towards Aβ has been considered a promising approach because it would, in theory, decrease the aggregation and brain deposition of Aβ. Tau immunotherapy has also been explored as a possible means for inhibiting disease progression in AD 45 . Given problems with vaccines, Aβ passive immunotherapy is currently the most popular approach for developing disease-modifying drugs for the treatment of AD 42,46 .
As antibody-based immunotherapy against Aβ has so far failed to yield positive clinical results, questions are being raised about the validity of the amyloid hypothesis of AD. However, it remains unknown whether the clinical failures reported for bapineuzumab, gantenerumab, and solanezumab are due to their inability to reduce the formation of Aβ oligomers or because they were tested in inappropriate populations of AD patients 47,48 . Unfortunately, encouraging preliminary results obtained with solanezumab have not been confirmed in recent phase III clinical trials 47,48 . This drug is, however, still in clinical development in prodromal AD patients (NCT02008357 and NCT02760602).
The recent positive results with aducanumab in prodromal and mild AD in a phase Ib trial 49,50 suggest that the amyloid hypothesis may still support the development of new disease-modifying drugs in the near future. Aducanumab is the first example of an antibody developed by selecting human B-cell clones triggered by neo-epitopes present in soluble oligomers and insoluble fibrils. That is, only the pathogenic forms of Aβ 49 are affected without interfering with Aβ monomers, which exert a critical role in maintaining neuronal survival, learning, and memory 50,51 . It has been established that aducanumab enters the brain and that 1 year of monthly intravenous infusion reduces brain Aβ in a dose- and time-dependent manner in patients with prodromal or mild AD. The greatest effects are observed at doses of 3 and 10 mg/kg. This effect was accompanied by a slowing of the clinical decline as measured by the Clinical Dementia Rating-Sum of Boxes and Mini Mental State Examination scores. As already observed with solanezumab, amyloid-related imaging abnormalities (ARIA), such as vasogenic edema, occurred in a dose-dependent manner in AD patients 49 . The clinical efficacy of aducanumab must be confirmed in the long-term extension phase of this study as well as in the ongoing phase III clinical trials, which may finally validate the amyloid hypothesis in the field of AD.
Conclusion
As indicated in this report, the pathology-to-drug discovery approach is now being applied for the identification of new drugs for the treatment of psychiatric disorders (Table 1). This has been responsible, in part, for the identification of new psychiatric medications with novel mechanisms of action (D3R antagonists, vortioxetine, and esketamine). Secondary prevention strategies with glutamatergic agents (mGluR2/3 agonists) or negative allosteric modulators of CB1 (cannabidiol) are under study for the treatment of schizophrenia. If confirmed in ongoing clinical trials, the early results with aducanumab, a possible disease-modifying agent for AD, suggest that the pathology-to-drug discovery approach may be applicable for designing and developing new medications for the treatment of a host of neuropsychiatric disorders.
Competing interests
The author(s) declare that they have no competing interests.
Grant information
The author(s) declared that no grants were involved in supporting this work.
"year": 2017,
"sha1": "010336e425bbd41ed2a3791238eaec8bf829e18a",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/6-397/v1/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1ff955537d0780cd1489cf67e64b73c9cb73729",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fully connecting the Observational Health Data Science and Informatics (OHDSI) initiative with the world of linked open data
The usage of controlled biomedical vocabularies is the cornerstone that enables seamless interoperability when using a common data model across multiple data sites. The Observational Health Data Science and Informatics (OHDSI) initiative combines over 100 controlled vocabularies into its own. However, the OHDSI vocabulary is limited in the sense that it combines multiple terminologies and does not provide a direct way to link them outside of their own self-contained scope. This issue makes the tasks of enriching feature sets by using external resources extremely difficult. In order to address these shortcomings, we have created a linked data version of the OHDSI vocabulary, connecting it with already established linked resources like bioportal, bio2rdf, etc. with the ultimate purpose of enabling the interoperability of resources previously foreign to the OHDSI universe.
Introduction
The Observational Health Data Science and Informatics (OHDSI) is a world-wide initiative, which over the course of five years has managed to bring groups of researchers all over the world together in converting their clinical patient data (electronic health records, claims, clinical registries) into the Observational Medical Outcomes Partnership (OMOP) common data model (CDM). This initiative has built a large set of publicly available tools which allow researchers to standardize the way they build patient cohorts, characterize their data [1], perform large scale patient level prediction studies [2], and perform electronic phenotyping [3]. In just a few years the OHDSI initiative has managed to perform large-scale studies involving over 200 million patients [4], answer drug safety questions by analyzing the association of the anticonvulsant levetiracetam with increased risk for angioedema in 10 international databases [5], and has characterized the effectiveness of second-line treatment of type 2 diabetes after initial therapy with metformin in over 246 million patients [6]. All of these massive studies have been made possible thanks to the use of a CDM and a standardized vocabulary. This strength becomes a weakness as the vocabulary standardizes multiple external vocabularies, ontologies and term sets, such as SNOMED, RxNorm, MeSH, and 90+ others, but it does not provide an easy way to link them to additional resources such as the Unified Medical Language System (UMLS) [7] and other linked open data resources like Bio2rdf [8] and BioPortal [9]. During our time at the Biomedical Link Data Hackathon 5 in Kashiwa, Japan we developed the first attempt to create an RDF version of the OHDSI vocabulary with linkages to UMLS and BioPortal.
Methods
In order to link the OHDSI vocabulary with UMLS, we will leverage Ananke [10], a resource built for the mapping of UMLS Concept Unique Identifiers (CUIs) into OHDSI concept_id's, which are the unique identifiers assigned to all concepts in the vocabulary. This will allow us to use BioPortal's URIs for the CUIs and make the necessary connections when using their SPARQL endpoints for federated queries. All other Python 2.7 code simply iterates through the vocabulary concepts, finds proper UMLS matches, and writes out each entry using a predefined schema. The conversion process assumes the OHDSI vocabulary files are in the same folder, as well as the Ananke mappings. If the researcher does not have a full copy of the OHDSI vocabulary, we provide an already built RDF graph for Vocabulary version v5.0 11-FEB-19.
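As an illustration of what such a conversion loop might look like, here is a minimal sketch using rdflib. The namespace URIs, file layout, and predicate choices are our assumptions for demonstration purposes, not the project's actual schema:

```python
import csv
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS, SKOS

# Placeholder namespaces; the real resource may use different URIs.
OHDSI = Namespace("http://example.org/ohdsi/concept/")
UMLS = Namespace("http://example.org/umls/cui/")

def convert(concept_file, ananke_file, out_file):
    """Emit one labeled node per OHDSI concept_id, linked to its UMLS CUI."""
    g = Graph()
    cui_of = {}  # concept_id -> CUI, read from the Ananke mapping file
    with open(ananke_file) as f:
        for row in csv.DictReader(f, delimiter="\t"):
            cui_of[row["concept_id"]] = row["cui"]
    with open(concept_file) as f:
        for row in csv.DictReader(f, delimiter="\t"):
            node = OHDSI[row["concept_id"]]
            g.add((node, RDFS.label, Literal(row["concept_name"])))
            if row["concept_id"] in cui_of:
                g.add((node, SKOS.exactMatch, UMLS[cui_of[row["concept_id"]]]))
    g.serialize(destination=out_file, format="turtle")
```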
Results and Discussion
The RDF conversion results in a total of 24 million triples and takes around 15 minutes. Our resource links a total of 861,732 OHDSI concept_id's from SNOMED, 286,256 concept_id's from RxNorm, 109,706 concept_id's from ICD10, and 22,029 concept_id's from ICD9, all linked directly to BioPortal. We also include 1,321,986 mappings to UMLS via Ananke [10].
Our initial goals for this resource were to bring into the OHDSI context semantic enrichment of longitudinal clinical study data, as it has been shown to be quite effective in the past [11,12]. Our particular practical application of interest is taking advantage of the resource for electronic phenotyping purposes. As authors of the Automated PHenotype Routine for Observational Definition, Identification, Training and Evaluation (APHRODITE) R package [3], our goals were as follows.
(1) Be able to expand and enrich our feature sets for phenotyping. With clinical narratives being one of the main feature spaces of APHRODITE, these are annotated using the OHDSI vocabulary. Having a linked version of it will allow us to expand any particular feature domain with other linked resources such as SNOMEDCT, RxNORM, etc. Fig. 1 shows a sample query where we expand the SNOMED concept for "Type 2 diabetes mellitus" with all its available parents in BioPortal via a federated query (a sketch of how such a query can be issued programmatically follows this list).
(2) One of the outputs of APHRODITE, besides a machine learning model for the target phenotype, is a list of relevant features that add interpretability to any model. This list of features covers the most important domains in the OHDSI CDM and vocabulary. We want to be able to produce this list as a linked resource that will allow researchers to enhance their understanding by being able to semantically link them to other resources like the Human Phenotype Ontology [13] among others.
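A federated query of the kind shown in Fig. 1 might be issued as follows. The endpoint URLs are placeholders, and the whole snippet is our own sketch rather than the paper's code; BioPortal's real SPARQL service additionally requires an API key. The SNOMED CT code 44054006 ("Type 2 diabetes mellitus") and the purl.bioontology.org URI pattern are, to our knowledge, the conventional identifiers, but should be verified:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # placeholder endpoint
sparql.setQuery("""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?parent WHERE {
  SERVICE <http://example.org/bioportal/sparql> {
    # 44054006 is the SNOMED CT code for "Type 2 diabetes mellitus".
    <http://purl.bioontology.org/ontology/SNOMEDCT/44054006>
        rdfs:subClassOf+ ?parent .
  }
}
""")
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["parent"]["value"])
```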
We believe that such interoperability will enable other researchers to generate enhanced evidence by linking the OHDSI CDM and vocabulary to additional external resources, such as automatically derived phenotype annotations from PubMed abstracts [14], or extra context for word embedding models built from clinical narratives [15], which in theory can help the embeddings be more specific by providing additional context [16], among many other applications. This resource brings us one step closer to enriching EHR, claims, and registry patient data with the world of linked open data.
Conflicts of Interest
No potential conflict of interest relevant to this article was reported.
"year": 2019,
"sha1": "13eb3667b458c7e5991a475c0e783a5c7260de61",
"oa_license": "CCBY",
"oa_url": "https://genominfo.org/upload/pdf/gi-2019-17-2-e13.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "13eb3667b458c7e5991a475c0e783a5c7260de61",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Tarantula Toxins Interact with Voltage Sensors within Lipid Membranes
Voltage-activated ion channels are essential for electrical signaling, yet the mechanism of voltage sensing remains under intense investigation. The voltage-sensor paddle is a crucial structural motif in voltage-activated potassium (Kv) channels that has been proposed to move at the protein–lipid interface in response to changes in membrane voltage. Here we explore whether tarantula toxins like hanatoxin and SGTx1 inhibit Kv channels by interacting with paddle motifs within the membrane. We find that these toxins can partition into membranes under physiologically relevant conditions, but that the toxin–membrane interaction is not sufficient to inhibit Kv channels. From mutagenesis studies we identify regions of the toxin involved in binding to the paddle motif, and those important for interacting with membranes. Modification of membranes with sphingomyelinase D dramatically alters the stability of the toxin–channel complex, suggesting that tarantula toxins interact with paddle motifs within the membrane and that they are sensitive detectors of lipid–channel interactions.
INTRODUCTION
Voltage-dependent cation channels open and close in response to changes in the membrane voltage, a property that is essential for the generation and propagation of rapid and long-range electrical signaling within the nervous system. These channels have a modular architecture, with a central pore domain that determines ion selectivity, and four surrounding voltage-sensing domains that move in response to changes in membrane voltage and drive opening of the pore (Kubo et al., 1993; Doyle et al., 1998; Li-Smerin and Swartz, 1998; Lu et al., 2001; Jiang et al., 2003a). Although X-ray structures have now been solved for two voltage-activated potassium (K v ) channels (Jiang et al., 2003a; Lee et al., 2005; Long et al., 2005), the structural basis of voltage sensing remains the subject of intense investigation. One remarkable aspect of the X-ray structures of the K v AP channel and its isolated S1-S4 voltage-sensing domain is the presence of the voltage-sensor paddle motif, a helix-turn-helix structure that is comprised of the S3b helix and the charge-bearing S4 helix (Jiang et al., 2003a; Lee et al., 2005; Long et al., 2005). In the K v AP structures, as well as the more recent K v 1.2 structure, the voltage-sensor paddle is predicted to be buried in the membrane and positioned at the protein-lipid interface (Jiang et al., 2003a,b; Ruta et al., 2003; Cuello et al., 2004; Lee et al., 2005; Ruta et al., 2005; Schmidt et al., 2006), which is conceptually distinct from the conventional models where the S4 charges are protected from membrane lipids by other parts of the protein (Ahern and Horn, 2004; Tombola et al., 2005).
A particularly intriguing aspect of the voltage-sensor paddle motif is that venomous creatures synthesize protein toxins that interact with this motif and thereby modify voltage-dependent gating. For example, hanatoxin is a tarantula toxin that inhibits the K v 2.1 channel by interacting with the S3b and S4 helices, and stabilizing the voltage sensors in a resting conformation (Swartz and MacKinnon, 1997a,b; Swartz, 2000, 2001; Lee et al., 2003; Phillips et al., 2005; Swartz, 2007). Related toxins from spiders, scorpions, and sea anemones also target the equivalent regions of voltage-activated sodium (Na v ) and calcium (Ca v ) channels (Rogers et al., 1996; Cestele et al., 1998; Li-Smerin and Swartz, 1998; Winterfield and Swartz, 2000; Cestele et al., 2006). Such widespread targeting of the paddle motif by venom toxins is consistent with the idea that the paddle is a uniquely mobile part of the voltage-sensing domain, as suggested by functional studies on the K v AP channel (Ruta et al., 2005). The proposed location of the paddle motif at the protein-lipid interface raises the possibility that the target of voltage-sensor toxins is buried in the membrane, and that these toxins interact with the channel within the lipid bilayer. Several tarantula toxins have been shown to partition into model membranes containing anionic lipids, including VSTx, GsMTx-4, ProTx-II, and hanatoxin (Lee and MacKinnon, 2004; Suchyna et al., 2004; Jung et al., 2005; Phillips et al., 2005; Smith et al., 2005). The structures of these toxins are highly amphipathic (see Fig. 1, A and B), with one face containing a cluster of hydrophobic residues surrounded by polar residues, most of which are basic (Takahashi et al., 2000). These structural features, together with previous depth-dependent fluorescence quenching experiments (Phillips et al., 2005) and molecular dynamics simulations (Bemporad et al., 2006; Wee et al., 2007), point to a relatively superficial position of these toxins in model membranes and raise the possibility that partitioning may be very sensitive to the presence of charged moieties on phospholipid membranes. Anionic lipids are scarce in the external leaflet of native membranes (Simons and van Meer, 1988; Calderon and DeVries, 1997; Hill et al., 2005), the side from which these toxins act, and thus it is crucial to understand whether tarantula toxins can partition into native membranes containing an abundance of zwitterionic lipids. Indeed, recent studies have questioned whether membrane partitioning is involved in the inhibitory mechanisms for toxins that modify gating of voltage-activated ion channels (Cohen et al., 2006; Posokhov et al., 2007b). In the case of GsMTx-4, partitioning of the toxin has been proposed to alter lipid packing around stretch-activated cation channels, and thus to inhibit channel activity without the toxin specifically binding to the channel protein (Suchyna et al., 2004). In a related fashion, might partitioning of tarantula toxins be sufficient for altering the activity of K v channels? If tarantula toxins do interact directly with the channel, which regions of the toxin are involved in protein-protein interactions and which are important for their interactions with membranes? Finally, are interactions of these toxins with membranes required for inhibition of K v channels? We set out to address these fundamental questions for hanatoxin and SGTx1, two closely related tarantula toxins that inhibit the K v 2.1 channel.
We characterized partitioning of these toxins into membranes of varying composition using fluorescence and separation methods, and synthesized the D-enantiomer of SGTx1 to address whether toxin-induced perturbations of the bilayer are sufficient to inhibit the channel. To define regions of tarantula toxins that are important for interacting with membranes and voltage-sensor paddles, we studied the effects of SGTx1 mutations on membrane partitioning and on the concentration dependence of toxin occupancy of the channel. We also examined the effects of membrane modifications on the stability of the toxin-channel complex. Our results suggest that tarantula toxins interact with the paddle motif within the lipid membrane and that modification of the lipid membrane can have dramatic effects on the toxin-channel interaction.
Toxin Production
Hanatoxin was purified from Grammostola spatula venom (Spider Pharm) as previously described (Swartz and MacKinnon, 1995). WT and mutants of SGTx1 were synthesized using an Applied Biosystems model 433A peptide synthesizer. The linear precursors were synthesized using solid-phase methodology with Fmoc chemistry, starting from Fmoc-Phe-Alko resin using a variety of blocking groups for the protection of the amino acids. After trifluoroacetic acid cleavage, a crude linear peptide was extracted with 2 M acetic acid and diluted to a final concentration of 25 μM. A solution containing 0.1 M ammonium acetate, 2 M urea, and 2.5 mM reduced/0.25 mM oxidized glutathione was adjusted to pH 7.8 with aqueous NH4OH and stirred slowly at 4°C for 3 d. The folding reaction was monitored with RP-HPLC, and the crude oxidized product was purified by successive chromatography steps with CM-cellulose CM-52 and preparative RP-HPLC with a C18 silica column. The purity of the synthetic SGTx1 was confirmed by analytical RP-HPLC and MALDI-TOF-MS measurements. The concentration of SGTx1 was determined from the dry weight of the protein. To confirm toxin concentration we also measured absorbance at 280 nm and calculated the concentration of the toxin using an extinction coefficient of 8.6 × 10³ M⁻¹·cm⁻¹ (Gill and von Hippel, 1989). With the exception of W30A, a mutation that removes the only Trp, concentrations determined from dry weight were within 5% of those determined from absorbance. Circular dichroism (CD) spectra were examined for each mutant to establish that the correct fold had been obtained, and examples for each have been published previously. D-SGTx1 was synthesized using D-enantiomer Fmoc precursors.
Circular Dichroism Measurements of SGTx1
The CD spectra of D- and L-SGTx1 were obtained using a JASCO J-715 spectrophotometer (20 mM sodium phosphate buffer, pH 7.0) at 25°C with a quartz cell of 1 mm path length. Wavelengths from 190 to 250 nm were measured at 50 nm/min; the step resolution was 0.1 nm, the response time 0.5 s, and the bandwidth 1 nm. Spectra were collected and averaged over four scans. The mean residue ellipticity $[\theta]$ (deg·cm²·dmol⁻¹) was calculated as $[\theta] = [\theta]_{\mathrm{obs}} \times \mathrm{MRW}/(10\,l\,c)$, where $[\theta]_{\mathrm{obs}}$ is the ellipticity measured in millidegrees, MRW is the mean residue molecular weight of the peptide, $c$ is the concentration of the sample in mg/ml, and $l$ is the optical path length of the cell in cm. The spectra are expressed as molar ellipticity $[\theta]$ vs. wavelength.
Preparation of Lipid Vesicles
Phospholipids were dried from a chloroform solution under a nitrogen stream. The dried lipid film was rehydrated in buffers used for the partitioning assay: HEB (10 mM HEPES, 1 mM EDTA, pH 7.0 with NaOH) with or without 100 mM KCl and 2 mM CaCl2 (1 mM free Ca²⁺; Patton et al., 2004); and physiological buffer (PB) (10 mM HEPES, 100 mM KCl, 1 mM MgCl2, 0.3 mM CaCl2, pH 7.6 with NaOH). The resulting dispersions were extruded through 100 nm pore size polycarbonate filters (Millipore Corp.) to form large unilamellar vesicles (LUVs).
Fluorescence Spectroscopy
All fluorescence measurements were performed in quartz cuvettes with 1 cm path length. LUVs composed of POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine) or a 1:1 molar mix of POPG (1-palmitoyl-2-oleoyl-sn-glycero-3-[phospho-rac-(1-glycerol)]) and POPC were added to a solution of toxin (2 μM final toxin concentration), maintained at 25°C with continuous stirring in a total volume of 2 ml. Fluorescence spectra (averaging three spectra) were recorded between 300 and 400 nm (5 nm band pass, 0° polarizer) using an excitation wavelength of 280 nm (5 nm band pass, 90° polarizer) (SPEX FluoroMax 3 spectrofluorometer) and corrected for vesicle scattering (Ladokhin et al., 2000). For calculating mole-fraction partitioning coefficients ($K_x$), the fluorescence intensity ($F$) at 320 nm was measured and normalized to the zero-lipid fluorescence intensity ($F_0$). $K_x$ was calculated from the best fits of the following equation to the data: $F/F_0(L) = 1 + (F/F_0^{\max} - 1)\,K_x [L]/([W] + K_x [L])$, where $F/F_0(L)$ is the change in fluorescence intensity for a given concentration of lipid, $F/F_0^{\max}$ is the maximum fluorescence increase at high lipid concentrations, $[L]$ is the average available lipid concentration (60% of the total lipid concentration), and $[W]$ is the molar concentration of water (55.3 M). Two mutants (H18A and D31A) exhibit only very small blue shifts in $\lambda_{\max}$ upon addition of lipid vesicles, precluding the use of $F/F_0$ measurements at 320 nm to determine the fraction partitioned. In these instances, $K_x$ values were calculated by fitting $f_p = K_x [L]/([W] + K_x [L])$ to the data, where the fraction of toxin partitioned into the membrane ($f_p$) is equal to $([\mathrm{Toxin}_{\mathrm{total}}] - [\mathrm{Toxin}_{\mathrm{free}}])/[\mathrm{Toxin}_{\mathrm{total}}]$. $\mathrm{Toxin}_{\mathrm{free}}$ and $\mathrm{Toxin}_{\mathrm{bound}}$ were determined from the deconvolution of the emission spectra into membrane-bound and free components: $I(L) = (\mathrm{Toxin}_{\mathrm{free}})\,I_w + (\mathrm{Toxin}_{\mathrm{bound}})\,I_m$, where $I_{w(m)} = I_0 \exp[-(\ln 2/\ln^2\rho)\,\ln^2(1 + (\lambda - \lambda_{\max})(\rho^2 - 1)/(\rho\,\Gamma))]$. $I(L)$ is the change in fluorescence intensity for a given concentration of lipid; $I_w$ and $I_m$ are the specific intensities for the toxin in water and in membrane, respectively; $I_0$ is the intensity at $\lambda_{\max}$; $\Gamma$ is the width at $I_0/2$; and $\rho$ is the asymmetry of the distribution (Polozov et al., 1998; Ladokhin et al., 2000).
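In practice, $K_x$ can be extracted from such titration data with a standard nonlinear least-squares fit. The sketch below, with made-up data arrays and variable names of our own choosing, fits the $F/F_0$ equation above using SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

W = 55.3  # molar concentration of water (M)

def f_over_f0(L, Kx, f_max):
    """Mole-fraction partitioning model: F/F0 as a function of lipid (M)."""
    return 1 + (f_max - 1) * Kx * L / (W + Kx * L)

# Example titration: available lipid concentrations (M) and normalized F/F0.
lipid = np.array([1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3])
ratio = np.array([1.02, 1.07, 1.20, 1.45, 1.75, 1.92])

(Kx, f_max), _ = curve_fit(f_over_f0, lipid, ratio, p0=(1e4, 2.0))
print(f"Kx = {Kx:.3g}, max F/F0 = {f_max:.2f}")
```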
Quenching of tryptophan fluorescence was examined for a mixture of toxin:lipid (1:450 molar ratio; 2 μM:0.9 mM) by titration of 0.4 M acrylamide. The Stern-Volmer quenching constant ($K_{SV}$) was calculated from the best fits of the following equation to the data: $F_0/F = 1 + K_{SV}[Q]$, where $F_0$ and $F$ are the fluorescence of the toxin in the absence and presence of acrylamide, and $[Q]$ is the concentration of acrylamide.
Toxin Depletion Assays
Varying amounts of lipid vesicles or 300 defolliculated Xenopus laevis oocytes were added to an aqueous toxin solution (10 μM) and incubated with gentle agitation for 30 min at room temperature (~22°C). For experiments with oocytes the final volume was 400 μl. LUVs were separated by centrifugation (20 min, 100,000 g) and the oocytes by decantation, and the toxin remaining in the aqueous phase was determined using RP-HPLC with an ODS (C-18) column (4.6 × 250 mm; 5 μm, 90 Å, Beckman). Hanatoxin and SGTx1 were eluted with a linear gradient of 20-80% mobile phase B over 50 min at 1 ml/min (A was 0.1% TFA in water and B was 0.08% TFA in acetonitrile). Agitoxin-2 was eluted with a linear gradient of 0-50% mobile phase B over 30 min at 1 ml/min. $K_x$ values were calculated as $K_x = (f_p/(1 - f_p))\,[W]/[L]$, where $f_p = ([\mathrm{Toxin}_{\mathrm{total}}] - [\mathrm{Toxin}_{\mathrm{free}}])/[\mathrm{Toxin}_{\mathrm{total}}]$. We estimated that the outer leaflet of an oocyte contains 0.15 nmol of lipid, using an oocyte capacitance of 0.5 μF, a specific capacitance of 1 μF/cm², and a surface area per lipid of 50 Ų (Nagle and Tristram-Nagle, 2000); similar values are obtained when considering the results of phospholipid analyses (Stith et al., 2000; Hill et al., 2005).
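The depletion calculation is a one-liner once the free-toxin fraction is known from the HPLC peak areas. A small sketch (our own helper function, reusing the 55.3 M water concentration from above):

```python
def kx_from_depletion(toxin_total, toxin_free, lipid_conc, water_conc=55.3):
    """Mole-fraction partition coefficient from a depletion experiment.

    toxin_total, toxin_free: toxin concentrations before and after adding
    membranes (same units); lipid_conc: available lipid concentration (M).
    """
    f_p = (toxin_total - toxin_free) / toxin_total  # fraction partitioned
    return (f_p / (1 - f_p)) * water_conc / lipid_conc

# e.g. 10 uM toxin, 6 uM remaining free, 0.5 mM available lipid:
print(kx_from_depletion(10e-6, 6e-6, 5e-4))
```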
Electrophysiological Recordings
Oocytes from Xenopus laevis were removed surgically and incubated with agitation for 1 h in a solution containing (in mM) 82.5 NaCl, 2.5 KCl, 1 MgCl₂, 5 HEPES, pH 7.6 with NaOH, and collagenase (2 mg/ml; Worthington Biochemical). Defolliculated oocytes were injected with cRNA encoding the Kv2.1∆7 K⁺ channel (Li-Smerin and Swartz, 1998) and incubated at 17°C in a solution containing (in mM) 96 NaCl, 2 KCl, 1 MgCl₂, 1.8 CaCl₂, 5 HEPES, pH 7.6 with NaOH, and gentamicin (50 μg/ml; GIBCO-BRL) for 24-48 h before electrophysiological recording. Oocyte membrane voltage was controlled using an OC-725C oocyte clamp (Warner Instruments). Data were filtered at 2 kHz (8-pole Bessel) and digitized at 10 kHz. Microelectrode resistances were 0.1-0.5 MΩ when filled with 3 M KCl. For the experiments in Figs. 1 and 9, the external recording solution contained (in mM) 20 KCl, 80 NaCl, 1 MgCl₂, 0.3 CaCl₂, 10 HEPES, pH 7.6 with NaOH. For the experiments in Fig. 7, the external recording solution contained (in mM) 50 RbCl, 50 NaCl, 1 MgCl₂, 0.3 CaCl₂, 10 HEPES, pH 7.6 with NaOH. For experiments with sphingomyelinase D (SMaseD), recombinant enzyme (8 ng/μl) was added to the external recording chamber for 15-20 min, and the effects on Kv2.1 were followed by measuring tail current amplitudes at −80 mV elicited after depolarizations to −40 mV. After the effects on Kv2.1 stabilized, the enzyme was removed from the recording chamber and the cell washed extensively for 10-20 min before applying hanatoxin. All experiments were performed at room temperature (≈22°C). Leak and background conductance were identified by blocking the channel with agitoxin-2 and subsequently subtracted (Garcia et al., 1994).

[Figure 1 legend (displaced in extraction): side-chain colors: green, hydrophobic; blue, basic; red, acidic; pink, Ser/Thr; gray, other side chains and backbone atoms; protein database accession codes 1D1H (hanatoxin) and 1LA4 (SGTx1). (C) Voltage-clamp recording from an oocyte expressing the Kv2.1 channel; currents elicited by depolarization to −10 mV (above) or +50 mV (below) in the absence (black) or presence of 4 μM hanatoxin (gray); holding voltage −100 mV, tail voltage −80 mV; dashed line indicates zero current; leak, background, and capacitive currents subtracted after blocking the channel with agitoxin-2. (D) Voltage-activation relations in the absence (black) and presence of 4 μM hanatoxin (gray); tail currents averaged for 0.2 ms beginning 2 ms after repolarization to −80 mV; data points are mean ± SEM (n = 3).]
RESULTS
The objective of the present study is to explore whether tarantula toxins interact with lipid membranes to modify the activity of Kv channels. Previous results with VSTx1, ProTx-II, and hanatoxin, three tarantula toxins that modify the gating of Kv or Nav channels, show that these toxins can partition quite favorably into model membranes containing anionic lipids (Lee and MacKinnon, 2004; Jung et al., 2005; Phillips et al., 2005; Smith et al., 2005). We focused our study on hanatoxin and the closely related SGTx1, two tarantula toxins of known structure (Fig. 1, A and B) that inhibit the Kv2.1 channel (Swartz and MacKinnon, 1997b; Takahashi et al., 2000; Swartz, 2007). Hanatoxin is one of the more extensively studied tarantula toxins that interact with voltage sensors (Swartz and MacKinnon, 1997a,b; Swartz, 1998, 2000; Takahashi et al., 2000; Li-Smerin and Swartz, 2001; Lee et al., 2003; Phillips et al., 2005; Swartz, 2007), while SGTx1, unlike hanatoxin, can be efficiently folded in vitro, making possible the study of toxin mutants. Application of either toxin to the external solution bathing a cell expressing the Kv2.1 channel results in pronounced inhibition of opening at negative voltages, as shown in Fig. 1 (C and D) for hanatoxin, with robust opening of toxin-bound channels in response to strong membrane depolarizations.
Characterization of Toxin Partitioning into Membranes
We began by characterizing the partitioning of hanatoxin and SGTx1 into LUVs composed of a 1:1 mixture of zwitterionic (POPC) and anionic (POPG) phospholipids, using intrinsic tryptophan fluorescence to monitor the toxin-membrane interaction. Both toxins contain a single solvent-accessible tryptophan residue (W30) located on the protruding hydrophobic surface (Fig. 1, A and B). When these toxins are dissolved in a simple HEB solution (10 mM HEPES, 1 mM EDTA, pH 7) and excited at 280 nm, their fluorescence emission spectra have maxima (λ_max) near 353 nm (Fig. 2, A and B; Table I; Fig. 6 A), typical for tryptophan in an aqueous environment (Reshetnyak and Burstein, 2001). Comparison of the emission spectra for WT and the W30A mutant of SGTx1 shows that most of the fluorescence emission originates from this single tryptophan residue (Fig. 2 B). Upon addition of lipid vesicles (2.4 mM), both hanatoxin and SGTx1 display blue shifts in their fluorescence emission spectra (Fig. 2, A and B, blue traces; Table I). A blue shift in tryptophan fluorescence is frequently observed for proteins that partition into membranes and can be explained by a change in the polarity and rigidity of the environment surrounding the tryptophan residue (Ladokhin et al., 2000). The blue shift observed for SGTx1 (15.4 nm) is larger than for hanatoxin (10.9 nm), and the maximum fluorescence intensity decreases for SGTx1, whereas it exhibits a modest increase for hanatoxin, suggesting that the tryptophan residues of the two membrane-bound toxins have distinguishable microenvironments. Similar results were obtained for the two tryptophan residues within the hydrophobic face of the related toxin GsMTx4 (Posokhov et al., 2007a). Partitioning of hanatoxin and SGTx1 into membranes can also be examined by measuring the extent to which lipid vesicles protect W30 from quenching by acrylamide, a small polar molecule that efficiently quenches tryptophan fluorescence in solution. The robust quenching of hanatoxin and SGTx1 fluorescence observed in control solutions is dramatically reduced after the addition of vesicles (Fig. 2, C and D; Table II), suggesting that in both toxins W30 is protected from the aqueous phase when these toxins interact with membranes. To characterize the strength of toxin-membrane interactions, we determined mole-fraction partition coefficients (K_x) from titration experiments in which the fraction partitioned was estimated by measuring changes in tryptophan fluorescence at 320 nm (F/F_0) as a function of vesicle concentration (Fig. 2, E and F) (Ladokhin et al., 2000). The maximal value of F/F_0 in the limit of high lipid concentrations (F/F_0^max) varies between the two toxins because the blue shift in λ_max and the change in maximal fluorescence intensity are different.

[Figure 2 legend (displaced in extraction): the spectrum of the W30A mutant of SGTx1 in solution is shown for comparison. (C and D) Stern-Volmer plots for acrylamide quenching of W30 fluorescence of hanatoxin (C) and SGTx1 (D) in solution (2 μM, black diamonds) and in the presence of lipid vesicles (0.9 mM, blue circles). (E and F) Fluorescence intensity at 320 nm as a function of available lipid concentration (60% of total lipid) for hanatoxin (E) or SGTx1 (F); smooth curves correspond to partition functions with K_x = (4.8 ± 0.4) × 10⁶ and F/F_0^max = 2.0 ± 0.05 for hanatoxin, and K_x = (3.4 ± 0.6) × 10⁶ and F/F_0^max = 1.6 ± 0.05 for SGTx1. All data obtained in HEB solution (10 mM HEPES, 1 mM EDTA, pH 7); data points are mean ± SEM (n = 3).]
However, fitting a partition function to the titration data reveals that the K_x values for the interaction of the two toxins with membranes composed of a 1:1 mix of POPC and POPG are similar: (4.8 ± 0.4) × 10⁶ for hanatoxin and (3.4 ± 0.6) × 10⁶ for SGTx1 (Table I), values indicative of relatively strong toxin-membrane interactions (Beschiaschvili and Seelig, 1990).
Next we prepared vesicles containing only the zwitterionic lipid POPC and compared the results with those obtained for vesicles containing both POPC and the anionic POPG. When high concentrations (2.4 mM) of POPC vesicles are added to aqueous solutions of hanatoxin or SGTx1, we observe much smaller blue shifts and F/F_0^max values compared with POPG-containing vesicles, preventing a rigorous evaluation of K_x values (Fig. 3, A-D). However, acrylamide quenching experiments show that POPC vesicles can partially protect toxin fluorescence from quenching by acrylamide (Fig. 3, E and F; Table II), suggesting that both toxins can partition into zwitterionic membranes. To estimate the strength of partitioning into zwitterionic membranes, we used a toxin depletion assay in which the ability of vesicles to deplete the toxin from the aqueous phase is examined. After incubation of aqueous solutions of toxin with POPC vesicles (9 mM), the membrane-bound peptide was separated by centrifugation, and the toxin remaining in the aqueous phase was quantified using HPLC. Vesicles containing POPC can deplete SGTx1 from the aqueous phase (Fig. 3 H; f_p = 0.41 ± 0.05; n = 3), yielding a K_x value of (6.1 ± 0.2) × 10³, considerably weaker than observed for anionic membranes (Fig. 2 F; Fig. 3 G).
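A sketch of the depletion-assay arithmetic, using the relation K_x = (f_p/(1 − f_p))[W]/[L] given in Materials and methods. Whether the 60% "available lipid" convention applies to the 9 mM POPC experiment is an assumption here, so the printed value is only an order-of-magnitude check against the reported (6.1 ± 0.2) × 10³.

```python
# Order-of-magnitude check of the POPC depletion result. Assumes the 60%
# available-lipid convention from Materials and methods; drop the 0.6
# factor if all lipid is taken as accessible.
W = 55.3              # molar concentration of water (M)
fp = 0.41             # fraction of SGTx1 depleted by 9 mM POPC vesicles
L_avail = 0.6 * 9e-3  # available lipid (M); assumption flagged above
Kx = (fp / (1.0 - fp)) * W / L_avail
print(f"Kx ~ {Kx:.1e}")  # ~7e3, the same order as the reported 6.1e3
```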
The importance of electrostatic interactions, revealed by a comparison of toxin partitioning into anionic and zwitterionic membranes, raises the possibility that the strength of partitioning into the external leaflet of native membranes might be rather weak. The results thus far were obtained with low ionic strength aqueous solutions without divalent ions, which would be expected to enhance the strength of partitioning compared with physiological solutions. To explore the influence of solution composition, we initially examined partitioning of both toxins into anionic membranes from solutions of different composition. The K_x values for partitioning of both toxins into anionic vesicles are approximately an order of magnitude lower in an HEB solution containing monovalent cations (100 mM K⁺, pH 7) compared with HEB solution alone (Fig. 4, A and B; Table I), and the addition of 1 mM free Ca²⁺ (HEB, 100 mM K⁺, 2 mM CaCl₂, pH 7) weakens toxin partitioning even further (K_x ≈ 10⁴; Fig. 4, A and B; Table I). In a solution mimicking the electrophysiological conditions used to study the interaction of tarantula toxins with Kv channels (PB: 100 mM K⁺, 1 mM Mg²⁺, 0.3 mM Ca²⁺, pH 7.6), the observed blue shifts and F/F_0^max values are too small to obtain reliable estimates of K_x (Fig. 4, A and B). To assess whether partitioning occurs under physiological ionic conditions, we performed acrylamide quenching experiments, which show that W30 is partially protected from quenching by acrylamide in the presence of anionic membranes (Fig. 4, C and D; Table II). We used a similar approach to determine whether hanatoxin and SGTx1 can partition into zwitterionic membranes in the presence of physiological ionic solutions, and find that acrylamide quenching of W30 in both toxins is diminished by zwitterionic membranes (Fig. 4, E and F; Table II). Taken together, these results establish that tarantula toxins can interact with membranes under physiologically relevant conditions (e.g., zwitterionic membranes and physiological solutions). Although the outer leaflet of native membranes contains an abundance of zwitterionic lipids, its overall composition is rather complex, including cholesterol and glycolipids, for example (Simons and van Meer, 1988; Calderon and DeVries, 1997; Hill et al., 2005). To confirm that tarantula toxins can partition into native membranes, we examined the ability of intact oocytes to deplete tarantula toxins from physiological aqueous solutions (PB). When oocytes are agitated gently in a solution containing either hanatoxin or SGTx1, we consistently observe depletion of the toxins from the aqueous phase (Fig. 4 G), but not in the case of agitoxin-2, a scorpion toxin that interacts with the external vestibule of the channel (Garcia et al., 1994; Miller, 1995). The estimated K_x values for hanatoxin and SGTx1 are ≈10³ (see Materials and methods), consistent with our measurements of toxin partitioning into zwitterionic model membranes.
Structure-Function Relationships for Toxin-Membrane Interactions
To better understand the molecular basis of the interaction of tarantula toxins with membranes, we examined how mutations throughout SGTx1 influence toxin partitioning into membranes. We initially used intrinsic tryptophan fluorescence to determine K_x values for the interaction of SGTx1 mutants with anionic vesicles, where the most accurate measurements of K_x can be obtained. Fig. 5 (A and B) shows example spectra in control aqueous solution and in the presence of a saturating concentration of lipid vesicles, together with corresponding titration experiments for WT, D24A, and R3A SGTx1. In most instances, the strength of partitioning into model membranes is weakened by the mutation (e.g., R3A), while in others partitioning is strengthened (e.g., D24A) (Fig. 5, A-C; Table III). The K_x values for partitioning of all 24 mutants into anionic membranes are plotted in Fig. 5 C, and the changes in partitioning free energy (∆∆G_P = −RT ln(K_x^mut/K_x^WT)) are mapped onto the NMR structure of SGTx1 in Fig. 5 D. Two interesting observations emerge from these data. First, the face of SGTx1 containing the hydrophobic protrusion and the C-terminal tail contains the most important determinants of toxin partitioning. Second, both hydrophobic and electrostatic interactions are important for the interaction of SGTx1 with anionic membranes. We also used the depletion assay described above for oocytes to examine how a select group of mutations influence partitioning into native membranes, including mutations that neutralize basic residues (R3A, R22A, K26A) or acidic residues (D24A, D31A), and those that truncate hydrophobic side chains (Y4A, L5A, F6A, W30A, F34A) (Fig. 5, E-G). Although a rigorous quantitative evaluation of these results is not possible given the qualitative nature of the oocyte assay, the results suggest that partitioning of toxins into native membranes relies more on hydrophobic interactions (Fig. 5, E and F). In contrast to the results with anionic membranes, the basic residue neutralizations do not significantly weaken toxin interactions with oocytes, and the effects of the enhanced partitioning of the D24A mutation appear less significant (Fig. 5 G). These results are consistent with the notion that anionic lipids are scarce in the external leaflet of oocyte membranes, and they agree with recent studies indicating a complex interplay between hydrophobic and electrostatic interactions between SGTx1 and model membranes (Posokhov et al., 2007b).
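The free-energy mapping described above reduces to a one-line conversion; the sketch below applies ∆∆G_P = −RT ln(K_x^mut/K_x^WT) to illustrative K_x values (not the Table III numbers), with the paper's sign convention that positive ∆∆G_P means weakened partitioning.

```python
# Sketch: partitioning free-energy perturbations from Kx ratios,
# ddG_P = -RT * ln(Kx_mut / Kx_WT). Mutant Kx values below are
# illustrative only, not the Table III measurements.
import math

RT = 0.593  # kcal/mol at 25 degrees C

def ddg_p(kx_mut, kx_wt):
    """Positive values mean the mutation weakens membrane partitioning."""
    return -RT * math.log(kx_mut / kx_wt)

kx_wt = 3.4e6  # WT SGTx1 on POPC/POPG vesicles (from the text)
for name, kx_mut in [("weakened (R3A-like)", 3.0e5),
                     ("strengthened (D24A-like)", 1.5e7)]:
    print(f"{name}: ddG_P = {ddg_p(kx_mut, kx_wt):+.2f} kcal/mol")
```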
Mutations in specific regions of SGTx1 also have interesting effects on the spectral properties of the membrane-bound peptides. Whereas for all mutants examined the spectra in aqueous solution were very similar, with λ_max values near 353 nm (Fig. 5 A; Fig. 6 A), we observe a relatively wide distribution of λ_max values in the presence of high lipid concentrations (Fig. 6 A), suggesting that W30 can be positioned in distinct microenvironments when the toxins are bound to membranes. The WT toxin and many of the mutants fall into the most blue-shifted end of the distribution, with λ_max values between 335 and 339 nm, consistent with the notion that W30 experiences a rather buried hydrophobic environment in these cases. The other end of the distribution consists of six mutants (F6A, H18A, D31A, G32A, T33A, and F34A) that have λ_max values between 341 and 349 nm, more typical of incompletely buried tryptophan residues in microenvironments that are partially exposed to bound water (Reshetnyak and Burstein, 2001). These two groups of mutants are segregated in an intriguing fashion in the structure of SGTx1, with those that perturb the environment of W30 (purple residues) clustered on one side of the toxin near W30, and those that do not (blue residues) on the other side of the toxin (Fig. 6 B). Presumably those mutants that perturb the membrane environment of W30 cause the toxin to adopt a somewhat different position within the membrane, changing either the depth or the orientation of the toxin to expose W30 to water within the headgroup layer. This possibility is supported by acrylamide-quenching experiments on membrane-bound H18A SGTx1, which reveal greatly increased accessibility of W30 to aqueous acrylamide (unpublished data). Interestingly, the mutations that perturb the microenvironment of W30 partially overlap with those that perturb the strength of partitioning (compare Fig. 5 D and Fig. 6 B), suggesting that this region is crucial for both the position and the stability of the toxin in lipid membranes.

[Figure 4 legend (displaced in extraction): smooth curves are fits of a partition function to the data; diamonds, 100 mM KCl: K_x = (5.3 ± 0.03) × 10⁵ and F/F_0^max = 2.0 ± 0.02 for hanatoxin, K_x = (1.1 ± 0.001) × 10⁵ and F/F_0^max = 1.7 ± 0.03 for SGTx1; triangles, 100 mM K⁺ with 1 mM free Ca²⁺: K_x = (4.8 ± 0.01) × 10⁴ and F/F_0^max = 2.1 ± 0.1 for hanatoxin, K_x = (1.9 ± 0.01) × 10⁴ and F/F_0^max = 2.0 ± 0.5 for SGTx1; closed circles, 100 mM K⁺, 1 mM Ca²⁺, 0.3 mM Mg²⁺, pH 7.6 (PB); open circles, HEB data from Fig. 2 (E and F) shown for comparison. (C-F) Stern-Volmer plots for acrylamide quenching of W30 fluorescence of hanatoxin (C and E) and SGTx1 (D and F) in PB (2 μM, black diamonds) and in the presence of either anionic (C and D) or neutral (E and F) lipid vesicles (0.9 mM, blue circles); data points are mean ± SEM (n = 3). (G) Reversed-phase HPLC profiles of hanatoxin, SGTx1, and agitoxin-2 in the supernatant in the absence (black) or presence of 300 oocytes (blue) resuspended in PB; the calculated fraction partitioned, f_P = ([Toxin_total] − [Toxin_free])/[Toxin_total], is 0.31 ± 0.05 for hanatoxin, 0.25 ± 0.04 for SGTx1, and 0.01 ± 0.01 for agitoxin-2; mean ± SEM (n = 3); traces normalized to the maximum absorbance in the absence of oocytes.]

[Figure 5 legend (displaced in extraction): (D) changes in the free energy of partitioning into anionic membranes (∆∆G_P = −RT ln(K_x^mut/K_x^WT)) mapped onto the SGTx1 NMR solution structure, shown as a surface rendering with a probe radius of 1 Å; side-chain colors: light gray, |∆∆G_P| < 1 kcal/mol; pink, |∆∆G_P| = 1-1.5 kcal/mol; red, |∆∆G_P| > 1.5 kcal/mol; purple, ∆∆G_P < −1 kcal/mol; backbone and all other residues dark gray; the structure in the right panel was rotated 180° about the indicated axis; the change in K_x for W30A relative to WT (∆∆G_P ≈ 1 kcal mol⁻¹) was estimated from depletion experiments. (E) Reversed-phase HPLC profiles of WT and mutant SGTx1 toxins in the supernatant in the absence (black) or presence of 300 X. laevis oocytes (blue), normalized to the maximum in the absence of oocytes. (F) Fraction of toxin partitioned into oocyte membranes; mean ± SEM (n = 3); the dotted line corresponds to the value for WT SGTx1. (G) Comparison between the effects of mutations on partitioning into model (gray, as in C and D) and native (black) membranes.]
Toxin Stereoselectivity
It is well established that amphipathic molecules alter the mechanical properties of lipid bilayers (Lundbaek and Andersen, 1994; Andersen et al., 1999), and one might predict that hanatoxin and SGTx1 would have significant effects on membrane properties. GsMTx-4 is a related tarantula toxin that inhibits stretch-activated cation channels, and it has been shown that both D and L enantiomers are active, suggesting that the toxin inhibits the channel through a perturbation of the bilayer rather than through a direct toxin-channel interaction (Suchyna et al., 2004). Although mutations in the paddle can disrupt the inhibitory effects of hanatoxin (Swartz and MacKinnon, 1997b; Swartz, 2000, 2001; Phillips et al., 2005), it is possible that these channel residues detect toxin-induced bilayer perturbations rather than taking part in a protein-protein interaction. This is a reasonable possibility considering that partitioning under physiologically relevant conditions (K_x ≈ 10³) would be expected to result in a toxin:lipid ratio in the outer leaflet of ≈1:10,000, which means that the toxin concentration in the outer leaflet could be as much as 1,000-fold higher than in the water phase. To investigate whether tarantula toxins might inhibit Kv channels indirectly without binding the channel, we synthesized the D-enantiomer of SGTx1, which should influence bilayer properties similarly to the L-enantiomer given the fluid nature of the lipid membrane, but should not participate in a protein-protein interaction. The D and L enantiomers have identical reversed-phase HPLC profiles (Fig. 7 A) and similar secondary structure as shown by CD spectroscopy (Fig. 7 B). As expected, the CD spectra nicely illustrate that the two enantiomers are mirror images. Examination of the strength of membrane partitioning using intrinsic tryptophan fluorescence shows that D-SGTx1 partitions into anionic membranes with a K_x of (1.2 ± 0.2) × 10⁶ (Fig. 7 C), similar to the value of 3.4 × 10⁶ obtained for the L-enantiomer under identical conditions. In control experiments, addition of L-SGTx1 to the extracellular solution produces nearly complete inhibition of macroscopic Kv channel currents when activating the channel with weak depolarizations (Fig. 7 D, top), and shifts channel activation to more depolarized voltages (Fig. 7 E), in agreement with previous reports (Wang et al., 2004). In contrast, the change in Kv channel current observed upon addition of D-SGTx1 is quite small (Fig. 7 D), and the voltage dependence of channel activation is similar to that in the absence of toxin (Fig. 7 F). Although it is possible that the small effects observed with the D-enantiomer reflect a change in membrane properties, the pronounced inhibitory effects of the L-enantiomer are not observed with the D-enantiomer, suggesting that partitioning of the toxin into the membrane is not sufficient to modify the gating of Kv channels. We conclude that the mechanism of inhibition must involve direct toxin-channel interactions.
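The ≈1:10,000 toxin:lipid ratio quoted above follows directly from mole-fraction partitioning, x_membrane = K_x · x_water; the sketch below reproduces it, with the 5 μM free-toxin concentration being an assumed, typical value rather than one stated in the text.

```python
# Rough check of the toxin:lipid ratio quoted above. Mole-fraction
# partitioning gives x_mem = Kx * x_water, with x_water = [toxin]/[W].
Kx = 1e3           # physiological-condition estimate from the depletion assays
toxin_free = 5e-6  # M; an assumed, typical aqueous toxin concentration
W = 55.3           # M
x_mem = Kx * toxin_free / W  # toxin per lipid in the outer leaflet
print(f"toxin:lipid ~ 1:{1.0 / x_mem:,.0f}")  # ~1:11,000, near ~1:10,000
```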
Region of Tarantula Toxins Interacting with Paddle Motifs
The effects of SGTx1 mutations on the apparent affinity of the toxin have been previously assessed from experiments in which the aqueous concentration of the toxin is varied and the resulting changes in toxin occupancy of the channel are determined. If tarantula toxins interact with paddle motifs within the membrane, toxin mutations might perturb the apparent K_d through several mechanisms, including (a) perturbing the strength of toxin partitioning into the bulk membrane, which would alter the concentration of the toxin in the membrane, (b) altering the interaction between the toxin and the membrane in the toxin-channel complex, which could influence the stability of the complex, and (c) perturbing protein-protein interactions between the toxin and the channel. Although mutants might have rather complex effects through these mechanisms, those that disproportionately perturb the apparent affinity relative to membrane partitioning would likely do so by altering the strength of the protein-protein interaction. To look for mutations of this type, we plotted the change in partitioning free energy estimated from K_x values (∆∆G_P = −RT ln(K_x^mut/K_x^WT)) against the change in overall free energy estimated from apparent K_d values (∆∆G_O = −RT ln(K_d^mut/K_d^WT)) (Fig. 8 A). Although there appears to be a loose relationship between ∆∆G_P and ∆∆G_O, the two quantities do not tightly correlate, and in general the perturbations in the toxin-membrane interaction are relatively modest (|∆∆G_P| < 2 kcal mol⁻¹) compared with those for the overall energetics, which can exhibit ∆∆G_O values >3.4 kcal mol⁻¹ (Fig. 8 A). In particular, one group of mutations, including R3A, L5A, F6A, R22A, and W30A, produces much larger perturbations of the apparent K_d compared with the toxin-membrane interaction (Fig. 8 A). The disproportionate perturbation in apparent affinity relative to membrane partitioning is even more pronounced if perturbations in toxin-membrane interactions are assessed using the depletion assay described above for oocyte membranes (Fig. 8 B). Importantly, residues in this group form a cluster on a single face of the toxins (Fig. 8, C and D), thus identifying the region of the toxin that likely participates in a direct protein-protein interaction with the paddle motif (Fig. 8 D).

[Figure 7 legend (displaced in extraction): (C) the smooth red curve is the fit to the D-SGTx1 data and corresponds to a partition function with K_x = (1.2 ± 0.2) × 10⁶ and F/F_0^max = 1.7 ± 0.03; data for L-SGTx1 shown for comparison (gray, same as in Fig. 2 F). (D) Voltage-clamp recording from an oocyte expressing the Kv2.1 channel; currents elicited by depolarization to −10 mV in the absence (black) or presence of 8 μM L-SGTx1 (gray) or 8 μM D-SGTx1 (red); holding voltage −100 mV, tail voltage −50 mV; the light gray line indicates zero current; leak, background, and capacitive currents subtracted after blocking the channel with agitoxin-2. (E and F) Voltage-activation relations in the absence (black) and presence of 8 μM L-SGTx1 (gray, E) or 8 μM D-SGTx1 (red, F); tail currents averaged for 0.2 ms beginning 2 ms after repolarization to −50 mV; in all cases data points are mean ± SEM (n = 3).]
Membrane Modification Alters Toxin Inhibition
The results thus far shed light on the interaction of tarantula toxins with membranes of varying composition, define the regions of the toxin that are important for interacting with membranes, and constrain regions that are important for binding to voltage-sensor paddle motifs. To look for evidence that the toxin-channel interaction actually occurs within the membrane, we examined whether biochemical modification of the lipid bilayer influences the interaction of tarantula toxins with the Kv2.1 channel. Recent results from Lu and colleagues (Ramu et al., 2006) show that treatment of oocytes with SMaseD, an enzyme that converts the zwitterionic lipid sphingomyelin (SM) to the anionic ceramide-1-phosphate (C-1-P), shifts activation of Kv2.1 by ≈30 mV. Following enzyme treatment, the Kv2.1 channel retains sensitivity to hanatoxin (Ramu et al., 2006), although the apparent kinetics of the toxin-channel interaction are altered (Ramu, Y., and Lu, Z., personal communication). In control experiments, the addition of hanatoxin to the extracellular solution inhibits activation of Kv2.1 by shifting activation to more depolarized voltages (Fig. 9, A and B), and channel activity recovers very slowly upon removal of the toxin from the aqueous solution, requiring ≈1,000 s to reach control values. Kinetic analysis of mutations within the voltage-sensor paddle suggests that slow recovery results from slow unbinding of the toxin, with a dwell time of several hundred seconds (Phillips et al., 2005). In contrast to what is observed under control conditions, the kinetics of both the onset of and recovery from inhibition by hanatoxin are dramatically altered following treatment with SMaseD (Fig. 9 A). In particular, recovery of channel activity occurs quite rapidly, requiring little more than 100 s to reach control values. These results suggest that modification of SM speeds dissociation of the toxin by weakening the toxin-channel interaction, as observed with mutations in the paddle motif (Phillips et al., 2005). However, rather counterintuitively, comparison of the concentration dependence for toxin occupancy of the channel reveals that SMaseD treatment does not shift the apparent K_d to higher toxin concentrations, and in fact produces a modest shift of the relation to lower toxin concentrations (Fig. 9, D and E). The most straightforward interpretation of these results is that SM interacts intimately with the channel in the local vicinity of where the toxin binds and that hydrolysis of the lipid weakens the toxin-channel interaction. The pronounced effects of anionic lipids on toxin partitioning can explain why the apparent affinity does not decrease, because the anionic product of SMaseD action (C-1-P) would be expected to increase the concentration of the toxin in the membrane. We do not observe an increase in hanatoxin partitioning into bulk Xenopus laevis oocyte membranes after SMaseD treatment (unpublished data), suggesting that the increase in toxin concentration may occur only in the local vicinity of the channel. Regardless of the actual mechanism, the opposing effects of SMaseD on the kinetics of recovery and the apparent K_d strongly suggest that the toxin and the channel interact within the lipid membrane. The destabilizing effect of SMaseD on the toxin-channel interaction supports the proposal that SM interacts relatively specifically with certain Kv channels (Ramu et al., 2006).

[Figure 8 legend (displaced in extraction): K_d is the apparent equilibrium dissociation constant determined from experiments in which the aqueous toxin concentration is varied and the resulting fraction of unbound channels is measured when activating the channel using weak depolarizations (Swartz and MacKinnon, 1997a); apparent K_d values are from Wang et al. (2004). (D) Residues proposed to participate in a direct toxin-channel interaction are colored blue, all other residues studied are colored white, and backbone and unstudied residues are colored dark gray; in the right panels the structures were rotated 180° about the indicated axis.]
DISCUSSION
The objective of the present study was to characterize the interaction of voltage-sensor toxins with membranes and to explore whether membrane partitioning is involved in the mechanism by which these toxins inhibit Kv channels. Our experiments with model membranes demonstrate that the strength of partitioning depends on the type of lipids and the ionic composition of the aqueous solution. Partitioning is most favorable in the presence of anionic lipids, where electrostatic interactions help to stabilize the toxin in the membrane (Figs. 2-5), but significant partitioning into zwitterionic membranes can be observed even in the presence of physiological aqueous solutions (Fig. 4). Since anionic lipids are relatively scarce in the outer leaflet of native membranes (Simons and van Meer, 1988; Calderon and DeVries, 1997; Hill et al., 2005), partitioning into native membranes should be most similar to what we observed for zwitterionic model membranes. Our depletion experiments with whole oocytes are fully compatible with measurements on zwitterionic model membranes (Fig. 4), both of which indicate that tarantula toxins can partition into native cell membranes with a K_x of ≈10³. The evidence that voltage-sensor toxins can partition into membranes under physiological conditions raises the possibility that partitioning of these toxins may be sufficient to inhibit Kv channels. Perhaps these channels are sensitive to the properties of the bilayer, and the observed inhibitory effects result from indirect perturbations that occur without the toxin actually binding to the channel. We explored this possibility by synthesizing and studying the D-enantiomer of SGTx1, which would be expected to inhibit activation of the Kv2.1 channel if the mechanism involves a perturbation of the bilayer without the toxin actually binding to the channel. Although the D and L enantiomers interact with membranes in an indistinguishable fashion, D-SGTx1 is essentially inactive on the Kv2.1 channel (Fig. 7), indicating that direct protein-protein interactions are indeed involved in the inhibitory mechanism of the toxin.
The present results also substantially refine our understanding of which toxin residues are involved in forming the complex between toxin and paddle. Although a previous study identified the face of the toxin that is crucial for inhibitory activity, toxin mutations that affect the apparent affinity could do so by perturbing both toxin-membrane and toxin-channel interactions. The present results suggest that the toxin side of the protein-protein interface likely involves R3, L5, F6, R22, and W30, which stand out in the plot of ∆∆G_P vs. ∆∆G_O because their mutation weakens toxin-membrane interactions rather modestly (∆∆G_P < 1.1 kcal mol⁻¹) compared with the apparent affinity (∆∆G_O > 3 kcal mol⁻¹; Fig. 8). The three hydrophobic residues in the group of outliers (L5, F6, and W30) are tightly packed together on one face of the toxin, with the two basic residues (R3, R22) positioned nearby in the surrounding ring of polar residues, constraining the surface of the toxin that participates in protein-protein interactions with the voltage-sensor paddle (Fig. 8 D). This picture is consistent with mutagenesis studies on Kv2.1, which have identified a glutamate and several hydrophobic residues in S3b of the paddle motif that are critical for interacting with tarantula toxins (Swartz and MacKinnon, 1997b; Swartz, 2000, 2001).

[Figure 10 legend (displaced in extraction): interaction of tarantula toxins with voltage-sensor paddles within the membrane; illustration of SGTx1 partitioning into the membrane and interacting with S3-S4 helices; side-chain colors for the left SGTx1 structure: green, hydrophobic; blue, basic; red, acidic; pink, Ser/Thr; gray, other side chains and backbone atoms; the right SGTx1 structure shows residues important for protein-protein interactions in light blue; in both cases G32-F34 have been removed for clarity; the backbone fold of the activated/open conformation of Kv1.2 (protein database accession code 2A79) is shown with only one of four voltage-sensing domains; side chains of the outer four S4 Arg residues are shown, the S1 and S2 helices have been deleted for clarity, the contribution of the back subunit to the pore domain has been omitted, and purple spheres are potassium ions within the ion conduction pathway.]
The ability of tarantula toxins to partition into membranes implies that the targeted paddle motif could be submerged in the membrane, but also that the toxin concentration in the membrane could be considerably higher than in the aqueous solution. For example, if a toxin partitions with a K_x of 10⁶, the effective membrane concentration could be as much as 10⁵-fold higher, allowing a very weak (e.g., mM) protein-protein interaction to achieve apparent high affinity (e.g., nM) (Lee and MacKinnon, 2004). Although the relative energetic contributions of protein-protein and protein-lipid interactions will likely vary for each toxin-channel pair studied in different membrane environments, the interaction of the present toxins with Kv2.1 in native membranes is probably somewhere in the middle of the spectrum, with both types of interactions playing important roles. The importance of the protein-protein interface is supported by the observation that, in most cases, the relatively subtle effects of SGTx1 mutations on the free energy of partitioning cannot account for their effects on the concentration dependence of toxin occupancy of the channel (Fig. 8). That ∆∆G_O values cannot be explained by ∆∆G_P values alone is perhaps most clearly seen for two Phe-to-Ala mutants. The ∆∆G_P values for F6A and F34A are similar, with that for F34A being somewhat greater, yet the values for ∆∆G_O are vastly different, with that for F6A being far greater than for F34A (Figs. 5 and 8; Table III). In addition, the effects of paddle mutations on recovery kinetics suggest that the toxin remains bound to the channel for hundreds of seconds (Phillips et al., 2005), pointing to a rather stable toxin-channel complex. The importance of the toxin-membrane interaction is highlighted by the estimated K_x values of ≈10³ for hanatoxin and SGTx1 partitioning into membranes under physiological conditions. Although these values are much lower than those observed for anionic membranes, they are sufficient to raise the toxin concentration in the membrane considerably compared with the aqueous phase. The relative contribution of the partitioning step may be even more significant for the interaction of VSTx1 with KvAP channels studied in anionic membranes, where the K_x is ≈10⁵, the affinity of the protein-protein interaction is in the mM range, yet nM concentrations of the toxin produce robust inhibition (Lee and MacKinnon, 2004). The interaction of scorpion toxins with Nav channels may be an example where protein-protein interactions are even more dominant than what we have observed for hanatoxin and SGTx1. It is important to note, however, that the high-affinity binding of scorpion toxins to Nav channels (Rogers et al., 1996; Cohen et al., 2006) does not rule out the possibility that partitioning may be required for toxin binding to the voltage sensors in those channels. It would be difficult to convincingly demonstrate partitioning if the K_x values were much less than 10³, yet even a K_x of 10² would suggest that membrane concentrations of the toxin are somewhat higher than in the bulk aqueous phase, more than adequate for a toxin to access its receptor within the membrane.
Similarly, the failure to observe an interaction of tarantula toxins like HpTx2 with zwitterionic membranes at physiological pH (Posokhov et al., 2007b) does not preclude the involvement of membrane partitioning in the inhibition of Kv4 channels, because the methods used are not sensitive enough to detect partitioning with a K_x value of 10².
Our working model for the interaction of tarantula toxins with Kv channels is illustrated in Fig. 10, where SGTx1 is shown interacting with the voltage-sensor paddle within the membrane. Although many of our findings are consistent with toxin-channel interactions occurring within the bilayer, perhaps the strongest evidence to support this idea comes from the effects of SMaseD on inhibition by hanatoxin. The effects of this lipase on the recovery from inhibition show that the stability of the toxin-channel complex is remarkably sensitive to modification of the lipid environment (Fig. 9). Conversion of SM to C-1-P dramatically speeds dissociation of the toxin, suggesting that SM interacts intimately with the channel in such a way that it influences the toxin-channel interaction. It is as if the lipid, channel, and toxin form a trimolecular complex, and relatively subtle changes in the lipid molecule can be readily detected by the toxin. Although there is much to learn about the interaction between these three types of molecules, the emerging picture strongly suggests that the voltage-sensor paddle motif moves at the interface where the channel meets the surrounding lipid membrane (Jiang et al., 2003a,b; Ruta et al., 2003, 2005; Cuello et al., 2004; Lee et al., 2005; Schmidt et al., 2006).
MICRONUTRIENT CONCENTRATION AND CONTENT IN PASSION FRUIT LEAVES UNDER SAMPLING METHODS AND NK FERTILIZATION RATES
Balanced uptake of micronutrients by the passion fruit plant is essential for increased production and fruit quality. However, similar fertilizer management in varieties with different productive capacities, together with high levels of nitrogen and potassium, can cause nutritional disorders in plants. The objective of this study was to evaluate leaf micronutrient concentrations and contents in passion fruit as affected by two different sampling methods, different N-K fertilization proportions, and different cultivars. The study was conducted in a randomized block design, with three replications, following a 4 × 6 factorial arrangement consisting of four cultivars of yellow passion fruit (BRS Gigante Amarelo, IAC 275, BRS Ouro Vermelho, and BRS Sol do Cerrado) and six application rates of N-K2O fertilizer (0-0, 50-125, 100-250, 150-375, 200-500, and 250-625 kg ha-1 year-1). Two leaf sampling methods (leaf located at a position adjacent to the fruit, and leaf located at the end of the fruit-bearing branch) were adopted for nutritional assessment. At 240 days after planting passion fruit seedlings in the experimental area, 20 leaves per plot were sampled. Higher accumulated micronutrient contents were obtained in the adjacent leaves, possibly because of greater leaf weight (a more fully developed leaf) compared to the standard leaf. The cultivar IAC 275 had lower concentrations and contents of Cu, Fe, and Mn in the adjacent leaf, indicating variations in micronutrient levels among the cultivars and different micronutrient demands by the cultivars studied. N and K fertilization had little effect on leaf micronutrient concentration and content, but the Zn concentration and content decreased in the standard leaf of the BRS Gigante Amarelo cultivar, and Cu decreased in the adjacent leaf of the BRS Ouro Vermelho cultivar.
INTRODUCTION
Passion fruit is one of the fruit crops of Brazil that provide higher economic returns to rural producers and companies. In the municipality of Janaúba, in the north of Minas Gerais, the average yield in 2013 was 20 t ha-1 (IBGE, 2013), higher than the average yield obtained for the state of Minas Gerais. However, this yield is still considered low relative to the yield potential of the crop, which is higher than 50 t ha-1.
To obtain yield increases in passion fruit, the nutritional requirements of the plants must be supplied by fertilization when necessary. In this respect, adequate determination of the nutritional state of the passion fruit plants is indispensable (SILVA JÚNIOR et al., 2013). Nevertheless, it must be highlighted that leaf nutrient concentrations in passion fruit plants are affected by the season in which sampling occurs, fertilization, the cultivars planted, and other factors (BORGES et al., 2002; NATALE et al., 2006; SOUZA et al., 2013); this results in divergence in the diagnosis of the nutritional state and in fertilization recommendations.
Fertilization with nitrogen (N) and potassium (K) is important for maintaining the metabolic functions of the plant, such as composing amino acids and chlorophylls and synthesizing carbohydrates and enzyme activators (BORGES et al., 2002; PETTIGREW, 2008). Nitrogen and potassium fertilization causes synergistic and inhibitory effects on plant micronutrient uptake (NATALE et al., 2006; MUNER et al., 2011).
The use of KCl as a source of K in growing sour passion fruit increased the leaf concentrations of Cu, B, and Mn (MENEZES et al., 2012). In general, balanced fertilization with N and K increases the uptake of most micronutrients (NATALE et al., 2006; WANG et al., 2013); however, high application rates of K can inhibit the uptake of cationic micronutrients (ALMEIDA et al., 2015).
Micronutrient concentrations can vary as a result of the age or the position of the leaf sampled; however, this variation depends on the nutrient and the species under study (FREITAS et al., 2007; RODRIGUES et al., 2010). Santos et al. (2002) described an increase in Fe and Mn concentrations and a decrease in Cu and Zn concentrations from new leaves to older leaves in dwarf coconut. Marschner (1995) states that micronutrients with little mobility, such as B and Mn, have higher concentrations in older leaves and in leaves farther from the apical region of the branches. However, there are variations in sampling procedures in relation to leaf age, leaf position on the branch, and the phenological phase of the passion fruit plant, which lead to a lack of definition in sampling standards for leaf diagnosis.
Leaf diagnosis is an indispensable tool in the diagnosis of possible micronutrient nutritional disorders and in the evaluation of the nutritional state of plants (ABREU et al., 2007). High-yielding plant species have been extensively grown with excessive NPK fertilization in many countries, which brings about micronutrient deficiencies (CAKMAK, 2002).
In recent decades, micronutrients have drawn greater interest among Brazilian technicians and farmers under the most diverse soil, climate, and crop conditions in Brazil. However, there are few studies on application rates of N and K in sour passion fruit cultivars with high yield potential that address procedures for defining leaf nutrient concentrations and contents, or methods of evaluation for efficient nutritional diagnosis with lower rates of variation during the crop cycle. Thus, the aim of this study was to evaluate leaf micronutrient concentrations and contents in passion fruit plants under two methods of leaf sampling and different N and K fertilization rates.
MATERIALS AND METHODS
The study was conducted in the municipality of Janaúba, MG, Brazil, from April 2013 to April 2014. The study site is located at 15º 43' 48'' S and 43º 19' 24'' W, at 516 m altitude. The climate in the region is Aw according to the Köppen classification, that is, tropical with dry winter, and the soil of the experimental area is classified as a Latossolo Vermelho (EMBRAPA, 2013).
A randomized block experimental design was used with three replications, and the experimental units were arranged in a 4 × 6 factorial. This factorial arrangement corresponds to four sour passion fruit cultivars (BRS Gigante Amarelo, IAC 275, BRS Ouro Vermelho, and BRS Sol do Cerrado) and six application rates of N-K2O (0-0, 50-125, 100-250, 150-375, 200-500, and 250-625 kg ha-1 year-1). The mean recommended application rate (150-375 kg ha-1 year-1) was estimated to obtain yields greater than 35 t ha-1 of fruit, according to Sousa and Borges (2011) for irrigated systems, though adaptations were made concerning the parceling and application of the fertilizers.
The sources of N and K used were urea, potassium chloride, and potassium sulfate, applied in topdressing and parceled out in four applications from two to eight months after planting; the first was made at two months after planting and the others every two months afterwards. These fertilizers were diluted in 60 L of water, and 1 L of the dilution was applied to each plant in a strip of approximately 0.2 m width around the trunk, at a distance of 0.1 m from it, up to 150 days after planting (DAP), increasing to 0.3 m from the trunk as of 180 DAP, according to Borges et al. (2003).
The experimental plots consisted of five plants at a spacing of 2.5 × 2 m in a single row, and the three center plants were used for evaluation, for a total of 15 m² of useful plot area. Before the passion fruit was planted, the area had grown pineapple (July 2009 to October 2011), which had been fertilized with approximately 20 t ha-1 of cattle manure, 600 kg ha-1 of N, 240 kg ha-1 of P2O5, and 800 kg ha-1 of K2O. At the end of pineapple growing, the plant residues (30 t ha-1 of dry matter) were left on the soil surface, and the area remained fallow for 18 months.
After that, the area was sprayed with glyphosate to eliminate weeds and form a new mulch. At that time, soil samples were collected from the area (at depths of 0-20 and 20-40 cm) for chemical and physical characterization of the soil (Table 1), according to the methods described by Embrapa (1997).
Soil tillage was performed in the experimental area in a conventional manner, through plowing and two disk passes before planting. Seedlings of the passion fruit cultivars were produced in a nursery from seeds. Two seeds were sown per plastic bag containing 0.5 L of substrate. Soon after germination, the seedlings were thinned, leaving only the most vigorous seedling in each container. For the preparation of 1 m³ of substrate, the following measures were adopted: a 3:1:1 ratio of soil, cattle manure, and sand; 5 kg of single superphosphate; 1 kg of potassium chloride; 1 kg of dolomitic lime; and 50 g of FTE-BR12.
Two months after sowing, the seedlings had three pairs of fully formed leaves and were ready for planting. Planting was performed in April 2013 in holes of 0.4 × 0.4 × 0.4 m. These holes were fertilized with 10 L of cattle manure, 100 g of dolomitic lime, 50 g of FTE-BR12, and 550 g of single superphosphate. The cattle manure contained the following total nutrient concentrations: 0.90 dag kg-1 N, 0.12 dag kg-1 P, 0.50 dag kg-1 K, 0.26 dag kg-1 S, 0.58 dag kg-1 Ca, 0.25 dag kg-1 Mg, 89.29 mg kg-1 Zn, 18.85 mg kg-1 Cu, 8,623.30 mg kg-1 Fe, 306.65 mg kg-1 Mn, and 9.85 mg kg-1 B.
The crop was grown in a vertical trellis system with a wire at a height of 1.70 m. The rows consisted of two fence lines at an angle of 30° at the top and eight posts spaced at 5 m in the center of the row, with a total of 20 plant rows. After growing 10 cm above the wire, the plants were pruned to break apical dominance, thus allowing the emission of two new lateral branches, which were trained in opposite directions.
The lateral branches were trained until reaching 1 m on both sides and were then pruned to break apical dominance and favor the growth of productive branches. The productive branches were trained down to a distance of 40 cm from the soil as a way of preventing diseases caused by plant pathogens. Throughout the crop period, thinning was performed by pruning regrowth of the stem and lateral branches, always maintaining a height of 40 cm above the ground.
The plants were irrigated by microspray nozzles with a flow rate of 120 L h-1, arranged in 10 rows at an approximate distance of 1 m from the root collar of the plants, with a total of 10 spray nozzles per row. Irrigation was applied according to crop needs, following the crop coefficient (Kc) indicated by Silva and Klar (2002).
Weeds were controlled by application of glyphosate between the rows at a rate of 2 L ha-1 in four applications throughout the crop season. Caterpillars and insects were controlled with pyrethroid and imidacloprid, respectively, at a rate of 30 mL per 100 L of water, in three applications of pyrethroid and two of imidacloprid during the crop season.
Two methods of leaf sampling were adopted for nutritional characterization of the plants: taking leaves from the position adjacent to the fruit (adjacent leaf) and leaves from the end of the fruit-bearing branch (standard leaf). Leaves were sampled after flowering of the passion fruit plants, at approximately eight months after planting, collecting 20 leaves per experimental plot. The standard leaf was sampled according to Malavolta et al. (1997) by collecting the 4th leaf of the vegetative branch, counting from the apex to the base of the branch. The method of sampling leaves from the adjacent position was adapted from Marchal and Bourdeaut (1972), collecting the leaves adjacent to fruit in the initial phase of development (fruit of approximately 2 cm length) instead of sampling leaves at the axils of the flower buds.
The leaf samples were dried at 65ºC in a forced air circulation oven for 72 h and ground in a Wiley mill for later determination of dry matter weight and micronutrient concentrations, according to Silva (2009). Based on nutrient concentration and dry matter weight, the mean micronutrient content per leaf was calculated as: content (mg leaf-1) = concentration (mg kg-1) × DMW (g leaf-1)/1000, where DMW = leaf dry matter weight (g leaf-1) and 1000 corresponds to the unit conversion factor (kilograms to grams).
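A minimal sketch of the per-leaf content calculation reconstructed above; the concentration and dry matter weight used are hypothetical round numbers, not values from the paper's tables.

```python
# Sketch of the per-leaf content formula:
# content (mg leaf-1) = concentration (mg kg-1) * DMW (g leaf-1) / 1000.
def leaf_content(conc_mg_per_kg, dmw_g_per_leaf):
    """Micronutrient content per leaf (mg) from concentration and dry weight."""
    return conc_mg_per_kg * dmw_g_per_leaf / 1000.0

# Hypothetical example: 19 mg kg-1 of Zn in a leaf of 2.5 g dry matter.
print(f"{leaf_content(19.0, 2.5):.4f} mg leaf-1")  # 0.0475 mg of Zn per leaf
```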
The variables under study were subjected to analysis of variance (p < 0.05). The qualitative factor (cultivars) was compared by the Tukey test (p < 0.05), and the quantitative factor (N and K application rates) was analyzed through regression on the Sisvar statistical program (FERREIRA, 2011). The models were fitted based on the significance of the parameters and on the coefficient of determination.
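The optimum rates reported below (e.g., "up to 150-375 kg ha-1") come from the vertex of fitted quadratic response curves; the sketch below shows that calculation in Python rather than Sisvar, with hypothetical data.

```python
# Sketch: quadratic response curve fitted to N application rates and the
# optimum rate taken at the vertex of the parabola. Data are hypothetical;
# the paper's fits were performed in Sisvar.
import numpy as np

n_rate = np.array([0, 50, 100, 150, 200, 250])  # kg ha-1 year-1 of N
dmw = np.array([2.1, 2.6, 3.0, 3.2, 3.1, 2.8])  # hypothetical leaf DMW (g)

b2, b1, b0 = np.polyfit(n_rate, dmw, 2)  # y = b2*x^2 + b1*x + b0
n_opt = -b1 / (2.0 * b2)                 # vertex of the fitted parabola
k2o_opt = 2.5 * n_opt                    # K2O tracks N at 2.5:1 in this trial
print(f"optimum ~ {n_opt:.0f} kg N ha-1 with {k2o_opt:.0f} kg K2O ha-1")
```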
RESULTS AND DISCUSSION
The micronutrient concentrations and contents were significantly (p < 0.05) affected by the interaction between passion fruit cultivars and N and K application rates for both leaf sampling methods (Table 2).
The mean concentrations of Zn, Cu, Fe, Mn, and B in the standard leaf of the cultivars studied were 21, 6, 100, 68, and 56 mg kg-1, respectively, and in the adjacent leaf were 19, 6, 140, 65, and 62 mg kg-1, respectively. Among these means, only the Fe concentration in the adjacent leaves was considered adequate in comparison to the sufficiency range (120-200 mg kg-1 for Fe) indicated by Malavolta et al. (1997). The mean leaf concentrations of Zn, Cu, and Mn were below the sufficiency ranges of 25-40 mg kg-1 for Zn, 10-20 mg kg-1 for Cu, and 400-600 mg kg-1 for Mn indicated by the same author. Concerning B, the leaf concentrations were above the sufficiency range indicated by Malavolta et al. (1997), which is 40-50 mg kg-1 of B.
In the adjacent leaf of the cultivars Ouro Vermelho and Sol do Cerrado, dry matter weight increased with the N-K application rate up to 150-375 and 128-319 kg ha-1, respectively (Figure 1). However, for the cultivar IAC 275, this characteristic decreased in a linear manner. In the cultivars Ouro Vermelho and Sol do Cerrado, the increase in leaf dry matter was related to a probable increase in leaf carbohydrate and protein synthesis, brought about by greater availability of N and K to plants after fertilization at intermediate application rates, since these nutrients have specific functions in the synthesis of carbohydrates and proteins (BORGES et al., 2002; OLIVEIRA et al., 2015).
In the adjacent leaf of the cultivar IAC 275, lower contents of Cu, Fe, and Mn were obtained in comparison to the other cultivars (Figures 2, 3, and 4). This result was related to the reduction in leaf dry matter weight with the increase in N and K application rates (Figure 1). Lizarazo et al. (2013) obtained a reduction in specific leaf area (the ratio between leaf area and leaf weight) in Passiflora tripartita var. mollissima after the application of N and K rates in comparison to the unfertilized treatment, and they described that increased potassium availability leads to the production of smaller and thicker leaves.
The increase in N and K application rates reduced the concentration and content of Zn in the standard leaf of the Gigante Amarelo cultivar (Figure 2). However, in the cultivars IAC 275 and Ouro Vermelho, no effect of the application rates was found, and in the Sol do Cerrado cultivar there was an increase in leaf Zn concentration and content with increasing N-K fertilization, indicating the heterogeneity of the effects of nitrogen and potassium fertilization on Zn uptake by the cultivars studied.
In the leaves sampled in the adjacent position, no difference was found among the cultivars for Zn concentration and content with the increase in N and K application rates; the mean values were 18.64 mg kg-1 for concentration and 0.69 mg leaf-1 for content (Figure 2).
The reduction in Zn concentrations in the standard leaf of the Gigante Amarelo cultivar may be linked to the synergistic effect of N application favoring P uptake, which contributes to the formation of insoluble phosphates in the plant, reducing the translocation of Zn to the leaves. According to Muner et al. (2011), increases in P concentrations in the plant reduce the physiological availability of Zn, that is, they diminish its solubility and mobility through Zn precipitation in the form of zinc phosphate.
In contrast, the increase in leaf Zn concentration and content in the Sol do Cerrado cultivar (Figure 2) may be explained by the greater efficiency of the plant in taking up the Zn made available by the ammonium nitrification process and by the reduction in soil pH after the application of higher nitrogen rates. According to Caires and Milla (2016), the use of ammoniacal or urea nitrogen fertilizer, containing nitrogen in the form of N-NH2 or N-NH4, reduces soil pH because, in the nitrification process, each molecule of NH4+ that is oxidized to NO3- releases two protons (H+). Under this condition of lower soil pH, there is an increase in the availability of micronutrients to plants (NEVES et al., 2008).
The Cu concentrations and contents in the adjacent and standard leaves of the cultivars Gigante Amarelo, IAC 275, and Sol do Cerrado were not affected by the N and K application rates (Figure 2). In the adjacent leaf of the Ouro Vermelho cultivar, the Cu concentration and content decreased with the increase in fertilization up to 188 kg ha⁻¹ of N and 469 kg ha⁻¹ of K (Figure 2). These divergent results of Cu uptake among the cultivars after N-K fertilization were attributed to the different productive capacities of the passion fruit plants, which can change the source-sink dynamics of nutrients in the plant organs according to higher or lower fruit production.
The Fe concentration and content in the standard leaves and in those located in the adjacent position were not affected by the N-K application rates, except for the Fe concentration in the leaf adjacent to the fruit of the cultivar Sol do Cerrado, for which a quadratic increase was obtained after fertilization with the N-K application rates (Figure 3).
The increase in the leaf Fe concentration of the Sol do Cerrado cultivar can be explained by the increase in Fe demand in the plant as a constituent of the nitrate and nitrite reductase enzymes responsible for nitrogen assimilation and metabolism (VIANA; KIEHL, 2010). The intermediate K application rates (250-350 kg ha⁻¹) may also have favored Fe uptake by the Sol do Cerrado cultivar through increasing the synthesis of low molecular weight organic compounds, their exudation by the roots, and the formation of Fe organic mineral chelates in the soil solution (CARVALHAIS et al., 2011; WANG et al., 2013). Scientific studies have reported an increase in the concentrations of this nutrient as a result of nitrogen and potassium fertilization, as observed by Natale et al. (2006) and Resende et al. (2010).
Nevertheless, the lack of an effect of N-K fertilization on Fe uptake for most of the passion fruit cultivars studied, together with leaf concentrations considered adequate for the development of passion fruit plants, indicates the limited effect of N-K fertilization on Fe uptake under conditions of adequate soil Fe availability to plants.
The increase in the N and K application rates did not affect the Mn concentrations and contents in the adjacent and standard leaves for the Gigante Amarelo and IAC 275 cultivars (Figure 3). In the Ouro Vermelho cultivar, there was an increase in the Mn concentration of the standard leaf as the N and K application rates increased (Figure 3). In the Sol do Cerrado cultivar, increasing N and K application rates (up to 165-413 and 153-394 kg ha⁻¹) increased the Mn concentrations in the standard leaf, and the Mn content accumulated in the adjacent leaves increased up to application rates of 113-281 and 167-416 kg ha⁻¹ of N and K (Figure 3). However, application rates above these values reduced the concentrations and contents in these leaves, probably because the synergistic effect of N application favored P uptake, reducing Mn mobility and translocation to the leaves through the formation of manganese phosphate.
Potassium fertilization in the form of KCl was another factor affecting the concentrations of cationic micronutrients in the plants, owing to the negative effect of Cl⁻ on the uptake of these micronutrients, especially Mn, Zn, and Cu (PRADO et al., 2004), mainly brought about by leaching in the soil after the formation of complexes between these cationic micronutrients and the chloride anion.
The content of micronutrients in 10 leaves of the passion fruit plants was calculated through the following equation: MC = (Concentration × DM)/1000, in which MC = micronutrient content in 10 leaves (mg leaves⁻¹); Concentration = micronutrient concentration in the leaf (mg kg⁻¹); and DM = dry matter weight of the 10 leaves (g).
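As a sketch of this calculation (note that the dry matter term and the division by 1000 are inferred from the units, since that part of the equation was lost in the source text):

def micronutrient_content(concentration_mg_per_kg, dry_matter_g):
    # Micronutrient content in 10 leaves (mg leaves^-1), assuming
    # MC = concentration (mg kg^-1) x dry matter of the 10 leaves (g) / 1000.
    return concentration_mg_per_kg * dry_matter_g / 1000.0

# Example: 20 mg kg^-1 of Zn in 10 leaves weighing 15 g -> 0.3 mg leaves^-1
print(micronutrient_content(20.0, 15.0))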
FIGURE 1 - Mean dry matter weight of the leaf collected from the position adjacent to the fruit (adjacent leaf) and of the 4th leaf of the fruit-bearing branch (standard leaf) of passion fruit cultivars under fertilization with application rates of N and K through the soil.
FIGURE 2 - Concentration and content of Zn and Cu in leaf samples collected from the position adjacent to the fruit (adjacent leaf) and at the end of the fruit-bearing branch (standard leaf) of passion fruit cultivars under fertilization with application rates of N and K through the soil.
TABLE 1 - Chemical and physical composition of soil samples collected at depths of 0-20 and 20-40 cm in the experimental area of the UNIMONTES experimental farm. Janaúba, MG, Brazil. 2014.
TABLE 2 - Summary of analysis of variance of the data referring to dry matter weight (DM), concentration and content of Zn (ZnC and ZnA), Cu (CuC and CuA), Fe (FeC and FeA), Mn (MnC and MnA), and B (BC and BA) in the standard and adjacent leaves of the passion fruit plants (Cultivar) fertilized with N and K application rates. Janaúba, MG, Brazil. 2014. | 2019-04-02T13:12:26.569Z | 2017-12-04T00:00:00.000 | {
"year": 2017,
"sha1": "c3f2ae1eadd992c45625829519a867b5586e0b14",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbf/v39n4/0100-2945-rbf-39-4-e-788.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c3f2ae1eadd992c45625829519a867b5586e0b14",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
55894798 | pes2o/s2orc | v3-fos-license | The most famous fish: human relationships with fish as inferred from the corpus of online English books (1800−2000)
Despite the historically close connection between humans and fish, the question 'What is the most famous fish species?' has never been asked. I used the Google Ngram Viewer to estimate the frequency with which the common names of 250 fishes appear in the corpus of digitized English books published between 1800 and 2000. I propose the 'famon' as a unit of fame, with 1 famon = 10⁻⁶ relative % Ngram frequency. Twelve of the 250 common names are words which also have other uses in English and were thus not considered here, and 57 species had 0 famons. For the remaining 181 species, fame increased for 139 (76.8%) during all or part of 1800−2000. Goldfish Carassius auratus, the most common laboratory and aquarium fish and the second fish to be domesticated, is the most famous fish, reaching 80 to 117 famons after 1930. It was introduced to Europe from China about 325 to 450 yr ago and then to North America around 1850. Goldfish have penetrated into cultural aspects of human civilization (e.g. stamps, art, music). 'Goldfish' also appears in the corpus of simplified Chinese, French, German, Italian, and Spanish books. The results show the universality and dominance of goldfish in the digitized published heritage. This likely indicates that non-consumptive cultural aspects, including aesthetic, spiritual, and recreational components, play a central role in defining the relationship of humans with fish, being equally important as provisioning, regulating, and supporting services, and thus should be valued accordingly for conservation. However, cultural services have not yet been adequately integrated within the ecosystem service framework and are generally excluded from economic evaluations, a fact raising ethical issues with respect to their relative evaluation.
INTRODUCTION
Fish play an important role in aquatic ecosystems, spanning trophic levels from 2, for purely herbivorous fish such as Siganus spp., up to 5, for predatory species such as bluefin tuna Thunnus thynnus, swordfish Xiphias gladius, and several species of sharks (Cortés 1999, Stergiou & Karpouzi 2002, Froese & Pauly 2015). Humans have historically had very strong bonds with fish and fishing, with both playing an important role in ancient life and economy and thus in human well-being. This is very well illustrated by the fact that fish represented one of the 2 greatest passions of the ancient Athenians (Davidson 1997). Citing McEvoy (1986), Merchant (1997, p. 25) stated that 'Like the gold that had been discovered in California, fish were treated as gold nuggets, serving as the coin of trade.' Fish are an important, protein-rich, healthy food source (see Cunnane & Crawford 2003 for the relation between fish diet and the evolution of human brain size) with unique psychotropic properties (Reis & Hibbeln 2006). Fish are also a cultural source of inspiration for artistic/pictorial representations (e.g. wall paintings and frescos, mosaics, sculptures, coins), with the Minoan fresco the 'Little Fisher from Thera' from Santorini Island (dated to ~3600 yr ago) being the most famous (Stergiou 2005, 2011). Such a close relationship is depicted in the writings of ancient Greeks and Romans in specialized books on 'natural history' (e.g. 'History of Animals' by Aristotle and 'Naturalis Historia' by Pliny the Elder [23−79 AD], in which both describe various aspects of the life histories of fishes and other marine organisms), in books on fishing (e.g. a poem on fishing, entitled 'Halieutika,' by Oppianos of Cilicia [second half of the 2nd century AD]; Egerton 2001), in Greek tragedies (e.g. those of Aeschylus [6th to 5th century BC]), and in other writings (e.g. 'Histories' by Herodotus, 'The Deipnosophists' by Athenaeus [2nd to 3rd century AD]; Stergiou 2011). In their review of the cultural symbolism of fish and the psychotropic properties of their omega-3 fatty acids, Reis & Hibbeln (2006) maintained that fish have also been culturally considered as symbols of social healing and emotional well-being in religious and medical practices in different cultures for at least 6000 yr. Fish are also among the 3 top favorite pets (together with dogs and cats), even topping the list (in terms of the number of households with fish as pets) in some countries (e.g. Italy; https://en.wikipedia.org/wiki/Pet#Pet_popularity). The strong relationship between humans and fish is also indicated by recent, popular books such as Kurlansky's (1998) 'Cod: a biography of the fish that changed the world' and the New York Times best seller 'Four fish' by Greenberg (2010). The important role of fish for humans becomes clear when one considers that, of the 33 200 currently recognized fish species (Froese & Pauly 2015), about one-third are used by humans as food, in the fishmeal industry, for bait, in aquaculture, in recreational and subsistence fishing, and in the aquarium trade.
The relationship between humans and other animals, including fish, is very important because it largely determines how humans will interact with them (Kudo & Macer 1999), and there is deep and diverse philosophical thinking on the moral status of the animals with which we share our lives (Gruen 2014, and see www.iep.utm.edu/anim-eth/#H4), as well as on the management of their populations (Merchant 1997). In fact, despite the close relationship between humans and fish, the latter generally do not enjoy a level of compassion similar to that enjoyed by 'warm-blooded' vertebrates (Brown 2015).
Given the close relationship between humans and fish, it is no surprise that the latter attract a lot of media attention because of the alarmingly poor status of their stocks (e.g. Pauly et al. 1998, Vasilakopoulos et al. 2014), their occasional extremely high market prices (e.g. bluefin tuna), when very large-sized or rare fish species are accidentally caught and landed, and in the case of shark attacks. Yet, despite such a historically close connection between humans and fish, the question of which fish species is the most famous has never been asked or answered. The answer to this question will cast light on which factors (i.e. cultural, aesthetic, economic, recreational, subsistence) play a role and eventually define the relationship of humans with other organisms, in this case fish, and thus should be considered when evaluating different ecosystem services. This might have ethical implications given that these factors are not all considered to be of equal importance when evaluating ecosystem services.
Fame, or reputation (i.e. what is said or reported about a name), can be objectively quantified by estimating the frequency with which the name of an entity appears in various sources such as books (Michel et al. 2011). Michel et al. (2011) constructed a corpus of digitized books, developed a computational tool (the Google Ngram Viewer; later expanded by Lin et al. 2012) that estimates the percentage of times a word (or a phrase) appears in the corpus of books, and investigated its usefulness in the social sciences and humanities. Ngram has been successfully used in many fields of knowledge, from linguistics, literature, accounting, and computer and environmental sciences, to ethics and estimating university reputation rankings (Table 1). This shows the importance of the digitized availability of the millions of books online, as well as of Michel et al.'s (2011) tool, for all sciences and the humanities.
Here I used Ngram to investigate patterns in the use of the common names of 250 fish species in the corpus of digitized English books, with the aim of identifying the most 'famous' fish in the modern, English-speaking world.
MATERIALS AND METHODS
Ngram is an online tool (http://books.google.com/ngrams) that produces a graph in which the y-axis shows how many times a phrase occurs in a corpus of books (making up about 6% of all books ever printed; Lin et al. 2012) relative to all remaining phrases composed of the same number of words (i.e. relative frequency) during the same time (x-axis). A detailed account of the Ngram tool is given by Michel et al. (2011) and Lin et al. (2012), whereas an application guide is available online (http://books.google.com/ngrams/info#advanced). The analysis is available for 1800−2008, but data are more consistent for 1800−2000 (Lin et al. 2012).
I used Ngram to estimate the relative frequency of appearance of the common names of different fish species in the corpus of English books published between 1800 and 2000. Herein I define and use the famon (Gr. fími, fame, from which L. fama, fame, is derived) as a unit of fame, with 1 famon = 10⁻⁶ relative % Ngram frequency.
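As a minimal sketch of this unit conversion (the function name is illustrative, and the input is assumed to be the relative frequency exported from the Ngram Viewer as a percentage):

def percent_to_famons(relative_percent):
    # 1 famon = 1e-6 relative % Ngram frequency, so famons are simply
    # the relative percentage divided by 1e-6.
    return relative_percent / 1e-6

# Example: a relative frequency of 1.17e-4 % corresponds to 117 famons,
# the maximum value reported here for 'goldfish'.
print(percent_to_famons(1.17e-4))  # -> 117.0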
Currently, 33 200 fish species are recognized (Froese & Pauly 2015). Since it is not possible to check the relative frequencies of all 33 200 species in the online books, I used a subset (see Tables S1 & S2 in the Supplement at www.int-res.com/articles/suppl/e017p009_supp.xls). Firstly, I checked the relative frequency of occurrence of the common names of the 100 most viewed species in FishBase and of the 50 most important marine game fish (Table S1). The latter were taken from 'Sport Fishing Magazine,' which constructed the list based on the suggestions of the 61 top anglers and skippers of the world. I further estimated the relative frequencies of the common names of all fish species with landings > 235 000 t (for 2013) and of those farmed, both from the Food and Agriculture Organization (FAO), as well as of several other well-known 'top' fish species from various sources (Table S1). Finally, I checked the relative frequencies of 26 species randomly selected from FishBase (i.e. the first species that has a common name for each of the 26 letters of the English alphabet; Table S1).
For all sources providing only common names, I used only those that corresponded to one particular scientific name in FishBase. If the common name in the source differed from that in FishBase, then I checked both common names in Ngram and used the one with the highest relative frequency. Overall, I checked 250 unique species, which cover the different uses of fish by humans (i.e. commercial fishing, fishmeal industry, aquaculture, game fishing, bait, aquarium trade). Common names were checked for other uses in the English language with the '* word' and 'word *' Ngram options.
Fame may be related to the age of an entity's name (e.g. for universities: Stergiou & Tsikliras 2014). Thus, I also tested whether a relationship exists between the age of common names (i.e. 2015 minus the year of the first appearance of a common name in the books) and maximum fame for the 181 species which had frequencies > 0 famons. Finally, one might hypothesize that the larger a fish is (i.e. the more conspicuous or charismatic), the larger its fame will be. Thus, I also tested whether a relationship exists between maximum length, L_max (taken from Froese & Pauly 2015), and fame for the 181 species with fame > 0 famons.
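A sketch of these two tests, assuming the per-species values have already been tabulated (the arrays below are illustrative placeholders, not the study data; numpy and scipy are assumed to be available):

import numpy as np
from scipy import stats

# One value per species; in the study, n = 181 species with fame > 0 famons.
max_fame = np.array([117.0, 53.0, 5.2, 0.8, 12.5, 1.4])   # famons
name_age = np.array([315, 215, 120, 60, 180, 95])          # yr (2015 minus first appearance)
l_max = np.array([48.0, 140.0, 30.0, 12.0, 75.0, 25.0])    # cm, from FishBase

# Age vs. fame on log scales, as in Fig. 6 (the paper reports r^2 = 0.51).
res_age = stats.linregress(np.log10(name_age), np.log10(max_fame))
print(f"r^2 = {res_age.rvalue**2:.2f}, p = {res_age.pvalue:.3f}")

# L_max vs. log fame (the paper reports r = 0.143, p = 0.055, not significant).
r, p = stats.pearsonr(l_max, np.log10(max_fame))
print(f"r = {r:.3f}, p = {p:.3f}")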
RESULTS
Only 1 of the 250 species checked did not have a common name (Copadichromis azureus; top 100 FishBase species). Fame ranged between 0 famons (i.e. no Ngram frequencies for 1800−2000) for 57 species, to 117 famons for goldfish Carassius auratus. The common names of 7 species (flier Centrarchus macropterus, haddock Melanogrammus aeglefinus, meagre Argyrosomus regius, molly Poecilia sphenops, oscar Astronotus ocellatus, permit Trachinotus falcatus, and sturgeon Acipenser sturio) are words which also have other uses in the English language (e.g. person names, locations; Table S1). As a result, these common names attain fame levels as high as 4900 famons, depending on the common name (Table S2). For instance, haddock/Haddock exhibits a peak of 100 to 147 famons in 1922−1930 (Fig. 1). However, Haddock is also a common surname in English. Indeed, part of the 1922−1930 frequency peak is attributed to 'Mr. Haddock' and 'Mrs. Haddock,' which together amount to about 50 famons (Fig. 1), whereas other names appearing in Ngram are 'admiral/Admiral Haddock,' 'Richard Haddock,' and 'Captain Haddock.' Thus, the above-mentioned 7 species were excluded from the analyses. From the remaining 243 common names, which have frequencies ≤117 famons, 5 species also have common names which again have other uses in English (white cloud Tanichthys albonubes, morari Cabdio morar, bogue Boops boops, sergeant major Abudefduf saxatilis, beluga Huso huso; Tables S1 & S2), and thus their records were also not considered here.
Overall, 131 of the 238 English common names considered in the analysis have frequencies <1 famon (median 0.56). Of the 181 species having non-0 values, fame exhibited an increasing long-term trend for 139 species (76.8%), a declining trend for 8 species (4.4%), and no trend or another type of trend for 18 (9.9%) and 16 species (8.8%), respectively, between 1800 and 2000 or part of this period (Table S2). For 124 of the 181 species (68.5%), fame exhibited long-term cycles. For the study period (i.e. 1800−2000), almost 60% of the common names (108/181) appeared in the English books for the first time between 1800 and 1899 (Fig. 2).
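One simple way to assign such trend labels — a sketch only, since the paper does not state its exact criterion — is to fit a linear slope to each species' 1800−2000 famon series and test its sign:

import numpy as np
from scipy import stats

def classify_trend(years, famons, alpha=0.05):
    # Crude long-term trend label for one species' famon time series;
    # an illustrative criterion, not necessarily the one used in the paper.
    slope, _, _, p, _ = stats.linregress(years, famons)
    if p >= alpha:
        return "no clear trend"
    return "increasing" if slope > 0 else "declining"

years = np.arange(1800, 2001)
# Illustrative series: weak upward drift plus long-term cycles.
famons = 0.02 * (years - 1800) + 2 * np.sin((years - 1800) / 30) + 1
print(classify_trend(years, famons))  # -> "increasing"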
Goldfish dominates the fame spectrum after 1920, being the most famous fish (Fig. 3). Its fame exhibited long-term cycles and reached levels generally higher than 80 famons between 1930 and 2000, with a maximum of 117 famons in the late 1980s (Fig. 3). Of the remaining species, among the most dominant in the fame spectrum were: (1) rainbow trout Oncorhynchus mykiss, reaching 60 to 80 famons between 1980 and 2000 (maximum in 1990; Fig. 3); and (2) sea trout Salmo trutta, which generally dominated between 1800 and 1910, with frequencies as high as 50 famons and a maximum of 53 famons in the late 1940s (Fig. 4). Maximum fame in English books was significantly related to the age of the English common names for the 181 common names that had frequencies > 0 (Fig. 6). In contrast, no relationship was found between L_max and maximum fame (log scale; r = 0.143, n = 181, p = 0.055).
DISCUSSION
In the present study, Ngram was successfully used for identifying the most famous fish in the English-speaking world. Undoubtedly, apart from books there are several other sources (e.g. newspapers, magazines, media, news archives: Leetaru 2011, Michel et al. 2011; blogs and social networks: Dodds et al. 2011, Altmann et al. 2011, Ratkiewicz et al. 2011; Facebook: Schwartz et al. 2013; Scopus, Web of Science, Google Scholar: for determining the most famous fish in the scientific literature, including journals and conference proceedings) that are also important and useful for studying various scientific and cultural aspects, including the fame of entities, and which were not considered here.
The results of the Ngram analysis showed that the goldfish is the most famous fish in the English-speaking world based on the number of famons. Goldfish is a freshwater species of the family Cyprinidae, which is characterized by a wide phenotypic variability and a global distribution (Froese & Pauly 2015). It has been a model fish for laboratory experiments (Balon 2004) and is among the most popular aquarium fish species. It was the second fish species to be domesticated, after common carp Cyprinus carpio (Balon 2004). It is noteworthy here that, despite the fact that Darwin had an apparently small interest in fishes (i.e. only 0.7% of the total words written by Darwin refer to fishes), he wrote extensively on goldfish; in fact, this is the only fish that was given a section heading (Pauly 2008) in one of his books (Darwin 1868, p. 296−297), in which its domestication and variability in color and size are discussed. Goldfish form a monophyletic lineage, with all varieties being derived from one domestication event (Rylková et al. 2010).

Fig. 6. Relationship (r² = 0.51, n = 181, p < 0.05) between maximum fame (log) and age of a common name (2015 minus year of first appearance in the books; log) for 181 fish species.

By the early 1500s this species was introduced to Japan (Balon 2004), and it was first introduced to Europe in 1611 or 1691 (Mulertt 1883, p. 7) or as early as the 1550s (Darwin 1868), most probably in Portugal, from where it spread to Great Britain (in 1691) and other European countries (Mulertt 1883, p. 7). It had become a very popular pet by the 1700s (Brunner 2003) and was so popular in Europe that it was considered a symbol of good luck and fortune and was offered by husbands to their wives on their first wedding anniversary (Mulertt 1883, Brunner 2003). Although Smartt (2001) mentioned that the earliest record of the introduction of goldfish from Europe to the US dates back to 1874, it was probably introduced earlier, sometime in the 1850s, given that by 1865 it was sold in a New York pet shop and the first hatchery started in Ohio in 1882 (Brunner 2003). Thereafter, it became popular throughout the US (Brunner 2003), a fact that is also reflected in its relative frequency in the English books, which has increased exponentially since 1850 (Fig. 3).
Undoubtedly, one expects a famous entity to be the subject of different aspects of human culture. Indeed, goldfish are the protagonists in several movies (e.g. see www.imdb.com/title/tt2555048/), TV series (e.g. 'Being Human', a UK television series), children's literature (e.g. Dr. Seuss's 1957 'The Cat in the Hat', Helen Palmer's 1961 'A Fish Out of Water'), and poems, and are featured in paintings (e.g. see www.artistsandart.org/2009/09/goldfish-in-painting.html), including many by Henri Matisse, and on stamps (see the Appendix). Goldfish are also the subjects of many quotes attributed to celebrities (e.g. Henri Matisse, theoretical physicist Stephen Hawking, Princess Margaret, actor Paul Rudd, and writer Ashwin Sanghi)¹. Interestingly, the raising of goldfish by children modifies children's biological inference (i.e. raisers use their knowledge of goldfish to predict and explain the reactions of other aquatic animals; Hatano & Inagaki 2002). Thus, as Smartt (2001, p. 1) put it, '… goldfish could be proclaimed as the Millennium fish!' Given that fame is relative, one question that arises is how famous goldfish are when compared to other domesticated animals or even humans. For instance, horses and dogs were the only domesticated animals appearing in the top 10 most favorite animals voted in 2004 by more than 50 000 viewers (from 73 countries) of the Animal Planet cable and satellite channel (http://news.bbc.co.uk/cbbcnews/hi/newsid_4070000/newsid_4073100/4073151.stm). The fame of goldfish (maximum 117 famons) is more than 40 times smaller than that of horses (i.e. the word 'horse' displays frequencies that decline from 9000−11 000 famons in 1800−1900 to 4000−5000 in 1960−2000) and dogs (the frequency of the word 'dog' increases from 2300 famons in 1800 to over 4000 famons in 2000). This agrees with the fact that, when exploring the relationship of Japanese people with animals, Kudo & Macer (1999) found that this relationship depends on how familiar they are with a particular species and its perceived function and role. The Japanese were overall more familiar with dogs and cats than with fish (which were mentioned by 3% of the people interviewed). Both horses and dogs were domesticated many thousands of years before goldfish and played a vital role in human well-being (Balon 2004), with horses having a direct and indirect impact on the US economy of more than $100 billion (The American Horse Council 2005). Yet, the fame of goldfish is slightly higher than that of Albert Einstein and Alexander the Great (74−95 and 91−107 famons, respectively, between 1980 and 2000), but lower than those of, e.g., Aristotle and Plato (for both of whom frequencies fluctuate around 1500 famons between 1800 and 2000).
The analysis presented here suffers from certain biases with respect to the frequency estimations. Firstly, several species might have more than one common name, a fact affecting their Ngram frequencies. In the present study, the common names in the sources from which the 250 species were extracted matched the FishBase English common names, with the exception of 23 species (Table S1 in the Supplement). For 3 of the 23 species, either the FishBase or the source common name has other uses in English, and thus the corresponding frequencies were not used. For Salmo trutta, the common name in one source is 'brown trout' whereas in FishBase it is 'sea trout.' Because both common names appear with similar, high frequencies, this was the only case in which their frequencies were summed. For the remaining 19 species, the differences in the maximum frequencies between the FishBase and source common names were very small. In 5 cases, both common names had 0 famons; in 9 cases the difference between the frequencies of the 2 common names was <1 famon (with all maxima being also <1 famon); and in 5 cases the difference was between 1.59 and 6.66 famons (with all maxima being <6.8 famons). Thus, this bias does not affect the results with respect to the dominance of goldfish. Secondly, for species having a common name that is a word which also has other uses in the English language (e.g. person names, locations, adjectives, other animal species), sophisticated disambiguation algorithms must probably be used on the downloaded Ngram dataset (see Acerbi et al. 2013) in order to identify the correct frequency of this common name within a conceptual context related to fish. In any case, this bias leads to smaller frequencies for the implicated species and thus does not affect the fame status of goldfish. The above-mentioned 2 biases also show the importance of coining a unique common name corresponding to only one organism (e.g. oscar or flier vs. bluefish, goldfish).

¹ Celebrity quotes are available at www.brainyquote.com/quotes/keywords/goldfish.html. Examples listed here include Henri Matisse: 'I wouldn't mind turning into a vermilion goldfish.' Stephen Hawking: 'A few years ago, the city council of Monza, Italy, barred pet owners from keeping goldfish in curved bowls... saying that it is cruel to keep a fish in a bowl with curved sides because, gazing out, the fish would have a distorted view of reality. But how do we know we have the true, undistorted picture of reality?' Princess Margaret: 'I have as much privacy as a goldfish in a bowl.' Paul Rudd: 'I think there's something great and generic about goldfish. They're everybody's first pet.' Ashwin Sanghi: 'The average human attention span was 12 seconds in 2000 and 8 seconds in 2013. A drop of 33%. The scary part is that the attention span of a goldfish was 9 seconds, almost 13% more than us humans. That's why it's getting tougher by the day to get people to turn the page. Maybe we writers ought to try writing for goldfish!'
Thirdly, the common names of fish could change over time, and this might affect the estimates of relative frequencies. This effect was not examined here. However, the facts that about 60% of the common names appeared in the English books for the first time between 1800 and 1899 (without excluding the possibility that they had appeared much earlier in English books or in books in other languages), and that the frequencies of about 77% of the common names exhibited long-term increasing trends during the study period, both indicate that the effect of such a bias would be minimal. Finally, the potential effect of the relative availability of books was also not examined here.
The present analysis refers to the corpus of English books (and English common names). The corpus of digitized books includes books in many other languages (i.e. Spanish, French, German, Italian, Hebrew, Russian, simplified Chinese), and the results of a similar analysis could be different in these (and other) languages, especially so for languages that were more important than English until recently. It is worth mentioning here, however, that 'goldfish' is not only used as a common name in English-speaking countries but also in several other countries (e.g. Mexico, Russia, Uzbekistan, Austria; Froese & Pauly 2015). In fact, the word 'goldfish' appears in the corpus of books in other languages with relative frequencies ranging from 3.1 to 67 famons and years of first appearance ranging from 1800 to 1929, depending on the language (i.e. simplified Chinese: first appearance in 1929, maximum 67 famons; French: first appearance in 1800, maximum 7.4 famons; German: first appearance in 1840, maximum 11.7 famons; Italian: first appearance in 1886, maximum 3.8 famons; Spanish: first appearance in 1876, maximum 3.1 famons). This shows the universality and dominance of goldfish among fishes in the digitized published heritage.
I found that the older a common name is, the greater the fame of that species, irrespective of body size. This indicates that fame is not related to the apparent conspicuousness of a species (e.g. a big fish) but rather to its historical relation to humans. Fish are generally characterized by continually increasing fame, which must be attributed to the fact that fish have always been present and their fame is accumulated through experiences that are shared from generation to generation. The same is also true of university reputations (Stergiou & Tsikliras 2014). This fame accumulation certainly reflects the continually growing importance of fish to the well-being of humans and agrees with the new common names appearing in the books over time. Linguistically, trends in relative frequencies are also related to the birth and death of words (see Petersen et al. 2012), in this case of common names. Finally, the frequencies of the majority of the fish species examined here are characterized by long-term cycles, which might reflect various events. However, this is outside the scope of this work (but see Gao et al. 2012 for analyses of long-range correlations in Ngram frequencies).
The results of the present study give rise to cultural and conservation implications as well as ethical considerations. The fact that the goldfish is the most famous fish indicates that non-consumptive cultural aspects, including aesthetic, spiritual, educational, and recreational components, play a central role in defining the relationship of humans with other organisms, in this case fish. This agrees with the findings of Kudo & Macer (1999), who reported that people interviewed on why they like animals ranked the aesthetic/spiritual aspects (i.e. their cuteness or their behavior) very high. It is logical to assume that this also applies to the ecosystems in which these organisms are embedded. Ecosystems provide various services, which generate benefits contributing to human well-being (MEA 2005). Ecosystem services have become important for planning, conservation, decision making, and management, and research on cultural services is growing as a multidisciplinary research field (Chan et al. 2012a,b, Hernández-Morcillo et al. 2013, Milcu et al. 2013, Satz et al. 2013). The present study shows that ecosystem cultural services and benefits, which are closely associated with the remaining services (Chan et al. 2012a,b), are as important as the provisioning (e.g. food, fresh water), regulating (e.g. climate regulation), and supporting services (e.g. nutrient cycling) (see also Holmlund & Hammer 1999 for services generated by fish). Yet, despite their importance, cultural services have not yet been adequately integrated within the ecosystem service framework and, with the exception of tourism (Hernández-Morcillo et al. 2013), they are excluded from economic evaluations because there is no commonly accepted framework for doing so, and/or because nonmaterial values cannot be characterized using monetary methods (Chan et al. 2012a,b, Daniel et al. 2012, Hernández-Morcillo et al. 2013, Milcu et al. 2013, Satz et al. 2013). This raises ethical issues related to recent attempts at ecosystem planning and management based on or using ecosystem services (see also Jax et al. 2013). In fact, there is an 'ethical' need to assess cultural services (Hernández-Morcillo et al. 2013) and to integrate them with other services (Satz et al. 2013). Careful consideration and valuation of cultural services in conservation and management plans will have important synergies with the preservation of the remaining services (Daniel et al. 2012). To that end, new methods derived from social science (Chan et al. 2012a) and new conceptual approaches should be developed (Milcu et al. 2013), with Ngram being one potential approach. In addition, more effort should be devoted to involving relevant stakeholders through the various phases of decision making (Hernández-Morcillo et al. 2013).
Table 1. Applications and uses of Michel et al.'s (2011) Ngram tool in different disciplines. | 2018-12-05T17:19:59.918Z | 2017-05-17T00:00:00.000 | {
"year": 2017,
"sha1": "4fc763c6384328359c5a2794154dcbcd99fdd132",
"oa_license": "CCBY",
"oa_url": "https://www.int-res.com/articles/esep2017/17/e017p009.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4fc763c6384328359c5a2794154dcbcd99fdd132",
"s2fieldsofstudy": [
"History",
"Linguistics"
],
"extfieldsofstudy": [
"Economics"
]
} |
117138591 | pes2o/s2orc | v3-fos-license | The extended atmospheres of Mira variables probed by VLTI, VLBA, and APEX
We present an overview of our project to study the extended atmospheres and dust formation zones of Mira stars using coordinated observations with the Very Large Telescope Interferometer (VLTI), the Very Long Baseline Array (VLBA), and the Atacama Pathfinder Experiment (APEX). The data are interpreted by combining recent dynamic model atmospheres with a radiative transfer model of the dust shell, and by combining the resulting model structure with a maser propagation model.
Introduction and project outline
Mass-loss becomes increasingly important toward the tip of the AGB evolution. While the mass-loss process during the AGB phase is the most important driver for the further stellar evolution toward the PN phase, the details of the mass-loss process and its connection to the structure of the extended atmospheres and the stellar pulsation are not well understood and are currently a matter of debate.
Here, we present an overview of our established project of coordinated interferometric observations at infrared and radio wavelengths. Our goal is to establish the radial structure and kinematics of the stellar atmosphere and the circumstellar environment, to better understand the mass-loss process and its connection to stellar pulsation. We also aim at tracing asymmetric structures from small to large distances in order to constrain shaping processes during the AGB evolution, which may lead to the observed diversity of shapes of planetary nebulae. We use two of the highest-resolution interferometers in the world, the Very Large Telescope Interferometer (VLTI) and the Very Long Baseline Array (VLBA), to study AGB stars and their circumstellar envelopes from near-infrared to radio wavelengths. For some sources, we have included near-infrared broad-band photometry obtained at the South African Astronomical Observatory (SAAO) in order to derive effective temperature values. We have started to use the Atacama Pathfinder Experiment (APEX) to investigate the line strengths and variability of high-frequency SiO maser emission.
Observations
Our pilot study included coordinated observations of the Mira variable S Ori, including VINCI K-band measurements at the VLTI and SiO maser measurements at the VLBA (Boboltz & Wittkowski 2005). For the Mira variables S Ori, GX Mon, and RR Aql and the supergiant AH Sco, we obtained long-term mid-infrared interferometry covering several pulsation cycles using the MIDI instrument at the VLTI, coordinated with VLBA observations of the 42.8 GHz and 43.1 GHz SiO maser transitions (Karovicova et al., these proceedings). For the Mira variables R Cnc and X Hya, we coordinated near-infrared interferometry (VLTI/AMBER), mid-infrared interferometry (VLTI/MIDI), VLBA SiO maser observations, VLBA H₂O maser observations, and near-infrared photometry at the SAAO (work in progress). Most recently, measurements of the v = 1 and v = 2 J = 7−6 SiO maser transitions toward our program stars were obtained at two epochs using APEX (work in progress).
Modeling
The P and M model series by Ireland et al. (2004a,b) were chosen as the currently best available option to describe dust-free Mira star atmospheres. Wittkowski et al. (2007) added an ad hoc radiative transfer model to these model series to describe the dust shell as observed with the mid-infrared interferometric instrument MIDI, using the radiative transfer code mcsim_mpi by Ohnaka et al. (2007). Gray et al. (2009) combined these hydrodynamic atmosphere plus dust shell models with a maser propagation code in order to describe the SiO maser observations. Most recently, we have also used the new dynamic atmosphere series (CODEX series) by Ireland et al. (2008), which uses the opacity sampling method and is available for additional stellar parameters compared to the P/M series.
Results
The pilot study on the Mira variable S Ori (Boboltz & Wittkowski 2005) provided concurrent estimates of the stellar diameter and of the SiO maser ring radii (for each transition) at phase 0.7. The stellar diameter was estimated by measuring the uniform disk diameter and correcting it for the continuum diameter using dynamic model atmospheres as described in Sect. 3. This result is free of the usual uncertainty inherent in comparing observations widely spaced in phase and/or using uniform disk diameters directly, which may be contaminated by extended molecular layers. Fedele et al. (2005) estimated in an analogous way SiO maser ring radii of 1.9 R_cont and 1.8 R_cont, respectively, for the Mira variable R Leo at phase 0.1. Mid-infrared interferometric data of S Ori were taken concurrently with an additional three epochs of VLBA observations of the same SiO maser transitions. The modeling of the MIDI data resulted in phase-dependent continuum photospheric angular diameters. The dust shell could best be modeled with Al₂O₃ grains alone, located close to the stellar photosphere with inner radii between 1.8 and 2.4 photospheric radii. Mean SiO maser ring radii were found to lie between about 1.9 and 2.4 stellar continuum radii. The maser spots marked the region of the molecular atmospheric layers shortly outward of the steepest decrease of the mid-infrared model intensity profile. These results suggested that the SiO maser shells are co-located with the Al₂O₃ dust shell near minimum visual phase. Their kinematics showed a velocity gradient at all epochs, with masers toward the blue- and red-shifted ends of the spectrum lying closer to the center of the distribution than masers at intermediate velocities. This phenomenon was interpreted as a radial gas expansion with a velocity of about 10 km/s. Fig. 1 shows a sketch of the radial structure of S Ori's circumstellar environment as derived from this study. A similar, but longer, study of the Mira variable RR Aql, which shows a silicate dust chemistry, is presented by Karovicova et al. in these proceedings.
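For context, the uniform disk diameter mentioned above is obtained by fitting the standard uniform disk visibility function, V(B) = 2 J₁(x)/x with x = πBθ/λ, to the measured visibilities. The sketch below fits synthetic K-band squared visibilities (all values and names are illustrative; the subsequent correction to a continuum diameter using dynamic model atmospheres is not shown):

import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

MAS = np.pi / (180 * 3600 * 1000)  # milliarcseconds to radians

def ud_visibility2(baseline_m, theta_mas, wavelength_m=2.2e-6):
    # Squared visibility of a uniform disk of angular diameter theta (mas).
    x = np.pi * baseline_m * theta_mas * MAS / wavelength_m
    x = np.where(x == 0, 1e-12, x)  # avoid 0/0 at zero spatial frequency
    return (2 * j1(x) / x) ** 2

# Synthetic V^2 data for a 9 mas disk observed at baselines of 10-120 m.
rng = np.random.default_rng(1)
baselines = np.linspace(10, 120, 15)
v2_obs = ud_visibility2(baselines, 9.0) + rng.normal(0, 0.01, baselines.size)

theta_fit, _ = curve_fit(ud_visibility2, baselines, v2_obs, p0=[8.0])
print(f"fitted UD diameter: {theta_fit[0]:.2f} mas")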
The combination of the dynamic model atmospheres plus dust shell model with the maser propagation model by Gray et al. (2009) showed that modeled SiO masers formed in rings with radii consistent with those found in the VLBA observations described above and in earlier models. This agreement required the adoption of a radio photosphere with a radius about twice that of the near-infrared continuum photosphere, in agreement with observations. Maser rings, a shock, and the 8.1 µm radius, dominated by optically thick water layers, appeared to be closely related. The maser ring variability and number of spots may not be consistent with observations, which may be explained by re-setting the masers in the model at each phase.
Near-infrared spectro-interferometric observations of AGB stars using the AMBER instrument were first obtained by Wittkowski et al. (2008). These observations, covering 29 spectral channels between 1.29 µm and 2.32 µm, exhibited significant variations as a function of spectral channel that could only be explained by a variation of the apparent angular size with wavelength. This 'bumpy' visibility curve was interpreted as a signature of molecular layers lying above the continuum-forming photosphere, at near-infrared wavelengths mostly CO and H₂O. The variation of visibility and corresponding diameter values resembles well the predictions of dynamic model atmospheres that naturally include these atmospheric molecular layers. Similar bumpy visibility curves were subsequently also seen for the red supergiant VX Sgr (Chiavassa et al. 2010), the semi-regular AGB star RS Cap (Marti-Vidal et al., in preparation), and three OH/IR stars (Ruiz Velasco et al., these proceedings), indicating that close molecular layers may be a common phenomenon of cool evolved stars.
Medium-resolution (R ∼ 1500) visibility functions of the Mira variable R Cnc obtained within the project discussed here confirm the conclusion that Mira variables show wavelength-dependent angular diameters when observed with spectro-interferometric techniques. In particular, the CO band-heads are nicely visible with the AMBER medium-resolution mode. The data are well consistent with predictions by dynamic model atmospheres of the P/M as well as the CODEX series, where the latter provides a better agreement, in particular for the CO band-heads. R Cnc shows closure phase values that are significantly different from 0° and 180°, and thus indicate a significant deviation from point symmetry. The interpretation of the closure phase measurements is work in progress. They might indicate a complex non-spherical stratification of the extended atmosphere, and may reveal whether observed asymmetries are located near the photosphere or in the outer molecular layers. The measured angular diameter values, together with the SAAO photometry, result in phase-dependent effective temperature values that are roughly consistent with the effective temperature of the best-fitting model atmospheres of the series.
Our recent APEX observations of the v = 1 and v = 2 J = 7−6 SiO maser transitions of AGB stars showed a variability of the maser intensity that is stronger than for the centimeter SiO maser transitions. Also, different ratios between the v = 1 and v = 2 transitions were detected for different sources and phases, where in some cases only one of the two transitions may be present. The combination of dynamic atmosphere and maser propagation models by Gray et al. (2009) showed the v = 1 transition but not the v = 2 transition. Earlier such models (Humphreys et al. 2002) showed both transitions; these had stronger shocks and higher post-shock temperatures compared to the more recent models. It is also known that infrared line overlap of SiO and H₂O can deeply affect the pumping of some SiO maser transitions and lead to anomalous maser intensities (e.g. Bujarrabal et al. 1996).
Summary
We have observed a sample of AGB stars using near-infrared, mid-infrared, and radio interferometry. Near-infrared spectro-interferometry has proven to be a powerful tool to study the complex atmospheres of AGB stars, including atmospheric molecular layers, most importantly H₂O and CO. These observations are well consistent with predictions by recent dynamic model atmospheres. Near-infrared closure phase measurements indicate a complex non-spherical stratification of the atmosphere. The addition of near-infrared photometry allows us to determine phase-dependent effective temperature values. Mid-infrared interferometry constrains dust shell parameters, including Al₂O₃ dust with inner boundary radii of about 2 photospheric radii and silicate dust with inner boundary radii of about 4 photospheric radii. SiO maser transitions observed with the VLBA (42.8 GHz and 43.1 GHz) lie in the extended atmosphere seen by near-infrared and mid-infrared interferometry, and may be co-located with Al₂O₃ dust. Their kinematics indicate motion such as outflow. The observed location relative to the stellar photosphere is consistent with predictions of combined hydrodynamic and maser propagation models. APEX millimeter observations indicate high-frequency SiO masers that are probably located very close to the photosphere and that show strong variability. We plan to add millimeter interferometry to this study using the Atacama Large Millimeter Array (ALMA) in order to obtain maps of high-frequency SiO masers.
"year": 2011,
"sha1": "9ce88467181a7938b189e7e2fea3c83884b00582",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9ce88467181a7938b189e7e2fea3c83884b00582",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
15259607 | pes2o/s2orc | v3-fos-license | Modulation of TLR2, TLR4, TLR5, NOD1 and NOD2 receptor gene expressions and their downstream signaling molecules following thermal stress in the Indian major carp catla (Catla catla)
Toll-like receptors (TLRs) and nucleotide-binding and oligomerization domain (NOD) receptors are pattern recognition receptors (PRRs) that recognize pathogen-associated molecular patterns (PAMPs) and play a crucial role in innate immunity. In addition to PAMPs, PRRs recognize endogenous molecules released from damaged tissue or dead cells [damage-associated molecular patterns (DAMPs)] and activate signaling cascades to induce inflammatory processes. In the aquatic environment, large variations in seasonal and diurnal water temperature cause heat and cold stress in fish, resulting in tissue injury and mortality. In the Indian subcontinent, catla (Catla catla) is an economically important freshwater fish species and is prone to thermal stress. To investigate the response of pattern recognition receptors to thermal stress, we analyzed TLR (TLR2, TLR4 and TLR5) and NOD (NOD1 and NOD2) receptor gene expression in catla following heat and cold stress. Analysis of tissue samples (gill, liver, kidney and blood) of the thermally stressed and control fish by quantitative real-time PCR (qRT-PCR) assay revealed significant (p < 0.05) induction of TLR2, TLR4 and NOD2 gene expression in the majority of the tested tissues of the treated fish as compared to the control. The expression of the TLR5 and NOD1 genes was also induced in the heat- and cold-stressed fish, but was mostly restricted to the blood. The downstream signaling molecules of the TLR and NOD signaling pathways, viz. MyD88 (myeloid differentiation primary response gene 88) and RICK (receptor-interacting serine-threonine protein kinase-2), were also induced in the thermally stressed fish, suggesting the engagement of the TLR and NOD signaling pathways during thermal stress.
Introduction
The success of aquaculture depends on providing an optimum and congenial environment to fish, which subsequently helps to achieve a higher survival rate and growth (Boyd and Tucker 1998). The health status of aquatic animals is uniquely influenced by their immediate surroundings, viz. pH, salinity, temperature, ambient light intensity, presence of contaminants, dissolved oxygen concentration, etc. (Tort 2011). Among these, temperature is one of the most important abiotic factors that play a critical role in the life of poikilothermic animals like fish (Fry 1971; Brett 1971; Stewart et al. 2002). Various fish species differ in their optimal temperature requirements (Sharma et al. 2014). Beyond that limit, fish experience thermal stress resulting in tissue injury and are prone to infection by opportunistic pathogens (Das et al. 2004; Gordon 2005; Dalvi et al. 2009). To defend against pathogenic invasion, fish primarily depend upon nonspecific or innate immunity contributed by pattern recognition receptors (PRRs) like toll-like receptors (TLRs) and nucleotide-binding and oligomerization domain (NOD) receptors. In addition to PAMP (pathogen-associated molecular pattern) recognition, TLRs sense DAMPs (damage-associated molecular patterns), which are endogenous host molecules, viz. fibronectin (Okamura et al. 2001), heparin sulfate (Johnson et al. 2002), biglycan (Schaefer et al. 2005), fibrinogen (Smiley et al. 2001), oligosaccharides of hyaluronan breakdown products (Jiang et al. 2005; Taylor et al. 2004, 2007), heat shock proteins (Yu et al. 2010), high mobility group box 1 (HMGB1) (Tang et al. 2011), tenascin-C (Midwood et al. 2009), cardiac myosin (Zhang et al. 2009), S100 proteins, thioredoxin-interacting protein (TXNIP) (Yu et al. 2010), etc. Physiologically, these endogenous ligands are localized in different cellular compartments, but under stress they are either released passively from injured tissues/dying cells or actively secreted by activated cells via a nonconventional lysosomal route (Pollanen et al. 2009). Endogenous TLR ligands act as alarmins and may serve as early warning signals to innate and adaptive immunity (Matzinger 2002; Seong and Matzinger 2004). Recognition of DAMPs by PRRs activates signaling cascades resulting in the induction of cytokines, recruitment of more immune cells, and repair of damaged tissue (Medzhitov 2008). In addition to TLRs, NOD-like receptors (NLRs) have also been shown to respond to both microbial components (Franchi et al. 2012) and endogenous ligands derived from tissue/cellular injuries (Ting et al. 2008; Tschopp and Schroder 2010; Krishnaswamy et al. 2013; Monie 2013).
The global climate is rapidly changing, resulting in significant shifts in water temperatures and stress in various fish species (Jain and Kumar 2012). Variation in water temperature has been shown to modulate the expression of TLR gene transcripts in zebrafish (Danio rerio) (Sundaram et al. 2012). In India, among the various freshwater fish species, catla (Catla catla) is one of the most commercially important and highly favored fish in the farming industry. Therefore, this work was undertaken to investigate the response of TLRs and NOD receptors in catla during thermal stress.
Fish
Catla fry (0.676 ± 0.026 g) were obtained from a local fish farm and stocked in 50 L glass aquaria in the wet laboratory. Before the start of the experiment, acclimatization was carried out for 3 weeks at 25°C to avoid handling stress. The fish were fed twice a day with laboratory-prepared feed (40 % protein) at the rate of 5 % of their body weight. Aeration was constant to maintain a high oxygen level (6.21-7.14 mg/l) and continuous mixing of the water throughout the study period.
Thermal stress and sampling
Fish were randomly distributed among 12 glass aquaria, each containing 40 fish. Each aquarium was connected to a filtration unit and a cooling/heating unit, which helped to maintain the desired temperature in the aquarium. The used water from the fish culture unit was first circulated through the filtration unit, then through the cooling or heating unit, and finally back into the fish culture unit. The ambient temperature was 25 ± 2°C during this period. Fish acclimatized at 25°C were exposed to six different temperatures, viz. 10, 15, 20, 25, 30 and 35°C. Two replicates were used for each temperature. The 25°C treatment was considered the ambient temperature (control). Two groups were maintained above and three groups below the control temperature. Each experimental temperature was achieved by changing the temperature at 1°C per 12 h, starting from the acclimatization temperature of 25°C. In this way, the treated aquaria attained 20 and 30°C after 2.5 days, and 10°C after 7.5 days.
For sampling, fish were collected from each tank (n = 10) 12 h after the assigned temperature was reached, to study the immediate effect of stress after exposure. Fish were collected again after 7 days to study the effect of chronic stress. In total, two samplings were conducted in all treatments, except at 10°C, where the fish died before the second sampling. Fish were taken out of the experimental tank and anesthetized with MS-222 (Sigma, USA), following which gill, liver, kidney and blood were collected separately in TRIzol reagent for RNA extraction and further study.
RNA isolation, cDNA synthesis and quantitative real-time PCR analysis
Total RNA was extracted from the TRIzol reagent-treated samples following the manufacturer's protocol (Invitrogen, USA). The concentration of RNA was measured by UV spectrophotometer (Biophotometer Plus, Eppendorf, Germany), and the quality was assessed by observing the intensity of the 28S and 18S rRNA (ribosomal RNA) bands in a 1 % agarose gel. To synthesize the first-strand cDNA (complementary DNA), 1 µg of total RNA was first treated with 1 unit of DNase I (MBI, Fermentas, USA) and was reverse transcribed with an oligo-dT primer and the RevertAid 1st strand cDNA synthesis kit (MBI, Fermentas, USA). PCR amplification of the β-actin gene was carried out to confirm cDNA synthesis.
To study the basal expression of the innate immune genes (TLR2, TLR4, TLR5, NOD1 and NOD2) in gill, liver, kidney and blood of catla fry, and their in vivo modulation following variations in water temperature, quantitative real-time PCR (qRT-PCR) was employed. The qRT-PCR was performed in a LightCycler® 480 II real-time PCR detection system (Roche, Germany), and the following reagents were added to a 10 µl reaction volume: cDNA, 1 µl; FW and RV primers (2.5 µM each; Table 1), 0.25 µl each; 2X LightCycler® 480 SYBR Green I master mix (Roche, Germany), 5 µl; and PCR-grade water, 3.5 µl. PCR amplifications were performed in triplicate wells under the following conditions: initial denaturation at 95°C for 10 min, followed by 45 cycles of 94°C for 10 s; 51°C (MyD88)/55°C (TLR4)/56°C (NOD1)/58°C (TLR2, TLR5, RICK and β-actin)/60°C (NOD2) for 10 s; and 72°C for 10 s. As a negative control, a qRT-PCR reaction without cDNA was included. To determine PCR efficiencies, qRT-PCR with serial dilutions of cDNA was carried out. The efficiencies were ~100 %, which allowed the use of the 2^−ΔΔCT method to calculate the relative expression of the target genes against that of the reference gene, β-actin. Eight µl of the real-time PCR products were loaded in a 2 % agarose gel to verify the specificity of the product size. The relative expression ratios were obtained by normalizing the expression of the target gene, as determined by the mean crossing point (Cp) deviation, to that of the reference gene, β-actin, following the 2^−ΔΔCT method (Livak and Schmittgen 2001). The data obtained from the qRT-PCR analysis were expressed as the mean of two experiments ± standard error (SE), and the significant difference between control and treated groups at each time point was determined by Student's t test using Microsoft Excel 2010 with p < 0.05 as the significance level.
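A sketch of the 2^−ΔΔCT calculation of Livak and Schmittgen (2001) used here, with illustrative Cp values (the replicate numbers in the t-test example are hypothetical):

import numpy as np
from scipy import stats

def fold_change(cp_target_treated, cp_ref_treated, cp_target_control, cp_ref_control):
    # Relative expression by the 2^-ddCT method (Livak and Schmittgen 2001);
    # each argument is the mean crossing point (Cp) for one gene and group.
    d_ct_treated = cp_target_treated - cp_ref_treated   # normalize to beta-actin
    d_ct_control = cp_target_control - cp_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Cp values for a target gene vs. beta-actin: ~9-fold induction.
print(fold_change(24.1, 18.0, 27.4, 18.1))

# Significance between groups: Student's t-test on per-replicate dCT values.
d_ct_treated = np.array([6.0, 6.2, 5.9])
d_ct_control = np.array([9.2, 9.4, 9.3])
print(stats.ttest_ind(d_ct_treated, d_ct_control))

Note that the t-test above uses scipy for convenience; the study itself performed the same test in Microsoft Excel.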
Results and discussion
Tissue-specific expression of innate immune genes

The qRT-PCR data revealed wide expression of the TLR2, TLR4, TLR5, NOD1 and NOD2 genes across the tested organs/tissues, but the magnitude of their expression varied. The lowest expression of TLR2, TLR4 and TLR5 was observed in blood. As compared to the blood, TLR2 gene expression in gill and kidney was ~5.5-fold higher (Fig. 1a), and TLR4 and TLR5 in liver were ~7-fold (Fig. 1b) and ~150-fold (Fig. 1c) higher, respectively. Among the NOD receptors, the lowest expression of NOD1 was detected in blood and the highest (~7-fold) in liver (Fig. 1d). In contrast, NOD2 expression was lowest in liver and highest (~20-fold) in blood (Fig. 1e). The expression of the TLR2, TLR5, NOD1 and NOD2 genes in the embryonic developmental stages and in various organs/tissues was previously reported in Labeo rohita (Swain et al. 2012, 2013a) and Cirrhinus mrigala (Basu et al. 2012a, b, 2013). Catla is closely related to rohu and mrigal. Therefore, the expression of the TLR and NOD genes in gill, liver, kidney and blood was expected. The constitutive expression of the TLR and NOD genes may indicate their availability as innate immune receptors in the early developmental stages of catla.
Modulation of TLR and NOD gene expression
We monitored the immediate (12 h post-exposure) and late (7 days post-exposure) responses of the TLR2, TLR4, TLR5, NOD1 and NOD2 receptors in catla fry by analyzing the tissue samples (gill, liver, kidney and blood) of control fish (maintained at 25°C), cold-stressed fish (exposed to 10, 15 and 20°C) and heat-stressed fish (exposed to 30 and 35°C) through quantitative real-time PCR (qRT-PCR) assay. At 7 days post-exposure, all catla fry remained alive at 15, 20, 30 and 35°C, but at 10°C, all fish died.
Toll-like receptor-2
In response to the early (12 h) thermal stress, TLR2 gene expression was significantly (p < 0.05) up-regulated in gill, liver, kidney and blood of the treated fish as compared to control (Fig. 2a). In gill, TLR2 expression was highest at 20°C (~10-fold) and it declined gradually with further lowering of temperature. A similar trend of TLR2 expression was also observed in liver, kidney and blood at 20 and 15°C. At 10°C, TLR2 expression was found to be suppressed in all tested tissues. Due to heat stress, TLR2 induction in gill, liver, kidney and blood was also increased at 30 and 35°C. Among the organs, maximum induction of TLR2 was observed in liver at 30°C (~6-fold) followed by gill and blood. Fish exposed at 35°C expressed higher TLR2 in all tested tissues than the control fish maintained at 25°C. During late thermal stress (7 days), the pattern of TLR2 gene expression in gill, liver and kidney was almost similar to that observed in early (12 h) thermal stress, but the magnitude of the response in terms of fold change was different (Fig. 2b). In contrast to other tissues, TLR2 gene expression in blood was down-regulated in the treated fish group as compared to control.
Toll-like receptor-4
Organ/tissue-specific modulation of TLR4 was detected in thermally stressed fish at 12 h post-exposure as compared to control fish (Fig. 2c). In gill, TLR4 expression was down-regulated at all experimental temperatures as compared to control. In liver, except at 10°C, a similar trend of TLR4 gene expression was also observed. In kidney, a significant induction of TLR4 was noted at 10°C, and at higher temperatures (30 and 35°C) it was down-regulated. In blood, the highest induction of TLR4 gene expression was observed at 10°C as compared to the control fish. At 7 days post-thermal stress (Fig. 2d), TLR4 gene expression in gill, liver and kidney of cold- and heat-stressed fish showed down-regulation. However, in blood there was a marginal increase in TLR4 gene expression at 20°C, which reached its peak at 15°C (~7-fold). There was a moderate increase in TLR4 gene expression at 30 and 35°C (~2-fold) as compared to the control fish.
Toll-like receptor-5

The induction of TLR5 gene expression in thermally stressed fish was also tissue specific as compared to the control fish. At 12 h post-exposure, TLR5 was down-regulated in gill and liver at all experimental temperatures (Fig. 2e). In kidney, TLR5 expression showed a marginal increase at 15°C (~1.7-fold) and 20°C (~1.4-fold) as compared to control, but at other temperatures it remained almost unchanged. In blood, the pattern of TLR5 gene expression was strikingly different from the other tested tissues. As compared to the control, the most significant (p < 0.05) induction of TLR5 was observed in blood during cold stress at 10°C (~14-fold) and heat stress at 30°C (~13-fold). At 7 days post-thermal stress, except in gill, the trend of TLR5 gene expression in liver, kidney and blood of the treated fish group was almost similar to that at 12 h post-thermal stress (Fig. 2f). In the treated fish gill, TLR5 gene expression was up-regulated at 20°C as compared to the control fish. TLR2 and TLR4 are reported to be responsible for the recognition of heat shock proteins (Hsp60, Hsp90 and Gp96) and HMGB1 (Tsan and Gao 2004) and for the hyaluronan-induced inflammatory response (Jiang et al. 2005; Noble and Jiang 2006). Similarly, in catla, the activation of TLR2 and TLR4 is likely to be mediated through Hsp during heat stress and HMGB1 released from necrotic cells (Tsung et al. 2005). In addition to these, other endogenous TLR ligands released from damaged tissue and cells may activate TLRs. Recognition of endogenous ligands by TLR5 has previously been reported in rheumatoid arthritis (Chamberlain et al. 2012). In catla, the significant induction of TLR5 gene expression in blood suggests tissue injury resulting in the release of endogenous TLR5 ligands. However, further work is necessary to draw any conclusion in this regard.

Fig. 2 TLR2, TLR4 and TLR5 gene expression in thermal stress. Total RNA was extracted from gill, liver, kidney and blood from the control and treated fish at 12 h and 7 days post-exposure, and quantitative real-time PCR was conducted to analyze TLR2, TLR4 and TLR5 gene expression keeping β-actin as the housekeeping control gene. The results were calculated as mean ± standard error (bars in the graph) and shown as fold changes compared to control. Significant differences (p < 0.05) between control and treated fish groups are indicated with asterisks. TLR2 gene expression at 12 h (a) and 7 days (b); TLR4 gene expression at 12 h (c) and 7 days (d); TLR5 gene expression at 12 h (e) and 7 days (f) post-treatment
Nucleotide binding and oligomerization domain (NOD)-1
We next analyzed NOD1 gene expression in thermally stressed and control fish at 12 h post-exposure (Fig. 3a). Among all tested tissues, the most significant induction (p < 0.05) of NOD1 was observed in blood. During cold stress, NOD1 expression in blood was ~2.7-fold at 20°C, and it reached its peak (~3-fold) at 10°C. During heat stress, there was a gradual increase in NOD1 expression at 30°C (~1.3-fold) and 35°C (~2.6-fold). In gill, NOD1 gene expression was observed to be slightly induced or remained unchanged. In liver and kidney, down-regulation of NOD1 was observed in response to cold as well as heat shock. As shown in Fig. 3b, NOD1 gene expression in the gill and liver of cold- and heat-exposed fish remained down-regulated at all tested temperatures at 7 days post-exposure. However, in kidney it was up-regulated (~1.6-fold) only at 20°C as compared to the control fish. In blood, NOD1 expression remained down-regulated at 15, 30 and 35°C, but a marginal up-regulation was observed at 20°C in the treated fish group.
Nucleotide binding and oligomerization domain (NOD)-2
The effect of thermal stress (both cold and heat) on catla NOD2 gene expression was clearly different from that on the other tested PRRs. As compared to the control (25°C), NOD2 was significantly (p < 0.05) up-regulated during thermal stress in all tested tissues/organs at 12 h post-exposure (Fig. 3c). As the temperature shifted away from 25°C, there was a gradual increase in NOD2 gene expression in all tested organs/tissues, except in kidney. Among the tissues, the highest induction of NOD2 was observed in liver (~9-fold). In gill and kidney, maximum induction of NOD2 was observed at 15°C (5-7-fold), and it gradually decreased with the lowering of temperature to 10°C. In blood, we noticed a steady increase in NOD2 expression following cold and heat stress. At 7 days post-exposure, the liver, kidney and blood of the treated fish group also revealed enhanced NOD2 gene expression (Fig. 3d). Among the tissues, maximum induction of NOD2 was in kidney: at 20°C it was ~6-fold, at 15°C ~2.8-fold, at 35°C ~3.5-fold and at 30°C ~3-fold as compared to the control. In liver, the highest expression of NOD2 was at 15°C (~2.4-fold) and in blood at 35°C (~2.9-fold).
In addition to PAMP recognition, the response of NOD receptors in recognizing endogenous ligands (DAMPs) has been reported during tissue injury (Ting et al. 2008; Tschopp and Schroder 2010). In fish, the response of NOD receptors in PAMP recognition and innate immunity has previously been reported in rohu, catla and mrigal (Swain et al. 2012, 2013a). In thermal stress, the activation of the NOD1 and NOD2 genes in some tissues of catla supports the previous observation of DAMP recognition by NOD receptors, and warrants further study in this regard.
Myeloid differentiation primary response gene 88
MyD88 is the downstream adaptor molecule in the TLR2, TLR4 and TLR5 signaling pathways. At 12 h post-exposure to cold and heat stress, MyD88 gene expression in the treated fish gill increased ~2-fold as compared to the control. In the treated fish liver, there was a ~5-fold increase in MyD88 expression at 10°C. At 15 and 35°C, MyD88 expression was almost equal to the control, and at 20 and 30°C it was down-regulated. In kidney, except at 10°C, there was inductive expression of the MyD88 gene (2-3-fold), reaching a maximum of ~3-fold at 15 and 30°C. In blood, there was a ~2-fold increase in MyD88 expression only at 10°C (Fig. 4a). At 7 days post-treatment, MyD88 gene expression increased significantly in the gill and liver of the treated fish group as compared to control, and it reached its peak (~3-fold) at 30°C. In kidney, MyD88 remained almost unchanged at 20, 30 and 35°C but was down-regulated at 15°C. In blood, there was up-regulation of MyD88 at 20°C, and at other temperatures it remained almost unchanged (Fig. 4b).
In the MyD88-dependent TLR signaling pathway, recognition of PAMPs or DAMPs by TLR2, TLR4 and TLR5 leads to the activation of the downstream adaptor molecule MyD88, resulting in NF-κB phosphorylation and induction of cytokine gene expression (Akira 2009). In this study, up-regulation of TLR2/TLR4/TLR5 gene expression in some tissues correlated with MyD88 gene expression during cold and heat shock, suggesting the activation of the MyD88-dependent TLR signaling pathway. Previously, MyD88 activation in PAMP-mediated TLR2, TLR4 and TLR5 signaling resulted in the induction of cytokines in rohu and mrigal (Basu et al. 2012a, b, 2013). Catla, a member of the Indian major carps (IMC), is closely related to rohu and mrigal within the same family, Cyprinidae. Therefore, activation of MyD88 in DAMP-mediated TLR signaling during cold and heat stress may follow a similar pathway of NF-κB phosphorylation and cytokine gene expression.
Receptor interacting serine-threonine protein kinase-2 (RICK)

In the NOD1 and NOD2 signaling pathways, RICK functions as the downstream adaptor molecule. We investigated RICK gene expression in gill, liver, kidney and blood of control and thermally stressed fish through qRT-PCR. As shown in Fig. 4c, there was significant induction of RICK gene expression at 12 h post-treatment in all tested tissues. Due to cold stress, the highest induction of RICK was observed at 10°C in blood, followed by liver, kidney and gill in the treated fish group as compared to the control. During heat stress, all tissues except liver revealed a marked increase in RICK gene expression. At 7 days post-treatment, RICK gene expression in the liver, kidney and blood of the treated fish group followed an almost similar pattern to that observed at 12 h post-treatment. However, the magnitude of RICK induction at 7 days was much lower than at 12 h (Fig. 4d).
NOD1 and NOD2 are cytoplasmic sensors of PAMPs/DAMPs, and they transmit downstream signaling through RICK. In rohu, catla and mrigal, activation of NOD1, NOD2 and RICK gene expression has previously been reported following stimulation with PAMPs (iE-DAP, LPS and poly I:C) and bacterial infection (Swain et al. 2012, 2013a). Under cold and heat stress, we also noted significant up-regulation of NOD1, NOD2 and RICK gene expression in various tissues, which may suggest activation of the NOD signaling pathway by endogenous ligands during thermal stress in catla.

Fig. 3 NOD1 and NOD2 gene expression in thermal stress. Total RNA was extracted from gill, liver, kidney and blood from the control and treated fish at 12 h and 7 days post-exposure, and quantitative real-time PCR was conducted to analyze NOD1 and NOD2 gene expression keeping β-actin as the housekeeping control gene. The results were calculated as mean ± standard error (bars in the graph) and shown as fold changes compared to control. Significant differences (p < 0.05) between control and treated fish groups are indicated with asterisks. NOD1 gene expression at 12 h (a) and 7 days (b); NOD2 gene expression at 12 h (c) and 7 days (d)
Conclusion
This article demonstrates TLR2, TLR4, TLR5, NOD1 and NOD2 gene expression in catla during the early developmental stages, and this is the first such report. The inductive expression of the TLR and NOD receptor genes, along with their respective downstream molecules MyD88 and RICK, suggests the release of DAMPs during thermal stress in fish. The data in this study may help in investigating the greater role of TLR and NOD receptors in the repair of damaged tissues and in fish pathology.
Acknowledgments This study was supported by a grant from the National Agricultural Science Fund (NASF), Indian Council of Agricultural Research (ICAR) (Project code AS-2001). The authors express their gratitude to the Director, CIFA, for providing the necessary facilities, and to Dr. A. Bandyopadhyay and Dr. P. K. Agrawal, National Coordinator, NASF, for their help and suggestions.
Conflict of interest
The authors declare that they have no conflict of interest in the publication.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Fig. 4 MyD88 and RICK gene expression in thermal stress. Total RNA was extracted from gill, liver, kidney and blood from the control and treated fish at 12 h and 7 days post-exposure, and quantitative real-time PCR was conducted to analyze MyD88 and RICK gene expression keeping β-actin as the housekeeping control gene. The results were calculated as mean ± standard error (bars in the graph) and shown as fold changes compared to control. Significant differences (p < 0.05) between control and treated fish groups are indicated with asterisks. MyD88 gene expression at 12 h (a) and 7 days (b); RICK gene expression at 12 h (c) and 7 days (d) | 2018-04-03T02:00:27.977Z | 2015-05-16T00:00:00.000 | {
"year": 2015,
"sha1": "45619f6d2acfcf74a7e72b998850d9df3ff54920",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13205-015-0306-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "45619f6d2acfcf74a7e72b998850d9df3ff54920",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
270755859 | pes2o/s2orc | v3-fos-license | Electrospun Photodynamic Antibacterial Konjac Glucomannan/Polyvinylpyrrolidone Nanofibers Incorporated with Lignin-Zinc Oxide Nanoparticles and Curcumin for Food Packaging
Due to the growing concerns surrounding microbial contamination and food safety, there has been a surge of interest in fabricating novel food packaging with highly efficient antibacterial activity. Herein, we describe novel photodynamic antibacterial konjac glucomannan (KGM)/polyvinylpyrrolidone (PVP) nanofibers incorporated with lignin-zinc oxide composite nanoparticles (L-ZnONPs) and curcumin (Cur) via electrospinning technology. The resulting KGM/PVP/Cur/L-ZnONPs nanofibers exhibited favorable hydrophobic properties (water contact angle: 118.1°), thermal stability, and flexibility (elongation at break: 241.9%). Notably, the inclusion of L-ZnONPs and Cur endowed the nanofibers with remarkable antioxidant (ABTS radical scavenging activity: 98.1%) and photodynamic antimicrobial properties, demonstrating enhanced inhibitory effect against both Staphylococcus aureus (inhibition: 12.4 mm) and Escherichia coli (12.1 mm). As a proof-of-concept study, we evaluated the feasibility of applying nanofibers to fresh strawberries, and the findings demonstrated that our nanofibers could delay strawberry spoilage and inhibit microbial growth. This photodynamic antimicrobial approach holds promise for design of highly efficient antibacterial food packaging, thereby contributing to enhanced food safety and quality assurance.
Introduction
Food spoilage caused by microorganisms leads to significant food wastage and substantial economic losses [1]. Therefore, it is imperative to explore antimicrobial preservation methods for food to effectively control spoilage and deterioration [2]. One such method involves the application of antibacterial agents directly onto the food surface [3]. However, this approach entails potential concerns, as it necessitates the contact of food with the antimicrobial agents, which can result in the generation of toxic substances and compromise food quality [4]. Nowadays, there has been a growing interest in the development of food packaging, which offers a promising solution to mitigate the potential toxicity while ensuring food quality [5]. Nevertheless, the antimicrobial performance of this packaging still requires improvement. Therefore, it is imperative to implement measures that enhance its antimicrobial capabilities, effectively inhibiting the growth of microorganisms and extending the shelf life of food products [6].
Photodynamic inactivation (PDI) represents an innovative antimicrobial approach that can be seamlessly integrated with packaging to enhance its antimicrobial properties [7]. A key advantage of PDI is its ability to overcome bacterial drug resistance, a critical concern in traditional antimicrobial methods [8,9]. Nano-zinc oxide (Nano-ZnO) exhibits a high specific surface area, photocatalytic capabilities, and robust antibacterial activity [10,11]. Furthermore, it has been identified as a potential photosensitizer, capable of producing reactive oxygen species (ROS) when exposed to light at specific wavelengths [12]. Presently, it finds applications in diverse fields such as food packaging [13], sensors [14], batteries [15], and other applications. However, its utility is hindered by drawbacks like low stability and challenging dispersion [16,17]. Lignin is the second most abundant plant constituent and contains a variety of functional groups, including hydroxyl, carboxyl, and carbonyl groups [18,19]. Previous research has shown that lignin can improve the dispersion of Nano-ZnO [20]. Additionally, by combining Nano-ZnO with lignin for nanoparticle preparation, a synergistic effect can be achieved to augment antimicrobial activity [21]. Curcumin (Cur), a naturally occurring fat-soluble polyphenolic compound extracted from turmeric, possesses multiple beneficial physiological functions [22,23]. Furthermore, curcumin exhibits hydrophobic, antioxidant, and antimicrobial properties which have the potential to further enhance the functional properties of the films, thereby expanding their applications [24]. It is worth mentioning that the addition of these natural extracts affects the water vapor permeability (WVP) of nanofibers; this effect does not depend solely on the hydrophobicity of the natural extracts themselves but is also related to the nanofiber substrate and their interactions [25]. For example, hydrophobic curcumin increases the WVP of cellulose/chitin nanofibers, whereas water-soluble anthocyanins decrease the WVP of pullulan/polyvinyl alcohol nanofibers [26,27].
So far, electrospinning technology has become a prevalent method for fabricating food packaging [28,29]. For instance, Ignacio Solaberrieta et al. developed electrospun nanofibers that can be utilized in active food packaging by incorporating Aloe vera skin extract (AVE) into poly(ethylene oxide) (PEO) [30]. When compared to traditional food packaging produced using solution casting technology, packaging generated through electrospun nanofibers offers distinct advantages, notably high porosity and a substantial specific surface area [31]. These properties facilitate the release of bioactive substances. It is worth noting that the spinnability of the solution and the morphology of the fibers are influenced by a multitude of electrospinning parameters, including solution concentration, solvent volatility, conductivity, temperature, and humidity [32]. Konjac glucomannan (KGM), derived from konjac tubers, is a water-soluble polysaccharide [33]. It boasts commendable film-forming properties, biodegradability, biocompatibility, and safety [34]. However, electrospinning pure KGM on its own is difficult [35]. To overcome this limitation, there is a need to identify a substance with superior spinnability. Fortunately, polyvinylpyrrolidone (PVP) is an amphoteric polymer with good spinnability and biocompatibility, providing a solid foundation for its effective blending with KGM [36]. For instance, Jaume Gomez et al. combined PVA and PVP with natural mango kernel starch (MKS) in the fabrication of nanofibers to enhance its spinnability [37]. Moreover, Zhang Yin et al. fabricated electrospun compound nanofibers with PVP, PVA, and chitosan (CS). The evidence demonstrated that PVP could optimize the spinnability of the combination [38].
While research on photodynamic antimicrobial activity is on the rise, most studies employ the solution casting technique for film preparation. In this study, we compounded lignin with Nano-ZnO to form L-ZnONPs. Subsequently, we utilized the electrospinning technique to create KGM/PVP nanofibers loaded with Cur and L-ZnONPs, aimed at achieving photodynamic synergistic antibacterial activity (Figure 1). The morphology and structure of the nanoparticles and nanofibers were characterized, and the thermal stability, mechanical properties, hydrophobicity, antioxidant properties, and photodynamic antimicrobial properties of the nanofibers were investigated. Furthermore, the preservation effect of the nanofibers on fresh strawberries was investigated. We believe that this research will promote the development of new food packaging approaches with highly efficient antibacterial activity for enhanced food preservation and safety.
Synthesis of L-ZnONPs
The L-ZnONPs were prepared based on a previous method with slight modifications [39]. Briefly, 0.5 g of lignin and Nano-ZnO were dissolved in 250 mL of DMSO and anhydrous ethanol, respectively. Then, they were mixed and magnetically stirred vigorously for 5 h. Subsequently, the L-ZnONPs were centrifuged, dried, and stored.
Electrospun Process
Nanofibers were prepared using electrospinning equipment. The electrospinning solutions were loaded into a 10 mL syringe fitted with a 20-gauge needle; the feed rate was controlled at 0.2 mm/min, the spinning voltage was 9.0 kV, and the needle-to-collector distance was 20 cm. Throughout the spinning process, the temperature and humidity remained constant at 55 °C and 25%, respectively.
Characterization of L-ZnONPs
Fourier transform infrared spectroscopy (FTIR) was conducted to appraise the changes in the functional groups of the L-ZnONPs. The measurements were conducted using the KBr press method with 32 scans in the range of 4000-400 cm−1 at a resolution of 4 cm−1. Scanning electron microscopy (SEM) was performed to examine the micromorphology of the L-ZnONPs, with the samples coated with a thin layer of gold by sputtering prior to observation. Energy-dispersive X-ray spectroscopy (EDS) analysis focused mainly on the three elements C, Zn, and O.
Characterization of Nanofibers
The morphology of the nanofibers was analyzed using SEM. The nanofibers were immobilized on an aluminum stub and subjected to gold sputtering prior to the assay. The mean diameter and diameter distribution of the nanofibers were evaluated using ImageJ on 50 randomly selected fibers. EDS analysis of the elements C, Zn, and O followed the same procedure. The composition of the nanofibers was measured by ATR-FTIR in the range of 400-4000 cm−1 with 32 scans at a resolution of 4 cm−1. The crystalline structure of the nanofibers was measured by X-ray diffraction (XRD) in the range of 10°-80°, with Cu Kα as the radiation source, operating at a voltage of 40 kV and 40 mA and a scanning rate of 8°/min. Thermogravimetric analysis (TGA) was conducted at a ramp rate of 10 °C/min within the temperature range of 30-800 °C, with an initial sample weight of approximately 5 mg. Recording of the initial temperature, residue, and maximum degradation temperature commenced at a sample weight loss of 1%. The tensile strength (TS) and elongation at break (EAB) were tested using a tensile tester (Kyoto, Japan). Prior to testing, the nanofibers were slit into small rectangular specimens measuring 1 × 5 cm, and the thickness was measured at five random locations using an electronic micrometer (DITRON, Chengdu, China). The water contact angle (WCA) of the nanofibers was recorded at room temperature using a WCA analyzer. A drop (10 µL) of ultrapure water was placed on the nanofibers and photographed within 5 s.
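As a concrete illustration of the diameter evaluation above, the following sketch computes the mean diameter and its standard error from a list of measurements such as ImageJ would export. The diameter values are hypothetical placeholders; a real analysis would use all 50 measured fibers described in the text.

```python
import statistics

# Hypothetical fiber diameters (nm) as exported from ImageJ; a real run
# would use the 50 measured values described above.
diameters_nm = [412.0, 385.5, 440.2, 398.7, 425.1, 407.3, 390.8, 418.6]

mean_d = statistics.mean(diameters_nm)
se_d = statistics.stdev(diameters_nm) / len(diameters_nm) ** 0.5  # standard error
print(f"mean diameter: {mean_d:.1f} +/- {se_d:.1f} nm (SE, n={len(diameters_nm)})")
```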
Water vapor permeability (WVP) measurements were conducted following a previously described method with minor modifications [40]. The nanofibers were loaded on top of a weighing bottle holding anhydrous CaCl2 (3.0 g), the bottle was kept in a container at room temperature and 70% RH, and it was weighed every 24 h. WVP was computed as follows:

WVP = (ΔW × x) / (A × t × ΔP)

where ΔW (g) is the total weight change of the weighing bottle and sample, x (mm) is the average thickness, A (mm2) is the test area, t (24 h) is the test time, and ΔP (Pa) is the vapor pressure difference between the nanofibers and the environment.
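The equation above is the standard gravimetric WVP formula reconstructed from the variable definitions; since the paper does not show its unit handling, the sketch below is a minimal, hedged implementation whose conversions (mm2 to m2, Pa to kPa) are our assumption, chosen only to match the g mm/m2 day KPa unit used in the Results. The numeric inputs are hypothetical.

```python
def wvp_g_mm_per_m2_day_kpa(delta_w_g: float, thickness_mm: float,
                            area_mm2: float, time_days: float,
                            delta_p_pa: float) -> float:
    """WVP = (dW * x) / (A * t * dP), reported in g mm / (m2 day kPa).

    The unit conversions are assumptions, not stated in the paper.
    """
    area_m2 = area_mm2 / 1e6        # mm2 -> m2
    delta_p_kpa = delta_p_pa / 1e3  # Pa -> kPa
    return (delta_w_g * thickness_mm) / (area_m2 * time_days * delta_p_kpa)

# Hypothetical 24 h measurement: 0.03 g gained, 0.12 mm film over a 490 mm2
# bottle mouth, ~2.2 kPa vapor pressure difference at room temperature, 70% RH.
print(wvp_g_mm_per_m2_day_kpa(0.03, 0.12, 490.0, 1.0, 2200.0))  # ~3.3
```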
Antioxidant Performance Evaluation (ABTS Scavenging)
The antioxidant capacity of the nanofibers was determined by the ABTS free radical scavenging rate [24]. The nanofibers (50 mg) were dispersed in an anhydrous ethanol solution (5 mL) and shaken for 1 h. The leachate of the nanofibers was mixed with ABTS solution at a ratio of 4:0.2 and allowed to react for 5 min in a dark environment. Finally, the absorbance of the reaction liquids was recorded with a UV spectrophotometer (UV-2600, Kyoto, Japan) at 734 nm. The ABTS free radical scavenging rate was computed as shown below:

ABTS free radical scavenging rate (%) = (A0 − A1) / A0 × 100

where A0 is the absorbance of the ABTS solution and A1 is the absorbance of the reaction liquid.
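A one-line implementation of the scavenging-rate formula above; the absorbance values in the example call are hypothetical, chosen only to reproduce the ~98.1% figure reported in the Results.

```python
def abts_scavenging_pct(a0: float, a1: float) -> float:
    """ABTS radical scavenging rate (%) = (A0 - A1) / A0 * 100."""
    return (a0 - a1) / a0 * 100.0

# Hypothetical absorbances at 734 nm (A0: ABTS blank, A1: after reaction).
print(f"{abts_scavenging_pct(0.700, 0.013):.2f} %")  # 98.14 %
```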
Photodynamic Antimicrobial Properties
The antimicrobial properties of the nanofibers were determined by the agar diffusion method. Briefly, the nanofibers were cut into discs (diameter = 6.0 mm) and placed on agar medium that had been inoculated with S. aureus and E. coli (inoculum concentration: 10^5 CFU/mL). Then, the nanofibers were irradiated with 808 nm red light for 30 min. Finally, they were incubated at 37 °C for 16 h.
Application of Nanofibers in Strawberry Preservation
This study aimed to estimate the efficacy of the nanofibers in maintaining the freshness of strawberries. The experimental groups consisted of strawberries wrapped in KGM, KGM/PVP, KGM/PVP/2%L-ZnONPs, KGM/PVP/Cur/1%L-ZnONPs, and KGM/PVP/Cur/2%L-ZnONPs nanofibers, while the control group was left untreated. The strawberries were stored at room temperature, and their appearance was photographed at 1, 3, 5, and 7 days to observe any changes. Additionally, the strawberries were evaluated for weight loss, hardness loss, and pH before and after storage for 7 days.
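The paper does not spell out how weight loss and hardness loss are computed; the sketch below uses the standard percentage-change definition as a hedged assumption, with hypothetical input values.

```python
def percent_loss(initial: float, final: float) -> float:
    """Standard percentage loss relative to the initial value (our assumption;
    the paper does not state its exact formula)."""
    return (initial - final) / initial * 100.0

# Hypothetical strawberry measurements before and after 7 days of storage.
print(f"weight loss:   {percent_loss(25.0, 21.8):.1f} %")  # weight in g
print(f"hardness loss: {percent_loss(3.2, 2.1):.1f} %")    # hardness in N
```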
Statistical Analysis
Each experiment was conducted a minimum of three independent times. All experimental data analyses were conducted using Origin 2021 (version 2021, Northampton, MA, USA).
Characterization of L-ZnONPs
The conformations and changes in the functional groups of the nanoparticles were studied by FTIR spectra. In Figure 2a, the FTIR spectrum of Nano-ZnO displays prominent broad peaks (400-600 cm−1), which correspond to the Zn-O stretching vibrations in Nano-ZnO [41]. Additionally, a peak at 3427 cm−1 is indicative of O-H stretching. In the spectrum of lignin, characteristic peaks are located at 3400 cm−1, corresponding to -OH stretching; 2937 cm−1, attributed to -CH2 stretching; and 1386 cm−1 and 1596 cm−1, interpreted as -C=O stretching and phenyl ring backbone absorption peaks [42]. In the spectrum of L-ZnONPs, the distinctive peaks of both ZnO and lignin are discernible. Notably, a strong vibrational peak appearing at 1136 cm−1 suggests a strong interaction occurring between lignin and Nano-ZnO. These findings support the successful integration of lignin and Nano-ZnO.
Figure 2b,c show the SEM images and EDS images of L-ZnONPs, respectively. The SEM images depict the ellipsoidal architecture of the nanoparticles, featuring both large and small particles. Furthermore, the surface of the nanoparticles is rough and porous; these characteristics contribute to an enlarged specific surface area. Moreover, Nano-ZnO appears to be firmly adhered to the lignin surface, consistent with previous research [43]. The EDS mapping images of the L-ZnONPs illustrate the partitioning of C, Zn, and O. Notably, these three elements exhibit a uniform distribution, providing further evidence of the successful combination of lignin and Nano-ZnO.

Morphological Analysis of Nanofibers

To determine the optimal mixing ratio of PVP and KGM solutions, we spun nanofibers at three ratios of 9:1, 8:2, and 7:3 and evaluated the spinning effect by examining their microscopic morphology (Figure 3). Nanofibers spun at a ratio of 8:2 exhibit tangles and droplets, and nanofibers spun at a 7:3 ratio show even more droplets and tangles. In contrast, the nanofibers spun at the 9:1 ratio have a good shape, without liquid droplets or tangles. Therefore, the 9:1 ratio was chosen for subsequent experiments.
Figure 4a-c showcases SEM images, diameter distribution histograms, and EDS mapping images of the nanofibers. Pure KGM nanofibers exhibit a significant presence of beaded and fractured fibers. This can be accounted for by the inadequate evaporation of the solvent during the electrospinning process and the limited spinnability of KGM itself. In contrast, both KGM/PVP and KGM/PVP/2%L-ZnONPs nanofibers display no tangles or droplets but rough surfaces. This phenomenon may be the result of the fiber structure collapsing due to rapid moisture absorption [44]. In addition, the absence of beading and fiber fractures can be traced to the reduction in interactions between KGM molecules facilitated by PVP, a phenomenon supported by prior research [45]. However, upon the addition of Cur, both KGM/PVP/Cur/1%L-ZnONPs and KGM/PVP/Cur/2%L-ZnONPs nanofibers exhibit smooth surfaces. This change in nanofiber morphology may be ascribed to the hydrophobic nature of Cur, which enhances the fiber structure.
The EDS figures show the average representation of the distribution of the three elements (C, Zn, and O) within the nanofibers. The images clearly demonstrate that the nanoparticles are distributed both within and on the surface of the nanofibers. Furthermore, the apparent white bright spots observed on the surface of the nanofibers were confirmed to correspond to L-ZnONPs.
Group Changes Analysis
In Figure 5a, the ATR-FTIR spectra of the nanofibers are presented. For pure KGM nanofibers, the characteristic peak at 3359 cm−1 corresponds to the stretching vibration of -OH, while the peaks at 2930 cm−1 and 1027 cm−1 are associated with the stretching vibrations of C-H and C-O, respectively [47,48]. Additionally, the peak at 1640 cm−1 is linked to the stretching vibration of intramolecular hydrogen bonding, while the peak at 1730 cm−1 relates to the vibrational stretching of the C=O within the KGM acetyl group and the C-O group within intermolecular hydrogen bonding [48,49]. Furthermore, the peaks at 874 cm−1 and 807 cm−1 are connected to the vibrational stretching of the mannan unit of KGM [45,50]. In the case of KGM/PVP nanofibers, characteristic peaks of PVP are observed in addition to those of KGM. Specifically, the peak at 1495 cm−1 is assigned to the vibrational stretching of C-N and the bending vibration of N-H, while the peaks at 1424 and 1291 cm−1 are linked to the bending vibrations of C-H and C-N, respectively [51,52]. In addition, the characteristic peaks shift from 3359 to 3397 cm−1 and from 1640 to 1642 cm−1 with changes in transmittance, indicating an enhancement of the molecular interaction between KGM and PVP through hydrogen bonding [53]. For KGM/PVP/2%L-ZnONPs, no new functional groups are observed apart from the characteristic peaks of KGM, PVP, and the nanoparticles, which suggests that there are solely physical interactions between the nanoparticles and the KGM/PVP nanofiber matrix. However, the characteristic peaks become less pronounced in the nanofibers with the addition of Cur, which may be due to the encapsulation of Cur within the structure formed by the nanofiber matrix, restricting the stretching vibrations of the functional groups [24].
XRD Analysis
In Figure 5b, the diffraction patterns of the different nanofiber samples are depicted. The KGM nanofibers show a broad diffraction peak at 20.3°, indicating the amorphous nature of KGM [45]. In the case of KGM/PVP, a broad diffraction peak at 22° is observed, slightly shifted compared to KGM. This shift is attributed to the interaction of KGM with PVP, which is consistent with the ATR-FTIR results [54]. In addition, the diffraction pattern of the KGM/PVP nanofibers closely resembled that of pure KGM, indicating good compatibility between KGM and PVP. Upon the addition of nanoparticles, the nanofibers showed crystallization peaks at 31.98° and 34.66°, corresponding to the L-ZnONPs.
Thermal Performance Analysis
Figure 5c,d show the TGA and DTG curves of the nanofibers, respectively, providing insights into their thermal stability. The thermal decomposition of these nanofibers can be categorized into three main steps. The initial step (<100 °C), which involves weight loss, is attributable to the removal of unbound water [55]. The second stage (200-320 °C) of thermal decomposition in KGM nanofibers is associated with the elimination of hydrogen bonds and the decomposition of sugar rings in KGM [50]. However, in the case of KGM/PVP nanofibers, this second decomposition stage extends up to 460 °C, owing to the establishment of hydrogen bonds between KGM and PVP, which enhance the thermal stability of the nanofibers [48]. This observation aligns with the results from the ATR-FTIR analysis. The weight loss in the third stage remains relatively constant but slightly decreases, indicating further decomposition of the molecular weight [44].
Mechanical Performance Analysis
The stress-strain curves, TS, and EAB of the nanofibers are depicted in Figure 6a-c. The TS of KGM/PVP nanofibers is inferior to that of pure KGM. This may be attributed to the finer average diameter and higher packing density of pure KGM nanofibers [56]. Normally, enhancing both the TS and EAB of composites simultaneously can be challenging [53,57]. However, it is interesting to note that doping the nanofibers with nanoparticles leads to improvements in both TS and EAB. This enhancement can be attributed to the homogeneous distribution of nanoparticles in the nanofibers [58]. A similar effect was identified by Wang et al., who found that QLS/ZnO NCs could improve the mechanical performance of PU films [59]. Conversely, the incorporation of Cur resulted in a deterioration of the TS of the nanofibers. However, it significantly enhanced the EAB (>241%), indicating that the nanofibers exhibited good tensile properties. These results suggest that KGM/PVP/Cur/L-ZnONPs nanofibers possess good ductility but lower strength.
Antioxidant Activities Analysis
The antioxidant potential of the nanofibers was assessed using the ABTS free radical scavenging assay, and the results are illustrated in Figure 6d. The ABTS radical scavenging rates for the KGM and KGM/PVP nanofibers are only 5.48% and 5.87%, respectively. However, upon the incorporation of nanoparticles, the ABTS radical scavenging rate increased to 20.48%. This increase can be attributed to the contribution of the phenolic hydroxyl groups in the lignin within the nanoparticles [60]. Furthermore, the addition of Cur resulted in a remarkable enhancement of the antioxidant capacity of the nanofibers, with ABTS radical scavenging rates of 98.14% and 98.19%, respectively. This is associated with the abundance of phenolic hydroxyl groups within the Cur molecule, as the essence of free radical scavenging involves the transfer of phenolic hydroxyl hydrogen atoms [61]. These results indicate that the prepared nanofibers demonstrate excellent antioxidant capacity and have potential applications for inhibiting oxidative spoilage.
Water Contact Angle (WCA) Analysis
Surface wettability is a crucial parameter for food packaging, and hydrophobic food packaging is more suitable for practical applications [62]. The WCA measurements for the pure KGM nanofibers, KGM/PVP/Cur/1%L-ZnONPs nanofibers, and KGM/PVP/Cur/2%L-ZnONPs nanofibers are 34.8°, 118.1°, and 101.7°, respectively (Figure 7a-c). The increase in the WCA of the nanofibers after adding Cur can be attributed to the fact that Cur is a naturally hydrophobic polyphenol [63]. The hydrophobic benzene ring within Cur has a more pronounced effect on the nanofibers than the polar hydroxyl group [64]. Conversely, the incorporation of nanoparticles induced a decrease in the WCA of the nanofibers, which can be attributed to the surface hydrophilicity of the nanoparticles. This phenomenon has also been demonstrated by Daniele Del Buono et al. [65]. Nanofibers are considered hydrophobic materials if they have a WCA value of ≥90° [46]. Hydrophobic materials possess excellent water-repellent properties, which are essential for preventing moisture loss and are critical in food packaging [66].
WVP Assessments Analysis
WVP directly impacts the interplay between food and the storage environment. Its magnitude is influenced not only by the chemical structure of the nanofibers but also by their hydrophilic-hydrophobic characteristics [34,50]. Figure 7d shows the WVP of the nanofibers. The WVP value of pure KGM nanofibers was measured at 1.23 ± 0.04 g mm/m2 day KPa, while the WVP value of KGM/PVP nanofibers increased to 2.91 ± 0.42 g mm/m2 day KPa, which can be explained as a result of the generation of hydrogen bonds [46]. Additionally, the high hydrophilicity of PVP may increase the WVP of the nanofibers. The WVP of the nanofibers also remained elevated after the addition of L-ZnONPs, which can be attributed to the hydrophilic hydroxyl moieties of Nano-ZnO increasing the hydrophilicity of the nanofibers and the number of moisture vapor sorption sites [67]. The WVP of KGM/PVP/Cur/L-ZnONPs was found to be acceptable in this study. Our nanofibers have WVP values similar to those of polypropylene (PP) films (3.9-6.2 g mm/m2 day KPa), much lower than those of polystyrene films (109-155 g mm/m2 day KPa), but also higher than those of films prepared from PVC (0.94-0.95 g mm/m2 day KPa) and PLA (1.34 g mm/m2 day KPa) [68].
Photodynamic Antimicrobial Activity Analysis
The photodynamic antimicrobial activity of the nanofibers against S. aureus and E. coli was investigated, and the outcomes are presented in Figure 8a,b. Typically, a larger inhibition zone around the nanofiber discs indicates a more effective antibacterial activity of the nanofibers [57]. It is observed that there is no inhibition zone around the KGM and KGM/PVP nanofiber discs, indicating that they lacked antimicrobial activity. However, the nanofibers containing nanoparticles and Cur displayed substantial antimicrobial capabilities against both S. aureus and E. coli. Furthermore, after irradiation with 808 nm red light, the inhibition zone around the nanofibers is larger, indicating that irradiation could enhance the antibacterial activity of the nanofibers. This enhancement may be due to the inhibitory effect on bacterial growth resulting from the presence of nanoparticles and Cur in the nanofibers after light irradiation, leading to the production of ROS [59].
Preservation Effects for Strawberries Analysis
We assessed the impact of nanofiber wrapping on the quality of fresh fruits. Figure 9 shows the preservation effect of the nanofibers on fresh strawberries. Strawberries in the control group and those wrapped in KGM and KGM/PVP nanofibers exhibited slight decay and deterioration, while those wrapped in KGM/PVP/2%L-ZnONPs, KGM/PVP/Cur/1%L-ZnONPs, and KGM/PVP/Cur/2%L-ZnONPs nanofibers maintained a good appearance. After 7 days of storage, all samples experienced a loss of quality due to moisture loss and nutrient depletion of the strawberries [69]. Furthermore, we discovered that treating the strawberries with KGM/PVP/Cur/1%L-ZnONPs and KGM/PVP/Cur/2%L-ZnONPs films significantly reduced the rate of hardness loss. The pH of the strawberries increased at the end of storage due to the depletion of organic acids during metabolism and respiration [70]. However, the pH of strawberries wrapped in KGM/PVP/Cur/1%L-ZnONPs and KGM/PVP/Cur/2%L-ZnONPs nanofibers was significantly lower than that of the other groups. This is on account of the inhibition of strawberry respiration caused by the sustained release of L-ZnONPs and Cur. Therefore, we have concluded that the nanofibers have a certain preservation effect on the strawberries. Although the weight loss of strawberries slightly increased after treatment with the films, the nanofibers were able to prevent the growth of microorganisms, lessen the loss of strawberry hardness, and inhibit the consumption of organic acids.
Conclusions
In this work, we successfully prepared KGM/PVP/Cur/L-ZnONPs nanofibers with potent photodynamic antimicrobial properties using the electrospinning technique. These nanofibers exhibited significant enhancements in thermal stability and hydrophobicity. Moreover, their improved flexibility indicated robust resistance to deformation. Notably, under 808 nm red light irradiation, the antimicrobial efficacy of the nanofibers was improved, leading to enhanced inhibition of both S. aureus (inhibition: 12.4 mm) and E. coli (12.1 mm). This renders food less susceptible to contamination by S. aureus and E. coli. Additionally, the nanofibers have a certain freshness-preserving effect on strawberries, inhibiting the onset of rotting and spoilage, thereby extending shelf life. Furthermore, this work holds several advantages, including cost-effectiveness and straightforward preparation, which lays a solid foundation for the development of novel food packaging with highly efficient antimicrobial properties.
Figure 4. (a) SEM images of nanofibers. (b) Diameter distribution histograms corresponding to the SEM images. (c) Elemental mapping images of nanofibers.
Figure 9. (a) Changes in visual appearance; (b) weight loss; (c) hardness loss; and (d) pH changes of strawberries before and after storage for 7 days. | 2024-06-27T15:14:32.031Z | 2024-06-25T00:00:00.000 | {
"year": 2024,
"sha1": "9454d2883358af8b8eeff0482ea1e7c6057b5df6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2304-8158/13/13/2007/pdf?version=1719322676",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "263def7933e572ee8809a316aedc6116e8759cd5",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": []
} |
258809488 | pes2o/s2orc | v3-fos-license | Recognition Gaps and COVID Inequality: The Case of Immigrants in Sweden
In this article, we examine recognition gaps exposed by the coronavirus pandemic. We apply Lamont’s cultural processes of inequality framework to the critical case of COVID inequality during the first wave of the pandemic in Sweden – a period in which COVID-19 cases were concentrated among immigrants. We identify recognition gaps associated with five key cultural processes of inequality. Counter to the dominant narrative of Sweden as an open and equal society, our analysis uncovers cultural processes of inequality theorists have identified in other contexts: the racialization of immigrants; and the stigmatization and evaluation of immigrant spaces. We identify two additional cultural processes: resignification in which the State’s coronavirus response was directed toward ethnic Swedish people; and inversion, in which higher death rates among immigrants were relabeled as a natural and acceptable cause of COVID deaths. In addition to applying and extending the theory, we demonstrate the value of a focus on recognition for studies of health inequality. The recognition gaps we identify in this article are practical and solvable problems. In comparison with the challenges of managing large-scale economic redistribution or abolishing prejudice and stigmatization by addressing bias on a person-by-person basis, anticipating and counteracting the cultural processes of inequality is an actionable pathway to pursuing more just and equal societies.
Introduction
The COVID-19 pandemic was not merely a natural disaster; it was also a social disaster. The pandemic exposed deep structures of inequality across and within societies. More unequal societies experienced more COVID-19 deaths (Elgar et al., 2020) and the pandemic's 'racist morbidities' (Murji and Picker, 2021) soon became apparent. Within and across countries, existing inequalities of class, race, ethnicity, citizenship, and prestige were reflected in the likelihood of contracting and dying from the virus, access to vaccines and treatments, inclusion in programs for economic recovery, and the right to travel across national borders (Barker, 2020; Bentley, 2020; Casaglia, 2021; Disney et al., 2022; Greenaway et al., 2020; Rostila et al., 2020; Usher, 2022; Yu et al., 2021). In other words, being a 'minority' or an 'outsider' was a common COVID risk factor shared by people from different racial and ethnic groups and with different migration histories who resided in different nations with different levels of inequality in and access to medical and social support systems. Sweden was no exception. This makes COVID inequality a good opportunity to uncover local cultural processes of social recognition, which are understudied but significant drivers of inequality, including health inequality (Clair et al., 2016; Lamont et al., 2014).
In this article, we use the case of COVID inequality to show how cultural processes produce and reproduce inequality in ways that are not fully appreciated in Sweden. We further aim to advance theory on social recognition and the cultural processes of inequality, in particular concerning foreign-born people and their children, who make up the immigrant second generation (Alba, 2005; Honneth, 1995; Jaworsky, 2016; Lamont, 2018). The term immigrants, in this common sociological usage, is not synonymous with formal legal immigration status (Alba, 2005). In Sweden there is also a folk concept and symbolic category of immigrant (Hübinette and Lundström, 2014; Khayati, 2017; Voyer and Lund, 2020), which is described later in this article. We use italics when referring to this symbolic category.
Analyzing Sweden's COVID inequality, we identify five key cultural processes associated with recognition gaps: (1) Racialization of immigrants; (2) Stigmatization of immigrant spaces; (3) Evaluation in which biased official categories devalued immigrant spaces; (4) Resignification in which the State's coronavirus response was directed toward ethnic Swedish people, mistaking the symbolic center for the whole of society; and (5) Inversion, in which higher death rates among immigrants were labeled as a natural and acceptable cause of COVID mortality.
This analysis of COVID inequality demonstrates the benefits of attention to cultural processes of inequality. Particularly in a context known for its high quality of life, wellbeing, and comprehensive welfare state, inequality in COVID risk requires sociological examination. Analyzing the Swedish response to the crisis of the pandemic, we show how recognition gaps are institutionalized in policy and practice. Moreover, the recognition gaps we identify in this article are practical and solvable problems. In comparison with the challenges of managing large-scale economic redistribution or abolishing prejudice and stigmatization by addressing individual bias on a person-by-person basis, anticipating and counteracting the cultural processes of inequality is an actionable pathway to pursuing more just and equal societies (Clair et al., 2016).
Theory and Literature: The Cultural Processes of Inequality
Sociologists often focus on the unequal distribution of material resources as causes or drivers of social inequality, including health inequalities. While not denying the significance of material inequality, cultural sociologists point to the importance of symbolic social processes of boundary-creation, stigmatization, and exclusion (Lamont et al., 2014). Theorizing around the cultural processes of inequality is centered on social meaning, including individual attitudes and understandings, but also shared meanings such as social norms and cultural scripts linking individual cultural elements like taste, choice, intention, and aspiration with macro-level structures of inequality such as neighborhood effects and the intergenerational transmission of advantage and disadvantage (Lamont et al., 2014: 579-82).
Social Recognition
Lamont and many collaborators have developed theory and a research program on cultural processes of social inequality. The work includes such elements as symbolic social boundaries (Lamont and Molnár, 2002), processes of legitimation and evaluation (Lamont, 2012), and stigmatization. The theory is unified around recognition (Lamont, 2018: 423): identifying and countering 'recognition gaps' opens up new possibilities for the pursuit of more equal societies. Whether people are seen or not seen in society matters for life chances. To be recognized and acknowledged as a member of society is fundamental to human interaction and social organization (Honneth, 1995).
Social recognition refers to the processes by which some ways of life are embraced as the socially and morally correct norms and practices of society, while recognition gaps refer to processes by which other ways of life and the people who practice them are ignored or condemned. Persistent inequalities are not natural. An individual's social status does not arise intrinsically from cultural advantages or disadvantages; it arises relationally through processes of social recognition. Recognition, therefore, forms the underlying structure of social relations: How do we know we are a society? How do we know as individuals we are part of a society? Who is not part of society? These are basic but profound questions that give sociology its disciplinary backbone and are central to explaining persistent inequalities. Who is in and who is out? Who decides? On what terms? To be ignored, outside, and invisible is a specific kind of social exclusion and stigmatization. Recognition is the counterpoint to cultural processes of inequality, and the establishment of inclusive social membership is the goal of recognition efforts (Lamont, 2018: 422, 419).
Recognition Gaps
Recognition gaps occur when there is a failure to recognize certain social groups as full members of society (Lamont, 2018). Cultural processes unfold within two dimensions: identification processes by which individuals and groups are categorized and situated within broader collective groupings, and rationalization processes in which biased, but purportedly neutral, routines and practices are created and implemented (Lamont et al., 2014). The processes unfolding in these two dimensions include racialization (in which social and phenotypical markers become significant indicators of essential difference), stigmatization (the process of attaching negative significance to characteristics), standardization (the construction and application of presumably neutral and uniform norms and rules), and evaluation (presumably neutral categorizations and assessments of value or worth) (Lamont et al., 2014). This set of cultural processes accounts for a wide range of cases and contexts of inequality, but it is not exhaustive (Lamont et al., 2014: 597) or necessarily sequential. In our case, these processes are iterative and tend to compound one another, which led to the discovery of two additional cultural processes: resignification and inversion, described later.
The cultural processes of inequality framework draws a clear contrast with culture-of-poverty approaches based on assumptions of cultural deficiency. These approaches highlight self-perpetuating cycles of inequality driven by neighborhood or group practices, family structure, and the socialization of youth to 'problematic' values and norms, rendering some groups incapable of social and economic mobility (Wilson, 2012 [1987]). Unlike the cultural processes framework, such explanations are not designed for observing ongoing structural conditions contributing to cycles of poverty (Wilson, 2012 [1987]) or continued discrimination and the reproduction of advantage through recognition gaps (Greenbaum, 2015; Lamont et al., 2014).
Cultural Processes of Health Inequality
The cultural processes of inequality framework has already been applied to the topic of health inequality. Clair et al. (2016) show that stigma reduction efforts, which lessen negative views of the social groups at the center of the HIV and obesity epidemics, can have important impacts on responses to public health problems. Meanwhile, Asad and Clair (2018) argue that stigmatized legal statuses (i.e. a criminal record and undocumented immigrant status) have negative impacts on health both for individuals who hold those statuses and for those who are expected to hold them. This article contributes to this area of research with an analysis of recognition gaps exposed in the case of COVID inequality during the onset of the pandemic in Sweden.
Background: COVID Inequality in the Swedish Welfare State
Sweden's COVID inequality is a critical case (Flyvbjerg, 2011) for observing cultural processes of inequality because the comprehensive Swedish welfare state limits material inequality in comparison with many contexts (e.g. less inequality in access to healthcare than places without national healthcare). Relative to contexts with less inequality-reducing infrastructure, COVID inequality in Sweden makes cultural processes of inequality more visible.
The 'Swedish Miracle'?
Known for its welfare state and emphasis on equality and human rights (Schaffer, 2020), Sweden ranks among the highest OECD countries in terms of quality of life and health of its democracy (OECD, 2020). Sweden provides national health insurance for all legal residents. Sick leave and other vital social supports are widely available. These social supports are intended to reduce inequality. Sweden's social safety net is something that both foreigners and Swedish citizens recognize as a special characteristic of the country (Simons, 2020;Smith, 2006).
Sweden is also ethnically and racially diverse. Beginning in the 1990s, large-scale migration of refugees and asylees has shifted Sweden's population demographics. In 2020, over 20% of the nearly 10.4 million people in Sweden were immigrants with 'foreign background,' being either born abroad themselves or with one or more parents born abroad (Statistics Sweden, 2020). The national origins of Sweden's immigrant population are diverse. In 2021, the top 10 sending countries for foreign-born people in Sweden were Syria, Finland, Poland, Iran, Somalia, Afghanistan, the countries of the former Yugoslavia, Bosnia and Herzegovina, and Turkey. The top 10 foreign backgrounds of the Swedish-born immigrant second generation where both parents have the same national origin were Finland, Iraq, Syria, Somalia, Yugoslavia, Turkey, Iran, Bosnia and Herzegovina, Poland, and Lebanon (Statistics Sweden, 2022).
The Swedish welfare state is not as strong as it once was. Beginning in the 1990s, a program of neo-liberal reforms was introduced and the social safety net was redesigned (OECD, 2015). As a result, Sweden has growing economic inequality, especially for long-term unemployed persons and newly arrived immigrants (Therborn, 2020). While some attribute increased inequality and its link to immigration to neoliberal reforms and welfare state restructuring (Schierup and Ålund, 2011), others emphasize the mechanisms of class reproduction (Hällsten and Pfeffer, 2017), labor market dynamics, including discriminatory hiring and recruitment practices favoring native Swedes (Bursell, 2014), the long-term effects of housing segregation (Backvall, 2019), disparities in educational attainment, and exclusion from the democratic process (de los Reyes et al., 2014; Hörnqvist, 2016). These factors are found to contribute to the rise and persistence of inequality in Sweden and surely account for some of the health differentials we see with COVID-19 (Drefahl et al., 2020). However, they all point to material inequality. In this article, we highlight the understudied cultural processes linked to recognition gaps.
COVID in Sweden
As Sweden's first cases of COVID-19 were reported in January 2020, government officials and public authorities expressed optimism that the virus would be contained (see Rodan, 2020). The Swedish Public Health Agency (FHM), which was tasked with developing the nation's coronavirus response, published routine reports, public guidelines, and forecasts. They conducted regular press conferences to share this information, which documented the slow rise in COVID cases connected to foreign travel and managed through contact tracing. COVID-19 was a leading news item and a prevalent topic of public conversation.
The optimism faded quickly. The surge of COVID cases in northern Italy coincided with Sweden's February school holiday -a time when many people travel to Italy (FHM, 2020a). The number of known cases linked to international travel rose quickly, and in early March, the first cases with no clear link to international travel and the first death were reported (see Claesson, 2020;Erlandsson, 2020). In mid-March, FHM announced a new phase in the pandemic. Instead of stamping out coronavirus through contact tracing, the new goal was slowing the spread of the virus and protecting the most vulnerable populations -a 'risk group' comprising seniors and people with underlying health conditions (FHM, 2020b). Nursing homes and hospice centers were placed on lockdown. All who could were asked to work from home. High schools and universities moved to remote instruction.
Sweden became known for its lax approach to the coronavirus, emphasizing social distancing, bans on large gatherings, and personal responsibility instead of mandating the widespread use of face masks, mass testing, and lockdowns (Simons, 2020;Yan et al., 2020). Sweden's excess mortality rate rose quickly, outpacing neighboring Norway, Denmark, Iceland, and Finland (Yarmol-Matusiak et al., 2021). Even though Sweden was an outlier in terms of the relaxed policies and COVID statistics, there was little dissent regarding FHM's policies. Unlike many other nations, the pandemic was not politicized in Sweden (Sparf et al., 2022). To account for Sweden's outlier approach and its broad public support, most expert commentaries focused on high trust in government, civic pride, a dominant (if not absolute) consensus culture, and the public health authority's unusual power to determine policy uninfluenced by politics (e.g. Trägårdh and Özkirimli, 2020).
COVID Inequality
Figures on COVID-19 infection and mortality during the first wave of the pandemic in Sweden showed serious discrepancies: foreign-born people were over 200 times more likely to contract the virus than natives (FHM, 2020d). Between 31 January and 4 May 2020, Swedish residents born in the Middle East and Africa were about three times more likely to die of COVID-19, while even those born in other Nordic countries were 50% more likely to die than Swedish-born individuals (Rostila et al., 2020). Those at the highest risk for COVID mortality were people born in Somalia (risk 9 times higher than Swedish-born individuals), Lebanon (6 times higher), Syria (5 times higher), Turkey (3 times higher), Iran (3 times higher), and Iraq (2 times higher) (Rostila et al., 2020). These disparities persisted as the pandemic continued, although in muted form.
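The figures above are relative risks: the COVID-19 death rate in one group divided by the rate among Swedish-born residents. As a minimal sketch of the arithmetic only (the published estimates are adjusted for age and other covariates, and the counts below are invented for illustration, not the Swedish data):

```python
# Illustrative sketch of a crude (unadjusted) relative risk calculation.
# Counts are hypothetical; published Swedish estimates (e.g. Rostila
# et al., 2020) adjust for age and other covariates, which this omits.

def relative_risk(deaths_a: int, pop_a: int, deaths_b: int, pop_b: int) -> float:
    """Death rate in group A divided by the death rate in reference group B."""
    return (deaths_a / pop_a) / (deaths_b / pop_b)

# Hypothetical: 90 deaths among 60,000 residents born in country A,
# versus 500 deaths among 2,000,000 Swedish-born residents.
rr = relative_risk(90, 60_000, 500, 2_000_000)
print(f"crude relative risk: {rr:.1f}")  # -> 6.0
```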
Method
COVID inequality between immigrants and natives in Sweden is a critical case (Flyvbjerg, 2011), and, following case study methods (Yin, 2009), we rely on different kinds of data, including information on COVID-19 and the social context available from government documents, official statistics, government statements, national and local media accounts, and public health warnings and recommendations, as well as our observations and interpretations as people experiencing the pandemic in Sweden.
We were in the Stockholm region during the pandemic. As sociologists who live and work in Sweden, but relocated from the USA, we drew on our backgrounds as analytical leverage to observe cultural processes that may be less apparent to cultural insiders (for example, as described later, the reliance on Swedish cultural competency in health recommendations), but we also used our knowledge of Sweden to contextualize the pandemic within Swedish history and society.
We began data collection in March 2020, when COVID inequality became visible. We incorporated primary material from as early as January 2020. We concluded the main data collection in May 2020, corresponding with the decline in cases at the end of the first wave of the pandemic in Sweden (Kawalerowicz et al., 2022). During this period, we observed the regular press conferences from the public health authority, FHM, as well as addresses from leaders of the Swedish government and the monarchy. FHM posted statistics, reports, strategy, communications, and other relevant material on its website daily. We also read the national newspapers' coverage of the pandemic, including both news reporting and editorial pieces. To contextualize the material and support the analysis, we drew on relevant information from earlier and later periods, including official statistics, historical accounts, secondary sources, and data on immigration. We also compared what was happening inside Sweden to what was happening outside of Sweden, for example, the politicization of the pandemic response in the USA (Villegas, 2020) and the role of rigid lockdowns in most other European nations (Yan et al., 2020).
The research is limited to the first wave of the pandemic. As Klinenberg (2002) has shown, public health crises often expose deep social fault lines that are not always observable. The initial government response to the pandemic exposed cultural rifts in Sweden that may not have been visible otherwise. Subsequent research could observe additional recognition gaps emerging as the situation developed. Some of the recognition gaps we describe were addressed quickly while others persisted as the pandemic continued.
In analyzing the material, we use abductive logic (Swedberg, 2012), meaning our analysis is informed by our theoretical emphasis on recognition gaps and by the data itself. COVID inequality between native and non-native Swedes pointed us to the significance of the distinction between 'immigrants' and 'Swedes' as it was related to cultural processes of inequality. Our approach to the analysis was holistic and emergent (Lareau, 2021). We categorized the material thematically and identified different recognition gaps, as presented in the analysis. We worked with a large body of material, triangulating and corroborating information across multiple sources. For example, a comment the Prime Minister made in a prime-time television address to the nation might also be covered in the newspapers the next day and discussed in FHM's morning press conference. Examining all of these things made it possible to establish the character of the Prime Minister's remarks, and also see how they were interpreted. Here we present key and illustrative material.
Analysis
The analysis exposes and makes visible five cultural processes of inequality resulting in recognition gaps: racialization; stigmatization; evaluation; resignification; and inversion. We describe these processes in turn.
Racialization of Immigrants
Racialization refers to the process by which social and phenotypical markers derive significance as indicators of essential difference (Lamont et al., 2014). In contemporary Sweden, immigrant is a racialized category, and we italicize this term when we intend to denote immigrant in this sense. Stigmatization and racism in Sweden tend to operate through a distinction between native Swedish (italicized here to represent a symbolic ethno-racial category) people and immigrants (Hübinette and Lundström, 2014; Khayati, 2017; Voyer and Lund, 2020). Non-native and/or non-white people who speak with foreign accents or otherwise fail to convey a sense of Swedishness through their attire, neighborhood, and social networks are typically referred to as 'immigrants' (invandrare) or persons with 'immigrant background' (utländsk bakgrund) (Lund and Voyer, 2019). This distinction between immigrant and Swedish (svensk) is a symbolic ethno-racial distinction that is not equivalent to, but is related to, formal legal categories.
Racialized group differences are seen as essential or fundamental (Lamont et al., 2014). Immigrants in contemporary Sweden are generally seen as incapable of being Swedish, and their assumed foreignness is documented in official population registries and cultural frameworks of understanding and practice (Barker, 2017). As Statistics Sweden (2020) reports, 'most immigrants are Swedish-born' with parents or grandparents who immigrated to Sweden (Lund and Voyer, 2019). Swedish-born children of immigrants are still called 'immigrants' or referred to as 'new Swedes' (nysvenskar). Meanwhile, people who are literal immigrants from majority white and western nations such as Finland, Norway, the UK, and the USA are generally not considered immigrants at all, even if they are included in official immigration statistics (Voyer and Lund, 2020).
Colorblind Ideology. In Sweden, the category immigrant includes different races and ethnicities, nationalities, religions, and other significant social and cultural groupings. While these groupings also have social meaning that could be explored, we chose to focus specifically on immigrants (versus Swedish) because the racialization of immigrant takes place in the context of a strong colorblind ideological commitment combined with a history of racial hierarchy associated with Sweden's few official minority groups: Sami, Roma, Jews, Swedish Finns, and Tornedalers. These groups, recognized as having long historical ties to Sweden, receive government support to preserve their language and culture (Swedish Institute, 2022).
Sweden's national minorities have not always been recognized and supported. Historically established hierarchies of racial differences included scripts of Nordic white superiority and beauty, civility, intelligence, and morality in comparison with Finns and the indigenous Sami (Kjellman, 2013). These views laid the pseudo-scientific groundwork for a national eugenics institute (SIRB, the Swedish State Institute for Racial Biology), a program of forced sterilization (Broberg and Tydén, 1996), and the forced removal and assimilation of some Sami children. Racial biology, discredited after World War II, did not completely disappear. The SIRB, renamed the Institute of Medical Genetics, continued racial surveys of the population through the post-war period (Ericsson, 2021).
In contemporary times, a desire to recover from this shameful past resulted in an authoritarian colorblind ideology (Hubinette and Tigervall, 2009). Racial, ethnic, and cultural differences are perceived as a threat, and not something easily incorporated into Swedish national identity (Carlsson et al., 2012). The State is prohibited from collecting data on race or ethnicity (Wikström and Hubinette, 2021). There is no national census in Sweden in which individuals self-report their ethnic or racial categorization. Instead, statistics are collected on national origin. In public life and research, focusing on specific racial and ethnic groups or racial and ethnic identity is considered to be racist (Voyer and Lund, 2020). Nevertheless, as in other countries, immigrants in Sweden tend to identify with their nationality or ethnicity, connect with others with the same identity, and form organizations based on these shared identities (e.g. Bayram et al., 2009;Carlsson et al., 2012). However, unlike other countries where the State may embrace immigrant organizations and apply the same logics of language and cultural preservation to both immigrant and indigenous groups (Bloemraad, 2006), in Sweden immigrant-group-specific identifications and mobilizations are considered problematic signs of 'failed integration' and group-specific organizations are often met with skepticism, possibly stymying immigrant incorporation (Carlsson et al., 2012).
This colorblind, anti-group approach creates a gap between imposed ideology and social reality. Researchers struggle to study racism within the constraints of official categories and have long called for a more direct discussion of race and ethnicity in Sweden (Hübinette and Mählck, 2015; McEachrane, 2014; Schclarek and Mulinari, 2020). The continued use of immigrant as an overarching category, while central to ethnic diversity in Sweden, often obscures more than it reveals about how difference is valued, communicated, and incorporated in Swedish society. The racialization of immigrants in Sweden is one of the recognition gaps exposed in the case of COVID inequality. Stigmatization is another recognition gap.
Stigmatization and Evaluation of Immigrant Spaces
Stigmatization is the process of attaching negative significance to characteristics associated with a particular category (Lamont et al., 2014). Although we observe stigmatization in relation to a variety of practices, such as ways of speaking Swedish, ways of dressing, and choice of music, which are socially devalued because of their association with immigrants, in terms of COVID inequality the stigmatization of immigrant neighborhoods is the most glaring recognition gap.
Stockholm is one of the most segregated urban areas in Sweden, a country with one of the highest rates of residential segregation among OECD countries (Koopmans, 2010). White middle-class Swedes and western ex-pats tend to live in separate neighborhoods, more often inside the city centers or leafy suburbs, whereas immigrants tend to live in housing projects that ring the city centers. These concrete suburbs were built in the 1960s as part of the Million Program, an ambitious public housing project for the working class. Over the decades, the aging infrastructure of these neighborhoods and the rising popularity of home ownership and stand-alone houses led to lower rents and higher vacancy rates (Nesslein, 1982). Newly arrived immigrants are more likely to find housing in these areas, and those who struggle economically are more likely to remain (Andersson and Bråmå, 2004).
Minority spaces are often seen as undesirable and problematic (Voyer, 2019). Indeed, residential segregation in Sweden, as in most places, relies more upon Swedes' avoidance of immigrant neighborhoods than upon the self-segregation of immigrants (Muller et al., 2018). Neighborhood stigmatization is felt by residents, who voice concerns about being treated differently, excluded, disrespected by the police, and demeaned by public authorities (Schierup et al., 2014). Research documents a growing sense of frustration among residents of Sweden's segregated neighborhoods (de los Reyes et al., 2014).
Evaluating Immigrant Neighborhoods. Recognition gaps also arise as stigmatized immigrant spaces undergo evaluation. The cultural process of evaluation is the assessment of value or worth based on presumably neutral categories (Lamont et al., 2014). Sweden's official category of 'vulnerable area' (utsatta områden) is the basis for evaluation as a cultural process of inequality. The designation of vulnerable area is assigned by law enforcement and based on a variety of characteristics, including lower socioeconomic status, lower educational attainment, higher unemployment, a larger at-risk youth population, and higher crime rates (see Polisen, 2018). But this category is not neutral. Being outside of the Swedish norm is an explicit element of the 'vulnerable' designation, which also includes such characteristics as possessing parallel social structures such as the national and cultural group-specific organizations discussed earlier, having lower levels of Swedish language proficiency, and elevated risk of extremist religious views and the possibility for residents to sympathize with or participate in conflicts abroad; the official discussion of these last two concerns explicitly references practitioners of Islam, and the foreign conflicts associated with the Islamic State (IS) and al-Shabaab (see Polisen, 2018).
The stigmatization of immigrant neighborhoods and the formalization of that stigma through the assignment of a supposedly neutral official label only further the boundaries between immigrants and Swedes. There are social problems, including gun violence, the drug trade, and other criminal activities in many low-income Swedish suburbs, and these issues could benefit from increased spending on social programs and law enforcement, which is provided to 'vulnerable' neighborhoods (Polisen, 2018). However, the categorization of 'vulnerable' through a biased process of evaluation reinforces stereotypes about immigrants' perceived failures to integrate. As a result, neighborhoods placed on the list of vulnerable areas often protest the designation despite the increased services.

'Segregation Kills'. Racialization, stigmatization, and evaluation are cultural processes of inequality relevant to COVID inequality. As discussed by FHM and reported in the media, in March 2020, neighborhoods with more immigrants, suburbs where as many as two out of three residents were born abroad, were feeling the brunt of the virus (see Berg and Skoglund, 2020; Hurinsky and Carp, 2020; Randhawa, 2020). Likewise, immigrants living in segregated neighborhoods were overrepresented in risk and mortality statistics (see Franssen, 2020; Nordström et al., 2020).
Structural conditions surely contributed to COVID inequality. Higher residential density, more multi-generational households, and poorer public health were routinely cited as driving the disparity (e.g. Berg and Skoglund, 2020;Franssen, 2020;Randhawa, 2020). But recognition gaps were at play as well. In the words of Ahmed Abdirahman, director of the Global Village Foundation, 'Segregation kills. Corona kills too, but faster' (Global Village Foundation, 2020). While the first wave unfolded, COVID inequality revealed additional cultural processes contributing to recognition gaps between immigrants and Swedes.
Resignification: Sweden is for Swedes
Resignification, a cultural process of inequality first identified in this research, refers to recognition gaps in which the focus of attention is directed away from stigmatization and its impacts and back toward the symbolic center that remains. In this way, the center is mistaken for the whole of society. In the Swedish case, the contrast between Swedish and immigrant is implicit in the stigmatization and evaluation of immigrant spaces. As immigrants are stigmatized, they are susceptible to being treated as outsiders and nonmembers of society. Through resignification, the symbolic space of people who are not immigrants and spaces that are not 'vulnerable' are recast as the people and places of Sweden while those who are excluded fade into the background. When it comes to COVID inequality, resignification is evident in pandemic planning and pandemic policy.
Pandemic Planning. Following the first coronavirus death, that of an immigrant residing in a 'vulnerable' neighborhood, media outlets began discussing the lack of information about coronavirus in languages other than Swedish (e.g. Berg and Skoglund, 2020; Sundkvist and Anderberg, 2020). These reports noted that people who do not speak Swedish, more than 10% of the national population, had difficulties finding official information about COVID-19. At this point, information was provided in Swedish and sometimes English, but not in the many other languages spoken in Sweden. Given limited official information, people turned to local institutions like immigrant aid societies, religious organizations, and senior centers. However, according to the reports, there had been no official outreach to these voluntary community organizations (e.g. Sundkvist and Anderberg, 2020).
The lack of accessible messaging could be due to the short time frame for pandemic response rather than to cultural processes of inequality unfolding through the resignification of Swedish people as the people of Sweden, but the evidence suggests otherwise. In the face of criticism, the public health authority acknowledged that, although they began preparing for COVID-19 and related scenarios well in advance, there was no plan for outreach to non-Swedish-speaking populations and the neighborhoods where these populations were concentrated (see Sundkvist, 2020). More than an oversight, this failure to reach minority populations put FHM out of compliance with UN and World Health Organization guidance for risk communication and community engagement around COVID-19 (RCCE, 2020). Two weeks after the reports of language-accessibility problems and 10 days after the first COVID-19 death, FHM corrected its error. More than 6000 notices in 24 languages describing the dangers of COVID-19 and how to limit the spread of the illness were posted throughout virus-stricken neighborhoods as part of a large information campaign (see Ahmed, 2020; Bergman and Jobe, 2020; FHM, 2020c). But by that point, the health impacts were clear. As widely reported by FHM and the media, most of the first cases of community transmission involved immigrants, and 6 of the first 15 people who died were Somali immigrants (e.g. Berg and Skoglund, 2020).
Pandemic Policy. Resignification of Swedish people as the people of Sweden also extended to the way COVID-19 mitigation policies were formulated. Instead of clear rules, health guidelines were delivered as recommendations. For example, it was recommended to work from home. One should ask oneself if it was necessary to take mass transit or decide for oneself if one should go to the gym or meet friends at a restaurant. It was the individual's responsibility to make the right choice. When pushed for clarity on how to make such decisions, for example during press conferences, public officials argued that the flexibility of recommendations worked well in the Swedish context.
The resignification of Swedish people as the people of Sweden is evident in the assumed level of cultural consensus and understanding on the part of the people the health recommendations were for. In late March, Prime Minister Löfven addressed the nation regarding the coronavirus. Löfven explained that every Swedish resident should follow the health guidelines and rely upon folkvett, a term translated by an English-language news service as 'common sense manners' and 'the moral sense that every person is expected to have without being taught, and a word every Swede will instinctively recognize as something seen as a very, very bad thing not to have' (Löfgren, 2020). When asked to explain more precisely how his agency wanted people to behave, the head of FHM, Anders Tegnell, said, 'What we are talking about here is the Swedish culture, how Swedes interpret recommendations from the authorities. I think most people see [a recommendation] as very clear advice on how to do this in the best possible manner' (see Rothschild, 2020). In other words, deciphering the guidelines that made up the heart of coronavirus mitigation efforts did not just require the ability to speak Swedish; it required Swedish common sense.
Not everyone had access to the knowledge necessary to interpret Sweden's COVID-19 guidelines. An unscientific survey of international residents, including elite ex-pats employed as academics, creatives, and IT workers in addition to racialized immigrants, found overwhelming uncertainty around what one should do to mitigate the spread of the virus. Lacking clear guidance, people reported following the policies of their home countries. In the case of people coming from other western nations, home-country coronavirus restrictions were more stringent (see Edwards, 2020).
Resignification is a cultural process of inequality revealed in pandemic planning and policy in Sweden. From the beginning, COVID policies were designed with Swedish people in mind. Racialized and stigmatized immigrant populations and neighborhoods were neglected, but so were the elite ex-pats and international visitors living alongside Swedish people but lacking the required linguistic and cultural competence. Resignification is a process that re-naturalizes and essentializes the Swedish population as a homogenous entity, rendering its actual population invisible, with serious ramifications for public health policy and practice.
Inversion: Immigrants Causing COVID Inequality
The final recognition gap is the process of inversion, newly identified in this case. Through inversion, inequality becomes a reason or a cause that explains its own effects. We observed two instances in which sense-making around Sweden's COVID inequality was turned upside down and immigrants were blamed for COVID inequality. First, COVID inequality was interpreted as evidence of immigrants' cultural deficiencies. Second, immigrants' greater risk was presented as a cause of the country's higher death rates.
Immigrant Culture. As the disproportionate suffering of immigrants became apparent, speculation arose as to the reasons. Some argued that 'cultural aspects' played a role (e.g. Busch, 2020a;see Franssen, 2020;Randhawa, 2020). In more benign interpretations, cultural explanations highlighted cultural differences in a matter-of-fact and value-free way. For example, Jihan Mohamed, a doctor on the board of the Somali-Swedish Medical Association, noted that 'In Somali culture, it is important to socialize, support and visit each other, especially if someone is ill' (Randhawa, 2020). Others discussed the value placed on multigenerational households and caring for the elderly at home (see Berg and Skoglund, 2020).
Initial value-neutral considerations of cultural factors were quickly overshadowed by, on the one hand, pointed criticism of the public health vulnerability of Sweden's immigrants, and, on the other hand, a cultural process of inversion in which culture of poverty arguments attributed the suffering of immigrants to their own cultural failings. Culture of poverty explanations of COVID inequality emerged in the form of online expressions of hate. Some of these expressions celebrated the deaths as a way to decrease the minority population (see Jobe, 2020). These cruel and xenophobic sentiments were soon sanitized and weaponized as cultural essentialism.
For example, Ebba Busch, the leader of the far-right Christian Democratic party (KD), penned an opinion piece for the mainstream newspaper, Aftonbladet. In it, she argued that more deaths occurred among immigrants because they were 'vulnerable.' Here she used the term utsatta, which means vulnerable but also isolated or set apart, the same term used to refer to 'vulnerable' neighborhoods. Busch acknowledged the underlying risk factors noted by others, such as overcrowded and intergenerational households, but she classified these factors as problematic 'cultural causes' (kulturspecifika orsaker) and connected them to other cultural characteristics she attributed to immigrants: distrust of authorities, illiteracy, and the idea that medical advice is transmitted by word of mouth instead of coming from experts (Busch, 2020a).
The process of inversion is clearly evident in Busch's account. As she described it, the only blame the Swedish government and society bore for immigrants' suffering was the crime of open borders. Busch concluded that COVID inequality was caused by the fact that '20% of immigrants in the country were not admitted with due consideration of their integrationspotentialer,' meaning their potential for integration based on cultural similarity with Swedishness (Busch, 2020a, 2020b). Busch suggests that immigrants created COVID inequality, rather than seeing it as a part of enduring social problems linked to underlying structures of social recognition. This inversion of COVID inequality could be dismissed as a far-right perspective, but how different was it from the official view of the situation?

COVID Inequality as an Explanation. Inversion was also evident in the way government officials explained and justified Sweden's COVID deaths. In September 2020, well after the first wave of the pandemic had subsided, FHM's Tegnell was asked why COVID-19 had killed so many people in Sweden in comparison with neighboring Finland. Tegnell explained that Finland had 'better conditions' to contain the virus. He highlighted the country's relative lack of urban density, the relatively limited international travel of Finland's population, and the fact that the country has 'almost no immigrant groups' (see Svahn and Hallgren, 2020). We observed Tegnell offer this explanation multiple times when questioned about Sweden's higher death rates.
Tegnell misrecognized COVID inequality as an explanatory factor instead of a factor to be explained. This inversion process, when combined with the racialization and stigmatization keeping immigrants outside or on the periphery of Swedish society, made it possible to present COVID inequality as an acceptable cause of deaths instead of an unacceptable effect of processes of social inequality. Tegnell was eventually criticized for blaming Sweden's high COVID-19 death toll on immigrants. When faced with criticism, Tegnell apologized for his poor word choice (see Zangana, 2020), but he did not abandon the claim that Sweden's larger immigrant population contributed to the country's lackluster COVID-19 statistics.
Ultimately, inversion obscured the State's responsibility for protecting immigrants. This is evident in contrast to the sense of shared responsibility punctuating discussions of the deaths of elderly people in nursing homes and hospice care. Tegnell, Prime Minister Löfven, and others acknowledged the heavy toll the virus took on this population, the work being done to address the problem, and the fact that protecting the elderly had been a goal from the beginning (see Bengtsson, 2020; Kerpner and Fernstedt, 2020; Svahn and Hallgren, 2020). While this failure was eventually the subject of a scathing coronavirus commission report, no national investigation of COVID inequality as it related to immigrants was undertaken, and the commission did not take up that inequality as an area for improvement (see Corona Commission, 2020, 2022).
Discussion: Recognition Gaps and COVID Inequality
Recognition gaps arise when some people in a community, society, or country are not recognized as part of 'the people'. When the State and the citizenry mobilize symbolic boundaries of belonging in response to a crisis, recognition gaps are particularly visible and salient. Our analysis of COVID inequality in Sweden finds recognition gaps associated with the racialization of immigrants and the stigmatization and formal devaluation of immigrant neighborhoods before the spread of COVID-19. COVID's 'racist morbidity' (Murji and Picker, 2021) was then reflected in recognition gaps evident as the pandemic unfolded. Ethnic Swedish people were resignified as the focus of government efforts to protect the people of Sweden. These resignification processes resulted in failures to recognize and plan for the needs of people outside of the Swedish herd. There was initially a lack of messaging for those who could not speak Swedish. When warnings and protections finally did arrive, they required 'Swedish' cultural competence to be comprehended. Recognition gaps were further evident in inversion. Instead of recognizing COVID inequality as a social problem to be addressed, the suffering of immigrants prompted a hostile public reaction. Conservative voices interpreted the COVID inequality as evidence of immigrants' cultural deficiencies and lack of potential for integration into Swedish society, while public authorities argued that immigrant deaths could explain the country's death rates. Given the relatively large immigrant population, the country's higher death rate could be excused. To date, there has been no official investigation into the recognition gaps revealed by COVID inequality in Sweden, but there has been public acknowledgment of failures to protect other populations (e.g. Corona Commission, 2020).
Conclusion
It is essential to consider recognition gaps when examining how societies respond to global and local crises, including disease pandemics (Klinenberg et al., 2020). The cultural processes of inequality framework makes it possible to identify recognition gaps arising when certain social groups and individuals fall outside of the 'symbolic boundaries of belonging' (Jaworsky, 2016) in a society, contributing to disparities in social worth. Attention to these cultural processes can help us better understand the breadth and depth of social inequality in the case of crises like the pandemic, and the routine functioning of unequal societies as well.
The COVID-19 pandemic exposed deep structures of inequality across and within societies. Examining the onset of the pandemic in Sweden, we observed five distinct cultural processes. These cultural processes exacerbate existing inequalities as recognition gaps reproduce hierarchies of human worth that facilitate other recognition gaps. Three of these processes were previously identified in the literature on cultural processes of inequality: the racialization of immigrants vis-à-vis the category of Swedish; the stigmatization of immigrant neighborhoods; and the negative evaluation of these neighborhoods through a biased formal and official assessment of value. We identify two additional processes: resignification in which ethnic Swedish people are reaffirmed as the symbolic center of Sweden who should be the focus of COVID planning; and inversion in which explanations of COVID inequality are inverted and, instead of something to be explained and addressed, immigrants' risk is blamed on immigrants and used as an acceptable explanation for Sweden's higher death rates.
These cultural processes are likely to be relevant for other cases. In our case, the processes compounded one another (e.g. racialization enables but does not cause inversion, stigmatization and the resulting segregation facilitate resignification) even if they did not appear in a strict sequential path. Because of the iterative character of cultural processes, we would imagine a range of combinations that would play out differently in different contexts with varying resonance and significance rather than a uniform sequence.
By questioning taken-for-granted explanations for Sweden's lax approach to the pandemic (e.g. trust in government) and conventional explanations for inequality (e.g. socioeconomic inequality), we develop an explanatory framework rooted in cultural processes of inequality. But this problem is not confined to Sweden. Immigrants from different sending nations had elevated COVID-19 risk in many nations (Bentley, 2020; Greenaway et al., 2020; Yu et al., 2021). Examining cultural processes of inequality can shed crucial light on health inequalities in other contexts and related to other groups (Clair et al., 2016).
Recognition gaps open up new avenues for intervention. The recognition gaps we identify in this article are practical and solvable. Public health planning can anticipate and address health risks resulting from racialization, stigmatization, and resignification. Evaluation and inversion can be addressed through assessments of bias in policy and policy implementation. The iterative nature of the cultural processes of inequality means that intervening in one process can also have a positive impact via other processes. For example, addressing the resignification of the Swedish majority through campaigns fostering recognition of the equal social worth and social membership of immigrants can also facilitate de-stigmatization of immigrant spaces and decrease the risk of subsequent recognition gaps arising through biased standards for evaluation. Taking a cultural turn in the study of COVID inequality demonstrates the centrality of recognition gaps for persistent and emergent social inequality, and the promise of recognition efforts in the pursuit of more just and equal societies.
"year": 2023,
"sha1": "97648edabeae29b7d765edc730153251db8882ee",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e0a6658b9906319ff8404c618e38877171e21f55",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
Optimum Height at Which to Kill Barley Used as a Living Mulch in Onions
Planting barley (Hordeum vulgare L.) as a living mulch with onions (Allium cepa L.) reduces soil erosion and protects the onions from wind damage. It can also reduce yield and size of onion bulbs if not managed correctly. In a 4-year study at the Oakes Irrigation Research Site in North Dakota, barley was planted in the spring at the same time that onions were direct-seeded. Barley rows were planted either parallel with or perpendicular to the onion rows. Barley was killed with fluazifop-P herbicide when ≈13, 18, 23, or 30 cm tall. Onion size and yields were reduced when barley was allowed to grow taller than 18 cm before killing it. Total onion yield was usually greater when barley was planted parallel with, rather than perpendicular to, onion rows. Chemical name used: (R)-2-[4-[[5-(trifluoromethyl)-2-pyridinyl]oxy]phenoxy]propanoic acid (fluazifop-P).
In North Dakota and many other areas of the northern United States, onions are direct-seeded in the spring into a conventionally tilled, finely prepared seed bed. Very little residue remains on the soil surface. One reason this is necessary is that many onion planters do not work well in no-till or stubble-tilled fields. High winds, which frequently occur in the spring, blow the soil particles from the finely prepared seed bed across the ground, causing extensive damage to newly emerged onion seedlings. Some areas of the field are completely denuded of onion plants, while in other areas the onion stand is reduced and plants are weakened.
Cover crops reduce soil erosion, protect plants from blowing soil particles, help control weeds, enhance water penetration and retention, improve soil structure, help maintain or increase soil organic matter, and may increase yields (Barnes and Putnam, 1983; Masiunas et al., 1995, 1997; Mwaja et al., 1996; Schonbeck et al., 1993). They may also reduce yield and quality of the principal crop by competing with the crop for water, light, and nutrients (Bottenberg et al., 1999; Masiunas et al., 1997).
Cover crops that are planted before, at the same time, or after the principal crop and grow with the crop are often referred to as living mulches. A major purpose of living mulches is to provide protection against wind-blown soil particles. A living mulch should be chosen that does not compete with the principal crop, or the living mulch must be restricted or killed to prevent competition. The taller a living mulch grows, the greater the protection against wind erosion and wind damage. If it grows too tall, it competes with the crop, and yield and quality are reduced. Living mulches are often killed with herbicides. In previous research I found that a barley cover crop seeded simultaneously with onions emerged rapidly, grew vigorously, and was usually 5 to 10 cm tall when the onions emerged (Greenland, unpublished). Oats (Avena sativa L.), spring wheat, and rye were not as well suited for a living mulch as was barley because of slower growth, a less robust stand, or a prostrate growth habit.
The objective of this study was to determine how tall a barley living mulch, planted in rows either perpendicular or parallel to onion rows, should be allowed to grow before killing it to prevent it from reducing onion yield.
Materials and Methods
This field study was conducted from 1995 to 1998 at the North Dakota State Univ., Oakes Irrigation Research Site, in southeast North Dakota. The soil was a Maddock or Egeland (1995) sandy loam (both are Udorthentic Haploborolls) with a pH of 7.2 to 7.4 and 2.4% to 2.7% organic matter. Onions followed cabbage (Brassica oleracea L. var. capitata), soybean (Glycine max L.), potato (Solanum tuberosum L.), and carrot in 1995, 1996, 1997, and 1998, respectively. The field was disked twice, then cultivated twice before planting onions in 1996. In 1995, 1997, and 1998, seed bed preparation consisted of only one disking and one cultivation.
The barley cover crop was planted on 26 Apr. 1995, 18 Apr. 1996, 25 Apr. 1997, and in Apr. 1998 at a rate of 54 kg·ha-1. The barley was planted perpendicular to the onion rows all 4 years using a grain drill with 15-cm row spacings. This same grain drill was used to plant barley parallel with the onions in 1995 and 1996; some of the drill rows were not planted to allow room for the onion rows, and seeding rate was adjusted in the drill rows to maintain a rate of 54 kg·ha-1. The barley was planted first, followed by a second pass to plant the onions. In 1997 and 1998, modification of the onion planter by mounting a grain drill on it allowed planting of two parallel rows of barley between every two onion rows in one pass. Onions were planted 1 d after the barley in 1995 and on the same day the barley was planted in 1996-98. A precision belt planter (Stanhay S870; Stanhay Webb, Ltd., Newmarket, Suffolk, England) was used to plant onions in double rows (7 cm apart) on 40-cm centers. Seeding rate was ≈500,000 seeds per hectare. The onion hybrids used were 'Golden Treasure' in 1995 and 'Santos' in 1996-98. The fields were overhead sprinkler irrigated as needed, and N, P, K, and S fertilizers were applied per Univ. of Minnesota recommendations. Weeds were controlled with DCPA (dimethyl 2,3,5,6-tetrachloro-1,4-benzenedicarboxylate) in 1996, pendimethalin [N-(1-ethylpropyl)-3,4-dimethyl-2,6-dinitrobenzenamine] in 1997 and 1998, bromoxynil (3,5-dibromo-4-hydroxybenzonitrile) plus oxyfluorfen [2-chloro-1-(3-ethoxy-4-nitrophenoxy)-4-(trifluoromethyl)benzene] in all years, and by hand-weeding. Fluazifop-P, used to kill the barley cover crop, also controlled grass weeds.
How long a living mulch can grow before causing crop yield reductions depends on the crop, the mulch, and other factors. Zandstra and Warncke (1993) reported that carrot (Daucus carota L.) yields were not reduced if the barley living mulch was killed at 20 and 40 cm tall when broadcast-seeded at 108 and 54 kg·ha-1, respectively. They reported that barley interseeded (broadcast) with onions had to be killed when 10 to 20 cm tall to avoid onion yield reduction. This would be ≈1 week after onion emergence. Lanterman et al. (1984) observed no onion yield reductions when barley was broadcast-seeded at 54 kg·ha-1 if the living mulch was killed before it exceeded 12 cm in height. Onion yield also was not reduced if barley planted at 7 kg·ha-1 in a single row between every two rows of onions was killed before it reached 30 cm tall. In competition studies with weeds, Wicks et al. (1973) reported small (sometimes nonsignificant) reductions in yield and size of onions when weeds were allowed to grow unchecked for 2 weeks after onion emergence. Hewson and Roberts (1971) reported that final onion yield was unaffected when weeds remained for up to 4 to 6 weeks after 50% crop emergence, provided that the crop was subsequently kept weed-free.
Barley was killed by spraying it with fluazifop-P. The fluazifop-P (plus a nonionic surfactant) was applied in 425 L·ha-1 of water at 200 kPa pressure using a CO2 backpack sprayer, except for the last application each year; the last treatment was applied to the entire area using a tractor-mounted sprayer (340 L·ha-1; 205 kPa), and killed any barley not previously sprayed, plus any grass weeds present (usually <1 m2). Time of killing the barley cover crop is shown in Table 1. The rate of fluazifop-P ranged from 120 g·ha-1 when the barley was small to 388 g·ha-1 when taller. The bromoxynil plus oxyfluorfen herbicide treatment was applied 3 to 20 d after the last fluazifop-P treatment.
Onions were hand-pulled and placed in a windrow in mid-September, allowed to field cure for 2 to 4 weeks, then were bagged and stored in a shed until grading. In early November bulbs were sorted by size, counted, and weighed. Percentage of single-centered onions was determined using a randomly selected sample of five large bulbs from each plot. Visual grades of uniformity and of overall appearance of the bulbs from each plot were recorded.
Because of nearby corn fields, shelter belts, or buildings, the onions were not exposed to the high winds common in the area. Therefore, this study probably did not detect the protective effects of the mulch against wind damage. On 2 July 1998, I measured the height of the dead barley mulch and the percentage of ground it covered to provide some idea of the wind protection the barley would give the onions. Percent groundcover was determined using a line-intercept method with a 0.9-m measuring stick marked at 2.5-cm intervals. I also determined the size and growth stage of the onions on that date.
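For readers unfamiliar with the line-intercept method, the sketch below shows the underlying calculation: record presence or absence of residue at each 2.5-cm mark along the stick and take the proportion of marks intercepting residue. The readings and the 37-point count are assumptions for illustration, not data from this study.

```python
# Minimal sketch of the line-intercept groundcover estimate.
# Assumption: a 0.9-m stick marked every 2.5 cm gives 37 reading
# points (0, 2.5, ..., 90 cm); at each mark, record whether barley
# residue lies directly under it.

def percent_groundcover(readings: list[bool]) -> float:
    """Percent of marks that intercept residue."""
    return 100.0 * sum(readings) / len(readings)

# Hypothetical transect with residue under 12 of 37 marks.
readings = [True] * 12 + [False] * 25
print(f"{percent_groundcover(readings):.0f}% groundcover")  # -> 32%
```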
Average temperatures for Oakes, N.D., are 13, 18, 22, 21, and 13 °C for May, June, July, August, and September, respectively, and average precipitation for the same months is 35, 50, 41, 47, and 33 mm, respectively. Spring 1995 was cool and wet, delaying field work and planting. June through August temperatures were average to above average. Spring 1996 was cool and wet, but onion planting was not delayed. Crops developed slowly during the spring. Temperatures from June through the end of the season were near average. Record snowfall in Winter 1996-97 did not melt until late April, delaying planting in 1997. Temperatures were below average in April and May, but average or above average the rest of the year. In 1998, temperatures were near average and precipitation was above average.
A split-plot experimental design was used, with direction of planting barley as the main plots and time of killing barley as the subplots. Main plots were 9.8 × 4.2 m, and were divided into four subplots, 4.2 × 1.8 m, with a 0.6-m border between subplots. The data were analyzed using the SAS GLM and regression procedures (SAS Institute, 1989). Because of very significant year × barley height interactions, data for each year were analyzed separately.
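The original analysis used the SAS GLM and regression procedures. As a non-authoritative sketch of how a comparable split-plot analysis could be set up today in Python with statsmodels, the example below models the main plots (direction of planting within a replicate) as a random effect and direction, height, and their interaction as fixed effects. The column names, replicate count, and simulated yields are all assumptions, not the study's data or code.

```python
# Hedged illustration of a split-plot analysis (not the original SAS code).
# Assumed long format: one row per subplot with columns rep, direction
# (main-plot factor), height (subplot factor), and total_yield.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for rep in range(1, 5):                          # assumed four replicates
    for direction in ("parallel", "perpendicular"):
        main_effect = rng.normal(0, 1.5)         # shared main-plot error
        for height in (13, 18, 23, 30):
            y = 55 - 0.6 * max(height - 18, 0) + main_effect + rng.normal(0, 1)
            rows.append(dict(rep=rep, direction=direction, height=height,
                             main_plot=f"{rep}-{direction}", total_yield=y))
df = pd.DataFrame(rows)

# A random intercept per main plot approximates the split-plot error term.
fit = smf.mixedlm("total_yield ~ C(direction) * height",
                  df, groups=df["main_plot"]).fit()
print(fit.summary())
```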
Results and Discussion
Effect of barley height when killed. Allowing the barley to grow taller before killing it did not reduce the total number of onion bulbs harvested except in 1996 and 1997, and then only at the tallest barley height (Table 2). However, the number of large onions was reduced each year at the tallest barley height. In 1998, there was a clear shift from large onions to medium and small onions as barley height at spraying increased. Stress on the plant from competition with barley reduced bulb size.
(Table 2 notes: large, medium, and small onions are >7.6 cm, 5.7 to 7.6 cm, and <5.6 cm in diameter, respectively; mean separation within columns and years was determined by 95% confidence intervals; NS, *, **, *** denote nonsignificant or significant at P ≤ 0.05, 0.01, or 0.001, respectively.)

The first derivative of the regression equations showed that the highest large and total onion yields occurred at the lowest barley height. However, yields of large onions when barley was killed at 15, 23, and 17 cm tall (for 1995, 1996, and 1997, respectively) were not significantly different from those at the lowest barley height. Total onion yields when barley was killed at 15, 18, 23, and 18 cm tall (for 1995, 1996, 1997, and 1998, respectively) were not significantly different from those when it was killed earlier. In 1998, yields of medium and small onions increased as barley height at spraying increased. Although the time of application at which no significant reduction in onion yield occurred varied slightly from year to year, the 18-cm height seemed to be optimum. In the studies by Zandstra and Warncke (1993) and Lanterman et al. (1984), barley had to be killed at a lower height to avoid onion yield reductions, perhaps because they broadcast-seeded the barley, whereas in this study the barley was planted in rows.
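The optimum-height conclusion rests on setting the first derivative of each fitted regression equation to zero. A minimal sketch of that step with a quadratic fit, using invented yield values rather than the paper's fitted coefficients:

```python
# Illustrative only: fit total yield as a quadratic in barley height
# and find where the first derivative equals zero. Yields are invented.
import numpy as np

heights = np.array([13.0, 18.0, 23.0, 30.0])
yields = np.array([54.0, 52.0, 48.0, 40.0])   # hypothetical t/ha

coefs = np.polyfit(heights, yields, deg=2)    # a*h^2 + b*h + c
deriv = np.polyder(coefs)                     # 2a*h + b
h_star = np.roots(deriv)[0]                   # stationary point of the fit

print(np.poly1d(coefs))
print(f"dY/dh = 0 at h = {h_star:.1f} cm")
# With these invented data the stationary point falls below the lowest
# tested height, so fitted yield declines over the whole 13-30 cm range,
# mirroring the pattern described in the text.
```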
The height at which the barley was killed did not affect percentage of single-centered onions, onion uniformity, or their overall appearance.
Effect of direction of planting barley. Compared with planting barley perpendicular to onion rows, planting barley parallel with the onions decreased total number of onion bulbs in 1995, increased total bulbs in 1996, and did not affect total bulb number in 1997 and 1998 (Table 3). Parallel planting also increased numbers and yield of large onions while reducing those of small onions in 1998. Total onion yield was higher in 1996 and 1998, lower in 1995, and unaffected in 1997 when barley was planted parallel with the onion rows. The reductions in number and yield in 1995 may have been the result of imprecise planting of the onions. Sometimes the planter would swing to the side a few centimeters and plant the onion seed in the barley row. Onions were planted more accurately in 1996, and modifying the planter in 1997 and 1998 so that both onions and barley were planted in a single pass solved the seed placement problem. I conclude that planting the barley parallel with, rather than perpendicular to, the onion rows gives equal or greater yields provided that seed placement is accurate. It also saves one pass over the field.
The orientation of the barley did not affect percentage of single-centered onions, or onion uniformity or overall appearance.
Effect on cover crop remaining and size of onion plants. When its height was measured on 2 July 1998, the barley that had been sprayed when it was 10 or 18 cm tall had completely or almost completely disappeared (Table 4). Most of the barley sprayed at the last two spray dates was still erect and covered about one-third of the ground. The taller the barley was allowed to grow before killing it, the shorter and farther behind in development the onion plants were. This delay in development reduced bulb size.
The barley was taller and covered more ground when planted perpendicular to vs. parallel with the onion rows. The difference in barley height may have resulted from the difference between the standard drill used to plant perpendicular rows and the modified planter used to plant parallel rows. The modified planter did not have packer wheels behind the drill openers, and the barley emerged a day later than that planted with the drill. Final stand was similar for both planters. The percentage of groundcover was greater when barley was planted perpendicular to the onion rows because there were more rows of barley.
During the 4 years of the study, barley no longer provided any groundcover around 1 June, 1 July, and mid-August when sprayed at 10, 18, and 23 cm tall, respectively. Barley sprayed when 30 cm tall provided groundcover for the entire season. By mid- to late June, onions are usually large enough (15 to 25 cm tall) to withstand the wind without windbreaks. Spraying the barley when it was 18 cm tall provided protection until that time.
[Table footnotes: orientation of barley vs. onion rows; large, medium, and small onions are >7.6, 5.7 to 7.6, and <5.6 cm in diameter, respectively; percentage of ground covered with barley residue on 2 July determined using the line-intercept method with a 0.9-m ruler, taking counts every 2.5 cm; NS, *, **, *** = nonsignificant or significant at P ≤ 0.05, 0.01, or 0.001, respectively.] | 2019-03-30T13:05:47.886Z | 2000-08-01T00:00:00.000 | {
"year": 2000,
"sha1": "ebb9ae8efbd9e983209634acd28125f2346ee5ac",
"oa_license": null,
"oa_url": "https://journals.ashs.org/downloadpdf/journals/hortsci/35/5/article-p853.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9db6c13d9e0da0acce3443bbc0c3b18ab0e6089d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
195443290 | pes2o/s2orc | v3-fos-license | A Longitudinal Study of Early Reading Development: Letter-Sound Knowledge, Phoneme Awareness and RAN, but Not Letter-Sound Integration, Predict Variations in Reading Development
ABSTRACT It is now widely accepted that phonological language skills are a critical foundation for learning to read (decode). This longitudinal study investigated the predictive relationship between a range of key phonological language skills and early reading development in a sample of 191 children in their first year at school. The study also explored the theory that a failure to establish automatic associations between letters and speech sounds is a proximal causal risk factor for difficulties in learning to read. Our findings show that automatic letter-sound associations are established early, but do not predict variations in reading development. In contrast, phoneme awareness, letter-sound knowledge and alphanumeric RAN were all strong independent predictors of reading development. In addition, both phoneme awareness and RAN displayed a reciprocal relationship with reading, such that the growth of reading predicted improvements in these skills.
Introduction
Fluent reading skills are a critical foundation for educational success, but many children experience problems in learning to read. Developmental dyslexia, a disorder characterized by impaired word reading and spelling, is estimated to affect between 3 and 8% of the population (Peterson & Pennington, 2015), but this diagnosis represents the lower end of a continuous distribution of reading and spelling skills (Fletcher, 2009). It is, therefore, critically important to determine the cognitive skills that predict variations in reading development, to allow us to identify and treat children at risk of reading difficulties.
Learning to read depends on mastery of the alphabetic principle: that written letters represent the sounds of speech (Byrne & Fielding-Barnsley, 1989, 1990). It proceeds in stages from early visually driven associations between printed letters and word pronunciations to later more sophisticated use of phonological information to drive efficient word recognition processes. There is a growing consensus that early reading development is dependent on phonological skills (Fletcher, 2009; Hulme & Snowling, 2013) and that deficits in these skills are probably causally related to difficulties in learning to read. Following on from this, a subset of phonological language skills (phoneme awareness, letter-sound knowledge and rapid automatized naming) have been identified as strong and independent predictors of variations in reading skill (Hulme & Snowling, 2014).
Another recent theory has suggested that dyslexia reflects a failure to automatize associations between speech sounds and letters (e.g. Blomert, 2011; Blomert & Froyen, 2010; van Atteveldt & Ansari, 2014). Proponents of this theory suggest that the phonological deficit in dyslexia is a secondary consequence of problems in learning to read, whereas a deficit in forming automatic associations between letters and phonemes is a proximal cause. This theory might be seen as an extension of the view that letter-sound knowledge is critical for early reading development (Hulme, Bowyer-Crane, Carroll, Duff, & Snowling, 2012; Melby-Lervåg, Lyster, & Hulme, 2012). However, the automatic letter-sound integration hypothesis is more specific. According to this view, letter-sound associations have to be learned to the point of being automatized in order to support the development of accurate and fluent word recognition skills.
Most studies that support this hypothesis are concurrent ERP or fMRI studies comparing letter-sound processing in small groups of children or adults with dyslexia to typically developing readers matched for age (Bakos, Landerl, Bartling, Schulte-Körne, & Moll, 2017; Blau et al., 2010; Blau, van Atteveldt, Ekkebus, Goebel, & Blomert, 2009; Jones, Kuipers, & Thierry, 2016; Karipidis et al., 2017; Kronschnabel, Brem, Maurer, & Brandeis, 2014; Moll, Hasko, Groth, Bartling, & Schulte-Körne, 2016). These studies report atypical (or developmentally delayed) associations between letters and speech-sounds in children with dyslexia, but with little agreement between different studies. In the original ERP studies, typical readers demonstrated an early mismatch-negativity (MMN) in response to mismatched letter and speech-sound pairs, which was absent in adults and children with dyslexia (Froyen, Bonte, van Atteveldt, & Blomert, 2009; Froyen, Willems, & Blomert, 2011). The absence of an early MMN in those with dyslexia has been interpreted as reflecting a deficit in letter-sound integration that is causally related to reading difficulties; however, a subsequent attempt to replicate these findings suggests the early MMN is absent only in the most severely impaired dyslexic readers (Žarić et al., 2014).
Similarly, fMRI studies have reported a deficit in letter-sound integration in adults and children with dyslexia; specifically, a failure to suppress activation in response to mismatched letter-sound pairs relative to typical age-matched control groups (Blau et al., 2009, 2010). However, such group differences could simply be attributed to group differences in phonological processing, which are not controlled for in these studies (Peterson & Pennington, 2015). Crucially, subsequent studies that have controlled for differences in phonological skills find little evidence of a relationship between letter-sound integration and reading (Clayton & Hulme, 2018; Law et al., 2018; Nash et al., 2017). Studies using a priming task to assess letter-sound integration found that children with dyslexia were significantly faster to respond to a speech-sound when primed by a matching visually presented letter, indicating intact automatic activation of sounds by letters (Clayton & Hulme, 2018; Nash et al., 2017). Both age-matched and reading-age-matched controls showed comparable performance, and across a large unselected group of typical readers, the extent of letter-sound integration did not predict concurrent variance in reading performance. Together, these studies suggest that automatic associations between letters and speech-sounds emerge within the first few years of reading development, but at present there is little evidence that individual differences in letter-sound integration predict reading above and beyond phonological skills.
In contrast, there is good evidence that phoneme awareness, letter-sound knowledge and RAN are independent predictors of variation in reading skill which may be causally related to difficulty in learning to read (Hulme, Muter, & Snowling, 1998; Hulme, Nash, Gooch, Lervåg, & Snowling, 2015; Landerl et al., 2018; Melby-Lervåg et al., 2012; Muter, Hulme, Snowling, & Stevenson, 2004; Roth, Speece, & Cooper, 2002; Schatschneider, Fletcher, Francis, Carlson, & Foorman, 2004). The strongest evidence for a causal relationship between both phoneme awareness and letter-sound knowledge and reading development comes from randomized controlled trials (e.g., Bowyer-Crane et al., 2008; Hatcher et al., 2006; Hatcher, Hulme, & Snowling, 2004; Torgesen et al., 1999, 2001). Early, intensive instruction in phoneme awareness and letter-knowledge, and the linkages between the two, improves children's word reading skills. Furthermore, improvements in reading skills brought about by training letter-sound knowledge and phoneme awareness are mediated by improvements in these skills. Rapid automatized naming (RAN) measures the ability to name a random sequence of objects, colours, letters or digits as quickly as possible. It has been suggested that RAN taps brain areas involved in object recognition and naming that are recruited for learning to read; however, there are other competing theories regarding the underlying mechanism driving the RAN-reading relationship (e.g. Jones, Ashby, & Branigan, 2013; Protopapas, Altani, & Georgiou, 2013). RAN can be divided into alphanumeric RAN (letters and digits) and non-alphanumeric RAN (objects and colours). Both concurrent and longitudinal studies show that RAN is a correlate of reading skills (Allor, 2002; Bowey, 2005; Kirby, Georgiou, Martinussen, & Parrila, 2010; Wolff, 2014; for a meta-analysis, see Araújo, Reis, Petersson, & Faísca, 2015). The finding that RAN predicts reading speed in typically developing children and in children with dyslexia, even with non-alphanumeric subtests, shows that this effect is not simply a result of differences in letter or digit knowledge (Verhagen, Aarnoutse, & Van Leeuwe, 2008; Wolf & Bowers, 1999; Wolff, 2014).
In addition to evidence that these three phonological language skills may be causally related to variations in learning to read, there is also mounting evidence that the relationships they share with reading may be bi-directional. Phoneme awareness, in particular, has been found to share a reciprocal relationship with reading, such that learning to read leads to subsequent improvements in phonemic skills (Castles & Coltheart, 2004; Hulme, Snowling, Caravolas, & Carroll, 2005; Perfetti, Beck, Bell, & Hughes, 1987). For example, Perfetti et al. (1987) found a reciprocal relationship between several tests of phoneme awareness and early reading development in 1st graders, with a phoneme deletion test exhibiting the most marked reciprocal relationship. The authors suggested that while a level of phoneme awareness was necessary in order to begin the process of learning to read, more advanced phonemic awareness develops in tandem with the development of reading. Indeed, evidence suggests a possible virtuous circle of reciprocal relationships between phoneme awareness, letter-sound knowledge and reading (Muter et al., 2004), with increased phonemic awareness improving the learning of letter-sound correspondences (Fox & Routh, 1984; Treiman & Baron, 1983), in turn leading to improvements in reading which then drives further refinements of phonemic and letter-sound knowledge.
The extent to which RAN and reading skill may share a reciprocal relationship appears less clear-cut. This may be due to differences in the age of participants and the types of RAN measure used across studies. For example, Compton (2003) reported a reciprocal relationship between RAN digits and word, but not non-word, reading in a representative sample of 1st graders, which was most marked in the poorest decoders. A reciprocal relationship between RAN and word reading speed was also reported in a training study of older Swedish children with reading difficulties (Wolff, 2014). However, in a large representative sample of Norwegian children followed from school entry to grade 4, alphanumeric RAN predicted the development of word reading fluency but not vice versa. The same pattern was found in a longitudinal study of Dutch children in 1st and 2nd grade, using a composite measure of alphanumeric and non-alphanumeric RAN (Verhagen et al., 2008). Most studies showing reciprocal relationships between the development of reading and RAN have used alphanumeric RAN tasks either early in reading development (when letter and number name knowledge are not expected to be fully automatized) (Compton, 2003; Peterson et al., 2017), or in samples of children with reading difficulties (Wolff, 2014). Thus, it may be that alphanumeric RAN is influenced by reading development only during that phase of development when knowledge of letter and digit names is not yet fully automatized.
In summary, evidence indicates that phonological language skills (letter-sound knowledge, phoneme awareness and RAN) may be causally related to variations in learning to read, while evidence for a relationship between letter-sound integration and reading is far from conclusive. This longitudinal study, therefore, examines the role of letter-sound integration as a predictor of early reading development alongside other better established predictors (phoneme awareness, letter-sound knowledge and RAN). We measure these skills during the first year of school when children are aged 4-5 years old. Theoretically, this is a critical period of development, since it encompasses the first year of formal reading instruction, when the foundations of children's decoding skills are being established. Measuring children's performance during this period enables us to establish how early automatic associations between letters and speech-sounds emerge. We predict that this early time window of development may be when the automaticity of letter-sound correspondences might make the greatest contribution towards growth in word reading. Furthermore, tracking the development of these foundational skills throughout the first year of school will allow us to investigate potential reciprocal relationships with reading during this critical period of development.
Participants
One hundred and ninety-one children (107 male, 84 female) participated in the study. Children were recruited at school entry from 7 primary schools in Greater London. The average age of the children at the start of the study was 4 years, 6 months (range = 4 years, 0 months, to 5 years, 2 months, SD = 3.54 months). Ethical approval was given by the University College London Research Ethics Committee. Head teachers gave written informed consent for children to take part, and the parents of each child were given the option of withdrawing their child from the study before it began.
Design and testing procedure
The children were tested 4 times over a period of 14 months: a) September -December, (Reception Term 1); b) January -March, (Reception Term 2); c) May -July, (Reception Term 3) and d) September -November (Year 1 Term 1). At each time-point children were tested individually in two sessions each lasting approximately 30 minutes. All testing was completed in school. There was a small amount of missing data where children were absent from school. In addition, some children did not complete all tasks at each time-point due to time constraints. However, as tasks were not administered in a fixed order, data can be considered to be missing completely at random (MCAR).
Tests and materials
The children completed an experimental task designed to measure automatic letter-sound integration and a range of measures assessing early reading and language skills.
Letter-sound priming task
This task involved the successive presentation of a visual letter prime and an auditory letter-sound target. Children were required to decide on each trial whether the second stimulus (the "target") was a speech-sound or a "robot sound". Fifty percent of trials consisted of speech sounds; the other 50% of trials involved the presentation of a scrambled speech sound ("robot sound"). Response time (RT) was measured to the auditory stimuli (speech/scrambled speech decision RT). Figure 1 details the trial structure across the three experimental conditions.
Stimuli
Stimuli in this task were recordings of the 5 letter-sounds /tə/ (293 ms), /də/ (263 ms), /və/ (428 ms), /zə/ (413 ms) and /dʒə/ (357 ms). Scrambled versions of these stimuli were created in Matlab by randomly assembling 5-ms segments of the original signal (Ellis, 2010). These scrambled sounds were identical in length, energy and spectral composition to the original speech sounds but sounded completely unlike speech. The lowercase letters corresponding to the letter-sounds were used as the letter primes and were presented in Arial font (approximately 23 × 20 mm). On 50% of trials a letter prime was presented and on the other 50% of trials one of five novel letter-like forms (adapted from Taylor, Plunkett, & Nation, 2011) was presented.
Apparatus
Stimuli were presented and responses recorded (speed and accuracy) using E-Prime Software (version 2.0) using a Psychology Software Tools Serial Response Box (SRB; model 200a) and a laptop running Windows 7. Auditory stimuli were presented through headphones.
Procedure
Children were instructed to attend to both the letter and speech-sound and decide whether the sound was a "real" speech-sound using "yes" and "no" response keys on the response box. Before the task began children were familiarized with the procedure in thirteen practice trials.
On each trial a centrally located fixation point was presented for 1000ms, followed by the letter or non-letter stimulus, presented in black and appearing on a white screen for 500ms. The auditory target was presented over headphones and its onset was synchronous with the offset of the visual letter. Each trial was followed by the visual prompt "Real sound?" Response times from the response box were recorded from the onset of the auditory target. The experimenter monitored the child's performance, controlling the presentation of trials.
There were six conditions in the letter-sound priming task. In the congruent condition, the prime and target were the same letter/sound. In the incongruent condition the prime and target were not the same letter/sound. In the baseline condition, the prime was a novel letter and the target was a speech-sound. There were three additional control conditions to prevent children detecting the relationship between primes and targets and generating expectancies. In these control conditions the target was a non-speech sound. Novel symbols and scrambled speech-sounds were yoked to create pseudo baseline, congruent and incongruent control conditions.
The letter-sound priming task was completed across two sessions on consecutive days to reduce attentional demands. In total there were 20 congruent and 20 incongruent trials. In the congruent condition there were four trials of each pairing and in the incongruent condition each letter prime was presented once and paired with all other speech-sounds. There were 40 baseline trials to ensure equal probability of the presentation of a novel symbol relative to a real letter prime. This resulted in 180 trials in total, including 20 "catch" trials to ensure children were attending to the screen. On catch trials the same letters were presented in a black and white animal print (for example, zebra stripes) and children were instructed to make a different response (using a different button on the response box).
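The trial counts can be checked with a little arithmetic. In the sketch below, the 20/20/40 split of the scrambled-sound control trials is inferred from the yoking described above; it is our assumption, not a figure stated in the text.

```python
# Sanity check of the trial counts described above. The split of the
# scrambled-sound ("robot") control trials is assumed to mirror the speech
# trials by yoking; that exact split is inferred, not stated, in the text.
speech_trials = {"congruent": 20, "incongruent": 20, "baseline": 40}
control_trials = {"pseudo_congruent": 20, "pseudo_incongruent": 20,
                  "pseudo_baseline": 40}  # assumed yoked split
catch_trials = 20

total = sum(speech_trials.values()) + sum(control_trials.values()) + catch_trials
print(total)  # 180, matching the reported total
```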
Letter-sound knowledge
Children completed the letter-sound knowledge (LSK) subtest from the York Assessment of Reading for Comprehension (YARC). This test required children to say the sound corresponding to 32 letters and digraphs.
Reading
Children completed the Early Word Recognition (EWR) subtest from the YARC. This test required children to read aloud a list of words of increasing difficulty without time pressure. The maximum possible score is 30.
Phoneme awareness
Children completed the sound deletion subtest from the YARC. In this test children heard a word (and saw an accompanying picture) and were required to repeat it and then repeat it again after deleting a sound (for example "Can you say seesaw? Can you say it again but this time don't say saw?"). Practice trials ensured children understood the instructions. There were 17 items of increasing difficulty and the number of items answered correctly was recorded.
Rapid automatised naming (RAN)
Children completed two RAN subtests (colours and digits) from the Comprehensive Test of Phonological Processing. Each subtest required children to name two 9 × 4 arrays of stimuli as quickly and accurately as possible. The time taken to name all of the items was recorded, as was the number of errors (incorrect naming and/or omission of an item). Testing was discontinued if the child made four or more errors on the first stimulus array.
Results
Means and standard deviations for all measures at each time-point are shown in Table 1. Measures show a good range of scores, with the exception of EWR and Phoneme deletion, where many children were at floor at Time 1 (T1). As expected, measures of reading, letter knowledge, phoneme deletion and RAN were significantly correlated at each time point. Standardised measures correlated well across time points. Children improved substantially in performance on all phonological tasks over the course of the study, most markedly between T1 and T2. For correlations between all measures across all time points see Appendix 1 in the online supplementary materials.
The emergence of letter-sound priming
Only correct responses were considered, and outliers were removed from the raw reaction time (RT) data. RTs over 5000 ms were first removed, as these were considered to reflect a lapse in attention. A non-recursive outlier removal procedure was then used (Van Selst & Jolicoeur, 1994). Finally, RT data were excluded from the analysis where response accuracy was below 75% correct. Following these steps, 85% of the RT data were included in the analyses at T1, 83% at T2, 89% at T3, and 82% at T4.
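For illustration, the cleaning pipeline can be sketched as below. Van Selst and Jolicoeur's non-recursive procedure uses a cutoff criterion that depends on sample size; the fixed 2.5-SD criterion here is a simplified stand-in for that table-based criterion, so this is an approximation of the method, not the exact procedure used in the paper, and the RT values are hypothetical.

```python
# Simplified sketch of the RT-cleaning pipeline described above. A fixed
# 2.5-SD criterion stands in for the sample-size-dependent criterion of the
# non-recursive procedure (Van Selst & Jolicoeur, 1994).
import numpy as np

def clean_rts(rts: np.ndarray, criterion: float = 2.5) -> np.ndarray:
    rts = rts[rts <= 5000]                 # drop lapses of attention (>5000 ms)
    m, sd = rts.mean(), rts.std(ddof=1)
    return rts[np.abs(rts - m) <= criterion * sd]  # non-recursive trim

rts = np.array([412, 530, 488, 4990, 610, 5600, 455])  # hypothetical correct-trial RTs (ms)
print(clean_rts(rts))  # 5600 ms removed by the hard cutoff before trimming
```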
The mean correct response times in each condition, together with 95% within-subject confidence intervals (Morey, 2008) are shown for each time-point in Figure 2. At T2-4 it is clear that there is an identical pattern across conditions, with faster responses in the congruent condition compared to the baseline condition, and no appreciable slowing in the incongruent condition. However, at T1 children show a contrasting pattern, with similar response times in the baseline and congruent condition and slowing in the incongruent condition.
Response times for the baseline, congruent and incongruent conditions for each time point were compared using a mixed effects linear model treating participants and items as crossed random effects.
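Spelled out in standard notation, a crossed random-effects model of this form can be written as below; this rendering is ours, not the authors', and is included only to make the model structure explicit.

```latex
% Crossed random effects for participants (p) and items (i):
\[
  \mathrm{RT}_{pic} \;=\; \beta_0 \;+\; \beta_c \;+\; u_p \;+\; v_i \;+\; \varepsilon_{pic},
\]
% where \beta_c is the fixed effect of condition c (baseline, congruent,
% incongruent), u_p \sim N(0, \sigma_u^2) and v_i \sim N(0, \sigma_v^2) are
% random intercepts for participant and item, and \varepsilon_{pic} is the
% residual error.
```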
The relationship between letter-sound priming and reading related skills
We modelled the development of reading skills using growth curve models. As most children at T1 could not read, we had to restrict the reading growth model to T2, T3 and T4. Furthermore, at T1 there was no statistically significant facilitation effect in the letter-sound integration task, but between T2 and T4 there was a statistically significant facilitation effect (faster reaction times to letter sounds that were preceded by their corresponding printed letter). Because growth was faster between T2 and T3 compared to between T3 and T4, we fitted a nonlinear growth model where we freely estimated the middle time point. This gave us a model with a significant intercept (m = 7.899 words, p < .001) at T2 (initial status), significant growth (m = 8.223 words per year, p < .001) and significant variance in both the intercept and rate of growth (sd = 6.410, p < .001 and sd = 4.107, p = .049 for intercept and growth, respectively).
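In conventional latent growth notation, a model with a freely estimated middle loading can be written as below. This rendering is ours; the exact scaling of the loadings (0-to-1 vs. per-year units) is an assumption made for illustration.

```latex
% Latent growth model for word reading at t in {T2, T3, T4}:
\[
  \mathrm{Reading}_{jt} \;=\; I_j + \lambda_t \, S_j + \varepsilon_{jt},
  \qquad \lambda_{T2} = 0,\;\; \lambda_{T3}\ \text{free},\;\; \lambda_{T4} = 1,
\]
% so the intercept I_j is initial status at T2 and the slope S_j is growth
% from T2 to T4; freeing \lambda_{T3} accommodates the faster growth between
% T2 and T3 noted in the text.
```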
First, we wanted to see if our measure of facilitation in the letter-sound integration task was a predictor of initial status and growth in word reading. In order to correct for measurement error in the reaction time measures we created latent variables for both baseline and congruent reaction times by grouping the items into four parcels for each construct, at each time point. These parcels were then used as indicators of a latent baseline reaction time construct and a latent congruent reaction time construct that allowed us to estimate the true score regression of reading growth on facilitation in the letter-sound integration task. We assessed facilitation by taking the residual of congruent reaction times after regressing them on baseline reaction times.
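The residualization step can be illustrated with observed scores. The paper performed it at the latent-variable level, so the sketch below is a simplified observed-score version with hypothetical RT values, not the authors' analysis.

```python
# Minimal sketch of the residualization used to define "facilitation":
# congruent RT is regressed on baseline RT, and the residual (the part of
# congruent speed not explained by overall speed) is retained.
import numpy as np

def facilitation_residual(baseline_rt: np.ndarray, congruent_rt: np.ndarray) -> np.ndarray:
    slope, intercept = np.polyfit(baseline_rt, congruent_rt, 1)
    predicted = intercept + slope * baseline_rt
    return congruent_rt - predicted  # negative = more facilitation than expected

baseline = np.array([900.0, 1100.0, 1000.0, 1250.0])   # hypothetical mean RTs (ms)
congruent = np.array([820.0, 1040.0, 980.0, 1150.0])
print(facilitation_residual(baseline, congruent))
```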
The full model is shown in Figure 3. Here, both baseline reaction time and the residual of congruent reaction time (i.e. facilitation: congruent reaction time that is independent from baseline reaction time) at T2 are used as potential predictors of initial status and growth in word reading (between T2 and T4). As can be seen from Figure 3, shorter baseline reaction times at T2 were associated with better initial status and faster growth in early reading skills. However, baseline reaction time explained 89.70% of the variance in congruent reaction time, and the residual of congruent reaction time was not a significant predictor of either initial status (unique R2 = .017) or the rate of growth (slope; unique R2 = .020) in reading skills. This model had a good fit to the data, χ2 (39) = 64.351, p = .007, RMSEA = .059 (90% CI = .031-.084), CFI = .980, TLI = .972.
The results of the model in Figure 3 show clearly that the degree of facilitation in the letter sound integration task at T2 (the unique effects of congruent reaction time after accounting for baseline reaction time) plays no appreciable role in predicting individual differences in initial reading levels or rates of growth in reading between T2 and T4. However, in that model baseline RT is a predictor of both initial reading level and the rate of growth in reading. The different measures of RT in the letter-sound integration task were very highly correlated, and in subsequent analyses we proceeded to assess the dimensionality of the RT measures and their possible role as predictors of reading development. It seems quite possible, based on the model in Figure 3, that a measure of overall speed on the letter-sound integration task (rather than the degree of facilitation on the task) would be a unique predictor of reading development.
To examine the dimensionality of the RT measures from the letter-sound integration task we estimated a confirmatory factor analysis where we included the three measures (baseline, congruent and incongruent) in the same latent variable at each of the four time points. This model had scalar invariance, χ2 (12) = 17.453, p = .133, and a very good model fit, χ2 (60) = 83.866, p = .023, RMSEA = .047 (90% CI = .018-.069), CFI = .984, TLI = .982. As can be seen from Figure 4, the standardised factor loadings were strong for all three reaction time scores at all time points (ranging from .812 to .955). There were strong and significant correlations between this overall RT factor at T2, T3 and T4 but no significant correlations between this factor at T1 and the later time points. This model shows that the three reaction-time scores load very well on a single latent variable that has the same structure at all time points and shows strong stability between T2 and T4. The absence of correlations between T1 and the later time points presumably reflects children's insecure letter knowledge at T1.
To see if this overall latent RT factor predicted growth in reading we re-estimated the model in Figure 3 but replaced the observed baseline and congruent reaction-time scores with the overall latent reaction-time factor. As Figure 5 shows, the reaction-time factor predicted both the intercept and rate of growth of reading skills; faster reaction times being associated with better initial skills and faster growth in reading. The model had an excellent fit to the data, χ2 (8) = 4.569, p = .803, RMSEA = .000 (90% CI = .000-.055), CFI = 1.00, TLI = 1.01.
However, when letter knowledge, phoneme awareness and RAN were included as predictors, the overall latent reaction-time factor did not predict any unique variance in either initial status or the growth of reading. In this model (see Figure 6), letter knowledge, phoneme awareness and RAN all predicted unique variance in both the initial status (R2 = .664) and the growth of reading (R2 = .373). RAN was estimated by a latent variable with RAN colours and RAN digits as indicators, while letter knowledge and phoneme awareness were estimated by latent variables with only one indicator where the residual was fixed according to the measures' reliabilities (α = .85). This model had an excellent fit to the data, χ2 (25) = 32.551, p = .143, RMSEA = .040 (90% CI = .000-.075), CFI = .994, TLI = .89.
The correlations between the latent variables in Figure 6 are shown in Table 2. As might be expected, RAN, letter knowledge and phoneme awareness show moderate correlations with each other. Overall reaction time on the letter-sound integration task also shows moderate correlations with RAN, letter knowledge and phoneme deletion (r's between .33 and .39); this suggests that performance on the letter-sound integration task reflects in part variations in phonological skills and letter-sound knowledge.
Possible reciprocal relationships between RAN and reading development
Finally, we wanted to examine potential reciprocal relationships between reading development and the development of RAN, phoneme awareness and reaction time. We estimated four models where the growth of these three variables was estimated in parallel with the growth of reading. In particular, we were interested in whether the initial status of one process predicted the growth of the other process when the initial status of the other process was controlled. As growth was nonlinear for all of the measures we freely estimated the factor loading for the middle time point. There were negative residuals for the first time point for reading, RAN colours, phoneme awareness and reaction time; however, as they were all non-significant, we fixed them to zero.
As the growth of reading had a different relationship with the growth of RAN colours compared to RAN digits, we estimated the two growth processes in separate models. Simplified versions of these models are shown in Figures 7a and 7b for the colour and digit versions, respectively. Initial status for both RAN digits and colours predicted the growth of reading after controlling for initial levels of reading; these coefficients are negative, meaning shorter times on the RAN tasks were associated with faster growth in reading. In addition, there was a reciprocal relation between reading and the growth of RAN digits: higher levels of initial reading skill were associated with slower rates of growth in RAN digits. This pattern is consistent with the view that children with the weakest reading skills at the beginning of the study may have had insecure knowledge of digit names, which allowed for growth in digit naming speed as reading development increased. Furthermore, there were significant correlations between the growth of the two RAN constructs and the growth of reading. The model fit was good for both RAN colours, χ2 (6) = 9.003, p = .173, RMSEA = .051 (90% CI = .000-.116), CFI = .996, TLI = .991, and RAN digits, χ2 (5) = 6.616, p = .251, RMSEA = .041 (90% CI = .000-.115), CFI = .998, TLI = .994, respectively. There were also reciprocal relationships between phoneme awareness and reading, as the initial status of one process predicted the growth of the other (see Figure 7c): children with higher initial reading levels showed greater growth in phoneme awareness, and similarly children with higher initial levels of phoneme awareness showed greater growth in reading. In addition, there were significant correlations between the growth of the processes after controlling for starting levels in the other processes. The model fit was good for this model, χ2 (7) = 7.773, p = .353, RMSEA = .024 (90% CI = .000-.095), CFI = .999, TLI = .998.
Discussion
This longitudinal study examined the relationships between developing reading skills and a range of predictors of reading in a sample of 191 children assessed at four time points during their first year of formal reading instruction (mean ages: 4;6 years to 5;6 years). Measures included well established predictors of reading (phoneme awareness, letter-sound knowledge, RAN), as well as a novel measure of automatic letter-sound integration.
We used latent growth curve modelling to examine relationships between these measures and reading development. This statistical technique eliminates measurement error by constructing latent variables that take only the common variance of their indicators into account. It also allows potential reciprocal effects to be examined. Since measures of reading were at floor at the start of the study, growth models were estimated from T2 onwards.
The current longitudinal study used a measure of letter-sound integration that had been used previously in a concurrent study with older children (Clayton & Hulme, 2018). The results from the letter-sound integration task showed a robust priming effect, which was in evidence as soon as children had learned letter-sound correspondences (from T2 onwards, after just 4 months in school). Thus, we have evidence of letter-sound integration emerging earlier than suggested by some previous research (Froyen et al., 2009). The priming effect reported from T2 onwards directly replicates the pattern observed in our previous study with the same task (Clayton & Hulme, 2018), but extends this finding to younger children in the earliest stages of learning to read. In line with our earlier findings from a concurrent study with older children (Clayton & Hulme, 2018), the extent of the priming effect on the letter-sound integration task was not a predictor of reading in this younger sample. The presence of robust priming effects on the letter-sound integration task at T2 demonstrates that the task is sensitive to children's knowledge of letter-sound relationships, but it is striking that the degree of facilitation on this task is not a reliable predictor of individual differences in reading development. It would be useful for future studies, however, to examine whether alternative measures of letter-sound integration can be developed that are related to individual differences in reading development.
Faster response speeds on the different conditions of the priming task were associated with better letter-sound knowledge, phoneme awareness and RAN performance. These findings are consistent with previous results showing overall slower responding on the task in children with dyslexia (Clayton & Hulme, 2018). In the current study, a latent variable representing the shared variance in speed of response across conditions of the letter-sound priming task was found to predict both reading status at T2 and growth of reading between T2 and T4. However, once letter-sound knowledge, phoneme awareness and RAN were added to this model, it did not predict any additional unique variance. This pattern suggests that overall reaction times on the letter-sound priming task are related to reading ability, but not as well as better established measures (letter-sound knowledge, phoneme awareness and RAN).
Our growth models clearly showed that letter-sound knowledge, phoneme awareness and RAN were all strong, independent predictors of word reading, predicting both initial reading status of children after only a single term of formal reading instruction and growth in reading over the remainder of the year. These longitudinal results measured across a relatively narrow time window extend those from previous research examining predictors of early reading development (Allor, 2002;Lervåg, Bråten, & Hulme, 2009;Muter et al., 2004), and highlight the important role phonological skills play in the earliest stages of learning to read.
An important finding in this study is that both phoneme awareness and alphanumeric RAN share a reciprocal relationship with reading. Learning to read appears to lead to improved performance on phoneme deletion and RAN digits at later time points. This finding is consistent with previous research reporting reciprocal relationships between phoneme awareness and early word reading development (Burgess & Lonigan, 1998; Hogan, Catts, & Little, 2005; Perfetti et al., 1987; Peterson et al., 2017). Some previous research has also reported that literacy development influences subsequent improvement in alphanumeric RAN (Compton, 2003; Wolff, 2014). Crucially, although a strong longitudinal relationship between alphanumeric and non-alphanumeric RAN suggests that both types of RAN rely on the same underlying cognitive mechanisms, reciprocity between RAN and reading growth in the current study was only found for RAN digits and not RAN colours. This finding that initial reading skills predict growth in digit (alphanumeric) but not colour (non-alphanumeric) RAN is consistent with the view that familiarity with alphanumeric stimuli may be intimately related to increases in early reading skills. At T2 the children were roughly 4 years 9 months old and had been in school for a little over one term; hence we might expect knowledge of digit names to be less than fully automatized at this stage of development, whereas children of this age would be fully familiar with colour names. The current findings are, therefore, consistent with the unidirectional relationship between non-alphanumeric RAN and reading reported previously. The current results differ slightly from those in Peterson et al. (2017), who found that the reciprocal relationship between RAN and reading development extended to non-alphanumeric RAN. However, the reciprocal effect in Peterson and colleagues' study was limited to the very youngest children in the sample (pre-k) and was absent in older children for whom RAN colours still predicted reading (1st grade). It is possible that by T2 the children in the current study were already too old to show reciprocity between non-alphanumeric RAN and early reading accuracy, whereas increasing facility in the retrieval of letter and digit knowledge fed into a demonstrable bi-directional relationship with alphanumeric RAN.
The finding that both phoneme awareness and alphanumeric RAN share a reciprocal relationship with early reading development has important implications for both theory and practice. It suggests, not only that the development of phonemically structured phonological representations are critical for learning to read, but that reading experience, in turn, exerts a positive influence on the development of such representations. This pattern raises the possibility that the phonological deficit in dyslexia, especially in older children, may be partially a consequence of reading failure (Peterson et al., 2017). From a clinical perspective, tests of phoneme awareness and RAN have great benefits as assessment tools for children at risk of reading difficulties, not least because they are simple to administer. However, if the relationship between these predictive skills and reading is reciprocal then assessment using these skills potentially loses an element of predictive power in identifying children with reading disorders (Hogan et al., 2005;Peterson, 2017), at least later in development, once reading is established.
To conclude, this longitudinal study of children during the first year of reading instruction provides further support for a close relationship between phonological skills (phoneme awareness, letter-sound knowledge and RAN) and early reading development. In the case of phoneme awareness and alphanumeric RAN, this relationship appears to be a bi-directional one, with increasing reading accuracy leading to improvements in these core phonological skills. By contrast, the study found that automatic integration of letter-sound correspondences could be measured early in development (after just 4 months of formal reading instruction) but did not predict variations in word reading skill. Furthermore, although overall response speed on the letter-sound integration task did predict growth in reading, it did not provide a unique contribution over and above letter-sound knowledge, phoneme awareness and RAN. | 2019-06-26T13:15:27.944Z | 2019-06-08T00:00:00.000 | {
"year": 2020,
"sha1": "c586b2bf1b0eeb1885b5db48fd7e8a2472a68ce4",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/10888438.2019.1622546?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "f1d8a632728a0ee98d2b142cdcef9ae1aebc3827",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
226621325 | pes2o/s2orc | v3-fos-license | Two-dimensional speckle-tracking of antral contraction in dogs
This study aimed to establish a reference range for stomach antral contraction strain in 50 dogs using 2-dimensional speckle tracking. In addition, the strain results were compared among body condition scores to reveal correlations with obesity among the subjects of the study. Finally, a medetomidine group comprising 10 dogs was compared with the normal group to identify the pharmacologic effect of medetomidine on stomach antral contraction. Fifty clinically healthy dogs were recruited for the study. In an ultrasonographic examination, the stomach antrum region was scanned, and at least one cycle of antral contraction was recorded. The peak strain of antral contraction in healthy dogs was 58.2 ± 20.47% (mean ± SD). The obesity group showed a high strain result, and there were significant differences between the body condition score (BCS) 2 and BCS 3 groups and the BCS 8 group. The medetomidine group revealed a low strain result that differed significantly from the normal group. Two-dimensional speckle tracking was useful for evaluating stomach motility disorders.
Introduction
Stomach contraction motility is mediated by a slow wave, an electrical wave generated by the interstitial cells of Cajal (ICCs) in the gastric smooth muscle layer [1-3]. In previous investigations, altered gastrointestinal motility was found in obese human patients [3,4]. Similarly, in rat experiments, intestinal motility regulated by the enteric nervous system was stronger in the obese group than in the control group [5].
Medetomidine is an α2-adrenergic agonist used for analgesia and sedation, primarily in dogs and cats [6-8]. Its pharmacologic effects suppress the gastrointestinal tract: it inhibits gastric secretion, reticuloruminal contractions and colonic motility in ruminants and horses, and is also known to inhibit electrical activity of the small intestine in dogs [6-8]. Suppression of central nervous system and endocrine functions, as well as muscle relaxation, are also among medetomidine's pharmacologic effects [7,8].
Two-dimensional (2D) speckle tracking imaging is a recent technique used to evaluate myocardial motion and function in humans [9,10] and in dogs [11-14]. The technique traces the motion pattern of unique speckles in tissue B-mode images over time, and is independent of the insonation angle, tissue translation, and tethering in the subject [9,11]. In previous studies, speckle tracking and strain measurement were used to assess the activity of an in vitro porcine antrum, in vitro human uterine tissue, and human stomach antral contraction. These evaluations are suitable for a comprehensive quantitative analysis of gastrointestinal and uterine motility [15-19].
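The strain quantity referred to throughout can be written, in the usual Lagrangian convention, as below. This rendering is ours; the vendor software built into the ultrasound system may use a variant definition.

```latex
% Radial (thickening) strain of the antral wall, Lagrangian convention:
\[
  \varepsilon \;=\; \frac{W_{\mathrm{peak}} - W_{0}}{W_{0}} \times 100\%,
\]
% where W_0 is the wall thickness at rest between contractions and
% W_peak the thickness at peak contraction, tracked via the speckle pattern.
```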
This study was designed to generate reference data for stomach antral contraction strain using 2D speckle tracking in dogs, to determine whether the measured variables correlate with obesity, and to compare the measured differences between the normal and medetomidine groups.
Materials and Methods
Animal recruitment
All procedures were approved by the Institutional Animal Care and Use Committee at Gyeongsang National University, and the dogs were cared for according to the Guidelines for Animal Experiments (GNU-161031-D0062) of Gyeongsang National University. Fifty clinically healthy dogs were recruited for examination in this study. They included 3 intact males, 26 castrated males, 8 intact females and 13 spayed females. The dogs ranged in age from 1 to 18 years, with an average weight of 8.1 ± 6.3 kg (range, 2.1 to 37 kg). The dogs were screened for evidence of gastrointestinal disease by physical examination, complete blood counts, serum biochemistry, abdominal radiographs, and ultrasonography.
To assess the effect of obesity on stomach antral contraction strain, the body condition score (BCS), a measure commonly used in weight management studies [20], was used. All dogs were graded and grouped by BCS following a physical examination. The groups were BCS 2 (n = 3), BCS 3 (n = 5), BCS 4 (n = 10), BCS 5 (n = 20), BCS 6 (n = 9), and BCS 8 (n = 3).
Medetomidine group
Among the 50 dogs, 10 young adult healthy beagle dogs were recruited for the medetomidine group. This group comprised ten castrated male dogs, all 4 years old, with an average weight of 10.2 ± 1.19 kg (range, 8.6 to 12.2). They underwent an ultrasonographic examination before drug injection. Medetomidine (Domitor®; Orion Pharma, Orionintie, Finland) was then injected intravenously at 40 mcg/kg, which produced sedation and analgesia in all subjects. An abdominal ultrasonographic examination was performed 5 min after the drug injection, once the drug had taken effect.
Equipment and measurement
All ultrasonographic examinations were performed by one investigator (P.J.H.) using an ultrasound system (Arietta 70; Hitachi Aloka Medical, Tokyo, Japan) and a high-frequency (12 MHz) linear-array transducer. The hair was clipped, and the transducer was applied with coupling gel to the skin at the right side of the cranial abdomen. The dogs were restrained in dorsal recumbency on the examination table, and all dogs were fasted for 12 h before the ultrasonographic examination.
Contraction images were acquired at the highest quality possible and analyzed by the same investigator (P.J.H.). A short-axis view of the stomach antrum region was used to evaluate radial strain by 2D speckle tracking with a software program built into the ultrasound system. All data were obtained from one acquired contraction cycle (Fig. 1A).
The resulting strain was mapped in segments of the gastric antrum wall. The inner and outer borders of the stomach muscular layer were manually traced to select the appropriate region of interest (ROI) (Fig. 1B). The observer then checked that the ROI remained visually synchronized during the stomach contraction. After processing, the computer software automatically traced the stomach muscular layer, and the observer evaluated whether it reliably followed the muscular-layer speckles. If the evaluation failed because the software showed a segment of inadequate tracking quality, the inner and outer borders were manually corrected and reevaluated. If more than three attempts failed, the images were excluded from analysis. The peak strain and the speckle size (the distance between the two strain measurement points that formed the basis of the strain estimation on the inner and outer borders of the ROI) were measured in the radial plane.
Statistical analysis
All statistical analyses were performed with commercial statistical analysis software (SPSS version 19.0; SPSS Inc., USA). A one-way analysis of variance (ANOVA) was performed to determine whether a relationship existed between antral contraction strain and BCS in normal dogs. A one-way ANOVA followed by a post hoc Tukey's multiple comparisons test was used to compare the peak strain among the BCS groups. A paired t-test was performed after normality testing to determine differences in antral contraction strain after administration of medetomidine in 10 dogs. A p value less than 0.05 was considered to indicate a significant difference.
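For readers without SPSS, the ANOVA and Tukey comparison can be reproduced as sketched below. This is our hedged illustration, not the study's analysis code; the data file and column names ("antral_strain.csv", "strain", "bcs") are hypothetical.

```python
# Hedged Python equivalent of the one-way ANOVA + Tukey post hoc comparison
# of peak strain across BCS groups (the study used SPSS 19.0).
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("antral_strain.csv")  # hypothetical: columns 'strain', 'bcs'

groups = [g["strain"].values for _, g in df.groupby("bcs")]
f_stat, p_val = stats.f_oneway(*groups)          # one-way ANOVA across BCS groups
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

print(pairwise_tukeyhsd(df["strain"], df["bcs"], alpha=0.05))  # Tukey HSD
```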
Normal range of antral contraction strain and correlation with BCS score
In the entire group of 50 dogs, the mean antral contraction strain was 58.2 ± 20.47% (mean ± SD) and the 95% confidence interval was 52.39% to 64.02%.
Antral contraction strain varied among the BCS groups (Table 1). The BCS 8 group differed significantly from BCS 2 (p = 0.042) and BCS 3 (p = 0.016) in the strain results (Fig. 2).
Comparison of antral contraction strain between normal group and medetomidine group
The mean ± SD antral contraction strain in the normal group was 69.56 ± 13.71% (95% CI, 59.75% to 79.37%), and in the medetomidine group 8.9 ± 3.73% (95% CI, 6.24% to 11.57%). The strain in the medetomidine group was significantly lower than in the normal group (p < 0.001) (Fig. 3).
Discussion
In previous studies, the strain measured in human antral contraction was 82%, and intragastric balloon pressure significantly correlated with strain; strain therefore correlates well with stomach contraction [15,17]. In the present study, the antral contraction strain was 58.2 ± 20.47% (mean ± SD) in dogs.
In vitro testing using a silicone strip phantom mimicking slowly moving tissue revealed that if the speckle size was small, the strain result was underestimated. Increasing the speckle size to 1.9 mm reduced some of this error, but speckle sizes up to 3.1 and 4.2 mm did not increase the accuracy further, and a speckle size of 0.8 mm was too small for measuring strain in the in vitro phantom experiment [21]. Another study, which measured porcine antral contraction strain, compared 1.2 mm with 1.9 mm speckle sizes and found the 1.9 mm speckle size better for estimating strain in the in vitro model [16].
In this study, the speckle size was 1.36 ± 0.35 mm (mean ± SD) and varied among individual dogs. An influence of this variation on the strain results therefore cannot be excluded.
Many methods have been reported for evaluating stomach motility [22-24]. Among these, electrogastrography (EGG) is a traditional noninvasive technique often used to evaluate gastric electrical activity [22-24]. In a human study of children using EGG, no differences were found between normal and obese patients [25], but other studies of morbidly obese adult patients showed an increased percentage of bradygastria [26]. Bradygastria is associated with strong antral contraction [27]. The EGG method, however, has a limitation in obese patients: the distance between the electrode and the stomach is too great in morbidly obese patients, which can decrease signal detection and produce erroneous results [3]. In this study, the BCS 2 and BCS 3 groups differed significantly from the BCS 8 group (p = 0.042 and p = 0.016, respectively). The BCS 8 group showed a higher strain result than the BCS 2 and BCS 3 groups, meaning that the obese group had stronger antral contractility than the lean groups.
Medetomidine reduces gastrin secretion in the stomach through activation of both central and peripheral α2-adrenoceptors, and it also inhibits the migrating myoelectric complex pattern of the small intestine [6]. In other reports, inhibition of gastrin secretion was associated with decreased motility of the gastric antrum, duodenum, mid jejunum and ileum in dogs [6,28].
In this study, stomach antrum motility was evaluated by strain, and the normal and medetomidine groups were compared; the medetomidine group revealed a significantly lower strain result, meaning that stomach motility was decreased in that group. This matches the previous report in dogs [28] that medetomidine reduces gastrointestinal motility.
This study has several limitations. First, stomach gas disturbed imaging of the entire cross section during the ultrasonography examination, making it difficult to determine the ROI region in 2D speckle tracking. Reverberation artifacts and shadowed areas in 2D speckle tracking imaging can produce errors [11], and may therefore have affected the strain results. Second, excessive adipose tissue can also impair image quality in general ultrasonography imaging [15,17]. Third, the number of dogs differed considerably between BCS groups, which may have influenced the strain results. Further studies are necessary to evaluate these correlations.
In conclusion, the usefulness of 2D-speckle tracking for measuring antral contraction in dogs was verified. The mean antral contraction strain was 58.2 ± 20.47% (mean ± SD), and the 95% confidence interval was 52.39% to 64.02%. Notably, this is the first report to measure antral contraction strain using 2D-speckle tracking in dogs.
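As a quick arithmetic check of the reported interval: the normal-approximation 95% CI for a mean is mean ± 1.96·SD/√n. The number of dogs is not restated in this excerpt, so the Python sketch below back-solves it from the reported half-width; treat n as an inferred assumption, not a reported value.

import math

mean, sd = 58.2, 20.47                 # reported strain (%), mean and SD
half_width = (64.02 - 52.39) / 2       # reported 95% CI half-width ~= 5.815

# Back-solve the implied sample size from half_width = 1.96 * sd / sqrt(n).
# NOTE: n is inferred from the excerpt, not a value reported in it.
n = (1.96 * sd / half_width) ** 2
print(f"implied n ~= {n:.1f}")          # ~47.6, i.e. roughly 48 dogs

se = sd / math.sqrt(round(n))
print(f"95% CI: {mean - 1.96 * se:.2f}% to {mean + 1.96 * se:.2f}%")

The recomputed interval (about 52.41% to 63.99%) matches the reported one, which suggests the study included roughly 48 dogs.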
Obesity, as assessed by the BCS score, affected antral contraction strain: the obese group showed a stronger antral contraction strain than the lean group. Medetomidine also affected stomach motility, producing a lower antral contraction strain than in the normal group.
2D-speckle tracking may be useful for evaluating stomach motility disorders through the antral contraction strain result. | 2020-11-14T08:07:44.904Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "efb9e3d1c7708465cc04011a3cf04adf00303efe",
"oa_license": "CCBYNC",
"oa_url": "https://www.kjvr.org/upload/pdf/kjvr-2020-60-2-55.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "33dbb5479c161dc0f961f2f5dd1760f5b0926896",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics"
]
} |
253511203 | pes2o/s2orc | v3-fos-license | Realizability of hypergraphs and intrinsic link theory
In this expository paper we present short simple proofs of the Conway-Gordon-Sachs theorem on intrinsic linking in three-dimensional space, as well as the van Kampen-Flores and Ummel theorems on intrinsic intersections. The latter are related to the nonrealizability of certain hypergraphs in four-dimensional space. The proofs use a reduction to lower dimensions which allows us to exhibit the relation between these results. We use elementary language which allows us to present the main ideas without technicalities. Thus our exposition is accessible to non-specialists in the area, including students who know basic three-dimensional geometry, and who are ready to learn straightforward four-dimensional generalizations.
1. Introduction

1.1. Impossible constructions, intrinsic intersection and intrinsic linking. 'Impossible constructions' like the impossible cube, the Penrose triangle, the blivet etc. are well-known, mainly due to pictures by Maurits Cornelis Escher, see Figure 1, [Io], and also [Br68, CKS+, GSS+]. The pictures do not allow a global spatial interpretation because the local spatial interpretations collide with each other. In geometry, topology and graph theory there are also famous basic examples of 'impossible constructions' (whose local parts are 'possible'). The following example of an 'impossible construction' or an 'intrinsic intersection' is already directly relevant to this paper: for 5 points in the plane one cannot join each point to each other by a path so that the paths intersect only at their starting points or endpoints.1

Proposition 1.1. For any 5 points in the plane there are two intersecting segments joining these points, and having no common vertices.
For the next results we need some notation. We abbreviate 'three-dimensional Euclidean space R^3' to '3-space'. The terms '4-space' (R^4) and 'd-space' (R^d) have analogous meanings. By a triangle we mean the part of the plane bounded by a closed polygonal line of three segments.
1 Also, one cannot take 3 houses and 3 wells in the plane and join each house to each well by a path so that the paths intersect only at their starting points or endpoints. In graph-theoretic terms these assertions mean that the complete graph K_5 on 5 vertices and the complete bipartite graph K_{3,3} are not planar, see Figure 2. Proposition 1.1 is a 'linear' version of the non-planarity of K_5.
Take two triangles in 3-space no 4 of whose 6 vertices lie in one plane. The triangles are called linked if the outline of the first triangle intersects the second triangle at exactly one point. E.g. the triangles A_1A_3A_5 and A_2A_4A_6, and the triangles ∆ and ∆′ in Figure 3, are linked.

Theorem 1.2 (Linear Conway-Gordon-Sachs Theorem; [Sa81, CG83]). If no 4 of 6 points in 3-space lie in one plane, then there are two linked triangles with vertices at these 6 points.
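The definition of linked triangles is directly checkable by computer. The following Python sketch (not part of the paper) tests linkedness with standard orientation determinants and empirically confirms Theorem 1.2 (in fact the parity statement of Theorem 1.2′ below) on random point sets; floating-point signs stand in for exact general-position arithmetic.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def orient(p, q, r, s):
    # Sign of the determinant telling on which side of the plane through
    # p, q, r the point s lies.
    return np.sign(np.linalg.det(np.stack([q - p, r - p, s - p])))

def segment_hits_triangle(p, q, a, b, c):
    # General position assumed: pq crosses triangle abc iff p, q lie on
    # opposite sides of the plane of abc and the line pq passes inside abc.
    if orient(a, b, c, p) == orient(a, b, c, q):
        return False
    return orient(p, q, a, b) == orient(p, q, b, c) == orient(p, q, c, a)

def linked(t1, t2):
    # Linked iff the outline of t1 meets the second triangle at exactly one point.
    a, b, c = t1
    return sum(segment_hits_triangle(u, v, *t2) for u, v in [(a, b), (b, c), (c, a)]) == 1

for _ in range(100):
    pts = rng.standard_normal((6, 3))  # 6 random points, almost surely in general position
    # The 10 unordered splittings into two triples (fix index 0 in the first triple).
    splits = [(s, tuple(i for i in range(6) if i not in s))
              for s in combinations(range(6), 3) if 0 in s]
    n_linked = sum(linked(pts[list(s)], pts[list(t)]) for s, t in splits)
    assert n_linked >= 1 and n_linked % 2 == 1  # Theorem 1.2, and Theorem 1.2' below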
1.2. Why this paper might be interesting? We exhibit striking relations
• of 'intrinsic intersections' in the plane (Propositions 1.1 and 2.3) to 'intrinsic linking' results in 3-space (Theorems 1.2 and 2.7 below);
• of the latter results in 3-space to 'intrinsic intersections' in 4-space (Theorems 1.5 and 4.2 below generalizing Proposition 1.1).
Remark 1.3 (lowering of dimension). Often it is convenient to reduce a planar result to a one-dimensional result (i.e., to a result in a line), and a spatial result to a planar result. Similarly, a tempting approach to a 4-dimensional result is an analogy to, or a reduction to, a spatial result. Some examples are given in Remark 1.4.
Proposition 1.1 on intrinsic intersection in the plane is reduced (in §2.1) to Proposition 2.1 on intrinsic linking in the line. Analogously, Theorem 1.2 on intrinsic linking in 3-space is reduced (in §2.4) to Proposition 1.1 (more precisely, to its quantitative version, Proposition 1.1′). Analogously, Theorem 1.5 below on intrinsic intersection in 4-space is reduced (in §2.5) to Theorem 1.2. This relation between intrinsic linking and intrinsic intersection in consecutive dimensions generalizes to higher dimensions (Theorem 1.6; for simplicity we mention dimensions higher than 4 only in that theorem). Because of such 'lowering of dimension' the reader not familiar with 4-space need not be scared.
The results on intrinsic intersections give a natural generalization of non-planarity of graphs: examples of two-dimensional analogues of graphs non-realizable in 3- and 4-space. This is explained in Remark 3.1.
We give a simplified exposition accessible to non-specialists in the area. We state results in terms of systems of points, so we do not use the notion of realizability of a hypergraph (we do mention this notion because it is an important motivation). For understanding most of the paper it suffices to know basic geometry of 3-space, and to know or learn straightforward 4-dimensional generalizations. We believe that this elementary description of simple applications of topological methods makes these methods more accessible. Comparison with other proofs is discussed in Remark 3.2.
The history is exposed in Remark 3.3.
Plan of the paper. The remarks are not formally used later and so could be omitted. The same is true for §1.4 and §1.5. Sections 2, 3 and 4 are independent of each other, so they could be read in any order. Forward references and references to other papers can be ignored on a first reading.
Remark 1.4 (some intuition on 4-space). (a) A 'typical' intersection
• of two segments in the plane is either the empty set or a point (here 'typical' means that no 3 points among the vertices of the segments lie in one line);
• of a segment and a triangle in 3-space is either the empty set or a point;
• of two triangles in 4-space is either the empty set or a point.
(b) For each two points
• of the plane distinct from a point A in the plane there exists a polygonal line joining these points and not passing through A;
• of 3-space not belonging to a line l in 3-space there exists a polygonal line joining these points and disjoint from l;
• in 4-space not belonging to a 2-dimensional plane α in 4-space there exists a polygonal line joining these points and disjoint from α.
Theorem 1.5 (Linear van Kampen-Flores Theorem; [vK32, Fl34]). From any 7 points in 4-space one can choose two disjoint triples such that the two triangles with vertices at the triples intersect.
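Theorem 1.5 can likewise be probed numerically. Two triangles in 4-space in general position meet in at most one point (Remark 1.4.a), which the Python sketch below detects by solving a 6 × 6 linear system for barycentric coordinates; the parity checked at the end is the statement of Theorem 1.5′ below. This is an illustration with floating-point arithmetic, not a proof.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def triangles_intersect_4d(T1, T2):
    # Solve sum_i l_i*T1[i] = sum_j m_j*T2[j] with sum l_i = sum m_j = 1:
    # 4 coordinate equations plus 2 affine equations in 6 unknowns.
    A = np.zeros((6, 6))
    A[:4, :3] = T1.T
    A[:4, 3:] = -T2.T
    A[4, :3] = 1.0
    A[5, 3:] = 1.0
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0])
    try:
        x = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        return False  # degenerate configuration, ignored in this sketch
    return bool(np.all(x >= -1e-9))  # intersection exists iff all weights >= 0

for _ in range(100):
    pts = rng.standard_normal((7, 4))
    hits = sum(
        triangles_intersect_4d(pts[list(s)], pts[list(t)])
        for s in combinations(range(7), 3)
        for t in combinations([i for i in range(7) if i not in s], 3)
        if min(s) < min(t)  # count each unordered pair of disjoint triples once
    )
    assert hits >= 1 and hits % 2 == 1  # Theorem 1.5, and Theorem 1.5' below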
Analogues of Theorem 1.5 are true for 5 points in the plane, and for 6 points in 3-space (Propositions 1.1 and 2.4.b). Analogues of Proposition 1.1 and of Theorem 1.5 are false for 4 points in the plane and for 6 points in 4-space, respectively: in R^{2k} take the 2k + 1 vertices and an interior point of a 2k-simplex, cf. Figure 4.

1.4. Intrinsic intersection and linking in higher dimensions. A subset of R^d is called convex if for any two points from this subset the segment joining these two points is in this subset. The convex hull of X ⊂ R^d is the minimal convex set that contains X.
Theorem 1.6. Take any d + 3 points in R^d of which no d + 1 points lie in one (d − 1)-dimensional hyperplane.
For d even there are two disjoint (d + 2)/2-element subsets whose convex hulls intersect. If d is odd, then there is an unordered pair of (d + 1)/2-simplices with vertices at these points which is linked (i.e., the boundary of the first simplex intersects the convex hull of the second simplex at exactly one point).

Theorem 1.6 is proved by induction on d. The base is d = 1 and is trivial. The inductive step is proved in §2 for d = 2, 3, 4; the proof for the general case is analogous.
The analogue of Theorem 1.6 for d + 2 points
• does not make sense for d odd, because a (d + 1)/2-simplex has (d + 3)/2 vertices;
• is false for d even, analogously to the corresponding counterexample to Theorem 1.5.
For d odd there is Proposition 2.4.b on intrinsic intersection and its higher-dimensional analogue. They are weaker than the corresponding Theorems 1.2 and 1.6 on intrinsic linking. More results are presented in [KRR+, §3], [Sk16, §4].
1.5. Multiple intersection and linking. Let us formulate the analogues of the above results for r-fold intrinsic intersections.
Theorem 1.7 ([Sa91g]). From any 11 points in 3-space one can choose 3 triangles having pairwise disjoint vertices but having a common point.
It is surprising that the known proof of such an elementary result involves algebraic topology. It would be interesting to obtain an elementary proof.
Example 1.8. In 3-space take the vertices of a 3-dimensional simplex and its center, see Figure 4. For each of these 5 points either take it with multiplicity two or take a close point. We obtain 10 points for which the analogue of Theorem 1.7 is false. For a higher-dimensional higher-multiplicity analogue of Theorems 1.6 and 1.7 see [Sk16, Theorem 1.6], [Sk18, Conjecture 3.1.4 and the text below].
Let us formulate the analogues of the above results for intrinsic triple linking.
There are three triangles in 3-space which are pairwise unlinked but linked together (Figure 5; one can check that this projection is realizable, as opposed to Figure 1, right). Such a triple of triangles is called Borromean, cf. [Val] and [Sk, §4.6].
Theorem 1.9 (Negami [Ne91]). There is N such that if no 4 of N points in 3-space lie in one plane, then there is a Borromean triple of triangles with the vertices at these points.
See also [PS05, FNP]. It would be interesting to obtain an analogue of Theorem 1.9 with a specific N. By Example 1.8 one cannot take N = 10. Can one take N = 11 (as in Theorem 1.7)? One can make computer experiments to solve this problem using equivalent definitions of a Borromean triple [Ko19]. It would be interesting to obtain higher-dimensional higher-multiplicity analogues of Theorem 1.9, cf. [BL, FFN+].

Figure 5. Borromean triple of triangles
2.1. Intersections in the plane: proof of Proposition 1.1. Proposition 1.1 is easily proved by analyzing the convex hull of the 5 points. In order to illustrate the 'lowering of dimension' idea (see Remark 1.3) in the simplest situation, we also deduce Proposition 1.1 from the following obvious 1-dimensional result.
Proposition 2.1. Every 4 points in a line can be colored in two red and two blue so that they alternate: red-blue-red-blue or blue-red-blue-red (one says 'the red pair is linked with the blue pair').

Proof of Proposition 1.1. There is a line l such that one of the given points, say O, lies on one side of l, and the other points, say A, B, C, D, lie on the other side of l. If for some two points X, Y ∈ {A, B, C, D} the point X belongs to the segment OY, then we are done. Otherwise we may assume that the points A, B, C, D are seen from O in this order, see Figure 6. Then the required assertion follows by Lemma 2.2 below.

Lemma 2.2 (lowering of dimension; see Figure 6, left). Two triangles in the plane have a common vertex. A line l splits this point from the bases of the triangles. The intersections of l with the triangles alternate along l. Then some two sides of the triangles intersect but do not have common vertices.

This lemma is trivial. It is explicitly stated in order to conveniently use it (here and in §2.5), and to illustrate its generalization to higher dimensions (Lemma 4.6).
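Proposition 2.1, together with the uniqueness of the linked splitting (stated as Proposition 2.1′ below), is easy to verify exhaustively; a toy Python sketch, added for illustration:

from itertools import combinations
import random

def count_linked_splittings(xs):
    # Number of unordered splittings of 4 distinct reals into two 'linked'
    # (alternating) pairs; fixing index 0 enumerates each splitting once.
    count = 0
    for pair in combinations(range(4), 2):
        if 0 not in pair:
            continue
        other = tuple(i for i in range(4) if i not in pair)
        lo, hi = sorted(xs[i] for i in pair)
        count += sum(lo < xs[j] < hi for j in other) == 1  # the pairs alternate
    return count

for _ in range(1000):
    xs = random.sample(range(1000), 4)
    assert count_linked_splittings(xs) == 1  # exactly one linked splitting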
Proposition 2.3. (a) (See Figure 2, right.) Two triples of points are given in the plane. Then there exist two intersecting segments without common vertices and such that each segment joins points from distinct triples.
(b) (See Figure 6, right.) Four red and two blue points B_1, B_2 are given in the plane. Suppose that any two segments joining points of different colors either are disjoint or intersect at their common vertex. Then there are two red points R_1, R_2 such that the quadrilateral R_1B_1R_2B_2 does not have self-intersections, and the remaining two red points lie on different sides w.r.t. the quadrilateral. ('On different sides' means that a general position polygonal line joining the remaining two red points intersects the outline of the quadrilateral at an odd number of points.)

2.2. Weaker versions of Theorem 1.2. First we illustrate the 'lowering of the dimension' idea (see Remark 1.3) by proving the following weaker versions of Theorem 1.2.

Proposition 2.4. From any 6 points in 3-space one can choose (a) 5 points O, A, B, C, D such that the triangles OAB and OCD have a common point other than O; (b) a disjoint pair and triple such that the segment joining the points of the pair intersects the triangle spanned by the triple.
Proof of (a). There is a plane α such that one of the given points, say O, lies on one side of α, and the other 5 points lie on the other side of α (Figure 7). Consider the intersection of α with the union of the triangles OAB over all pairs A, B of the given points. Now part (a) follows by Proposition 1.1. Part (b) follows from (a) (and vice versa). Part (b) is a spatial analogue of Proposition 1.1.

Figure 4 shows that the analogue of (a) for 5 points is false.
2.3. 'Quantitative' versions. We prove the following stronger 'quantitative' (i.e., algebraic modulo 2) versions of the results from §1. (The deduction of Theorem 1.2 from Proposition 1.1, not from its quantitative version, has a technical detail which is hard to generalize to higher dimensions.)

Proposition 2.1′ (obvious). Any 4 points in the line have a unique unordered splitting into two linked pairs.
Proposition 1.1′. No 3 of given 5 points in the plane lie in one line. Then the number of intersection points of interiors of segments joining the 5 points is odd.
This is easily proved by analyzing the convex hull of the 5 points, or follows from Proposition 2.1′ and the following lemma.
Lemma 2.2′. (a) Two triangles in the plane have a common vertex O. No 3 of their 5 vertices lie in one line. A line l splits O from the bases of the triangles. Then the outlines of the triangles intersect at an even number of points if and only if the intersections of l with the outlines alternate along l (i.e., if the intersection of one triangle with the outline of the other contains exactly one segment with vertex O).
(b) No 3 of 5 points O, A_1, A_2, A_3, A_4 in the plane lie in one line. Then the number from Proposition 1.1′ equals the sum of the numbers of intersection points of the interiors of sides of the triangles OPQ and ORS, over all unordered splittings of the points A_1, A_2, A_3, A_4 into two unordered pairs P, Q and R, S.
(c) [DGN+, Proposition 7.5.a] Denote by X the set of all unordered pairs of 2-element subsets of [5]. For any of the 3 non-ordered partitions σ ⊔ τ = [4] into 2-element sets a set T_{σ,τ} ⊂ X is defined (analogously to Lemma 2.8′.c below); a pair {α, β} ∈ X is contained in an odd number of the sets T_{σ,τ} if and only if α ∩ β = ∅.

Part (a) is analogous to Lemma 2.2 (and so is trivial). Part (b) follows from (c). A simple proof of (c) is left to the reader.
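Proposition 1.1′ is also easy to confirm empirically. The Python sketch below (an illustration, not part of the paper) counts proper crossings of the interiors of the 10 segments spanned by 5 random points (segments sharing a vertex never cross properly) and checks that the count is odd.

import random
from itertools import combinations

def cross(o, a, b):
    # 2D cross product; its sign gives the orientation of the triple o, a, b.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def interiors_cross(p, q, r, s):
    # Proper crossing of the interiors of segments pq and rs (general position).
    return (cross(p, q, r) * cross(p, q, s) < 0 and
            cross(r, s, p) * cross(r, s, q) < 0)

for _ in range(1000):
    pts = [(random.random(), random.random()) for _ in range(5)]
    segs = list(combinations(range(5), 2))  # the 10 segments
    crossings = sum(
        interiors_cross(pts[i], pts[j], pts[k], pts[l])
        for (i, j), (k, l) in combinations(segs, 2)
        if {i, j}.isdisjoint({k, l})
    )
    assert crossings % 2 == 1  # Proposition 1.1'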
Remark 2.5. Proposition 1.1′ is indeed stronger than Proposition 1.1, because it suffices to prove Proposition 1.1 under the assumption that no 3 of the 5 points lie in one line:
(a) first, since otherwise Proposition 1.1 is obvious: if points A, B, C among the given 5 points lie in one line, with B between A and C, and D is any other given point, then the segments AC and BD intersect;
(b) second, since we can make a small shift so that no 3 of the 5 shifted points lie in one line, and no intersection points of segments with disjoint vertices are added.
Analogously to (b), Theorems 1.2′ and 1.5′ below are stronger than Theorems 1.2 and 1.5. (Triangles differing only by a permutation of vertices are considered to be the same.)

Theorem 1.2′. No 4 of given 6 points in 3-space lie in one plane. Then the number of linked unordered pairs of triangles with vertices at these 6 points is odd.

Theorem 1.5′ ([vK32, Fl34]). If no 5 of 7 points in 4-space lie in one 3-dimensional hyperplane, then the number of intersection points of triangles with vertices at these points is odd.
2.4. Linking in 3-space: proof of Theorem 1.2′. In 3-space a segment p is below a segment q (looking from a point O) if there exists a point X ∈ p such that the segments OX and q intersect.

Lemma 2.6 (lowering of dimension). No 4 of 6 points O, A_1, ..., A_5 lie in one plane. Then the triangles OA_1A_2 and A_3A_4A_5 are linked if and only if A_1A_2 is below exactly one side of the triangle A_3A_4A_5.
The lemma follows because the number of those sides of the triangle A_3A_4A_5 that are higher than A_1A_2 equals the number of intersection points of the outline of the triangle A_3A_4A_5 with the triangle OA_1A_2.
Proof of Theorem 1.2′. There is a plane α such that one of the given points, say O, lies on one side of α, and the other points, say A_1, ..., A_5, lie on the other side of α (Figure 7). Take the intersection points of α and the segments OA_1, ..., OA_5. Since no 4 of the 6 points lie in one plane, no 3 of the taken 5 points lie in one line. So in the plane α we obtain a picture analogous to Figure 3, middle. Then the following numbers have the same parity:
• the number P of linked unordered pairs of triangles formed by the given 6 points;
• the number Q of segments A_iA_j that are below an odd number of sides of their 'complementary' triangles A_kA_lA_m, {i, j, k, l, m} = {1, 2, 3, 4, 5};
• the number of 'undercrossings', i.e., of ordered pairs (A_iA_j, A_kA_l) of segments in which the first segment is below the second one;
• the number of intersection points of interiors of segments whose vertices are the 5 taken points in α.
Here the numbers P and Q have the same parity by the Lowering of Dimension Lemma 2.6: a segment cannot intersect a triangle in more than two points, so in the lemma we can replace 'exactly one side' by 'an odd number of sides'.
By Proposition 1.1′ the latter number is odd. Hence P is also odd.
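In summary, the proof establishes the following chain of congruences (notation as in the four items above, with the last congruence given by Proposition 1.1′):

$$P \equiv Q \equiv \#\{\text{undercrossings}\} \equiv \#\{\text{intersection points in } \alpha\} \equiv 1 \pmod 2.$$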
The following version of Theorem 1.2 is analogously reduced to Proposition 2.3.b [Zi13]. This version is used for a 4-dimensional result (Theorem 4.2) in §4.5.
Figure 8. Whitehead link formed by space quadrilaterals
In 3-space take two quadrilaterals (i.e., closed quadrangular polygonal lines) ABCD and A′B′C′D′, no 4 of whose 8 vertices lie in one plane. The quadrilaterals are called linked modulo 2 if the number of intersection points of the quadrilateral ABCD with the union of the triangles A′B′C′ and A′D′C′ is odd. (As opposed to triangles, there are space quadrilaterals linked but not linked modulo 2, see Figure 8. Cf. the definition at the beginning of §4.5.)
Theorem 2.7 ([Sa81]). There are 4 red points and 4 blue points in 3-space. No 4 of these 8 points lie in one plane. Then there are two linked modulo 2 space quadrilaterals consisting of segments joining points of different colors.
2.5. Intersections in 4-space: proof of Theorem 1.5.

Proof of Theorem 1.5. There is a 3-dimensional hyperplane α such that one of the given points, say O, lies on one side of α, and the other 6 points lie on the other side of α (Figure 9). Take the 6 intersection points of α with the segments joining O to the other 6 points. We may assume that no 5 of the given 7 points lie in one 3-dimensional hyperplane (analogously to Remark 2.5.b). Hence no 4 of the 6 points in α lie in one plane. Then by Theorem 1.2 there are two linked triangles with vertices at the taken 6 points. So we are done by Lemma 2.8.
Figure 9. To the proof of Theorem 1.5; the hyperplane α in 4-space is shown as a plane in 3-space

Lemma 2.8 (lowering of the dimension; Figure 9). Two tetrahedra τ and τ′ in 4-space have a common vertex. A 3-dimensional hyperplane α splits this vertex from the bases of the tetrahedra. The outlines of the triangles α ∩ τ and α ∩ τ′ are disjoint and linked in α. Then some two faces of the tetrahedra intersect but do not have common vertices.
This lemma is not as obvious as its low-dimensional analogues (Lemma 2.2 and the analogous result for a triangle and a tetrahedron in 3-space), because the surface of a tetrahedron in 4-space does not split 4-space (cf. Remark 1.4.b).
Proof of Lemma 2.8. Denote by γ the intersection plane of the 3-dimensional hyperplanes spanned by the tetrahedra. Then α ∩ γ is the intersection line of the planes of the linked triangles α ∩ τ and α ∩ τ′. Hence τ ∩ γ and τ′ ∩ γ are triangles with a common vertex O, which is a common vertex of the tetrahedra (Figure 6, left). Since α splits O from the bases of the tetrahedra, the line α ∩ γ splits O from the bases of the triangles. Since the triangles α ∩ τ and α ∩ τ′ are linked, the intersection points of the line α ∩ γ with the outlines of the triangles τ ∩ γ and τ′ ∩ γ alternate along the line [Sk, Proposition 4.1.3.b] (cf. Figure 3, right). Hence by Lemma 2.2 two sides of the triangles τ ∩ γ and τ′ ∩ γ intersect but do not have common vertices. At most one of these sides contains O. Hence the two sides are contained in two faces of the tetrahedra which intersect but do not have common vertices.
Lemma 2.8′. (a) Two tetrahedra τ and τ′ in 4-space have a common vertex O. No 5 of their 7 vertices lie in one 3-dimensional hyperplane. A 3-dimensional hyperplane α splits O from the bases of the tetrahedra. Then the surfaces of the tetrahedra τ, τ′ intersect at an even number of points if and only if the triangles α ∩ τ and α ∩ τ′ are linked in α (i.e., if the intersection of one tetrahedron with the surface of the other contains exactly one segment with the end O).
(b) No 5 of 7 points O, A_1, ..., A_6 ∈ R^4 lie in one 3-dimensional hyperplane. Then the number from Theorem 1.5′ equals the sum of the numbers of intersection points of the interiors of faces of the tetrahedra O∆ and O∆′, over all unordered splittings of the points A_1, ..., A_6 into two unordered triples ∆ and ∆′. (The interior of a triangle is its complement to the outline.)
(c) Denote by X the set of all unordered pairs of 3-element subsets of [7]. For any of the 10 non-ordered partitions σ ⊔ τ = [6] into 3-element sets a set T_{σ,τ} ⊂ X is defined. Then a pair {α, β} ∈ X is contained in an odd number of the sets T_{σ,τ} if and only if α ∩ β = ∅.

Part (a) is analogous to Lemma 2.8. Part (b) follows from (c). A simple proof of (c) is left to the reader.
The following higher-dimensional version of Proposition 2.3.a is related (analogously to Theorem 1.5) to a 3-dimensional intrinsic linking result [DS22, Remark 2.5].
Theorem 2.9 ([vK32, Fl34]). Three triples of points in 4-space are given. Then there exist two intersecting triangles without common vertices such that the vertices of each triangle belong to distinct triples.
2.6. Unlinking properties. Here we present quantitative versions asserting that the number of intersections (or linkings) is even, cf. §2.3.
Proposition 2.10. (2) There are 5 points in the plane such that no 3 of them lie in a line, and every segment joining two of the points intersects the outline of the triangle formed by the remaining three points at an even number of points. (I.e., every pair of points is 'unlinked' with the triangle formed by the remaining three points.)
(2′) No 3 of 5 given points in the plane lie in a line. Then the number of those segments joining two of the points that intersect the outline of the triangle formed by the remaining three points at exactly one point is even.
The proofs are easy and are left to the reader. In 3-space, instead of the unlinking properties 2.10.2, 2′, there is a linking property (Theorem 1.2) and the following unlinking properties.
Proposition 2.11. (3) There are 6 points in 3-space such that no 4 of them lie in a plane, and every segment joining two of them intersects the surface of the tetrahedron formed by the remaining four points at an even number of points. (I.e., every pair of points is 'unlinked' with the tetrahedron formed by the remaining four points.)
(3′) No 4 of given 6 points in 3-space lie in one plane. Then the number of intersection points of segments joining two of the points with the surfaces of the tetrahedra formed by the remaining four points is even.
Sketch of a proof. (3) Take points close to the vertices of a regular octahedron, points close to the vertices of a triangular prism, or points on the moment curve.
(3′) Any two triangles spanned by two disjoint triples of the given points either are disjoint or intersect in a segment (non-degenerate to a point). There is an even number of ends of such segments. The ends of such segments are exactly the intersection points of segments joining pairs of points with the surfaces of the 'complementary' tetrahedra.
Alternatively, one can argue by a direct computation with parities, where |S|_2 is the parity of the number of elements in a finite set S, and T_AB is the tetrahedron formed by the four given points distinct from A, B.

Propositions 1.1, 1.1′, 2.4.b and 2.11.3′ show that under the transition from dimension 2 to dimension 3 the property of the existence of an intersection is preserved, while the parity of the number of intersections changes. The 3-dimensional versions of Propositions 1.1, 1.1′ have a stronger form: Theorems 1.2 and 1.2′.

Proposition 2.12. (4-3) There are 7 points in 4-space such that no 5 of them lie in a 3-dimensional hyperplane, and every triangle formed by 3 of them intersects the surface of the tetrahedron formed by the 4 remaining points at an even number of points. (I.e., every triangle formed by three of the points is 'unlinked' with the tetrahedron formed by the remaining four points.)
(4′-3) No 5 of 7 given points in 4-space lie in a 3-dimensional hyperplane. Then the number of those triangles spanned by three of the points that intersect the surface of the tetrahedron formed by the remaining four points at exactly one point is even.
(4-2) (conjecture) There are 7 points in 4-space such that no 5 of them lie in a 3-dimensional hyperplane, and every segment joining two of them intersects the 3-dimensional surface of the 4-simplex formed by the remaining five points at an even number of points. (I.e., every pair of points is 'unlinked' with the 4-simplex formed by the remaining five points.)
(4′-2) No 5 of 7 given points in 4-space lie in a 3-dimensional hyperplane. Then the number of intersection points of segments joining them with the 3-dimensional surfaces of the 4-dimensional simplices formed by the remaining five points is even.
Sketch of a proof. (4-3) Take points on the moment curve. See details in [St]. (4-2) Perhaps one can take points on the moment curve. (4′-2) Analogously to the alternative proof of Proposition 2.11.3′.

Conjecture 2.13. (d-k) For any d = 2k − 1 there are d + 3 points in R^d, of which no d + 1 lie in one (d − 1)-hyperplane, and such that any k-simplex spanned by k + 1 of them intersects the surface of the (d + 1 − k)-simplex spanned by the remaining d + 2 − k points at an even number of points. Hint: perhaps one can take points on the moment curve.
(d′-k) No d + 1 of d + 3 points in R^d lie in the same (d − 1)-hyperplane. Then the number of intersection points of k-simplices, spanned by k + 1 of them, with the surfaces of the (d + 1 − k)-simplices, spanned by the remaining d + 2 − k points, is even.
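The moment curve invoked in the hints above is t ↦ (t, t^2, ..., t^d). Any d + 1 of its points are affinely independent because the relevant determinant is a Vandermonde determinant, so such points automatically satisfy the general-position hypotheses. A Python sketch for d = 4, with arbitrarily chosen parameter values:

import numpy as np
from itertools import combinations

def moment_curve(ts, d=4):
    ts = np.asarray(ts, dtype=float)
    return np.stack([ts ** k for k in range(1, d + 1)], axis=1)

pts = moment_curve([1, 2, 3, 4, 5, 6, 7])  # 7 points on the moment curve in R^4
for idx in combinations(range(7), 5):
    # Affine independence of any 5 points: the bordered determinant below is a
    # nonzero Vandermonde determinant, so no 5 points lie in one 3-dimensional
    # hyperplane.
    m = np.hstack([np.ones((5, 1)), pts[list(idx)]])
    assert abs(np.linalg.det(m)) > 1e-6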
3. Some important remarks
Remark 3.1 (relation to hypergraphs). (a) Two-dimensional analogues of graphs are 3-homogeneous, or 2-dimensional, hypergraphs, defined as collections of 3-element subsets of a finite set.5 For brevity, we omit '3-homogeneous' and '2-dimensional'. For instance, a complete hypergraph on k vertices is the collection of all 3-element subsets of a k-element set. Realizability (also called embeddability) of a hypergraph in R^d is defined similarly to the realizability of a graph in the plane: one 'draws' a triangle for every three-element subset.

Hypergraphs (and simplicial complexes) play an important role in mathematics. One cannot imagine topology and combinatorics without them. They are also used in computer science and in bioinformatics, see e.g. [PS11].

5 See Figures 4, 10 and 11; on the last figures a subdivision of quadrilaterals analogous to Figure 10, left, is not shown. See rigorous definitions e.g. in [Sk18, §3.2].
A 'small shift' (or 'general position') argument shows that every graph is realizable in 3-space. A straightforward generalization shows that every hypergraph is realizable in 5-space.
The complete hypergraph on 6 vertices contains 'the cone over K_5' and hence is not realizable in 3-space (Proposition 2.4.a). Already in the early history of topology (the 1920s) mathematicians tried to construct hypergraphs non-realizable in 4-space. Egbert van Kampen and A. Flores proved in 1932-34 that the complete hypergraph on 7 vertices is not realizable in 4-space (Theorem 1.5). This is both an early application of combinatorial topology (nowadays called algebraic topology) and one of the first results of topological combinatorics (also an area of ongoing active research).
(b) Realizations (= embeddings) are maps without self-intersections. For topological combinatorics and discrete geometry it is interesting to study maps whose self-intersections are non-empty (unlike for embeddings), but 'not too complicated'. An important particular case is the study of maps without triple intersections and, more generally, maps without r-tuple intersections, see §1.5 and the surveys [Sk16], [Sk18, §3.3].
(c) PL versions of the 'quantitative' results (see §2.3) imply the PL versions for almost-embeddings (see the PL case of [Sk18, Theorems 1.4.1 and 3.1.6]). The latter imply the topological versions (see the explanation in [Sk18, the paragraph after Theorem 1.4.1]).
Remark 3.2 (comparison with other proofs). Theorem 1.6 for d even (and so its particular cases, Proposition 1.1 and Theorem 1.5) has an alternative simple proof using the van Kampen number, see e.g. [Sk18, §1.4], [Sk, §1.4, §5]. (That proof works for the quantitative, PL, and topological versions, see Remark 3.1.c.) That proof, and the proof sketched in this paper, are presumably the simplest known proofs ('proofs from the Book').
Usually Theorem 1.6 for d even (more precisely, the topological version of Theorems 1.6 and 2.9) is proved using the Borsuk-Ulam theorem [Sk20, §8], [Ma03, §5]. As opposed to this paper (and to the alternative proof using the van Kampen number), this requires some knowledge of algebraic topology. This knowledge does not make things simpler: known proofs of the Borsuk-Ulam theorem (see [Ma03] and the references therein) are not easier than the above-discussed direct proofs of Theorem 1.6 for d even. (The Borsuk-Ulam theorem is proved using the degree, analogously to the direct proof of Theorem 1.6 for d even using the van Kampen number.) The paper [BM15] presents a short algebraic proof of Theorem 1.6 (and so of its particular cases). That proof is in the spirit of the algebraic proof of the Radon theorem.6
Remark 3.3 (history). General 'lowering of dimension' or 'link of a vertex' ideas are simple and well-known (see Remark 1.3). For proofs of the Radon theorem based on this idea see [Pe72, Ko18, RRS]. For an application in computer science see [DE94, proof of 2.3.i]. Also well-known is the relation between linking and intersection.7 An elaboration of this idea to a relation between intrinsic linking and non-realizability is non-trivial (cf. the difference between Proposition 2.4.a and Theorem 1.2). Proofs that discover and use that relation seem to have not been published
• before [RST, RST', Sh03, RSS+, Zi13], for a proof of the Conway-Gordon-Sachs Theorem 1.2 by reducing intrinsic linking to intrinsic intersection in lower dimension;
• before [Sk03, Example 2, Lemmas 2 and 1′], [RSS+], for proofs of Theorem 1.5 and of the Menger conjecture (see §4.1) by reducing intrinsic intersection to intrinsic linking in lower dimension.

6 See e.g. [Sk16, §1] for the statement of the Radon theorem. See [Sk16, §4] for relations between the Radon theorem and Theorems 1.2, 1.5, 1.6. The proof of [BM15] is presumably a direct (i.e., without use of the Gale transform) version of the proof of [So12, Theorem 5] for k = 1, which is Theorem 1.6 for d even.
7 E.g. the linking number of two disjoint closed polygonal lines in the 3-dimensional sphere ∂D^4 equals the algebraic intersection number of two general position 2-dimensional disks in the 4-dimensional ball D^4 spanning the two polygonal lines. For a certain inductive argument involving an assertion on linking in odd dimensions and an assertion on intersection in even dimensions see [RS72, Whitney Lemma 5.12 and Theorem 5.16].

4. Realizability of products and the Menger conjecture

4.1. The Menger conjecture. The (Cartesian) product F × F′ of two figures F, F′ in R^3 is the set of all points (x, y, z, x′, y′, z′) ∈ R^6 such that (x, y, z) ∈ F and (x′, y′, z′) ∈ F′. Examples of realizations of products are given in Figures 10 and 11. For the definition of realization see e.g. [Sk18, §3.2], [Sk, §5]. Karl Menger conjectured in 1929 that the square of a nonplanar graph is not realizable in R^4 [Me29] (cf. Theorem 4.2).
Figure 10. Realizations of the products

This was proved only in 1978 by Brian Ummel [Um78] using advanced algebraic topology. A simple proof was obtained in 2003 by Mikhail Skopenkov [Sk03] using lowering of dimension; see the exposition below. His argument proves the generalized Menger conjecture ('the k-th power of a nonplanar graph is not realizable in R^{2k}'), and even gives a short formula for the minimal number d such that a given product of several graphs is realizable in R^d [Sk03].
A combinatorial version of the product is the product of two graphs (not necessarily planar). This product can be considered (although not canonically) as a hypergraph. See Remark 3.1.a; cf. [ADN+, §2, §5].
Remark. Proofs of the Menger conjecture using the van Kampen number or the Borsuk-Ulam theorem (see Remark 3.2) are unknown. The proof of the Menger conjecture in [Um78] works for the topological version but is complicated. The simpler proof in [Sk03] uses, for the topological version, the non-trivial Bryant approximation theorem. A simpler proof of the topological version can be obtained by inventing a quantitative PL version of the Menger conjecture (i.e., by improving the PL version of Theorem 4.2 analogously to §2.3, see Remark 4.4).
4.2. Realizability of products.
Let us formalize the idea of K_m × K_n drawn in 3- or 4-space. Suppose that A_jp, where j ∈ [m] and p ∈ [n], are mn points in 3- or 4-space. For numbers j, k ∈ [m], j < k, and p, q ∈ [n], p < q, denote by jk × pq the corresponding pair of triangles having a common side (Figure 10, left). Their union could be, but need not be, a plane quadrilateral. An (m, n)-product is a collection of triangles from jk × pq, where 1 ≤ j < k ≤ m, 1 ≤ p < q ≤ n.
The Square Theorem 4.2 is reduced to Theorem 2.7 in §4.5.
Remark 4.4. Denote
It would be interesting to find a subset M ⊂ K_5^2 such that for any PL map f :

Sketch of a proof of Example 4.3.a. Let A_11, ..., A_1n be points in R^3 of which no 4 lie in one plane. Take a vector v not parallel to any plane passing through some three of these points. For every p ∈ [n] denote A_2p := v + A_1p. If v is small enough, then the points A_jp, j ∈ {1, 2}, p ∈ [n], are as required: there is no triangle and side of a triangle with vertices at these points which have disjoint vertices but intersect.
Indeed, 12 × pq is a parallelogram for every p ≠ q. Since no 4 of the points A_11, ..., A_1n lie in one plane, and by the choice of v, for any distinct p, q, r, s the segments A_1pA_1q and A_1rA_1s are disjoint. Since v is small enough, the same holds with 1 replaced by 2. Then any two (convex hulls of) parallelograms 12 × pq and 12 × rs that have no common side are disjoint. Now one can check that the points A_jp are as required.
Sketch of a proof of Example 4.3.b. Let A_11 = (1, 0, 1), A_12 = (−1, 0, 1), A_13 = (0, 0, 2), A_14 = (0, 0, 3). Indeed, jk × pq is a parallelogram for every j ≠ k, p ≠ q. Since every two segments joining the points A_1p either are disjoint or intersect at a common vertex, any two such parallelograms that have no common side are disjoint. Now one can check that the points A_jp are as required.
Sketch of a proof of the weaker version of Example 4.3.c: a (3, 5)-product in 4-space. Take a 3-dimensional hyperplane in R^4 (shown in Figure 12, left, as a plane in 3-space). In this hyperplane take 10 vertices A_jp, where j ∈ [5], p ∈ {1, 2}, shown in Figure 11. Take a vector v not parallel to the hyperplane. Set A_j3 := A_j1 + v. (In Figure 12, left, we see the lateral surface of the prismoid A_41A_42A_43A_51A_52A_53.) Then the points A_jp, j ∈ [5], p ∈ [3], are as required: there are no two triangles with vertices at these points which have disjoint vertices but intersect.
Sketch of a proof of Example 4.3.c. Take points
A_2q for every p ≠ q. Take non-collinear vectors v_3, v_4 ∈ R^4 not parallel to the hyperplane R^3 ⊂ R^4. Denote A_jp := A_1p + v_j, j ∈ {3, 4}.

Proof of Proposition 4.1.a. (The proof is analogous to that of Proposition 2.4.) There is a plane α such that one of the given points lies on one side of α, and the other 15 points lie on the other side of α (Figure 13, left). The intersection point of α
• with the segment A_jpA_kp is colored blue for every k ∈ [4] − {j};
• with the segment A_jpA_jq is colored red for every q ∈ [4] − {p}.
The intersection of a product jk × pq with α is called an arc. Then arcs have ends of different colors. (The intersection of α with the body of the (4, 4)-product is a PL drawing, possibly with self-intersections, of K_{3,3} in α, i.e., the image of a PL map K_{3,3} → α.) Then by the PL analogue of Proposition 2.3.a there are intersecting arcs without common edges. Then there are k, k′ ∈ [4] − {j} and q, q′ ∈ [4] − {p} such that k ≠ k′, q ≠ q′, and some two triangles, one from jk × pq and the other from jk′ × pq′, have a common point distinct from the common vertex A_jp of the triangles. Hence one of these triangles intersects a side of the other not passing through A_jp, hence not having any common vertices with the first triangle.

Given 9 points A_jp, j ∈ {u, v, w}, p ∈ {u′, v′, w′}, in 3- or in 4-space, denote by uvw × u′v′w′ the corresponding (3, 3)-product (Figure 10, right; as opposed to the figure, the (3, 3)-product can have self-intersections).
Proof of Proposition 4.1.b. We may assume that no 4 of the given 15 points A_jp, j ∈ [3], p ∈ [5], lie in one plane (analogously to Remark 2.5.b). There is a plane α such that one of the given points, say A_jp, lies on one side of α, and the other 14 points lie on the other side of α (Figure 13, right). The intersection point of α
• with the segment A_jpA_kp is colored blue for every k ∈ [3] − {j};
• with the segment A_jpA_jq is colored red for every q ∈ [5] − {p}.
The intersection of a product jk × pq with α is called an arc. Then arcs have ends of different colors. (The intersection of α with the union of the triangles of the (5, 3)-product is a PL drawing, possibly with self-intersections, of K_{2,4} in α.) Analogously to the last paragraph of the proof of Proposition 4.1.a, either (1) the (5, 3)-product has a triangle and a side of a triangle which have disjoint vertices but intersect, or (2) any two arcs intersect only at their common vertex (if there is one).
In the second case denote the blue points by K, L. By the PL analogue of Proposition 2.3.b there are two red points Q, R such that the remaining two red points S, T lie on different sides w.r.t. the closed polygonal line γ formed by the arcs QK, KR, RL, LQ. Take k, l ∈ [3] − {j} and q, r, s, t ∈ [5] − {p} such that the points K, L and Q, R, S, T belong to the segments joining A_jp to A_kp, A_lp and to A_jq, A_jr, A_js, A_jt, respectively. Then α intersects
• the outline of the triangle j × pst := A_jpA_jsA_jt in S and T (note that this triangle is not contained in the (3, 5)-product);
• the body of the (3, 3)-subproduct jkl × pqr (contained in the (3, 5)-product) in γ.
If γ has self-intersections, then we obtain property (1). If not, then we obtain property (1) by Lemma 4.5 below, because S, T lie on different sides w.r.t. γ.

Lemma 4.5 (lowering of dimension). In 3-space the outline ∂∆ of a triangle ∆ and a (3, 3)-product τ have a unique common vertex O. No 4 vertices of ∆ and τ lie in one plane. A plane α splits O from the base of ∆ and from the remaining 8 points of τ. The plane α intersects
• the body |τ| in a closed polygonal line without self-intersections;
• ∂∆ in two points S, T lying on different sides w.r.t. the polygonal line.
Then τ and ∂∆ have a triangle and a side of a triangle which have disjoint vertices but intersect.

Proof. Denote by ∆̄ the plane of ∆. Then α ∩ ∆̄ is a line. The intersection α ∩ ∆ is the segment ST. The points S, T lie in α on different sides w.r.t. the closed polygonal line α ∩ |τ|, which does not have self-intersections, and no 3 points among S, T and the vertices of α ∩ |τ| lie in one line. Hence the segment α ∩ ∆ intersects |τ| ∩ ∆̄ at an odd number of points. Analogously, the line α ∩ ∆̄ intersects |τ| ∩ ∆̄ at an even number of points. Since also no 4 vertices of ∆ and τ lie in one plane, it follows that τ ∩ ∆̄ := {Γ ∩ ∆̄ : Γ ∈ τ} is a 1-cycle, i.e., a set of segments in the plane ∆̄ such that every point of ∆̄ is the endpoint of an even number (possibly zero) of the segments. By the assumption on α we may choose the segments so that the line α ∩ ∆̄ splits O from S, T and from all the vertices of the segments distinct from O (cf. Figure 6, left). Hence by an analogue of Lemma 2.2 for a triangle and a 1-cycle, some two segments from ∂∆ and from τ ∩ ∆̄ intersect but do not have common vertices. At most one of these segments contains O. Hence the obtained segment of the 1-cycle τ ∩ ∆̄ is the intersection with ∆̄ of a triangle from τ, which intersects a side of ∆ but does not have common vertices with that side.

4.5. Non-realizability of products in 4-space. A Seifert chain (or a coboundary) of a closed polygonal line a in 3-space is a finite collection S of triangles (non-degenerate to a segment or a point) in 3-space such that
• every edge of a is the side of exactly one triangle from S;
• every segment that is not an edge of a is the side of an even number (possibly zero) of triangles from S.
Two disjoint closed polygonal lines a and a′ in 3-space are linked modulo 2 if for any Seifert chains S of a and S′ of a′ such that the outline of any triangle of S is disjoint from the outline of any triangle of S′, the number of linked modulo 2 pairs (∆, ∆′) of triangles ∆ of S and ∆′ of S′ is odd. The equivalence to other definitions of being linked modulo 2 (in particular, to the definition before Theorem 2.7) is proved in [Sk, Lemma 4.8.3].
Proof of the Square Theorem 4.2. We may assume that no 5 of the given 25 points A_jp, j, p ∈ [5], lie in one 3-dimensional hyperplane (analogously to Remark 2.5.b). There is a 3-dimensional hyperplane α such that one of the given points, say A_jp, lies on one side of α, and the other 24 points lie on the other side of α (Figure 14). The intersection point of α
• with the segment A_jpA_kp is colored blue for every k ∈ [5] − {j};
• with the segment A_jpA_jq is colored red for every q ∈ [5] − {p}.
The intersection of a product jk × pq with α is called an arc. Then arcs have ends of different colors. Analogously to the last paragraph of the proof of Proposition 4.1.a, either (1) the (5, 5)-product has two triangles which have disjoint vertices but intersect, or (2) any two arcs intersect only at their common vertex (if there is one). In the second case the intersection of α with the body of the (5, 5)-product is a PL drawing without self-intersections of K_{4,4} in α. Use the following PL analogue of Theorem 2.7: in any PL drawing without self-intersections of K_{4,4} in R^3 there are two cycles of length 4 which are linked modulo 2 (see the proof in [Sa81, Zi13]). We obtain two linked modulo 2 closed polygonal lines in α, each consisting of four arcs. Take {a, b, a′, b′} = [5] − {j} and {c, d, c′, d′} = [5] − {p} such that the arcs
• of the first polygonal line belong to the products ja × pc, jb × pc, ja × pd, jb × pd,
• of the second polygonal line belong to the products ja′ × pc′, jb′ × pc′, ja′ × pd′, jb′ × pd′.
Then the polygonal lines are the intersections with the hyperplane of the bodies of the (3, 3)-products jab × pcd and ja′b′ × pc′d′. So the required statement is implied by the following lemma.

Lemma 4.6 (lowering of dimension). Two (3, 3)-products in 4-space have a unique common vertex O. No 5 of their 17 vertices lie in one 3-dimensional hyperplane. A 3-dimensional hyperplane α splits O from the remaining 16 points of the (3, 3)-products. The hyperplane α intersects the bodies of the (3, 3)-products in a pair of disjoint closed polygonal lines linked modulo 2 in α. Then the (3, 3)-products have two triangles which have disjoint vertices but intersect.

Proof. Denote by S (by S′) the set of all triangles from the first (the second) (3, 3)-product which do not contain O. Since no 5 of the 17 vertices of the (3, 3)-products lie in one 3-dimensional hyperplane, no 4 of their 16 projections to α with center O lie in one plane. Then the outlines of the triangles α ∩ O∆ and α ∩ O∆′ are disjoint for any ∆ ∈ S and ∆′ ∈ S′. Denote by γ and γ′ the given disjoint closed polygonal lines. Since γ and γ′ are linked modulo 2 in α, the number of linked modulo 2 pairs (∆, ∆′) of such triangles is odd. By Lemma 2.8′.a such triangles are linked modulo 2 if and only if the surfaces of the tetrahedra O∆ and O∆′ intersect at an even number of points (including O). Take any side MN of a triangle from S, and any side M′N′ of a triangle from S′. If MN is not contained in γ, and M′N′ is not contained in γ′, the intersection OMN ∩ OM′N′ appears exactly in two intersections of lateral surfaces of the tetrahedra O∆ and O∆′. Then the number of pairs of intersecting triangles having one of the following types is odd:
• pairs (∆, ∆′) for ∆ ∈ S and ∆′ ∈ S′;
• pairs (OMN, ∆′) for a side MN of γ and a triangle ∆′ ∈ S′;
• pairs (∆, OM′N′) for a side M′N′ of γ′ and a triangle ∆ ∈ S.
Now the lemma follows because in any of these pairs the triangles have disjoint vertices and are contained in triangles of the given (3, 3)-products.
Figure 4. Five points in R^3 (realization of the complete 3-homogeneous hypergraph on 5 vertices)
Figure 11. Realization of the product K_5 × K_2
where by X, Y, X′, Y′ we understand edges of K_5.
8 This is related to the following algebraic Menger problem [Pa20, Conjecture 2]: complexes K, L have non-trivial van Kampen obstructions to embeddability in R^m and in R^n, respectively (see the definition e.g. in [Sk18, §1.5]). Does the Cartesian product K × L of K and L have a non-trivial van Kampen obstruction to embeddability in R^{m+n}?

4.3. Realization of products in 3- and 4-space.
Figure 12. Left: to the realization in R^4 of the product K_3 × K_5. Right: to the realization in R^4 of the product K_4 × K_5.
Figure 13. To the proofs of Propositions 4.1.a (left) and 4.1.b (right)
Figure 14. To the proof of the Square Theorem 4.2
"year": 2014,
"sha1": "969f6547f22b20cd553b915121067c41e8ed74c7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "969f6547f22b20cd553b915121067c41e8ed74c7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257126962 | pes2o/s2orc | v3-fos-license | INTEGRATING MICRO PROJECT-BASED LEARNING TO IMPROVE CONCEPTUAL UNDERSTANDING AND CRUCIAL LEARNING SKILLS IN CHEMISTRY
Active participation in project-based learning (PBL) can develop students' knowledge and crucial learning skills across various disciplines. However, the implementation of PBL in the K-12 classroom is usually impeded by the step-by-step PBL cycle. Micro project-based learning (MPBL), which adheres to the learning principles and mechanisms of PBL while constraining the learning cycle to shorter periods, has been considered a lightweight alternative to the PBL process. As an exploration, this study examined the impact of MPBL on the conceptual understanding of sodium bicarbonate and on crucial learning skills in chemistry classes at upper-secondary schools. The quasi-experimental research was implemented with 125 students, with an experimental group receiving MPBL teaching and a control group receiving conventional teaching. Data including knowledge tests, a crucial learning skills survey, and student interviews were collected and analysed. The results indicated that MPBL was effective in developing students' conceptual understanding and crucial learning skills (i.e., communication and collaboration, information integration, independent learning, and problem-solving). The study will inform pedagogical innovation in chemistry education and teaching practices in chemistry classes.
Introduction
The purpose of education is to cultivate students with a distinct sense of social responsibility, a sound personality, the spirit of innovation, the desire and ability for lifelong learning, good information literacy, etc., so that new generations will have the skills necessary to adapt to the social, scientific, technological, and economic development of the 21st century (Care et al., 2017; Friesen & Anderson, 2004; Luo et al., 2021; Walsh et al., 2022). However, in Chinese upper-secondary schools, teachers, parents, and students are still biased toward the pursuit of academic excellence and somewhat neglect the development of these learning skills (Cui et al., 2022; Xu & Liu, 2010). Aware of such inadequacies, innovative pedagogical approaches that foster the competencies required in the 21st century, such as project-based learning (PBL), have been suggested (Desnelita et al., 2021; Fiore et al., 2018). PBL is a learner-centred instructional approach that emphasizes learner autonomy, productive inquiry, goal setting, collaboration, interaction, and feedback in the context of practical applications (Baser et al., 2017; Kokotsaki et al., 2016). It has been integrated into the teaching of different subjects, and research has demonstrated that PBL is effective in developing students' conceptual understanding and crucial learning skills (Chen & Yang, 2019; Krajcik et al., 2021). However, some research has also identified several challenges which hinder the implementation of PBL in actual classrooms, especially in upper-secondary schools (Chen et al., 2021). The most prominent challenge lies in the conflict between the long time required for a PBL cycle and the limited time available in a class (Habók & Nagy, 2016). Following the PBL approach, teachers usually find it difficult to complete the planned lessons. Moreover, studies have found that students' engagement may decline over a prolonged learning cycle (Oteyza, 2012).
In school-based learning, a lightweight form of the PBL approach is more feasible. Micro project-based learning (MPBL), which shares the same core principles and mechanisms with PBL but features a shorter learning cycle, is recognized as a lightweight alternative for the regular classroom (Díaz Redondo et al., 2021; McDonnell et al., 2007).
Micro Project Based Learning (MPBL)
Along with the accumulation of experience from theoretical and empirical studies of PBL, its deficiencies have become increasingly apparent. The time spent implementing PBL cycles is usually too long to deliver all the learning content of the designed lessons on time (Wang et al., 2019). This issue also increases teachers' workloads and affects the learning effectiveness of PBL (Aldabbus, 2018). To mitigate the incompatibility of PBL with the classroom, researchers have proposed the MPBL approach as an alternative: an innovative pedagogical approach that combines project learning and micro courses (McDonnell et al., 2007). In comparison with PBL, MPBL is efficient, flexible, and practical. MPBL retains the advantages of PBL and can engage students in authentic problems, allowing them to develop skills covering collaboration, communication, and problem-solving in collective exploration (Bell, 2010; Frank et al., 2003); meanwhile, due to its 'micro' characteristic, an MPBL cycle can be completed within one or two class periods. Therefore, it functions as a lightweight alternative to PBL (McDonnell et al., 2007). The literature reveals limited research on and application of MPBL in teaching and learning practices, with only a few studies in vocational education and higher education (Ji, 2020). In K-12 education, MPBL is a comparatively new topic, and the available publications are confined to miniaturizing PBL in laboratory instruction (McDonnell et al., 2007). The learning processes of MPBL, such as how to design a mini project which includes the core concept and involves task introduction, implementation, presentation, evaluation and reflection, and feedback and adjustment, and the effectiveness of MPBL, have not been explored sufficiently.
The Teaching of Sodium Bicarbonate
Sodium and its main compounds (e.g., sodium carbonate and sodium bicarbonate) are essential concepts in the chemistry curriculum at upper-secondary schools in China. Related learning objectives stated by the curriculum standard include: 1) understanding the main properties of sodium and its important compounds through experimental exploration or attention to their applications in real-life situations, and 2) comprehending the applications of sodium compounds in life and production (MoE of China, 2020). Students in the 1st grade of upper-secondary schools are expected to be able to compare and contrast the chemical properties of sodium carbonate and sodium bicarbonate and to understand and apply the core principles of the chemistry discipline.
When sodium bicarbonate, an important sodium compound, is heated, it decomposes into carbon dioxide, water, and sodium carbonate; it reacts with acetic acid to generate sodium acetate, water, and carbon dioxide (Greenwood & Earnshaw, 1997). These properties make it an ideal ingredient for leavening agents, which release gases such as carbon dioxide to improve the volume and taste of staple foods, such as steamed buns, in daily life (Arranz-Otaegui et al., 2018). Sodium bicarbonate produces carbon dioxide both in thermal decomposition and in interaction with acid, and both are essential mechanisms in leavening (Rask, 1932).
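For reference, the two gas-producing reactions described above can be written as balanced equations:

$$2\,\mathrm{NaHCO_3} \xrightarrow{\Delta} \mathrm{Na_2CO_3} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow$$
$$\mathrm{NaHCO_3} + \mathrm{CH_3COOH} \rightarrow \mathrm{CH_3COONa} + \mathrm{H_2O} + \mathrm{CO_2}\uparrow$$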
Using food leavening agents as an example of the application of sodium bicarbonate in daily life is a common strategy for teaching the properties of sodium bicarbonate (Babinčáková, 2020; Lanni, 2014), but such activities are usually positioned as fun activities rather than as a systematically organized learning project which can deepen understanding of the concept through exploration of the principles of fermentation and leavening. In addition, there is a lack of coherence among the various activities and knowledge points. To resolve these issues, a project-based approach providing connected, consistent, and systemic experiences is needed.
Research Purpose and Questions
Stimulated by the potential of MPBL in science education, this study explored the impact of an MPBL pedagogical approach on students' conceptual understanding and the development of related crucial learning skills in chemistry class. The following research questions were answered:
1. What were the specific strategies for integrating the MPBL pedagogical approach into the teaching of chemistry concepts and crucial learning skills at upper-secondary schools?
2. Could the MPBL pedagogical approach improve students' conceptual understanding of sodium bicarbonate in chemistry class?
3. Could the MPBL pedagogical approach develop students' crucial learning skills, including communication and collaboration, information integration, independent learning, and problem-solving?
4. What were students' perceptions of and attitudes toward the MPBL-integrated chemistry class after experiencing the intervention?
Research Design
The study was conducted in the fall term of the 2021/2022 academic year at a local public upper-secondary school in mainland China. A quasi-experimental research design was implemented, consisting of one experimental group receiving the MPBL approach and one control group receiving conventional teaching. Both qualitative and quantitative methods were used in the data analysis.
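As a sketch of the quantitative side of such a design (this excerpt does not specify the statistical tests used, so the choice of an independent-samples t-test and the score arrays below are illustrative assumptions, not the study's data):

import numpy as np
from scipy import stats

# Hypothetical post-test scores; the study's actual data are not shown here.
experimental = np.array([82, 75, 90, 68, 88, 79, 85, 91, 73, 84])
control = np.array([70, 72, 65, 80, 61, 74, 69, 77, 66, 71])

# Welch's independent-samples t-test (does not assume equal variances).
t, p = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}, mean difference = {experimental.mean() - control.mean():.1f}")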
Participants
Two classes with the same knowledge level from Grade 1 at an upper-secondary school (18 classes in total) were selected as the experimental group (n = 63) and the control group (n = 62). The sampling method was purposive: based on their mid-term exam results, the two classes with the closest levels in their grade were selected to participate in the intervention. The school was located in the north of Henan, China. Since the study was conducted on intact classes, the aim of an equal number of participants in the two groups was not achieved. The sexes in the experimental and control classes were roughly balanced, because the school assigned boys and girls to each class as equally as possible at enrollment. As Table 1 shows, over 45% of the participants in both groups were female, with a mean age of 15 years (SD = 0.47 and 0.51). According to the participants' demographic information on socioeconomic status (Hair et al., 2015), they were generally from middle-class families, and in both groups most of the students' parents had high school diplomas.
The two classes were comparable in terms of academic performance on the mid-term exam. A teacher with more than five years of teaching experience was recruited to teach the students from the control and experimental groups in separate classes. Students' natural learning environment did not change during the study period. Consent from the participants, their parents, and the school was obtained before the intervention.
MPBL Pedagogical Approach and Process
Through integrating the principles and mechanisms of PBL and micro courses, a process-based model for the design and implementation of MPBL in chemistry class was developed. The process was divided into 6 phases (Figure 1): task design and introduction, implementation, presentation and review, evaluation, summary and reflection, and feedback and adjustment (Krajcik & Blumenfeld, 2006; Lombardo, 2006). Taking the teaching of the leavening agent as a case, the MPBL lesson integrated in-class and out-of-class sessions and took about 1-2 class hours (40 min per session). In the first phase, teachers and students prepared for the project (out-of-class activities). The teacher analysed the curriculum, determined the learning content, divided students into 9 groups in advance, and then provided a report on food additives to students as a preview. Students were required to search for and gather relevant information online and offline by themselves to better prepare for the lesson. Through completing these tasks, students developed their information integration and independent learning skills (Ayaz & Söylemez, 2015). In the second phase, students conducted inquiry activities in groups. Within each group of 7 students, members played the specific roles of Leader, Experimenter, Observer, Recorder, and Reporter. The inquiry activities covered exploring the working principle of sodium bicarbonate as a leavening agent, the chemical properties of sodium bicarbonate, and the comparison of sodium bicarbonate and sodium carbonate. Most of the inquiry activities required collaboration, so students could develop their collaborative learning and problem-solving skills (Chiang & Lee, 2016). After the exploration phase of task understanding (5 min), hypothesis forming (5 min), experimentation (10 min), analysis and discussion (10 min), and elaboration and extension (10 min), students had to apply conceptual knowledge and creative thinking to the settlement of practical problems, making their products (out-of-class activities) and sharing their products and results (10 min). In this phase, they deepened their understanding of the concept and experienced the joy of problem-solving (Nainggolan et al., 2020; Song, 2018). After the presentation phase, the teacher, peers, and the students themselves evaluated the presentations (5 min). At the end of the lesson, the teacher guided the students to make a summary of and reflection on the project learning (5 min). After the implementation, the teacher collected feedback from peers and students and adjusted the projects for further improvement. The specific lesson design can be seen in Appendix A.
Procedures and Instrument
This study was implemented over about 3 weeks after the mid-term exam. In the first week, students from both the control and experimental groups completed the crucial learning skills survey, and their responses were analysed. Then, the micro project was implemented in the second week. In the last week, students from both groups completed the knowledge test and the post-survey of crucial learning skills, and the experimental group students attended the interview.
To evaluate the effectiveness of MPBL-integrated lessons on students' understanding of conceptual knowledge and the development of crucial learning skills, data in multiple forms, including the results of the knowledge test, responses to the survey questionnaire, and interview responses, were collected and analysed. The knowledge tests were composed of the mid-term chemistry exam, which was treated as the pre-test, and a specifically designed test on "sodium bicarbonate", which served as the post-test. The post-test covered the classification and main components of leavening agents and the principle of their action (sodium bicarbonate is easily decomposed by heat, sodium carbonate is not decomposed by heat, the acid-base comparison of sodium bicarbonate and sodium carbonate, and the reaction of sodium bicarbonate with acid). The post-test adopted and adapted questions used in academic-level tests carried out in previous years. It included a total of 10 single-choice questions (5 points each) and 10 blank-filling questions (2 points for each blank). Before the implementation, the tests were piloted on a small scale. The Cronbach's alpha coefficient based on standardized items and the KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy of the collected data were both higher than 0.7, indicating that the tests were reliable and valid.
The survey questionnaire (Appendix B) was developed to evaluate the development of crucial learning skills before and after the intervention. It was designed with reference to the General High School Chemistry Curriculum Standards and previous research, encompassing 4 sets of learning skills: communication and collaboration, information integration, independent learning, and problem-solving (Gok, 2012; MoE of China, 2020; van Laar et al., 2019). For each skill set, there were 4 items. A 5-point Likert scale, ranging from Strongly Agree to Strongly Disagree, was used for the collection of responses. Later on, a small-scale pilot test was conducted; the reliability of the collected data was Cronbach's α = .815 (> .8) and the KMO value was .848 (> .8), indicating that the survey was reliable and valid. The survey was administered to both the control and experimental groups before and after the intervention.
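For context, the reliability check described here can be reproduced in a few lines of code. The sketch below uses hypothetical pilot data (30 respondents x 16 items) and implements the classic Cronbach's alpha formula; the KMO statistic would typically come from a separate package such as factor_analyzer, which is not shown.

import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, n_items) array of Likert responses.
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total score
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(30, 16)).astype(float)  # hypothetical pilot data
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")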
Besides, in the experimental group, a semi-structured interview was conducted with selected students to probe their perceptions of and attitudes toward the MPBL-integrated class after the intervention. The interview contained 4 questions (Appendix C), which asked about students' views on MPBL compared with the traditional learning approach, whether and how MPBL helped their learning, whether MPBL was an extra burden for their study, and their suggestions for further improving MPBL. For a summary of the data collection process, please refer to Figure 2. The data collected were analysed qualitatively and quantitatively. Quantitative analysis was carried out on the knowledge tests and the survey using SPSS 22.0. Single-sample Kolmogorov-Smirnov (K-S) tests and independent-samples t-tests were adopted to uncover group-based differences in conceptual understanding, as reflected in the knowledge test. Descriptive analysis was adopted to investigate students' responses to the survey and explore group-based differences in the development of each learning skill set. Qualitative analysis of the semi-structured interview transcripts was conducted, with a focus on students' perceptions of and attitudes toward the MPBL approach.
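A minimal sketch of the same analysis pipeline, using scipy in place of SPSS 22.0, might look as follows; the group sizes match the study, but the score arrays below are synthetic placeholders, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(46.9, 10, size=62)        # hypothetical post-test scores
experimental = rng.normal(50.0, 10, size=63)

# One-sample Kolmogorov-Smirnov test against a normal distribution
# parameterized by each sample's own mean and standard deviation.
for name, scores in [("control", control), ("experimental", experimental)]:
    ks = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
    print(f"{name}: KS p = {ks.pvalue:.3f}")   # p > .05 -> treat as normal

# Levene's test for equality of variances, then the independent-samples t-test.
lev = stats.levene(control, experimental)
t = stats.ttest_ind(control, experimental, equal_var=lev.pvalue > 0.05)
print(f"Levene p = {lev.pvalue:.3f}, t-test p = {t.pvalue:.3f}")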
Performance on Conceptual Understanding
One-sample K-S tests and independent-samples t-tests were performed on both the pre-test and post-test scores of the two groups. As Table 2 shows, the asymptotic significance values of the pre-test scores in both groups are greater than .05, indicating that the pre-test scores of both groups are normally distributed. The mean pre-test scores of the control group and the experimental group were 42.90 and 42.44 points, respectively. The post-test scores of both groups were also normally distributed (p > .05). The mean post-test scores of the control group and the experimental group were 46.87 and 49.95 points, respectively, a difference of 3.08 points on average, suggesting a positive impact of MPBL instruction on the development of conceptual knowledge in comparison with the conventional method. The group difference in the pre-test was not significant according to the independent-samples t-test (p = .872) (Table 3), indicating the homogeneity of the two groups in terms of overall chemistry level before the intervention. For the post-test, Levene's test for equality of variances in Table 3 shows p = .092 (> .05); assuming equal variances, p = .204 (> .05), indicating that the post-test scores of the two groups were not significantly different. This insignificance may be ascribed to the limited time spent on the implementation of MPBL.
Students' Responses to the Crucial Learning Skills Survey
All 125 participants in both groups completed the pre-test survey on crucial learning skills, with a completion rate of 100%. A descriptive analysis of the percentage of students choosing each option on each item was conducted, with the results displayed in Figure 3.
The pre-test questionnaire analysis was divided into 4 groups: communication and collaboration (Items 3/4/7/9 in Figure 3a), information integration (Items 1/8/11/14 in Figure 3b), independent learning (Items 6/10/12/15 in Figure 3c), and problem-solving (Items 2/5/13/16 in Figure 3d). For every item, E represents the experimental group and C represents the control group. As observed in Figure 3, the distribution of the response types is consistent between the two groups. The experimental group and the control group were thus comparable in the skill areas of communication and collaboration, information integration, independent learning, and problem-solving before the intervention.
To assess the effectiveness of the MPBL approach for the development of important learning skills, the same survey was administered again after the implementation, and the experimental group results were compared with those of the control group using the same method. The descriptive analysis results, displayed in Figure 4, are in favour of MPBL teaching. The proportion of positive responses ("Strongly agree" and "Agree") in the experimental group is greater than that in the control group across the items. In general, the students in the experimental group reported better skills in the four focus areas. To delve into how the two groups improved in each learning skill set after the intervention, every option was assigned points (1 for Strongly agree, 2 for Agree, 3 for Neutral, 4 for Disagree, and 5 for Strongly disagree). The differences in mean scores between the pre-test and post-test of the two groups on each survey item were calculated using the following formula: mean difference between post- and pre-test for every item = Σ post-test (assigned point for each option × percentage of people who select this option) − Σ pre-test (assigned point for each option × percentage of people who select this option).
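The formula above is a pair of weighted means; the sketch below transcribes it directly, with hypothetical option shares for a single item (the data shown are illustrative only).

points = [1, 2, 3, 4, 5]          # Strongly agree ... Strongly disagree

def item_mean(option_shares):
    # Sum of (assigned point x share of students who chose that option).
    return sum(p * s for p, s in zip(points, option_shares))

pre  = [0.10, 0.30, 0.40, 0.15, 0.05]   # hypothetical pre-test shares
post = [0.20, 0.40, 0.30, 0.08, 0.02]   # hypothetical post-test shares

# Under this point assignment, a negative difference means a shift
# toward "Strongly agree"/"Agree" on the item.
print(f"mean difference = {item_mean(post) - item_mean(pre):+.2f}")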
Items 3, 4, 7, and 9 of the survey explored respondents' communication and collaboration skill, and the overall results indicated that MPBL successfully improved students' communication and collaboration skills. As shown in Figure 5, the average score of the control group stays almost unchanged in the post-test, with a minor increase on Item 3. In contrast, there is an apparent increase in the mean score of the experimental group, with improvement on all items, especially Item 9. The third item had a close connection with group discussion and negotiation (I3: I persuade others when I disagree with them). In conventional teaching, students also had opportunities, though infrequent, to engage in group work, and such experience might contribute to the development of this specific skill. Nevertheless, group work functioned as a major component of MPBL, and students in the experimental group, in comparison with their peers, achieved considerable progress after the instruction. The group-based inquiry activities in MPBL also involved students in frequent communication and collaboration with their group members to solve problems. That was probably the reason why the experimental group improved to such a great extent on Item 9 (I9: I communicate with group members to complete the task). Items 1, 8, 11, and 14 were relevant to the skill of information integration. As shown in Figure 6, both the control and experimental groups improved in this skill set on the whole after the instruction, but the performance of the experimental group was slightly better than that of the control group. It is worth noting that for Item 8 (I8: I know how to find chemistry-related websites and information), the group difference is much more evident. For the experimental group, the mean score obtained is 0.3 points greater than that of the pre-test survey, whereas the control group showed a decrease in the mean score in the post-test. Such a discrepancy may be explained by whether students engaged in actual application and practice of information collection and consolidation during instruction. In MPBL teaching, students were required to search for and compile relevant information before the lesson, whereas in the conventional classroom no such experience existed. As a consequence, in comparison with their counterparts, the experimental group reported better information integration skills on the whole. In comparison with the first two skill sets discussed above, less growth appears in the mean scores for independent learning skills in the post-test. As shown in Figure 7, there is a minor increase in the mean score for Items 6, 12, and 15 in the post-test survey of the experimental group. Thanks to the MPBL approach, students engaged frequently in self-regulated learning sessions: they had to do previews, complete application-related tasks, and reflect and review on their own. These experiences were useful and necessary for developing independent learners. However, cultivating autonomy in learning is a long-term process, and the implementation period of this exploratory study was far from sufficient, generating only minor effects.
Figure 6 Mean Score Differences between Groups of Information Integration
On the contrary, in the control group, no growth appears across all four items. For Items 6 and 10, the mean score even decreased in the post-test for the control group. Following the conventional method, the teacher assigned many tasks for students to complete before class so that she could devote more in-class time to the target concepts. Some of the tasks were too complex to finish, and the students were somewhat discouraged, which probably led the control group to report more negative responses for Item 6 in the post-test (I6: I can complete the tasks assigned by the teacher before the lesson). Correspondingly, they did not think they managed their study time effectively outside of the classroom and responded negatively to Item 10 (I10: I manage my study time effectively outside of the class).
Figure 7
Mean Score Differences between Groups of Independent Learning

Items 2, 5, 13, and 16 concerned problem-solving skill. Similarly, the enhancement of this skill set was more prominent in the experimental group. As shown in Figure 8, the mean score increased across all four items in the post-test of the experimental group, and the increase was much greater than that of the control group, indicating the superiority of MPBL over the traditional method of instruction. In MPBL, students solved authentic problems by connecting chemistry with real-life situations. In the process, they developed both the mindset and the skills for problem-solving. By contrast, students in the control group were deprived of these experiences and therefore failed to build the confidence and capability for problem-solving, which was reflected in their responses to the post-test survey, especially for Item 5 (I5: I can use my knowledge of chemistry to solve problems). The overall analysis of the survey questionnaire demonstrated that MPBL can function as an effective pedagogical approach for developing crucial learning skills, including communication and collaboration, information integration, independent learning, and problem-solving. Moreover, the effect was most evident in the promotion of collaborative skills.
Student Interview Results
To better understand students' experiences and perceptions of MPBL, 8 students from the experimental group were randomly selected for a semi-structured interview. Some representative responses from the interviewees are presented below.

Question 1: Which teaching method do you prefer, MPBL or the conventional method? Why?

Answer 1: I prefer MPBL, because in the conventional method, teachers talk most of the time, while in MPBL, we read materials before the class, did hands-on practice, and displayed and shared our results. These activities significantly enhanced my interest in learning chemistry, so I prefer the former.

Answer 2: I like the way of learning with micro-projects. The questions in MPBL were exciting and required constant thinking, so we paid close attention in class and did not get distracted. In the MPBL class, we could also speak freely.

Answer 3: I... how to say? I feel that the MPBL method is relatively new. However, the teacher asked too many questions, and I had to move from one question to the next before I had completely understood and solved the previous one. Given the current situation that focuses on tests and examinations, I still prefer the conventional method, where teachers emphasize and elaborate key knowledge points, making it easier for me to grasp them.

Question 2: Was MPBL helpful? If yes, how did it help you?

Answer 1: MPBL was helpful, especially in improving important learning skills such as problem-solving and collaboration.

Answer 2: It helped me experience the connection between chemistry and life, the value of knowledge, and the joy of applying knowledge. In addition, we had the opportunity to do experiments by ourselves and have more exchanges with classmates.
Question 3: Has MPBL added an extra burden to your study?
Answer: Most students believed there was no added burden, because the information collection and production activities of the micro-projects were completed at home. However, one of them felt that collecting information and completing the product shortened his rest time, and he felt some pressure.

Question 4: How do you view MPBL? Any suggestions?

Answer 1: Generally speaking, I feel this pedagogical approach is okay, but there were some minor problems. For example, the group work was not well organized or supervised. Some students were lazy and idle in collaborative activities.

Overall, the students interviewed were optimistic about MPBL teaching. MPBL engaged them in class activities more than the traditional method did. The experimentation and inquiry activities promoted their interest, activated their thinking, and deepened their learning. MPBL also provided them with more opportunities to communicate with classmates. Most students thought the MPBL approach was fresh and fun, and they hoped teachers would use it more often in the future.
A small number of students expressed concern that they might be unable to grasp the key points in MPBL. A deeper look into these negative responses revealed the influence of their knowledge foundation and learning style. This finding highlights the significance of student analysis and preparation for improving students' acceptance of, and adaptability to, MPBL. During implementation, teachers also need to engage these students in successful experiences to improve their attitude, confidence, and sense of personal value in the long run.
Discussion
As a lightweight alternative to PBL, MPBL utilizes the key mechanisms of PBL while mitigating its incompatibility with the classroom, shortening the learning cycle, and activating students' interest in and motivation for classroom projects (McDonnell et al., 2007). Regarding specific strategies for integrating the MPBL pedagogical approach into the teaching of chemistry concepts and crucial learning skills at upper-secondary schools, it is noteworthy that previous papers, although they mentioned the benefits of PBL and its impact on students' study, did not specifically address how to resolve the problems encountered when implementing PBL in the classroom at this stage (Ayaz & Söylemez, 2015). Besides, some studies have established PBL design processes, but most of them are rough and lack a refined design process and specific cases of integrated lesson-plan implementation (Zhao & Wang, 2022). In this study, the PBL pedagogical approach was revised into MPBL to make it more compatible with the upper-secondary science classroom.
In this study, the specific MPBL design process was divided into 6 steps: task design and introduction, implementation, presentation and review, evaluation, summary and reflection, and feedback and adjustment (Barron et al., 1998). Based on these key elements of MPBL design, the concepts that students need to understand during the lesson according to the curriculum standard, and the skills they should develop through the activities, the micro project for teaching sodium bicarbonate was developed and implemented at a local upper-secondary school (Krajcik et al., 1994; Krajcik et al., 2013).
The MPBL lesson plan for teaching sodium bicarbonate contained preparation and introduction, understanding the task, forming hypotheses, experimentation, analysis and discussion, elaboration and extension, summary, application, and presentation. The preparation, introduction, and task-understanding parts required students to collect and extract information by themselves so that they could develop information integration and independent learning skills. The designed driving questions in the inquiry activities (forming hypotheses, experimentation, analysis and discussion, elaboration and extension, and summary) guided students step by step to explore the working principles of sodium bicarbonate as a leavening agent, which mainly developed their problem-solving skill (Krajcik & Blumenfeld, 2006). Besides, most exploration activities required group work, through which students could develop communication and collaboration skills. The application and presentation phases contained a making project that could inspire students' sense of satisfaction and achievement and help them deepen their understanding of the chemical properties of sodium bicarbonate (Hanif et al., 2019). Regarding students' conceptual understanding of sodium bicarbonate, the MPBL experimental group achieved comparatively better performance than the control group in understanding the chemical properties of sodium bicarbonate, as shown by the post-test results. Established research has confirmed the benefits of PBL for students' knowledge improvement (Günter & Alpat, 2017; Hakim et al., 2016; Musengimana et al., 2022). Sharing the same principles as PBL, MPBL also proved effective in enhancing conceptual understanding in this study. Students designed and conducted chemical experiments, observed phenomena, and gained insights into the composition and working mechanisms of chemical leavening agents by solving the problems encountered in the activity of making steamed buns, which are common in daily life (National Research Council, 2012). Meanwhile, through the strengthened connection between chemistry knowledge and daily-life experience, students developed an in-depth comprehension of the properties and applications of the target concept and a core principle of the chemistry discipline (Blumenfeld et al., 1991; Krajcik & Czerniak, 2013). However, although this study demonstrated that the experimental group, i.e., the class using the MPBL approach, had higher post-test scores than the students in the control class, the difference was not statistically significant. This may be due to the limited experimental time and selection of teaching contents, the teacher's lack of relevant experience, and the small sample size.
Regarding crucial learning skills, MPBL proved to be effective. Previous research has confirmed that PBL is effective in nurturing communication, presentation, and independent learning (English & Kitsantas, 2013; Isik-Ercan, 2020; Situmorang et al., 2018; Tosun & Taskesenligil, 2013). However, most relevant studies focused on the university level (Belt et al., 2002; Overton & Randles, 2015). As a lightweight alternative to PBL, MPBL applied the affordances of scientific inquiry projects to chemistry classes at the upper-secondary level to improve crucial learning skills, including communication and collaboration, information integration, independent learning, and problem-solving, as reflected in the surveys. After participating in MPBL, students in the experimental group generally gave more positive responses to the survey items probing their confidence in and application of the target skill sets. In the traditional classroom, by contrast, such improvement, if any, was less evident. For some items, the mean score obtained by the control group even decreased, which suggests that conventional instruction does not support, or may even hinder, learners' development. Following the traditional approach, students relied too heavily on their teachers; they were thus deprived of opportunities to participate in scientific inquiry and problem-solving and were seldom engaged in collaborative learning experiences. Their motivation and interest in learning were affected, and so was their self-efficacy (McParland et al., 2004; Vlassi & Karaliota, 2013).
Regarding students' perceptions of and attitudes toward the MPBL-integrated chemistry class, most of the students interviewed recalled a sense of being deeply involved and engaged in the MPBL class. They were required to verify their hypotheses by designing relevant experiments, which made learning feel interesting. For the experimental group, learning was investigative rather than dictated. The experimental and investigative processes in MPBL, as discussed by McDonnell et al. (2007) and Mataka and Kowalske (2015), enhanced interest, activated thinking, and contributed to more in-depth learning. Moreover, MPBL provided students with more opportunities to interact with their classmates, and having perceived such benefits, students in the experimental group generally held a positive attitude towards MPBL. Several students mentioned that they might be unable to grasp the key points in MPBL. A deeper look into these responses revealed the influence of their knowledge foundation and learning style. This finding highlights the significance of teachers' lesson design, student analysis, and preparation in improving acceptance of and adaptability to MPBL implementation.
Conclusions and Implications
This study developed a specific strategy for integrating the MPBL pedagogical approach into the teaching of chemistry concepts and crucial learning skills at upper-secondary schools. Based on the MPBL design strategy, an intervention of MPBL-integrated lessons was implemented in a chemistry class. Through the collection and analysis of multiple data sources, including knowledge tests and survey questions, the results of the quasi-experimental study demonstrated the effectiveness of the MPBL approach in improving students' conceptual understanding of sodium bicarbonate and crucial learning skills (communication and collaboration, information integration, independent learning, and problem-solving) in chemistry education. Besides, student interviews showed that most students who had received the MPBL teaching approach had positive perceptions of and attitudes toward the MPBL-integrated chemistry class. MPBL is largely effective in teaching chemistry and can have positive effects on students' conceptual understanding and skills development. Therefore, teachers are encouraged to design micro-projects based on PBL principles in their teaching practice, especially when teaching the physical and chemical properties of materials and chemical reactions in chemistry classes. Using a project to link all knowledge organically as a whole can stimulate students' interest in learning, enhance their confidence, and give them a sense of achievement in problem-solving. Teachers can set driving questions in the micro-projects that are close to students' "nearest developmental zone" to avoid confusing students after the questions are asked, which would otherwise hamper the project and reduce students' sense of efficacy. In addition, teachers may assign some small tasks outside the classroom and focus on guiding students to cooperate in solving problems and making products, so as to develop their ability to analyse and solve real-life problems. Finally, teacher professional development is necessary and will help teachers design and conduct MPBL lessons more effectively.
Based on the current study, several recommendations can be made for further research. Although the MPBL approach achieved better results in knowledge development than the traditional approach, the difference was not significant. This may be due to the limited time spent on implementation and the teacher's lack of experience in guiding MPBL. In addition, as an exploratory study, the implementation involved only a few participants with limited data. Moreover, due to practical constraints, only one particular school and grade were selected for the implementation; participants of different study levels and schools in different areas were not included. Therefore, the generalization of the study results should be done with caution. The insights gained and limitations found in this study can inspire and inform future research that aims to develop models, principles, implementation, assessment, and professional development for teachers to enhance MPBL capacity across disciplines, levels, and schools.

-Lu found that sodium bicarbonate could be used to leaven buns, but it may affect the taste and colour. The addition of vinegar would be a solution, which Lu and her mother tried and verified.
2) Speculate on the mechanisms involved.
Guide students to pay attention to the properties and applications of the chemical using a real-life situation.
Forming hypotheses (In class activity)
Propose Guiding Questions 4 & 5, before guiding and facilitating students to generate solutions by working in groups. Q4: Sodium bicarbonate is the most commonly adopted ingredient in chemical leavening agents. Why? What is its working principle? Q5: What properties does sodium bicarbonate, as an important ingredient of chemical leavening agents, have? Discuss in small groups and put forward hypotheses about the properties of sodium bicarbonate.
-The phenomenon that the steamed buns puffed up suggests that sodium bicarbonate generates gas upon being heated.
-The alkaline smell of the steamed buns indicates that sodium bicarbonate, after being heated, generates an alkaline residue.
-The phenomenon that the addition of vinegar changed the colour of the steamed buns suggests that sodium bicarbonate reacts with acetic acid in the vinegar.
1) Get students further engaged and connect chemistry to daily life using real-life situations. 2) Develop skills in information extraction.
Experimentation (In class activity)
Guide students to design and conduct experiments to verify the proposed hypotheses. Discuss and analyse the phenomena observed in the experiments.
-In Experiment 1, we observed the solid change from crystal-clear to white while gas was produced, and the limewater turned turbid.
Phenolphthalein can be used to test whether the white solid generated was sodium carbonate or not.
-As observed, sodium carbonate did not decompose after being heated, and therefore it cannot be used to make a leavening agent.
Develop an in-depth comprehension of the target and related concepts (i.e., sodium bicarbonate, sodium carbonate, and salt solutions).
Elaboration and extension (In class activity)
Propose Guiding Question 9 to further deepen students' thinking. Q9: What is the role of each ingredient in a compound leavening agent?
1) Read the materials about compound leavening agents.
2) Recognize respective roles played by carbonates, acids and additives in compound leavening agents.
1) Discover and deepen the understanding of compound leavening agents and respective roles played by each ingredient.
3) Appreciate the value shown by chemistry in the improvement of human life.
Summary (In class activity)
Guide students to make a summary of the working mechanisms of leavening agents, analyse the pros and cons for making leavening agents using sodium bicarbonate as the key ingredient, and compare the chemical properties shown by sodium bicarbonate and sodium carbonate, respectively.
1) Make a summary of the working mechanisms of leavening agents.
-Upon being heated, the chemical for the creation of leavening agents will decompose and produce non-toxic, odourless gas to puff up the bun.
2) Make a summary of the pros and cons for making leavening agents using sodium bicarbonate as the key ingredient.
3) List a table in which the chemical properties of sodium carbonate and sodium bicarbonate are displayed.
Deepen the comprehension of leavening agents in food production and sodium bicarbonate via a comparative approach.
Application and presentation (in-class and out-of-class activities): Propose Guiding Question 10, and guide students to apply what they have learned to real-life scenarios. Q10: How would you choose the compound baking agent if you were the owner of a cake shop? Please design a baking compound which can be used to make your own pastry.
Design and make a leavening agent using the materials available.
-Add lemon juice to sodium bicarbonate to reduce alkalinity, but too much of it may affect the taste of the bun.
Apply conceptual knowledge and creative thinking to the settlement of practical problems.
Display, evaluate, and give feedback on students' work.
Present and introduce their own works. Deepen the understanding and experience the joy obtained from application and problem-solving.
Figure 1 Flowchart of the MPBL Process and Activities
Figure 2 Data Collection Methods and Processes
Figure 3 Descriptive Analysis of Crucial Learning Skills Survey Responses in the Pre-Test
Figure 4 Descriptive Analysis of Crucial Learning Skills Survey Responses in the Post-Test
Figure 5 Mean Score Differences between Groups of Communication and Collaboration
Figure 6 Mean Score Differences between Groups of Information Integration
Figure 7 Mean Score Differences between Groups of Independent Learning
Figure 8 Mean Score Differences between Groups of Problem-Solving

Introduction (In-class activity): Instruct students to read the list of ingredients for a leavening agent and propose Guiding Question 1. Q1: What is the main ingredient of the leavening agent? 1) Read the ingredient list and identify sodium bicarbonate as the main ingredient. 2) Think over the role played by sodium bicarbonate. Get students engaged in the process and encourage them to explore.

Understanding the task (In-class activity): Guide students to read the materials and propose Guiding Questions 2 & 3. Q2: What problems occur in the bun-making process? Q3: What are the causes? 1) Read the material and launch a discussion on the process of Lu and her mother's steamed-bun-making. -Lu's mother added yeast to the flour, and the steamed buns tasted alkaline and looked unshaped.
Table 2
One-sample Kolmogorov-Smirnov Test Results
Table 3
Independent Sample t-Test Results

(I hope) the teacher can give us a little more time to solve the problem. By the time the teacher asked us to make a report, our group had not reached an agreement yet. Then we just asked one group member to share her individual opinion. So hopefully, the teacher can give us a little more time to explore and discuss.
If the teacher had helped us divide the work within the group in advance, our group would have been more efficient. I hope the teacher will facilitate and supervise every student in the classroom in the future.
1) Design and conduct experiments to verify the hypothesized properties of sodium bicarbonate. | 2023-02-24T17:24:21.232Z | 2023-02-15T00:00:00.000 | {
"year": 2023,
"sha1": "b69a81c379875313c3934bd22978f3f00e78a8ac",
"oa_license": null,
"oa_url": "https://oaji.net/articles/2023/987-1676961214.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d3fdc61de473a8961f6ac23bee6bc6576c5e41d1",
"s2fieldsofstudy": [
"Chemistry",
"Education"
],
"extfieldsofstudy": []
} |
6764308 | pes2o/s2orc | v3-fos-license | …ification of fracture mechanism for polymer gels with controlled network structure
Recently, polymer gels have drawn much attention as scaffolds for regenerative medicine, soft actuators, and functional membranes. These applications require tough and robust polymer gels, as represented by double-network gels. To fully understand this toughening mechanism and develop further advanced polymer gels, we need to understand the molecular origin of the fracture energy of conventional polymer gels, which has been obscured by their inherent heterogeneity. In this paper, we show experimental results on the fracture of model polymer gels with controlled network structure and discuss the mechanism of fracture of polymer gels.
I. Introduction
Elastomers are important materials for a variety of applications, including tires, shock-absorbing materials, soft actuators, and biomaterials. To tailor the physical properties for each application, a molecular understanding of those properties is of particular importance. However, at this point, we do not have definite pictures of the molecular origin of physical properties. For example, the elastic modulus (G) is explained by its relationship with the number density of elastically effective chains.1,2 We have two popular models for the elastic modulus, the affine network1 and phantom network3 models; however, their validity or applicability has not been clarified. This problem is caused by the difficulty in controlling the network structure and in observing it directly. Because elastomers have a complicated and heterogeneous structure,4 including a heterogeneous distribution of crosslinking, chain entanglements, dangling ends, and loops, we cannot estimate the structural parameters of elastomers from the feed conditions. In addition, we cannot observe these complicated substructures by direct visualization with TEM and SEM. To understand the molecular origin of physical properties, as a first step, we need to perform a series of studies on elastomers with controlled network structure.
Recently, we succeeded in fabricating a homogeneous polymer network system from a novel molecular design of prepolymer architecture and reaction system.5,6 We synthesized two kinds of tetra-armed prepolymers with mutually reactive end groups, amine and activated ester. We can fabricate polymer gels by mixing two aqueous solutions of the prepolymers at around physiological conditions. The unique tetra-armed prepolymers completely inhibit the self-biting reaction and form elastically effective chains efficiently. The ionic equilibrium of the amine species allows us to control the reaction rate and to mix the two prepolymer solutions, resulting in a homogeneous network structure.7 We named this polymer network swollen in aqueous solution Tetra-PEG gel, and validated its homogeneity by means of small-angle neutron scattering8,9 and nuclear magnetic resonance.10 These measurements clarified that the heterogeneity of Tetra-PEG gel is strongly suppressed relative to other structure-controlled polymer networks, including "model networks." Using Tetra-PEG gel as a model system, we observed, for the first time, the crossover of the elastic modulus from the phantom to the affine network model with an increase in the polymer volume fraction,11 which was originally predicted by Flory in 1977.12 These experimental results strongly suggest that Tetra-PEG gel is a promising model system contributing to the molecular understanding of the physical properties of elastomeric materials.
In this paper, we focus on the fracture of polymer gels. In general, the most important parameter related to fracture is the fracture energy (T_0), which is the energy required to propagate a crack by a unit length. The Lake-Thomas theory describes T_0 as the energy needed to break the chemical bonds per unit cross-section on the fracture surface as follows:13

T_0 = nLNU (1)

where n is the number of elastically effective chains per unit volume, L is the displacement length, N is the degree of polymerization of the network strand, and U is the total bond energy of a monomeric unit. Originally, the value of L is related to the strand length of the network as L ≈ R_0 ≈ aN^(1/2) (a: monomer length); however, there is an ambiguity in L because there has been no quantitative validation of the Lake-Thomas model. Although quantitative validation has been inhibited by heterogeneity, qualitative validation has been performed by Gent et al.14,15 They observed the relationship between T_0 and G as T_0 ∝ G^(-1/2), which follows from eqn (1) when we assume that n ∝ G and N ∝ G^(-1). However, in general, when we use R_0 in eqn (1), we need to add an enhancement factor (k) as follows:14,16,17

T_0 = knR_0NU (2)

The value of k strongly depends on the system and ranges from unity to thousands.19,20 Because the effect of k is so strong, the molecular origin of k is of great importance for designing advanced materials.
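As a check on the scaling quoted above, the G^(-1/2) dependence follows from eqn (1) in one line under the stated assumptions (n ∝ G, N ∝ G^(-1), L ≈ aN^(1/2)):

\[
T_0 = nLNU \approx n\,(aN^{1/2})\,N\,U = aU\,nN^{3/2} \propto G\,(G^{-1})^{3/2} = G^{-1/2}
\]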
To investigate the fracture energy, we prepared Tetra-PEG gels with variable degree of strand polymerization between crosslinks (N_c), polymer volume fraction (f_0), connectivity (p), and heterogeneous distribution in strand length. Through the analysis of these Tetra-PEG gels, we discuss the effects of the structural parameters on the fracture energy based on the Lake-Thomas model, and finally discuss the enhancement factor.
Equimolar quantities of TetraPEG-NH2 and TetraPEG-OSu (f_0: 0.034-0.12) were dissolved in phosphate buffer (pH 7.4) and phosphate-citric acid buffer (pH 5.8), respectively. The ionic strength of the buffer solution was varied to maintain the pH of the solution. In order to tune p, the TetraPEG-OSu solution was incubated at 25 °C for a series of times (t_deg). After the incubation time, the TetraPEG-NH2 and TetraPEG-OSu solutions were mixed, and the resultant solution was poured into the mold. At least 12 hours were allowed for the completion of the reaction before the following experiments were performed. The detailed experimental conditions are listed in Tables 1-3.
C. Infrared (IR) measurement
The gel samples were prepared in a cylindrical shape (diameter: 15 mm, height: 7.5 mm) and immersed in H2O for 2 days in order to remove the sol fraction.
After drying in air, the samples were cut into thin films (thickness: 40 μm) using a microtome (SM2000R, Leica). These dried samples were swollen in D2O and then soaked in a mixed solvent of D2O and oligo-PEG (M_w = 0.40 kg mol⁻¹) (v/v = 1/1). The IR measurements for these samples were performed using a JASCO FT-IR-6300 equipped with a deuterated triglycine sulfate (DTGS) detector, in which 128 scans were coadded at a resolution of 4 cm⁻¹.
D. Tearing test
The tearing test was performed in air using a stretching machine (Tensilon RTC-1150A, Orientec Co.). The samples were cut into the shape standardized by JIS K 6252 at 1/2 size (width: 50 mm, length: 7.5 mm, thickness: 1 mm; the length of the initial notch was 20 mm) using a gel-cutting machine (Dumb Bell Co., Ltd.). The two arms of the test sample were clamped, and one arm was pulled upward at a constant velocity (40 or 500 mm min⁻¹) while the other arm was held stationary; the tearing force F was recorded.
(i) Conventional Tetra-PEG gel (Fig. 1a):21 Equimolar amounts of TetraPEG-NH2 and TetraPEG-OSu with the same molecular weight were used as prepolymers. The molecular weights of the prepolymers (M_w) were tuned to 5, 10, 20 and 40 kg mol⁻¹, and the resultant gels were named 5k, 10k, 20k and 40k Tetra-PEG gel, respectively. The polymer volume fraction (f_0) was tuned from 0.034 to 0.12.
(ii) p-tuned Tetra-PEG gel (Fig. 1b):21 p-tuned Tetra-PEG gels were fabricated using partially hydrolyzed TetraPEG-OSu as a prepolymer. Prior to the reaction, TetraPEG-OSu was dissolved in the aqueous buffer solution and allowed to hydrolyze for a certain period of time (t_deg = 0, 20, 80, 120, 160, 200, 240, 320 and 400 min). The values of M_w and f_0 were fixed at 20 kg mol⁻¹ and 0.081, respectively.
p was estimated by FT-IR measurements of the gel samples.11 p was estimated from the peak intensity of the ionized carboxyl group (1555 cm⁻¹) and that of the amide bond (1624 cm⁻¹) as

p = I_amide/(I_amide + I_carboxyl) (3)

where I_amide and I_carboxyl are the peak intensities of the amide bond and the ionized carboxyl group, respectively. The ratio of the molar absorbance coefficient of the ionized carbonyl group to that of the amide bond was estimated to be 1.0 : 1.0. The values of p were almost constant against f_0 in all Tetra-PEG gels: 0.82-0.95 for the 5k, 10k and 20k Tetra-PEG gels, the 5k-10k, 5k-20k and 10k-20k Tetra-PEG hetero gels, and the Tetra-PEG bimodal gels, whereas p was 0.71-0.81 for the 40k Tetra-PEG gel (Fig. 2).
B. Estimation of fracture energy
To investigate the fracture energy (T_0), we performed tearing measurements on trouser-shaped specimens.11 The gel specimens were used in the as-prepared state, not in the equilibrium-swollen state. The two legs of a sample were clamped and the upper leg was pulled upward at a constant velocity of 40 mm min⁻¹, while the other leg was held stationary, and the load (F) was recorded. We also performed tearing measurements at a different velocity of 500 mm min⁻¹, and confirmed rate-independent tearing behavior under these experimental conditions. Similar rate-independent tearing behavior has been observed for other gel systems. Fig. 3 shows the tearing behavior of conventional Tetra-PEG gels. In the beginning, the load increased monotonically with extension, while the crack did not propagate. After the crack started to propagate, the load fluctuated with extension. In the whole region of the 5k and 10k gels, and in the low-f_0 region of the 20k and 40k Tetra-PEG gels, steady tearing, in which the degree of fluctuation is relatively small, was observed. On the other hand, stick-slip tearing, in which the degree of fluctuation is large and clear peaks were observed, occurred for the 20k and 40k Tetra-PEG gels in the higher-f_0 region, and this tendency became more prominent with increases in M_w and f_0.
We estimated different values of T_0 from the average of the local maximum, the simple average, and the average of the local minimum values of F as

T_0 = 2F/h (4)

where h is the thickness of the gel sample. It should be noted that only the values of F after the start of crack propagation were accumulated. The values of T_0 computed from the local maximum and average values of F were strongly influenced by the magnitude of the stick-slip behavior, while T_0 computed from the local minimum values was negligibly affected by the magnitude of the stick-slip behavior and changed systematically with the feed conditions. Although we recognize the importance of understanding the stick-slip behavior, in this study we focus on T_0 computed from the minimum values of F, which indicates the minimum energy required to propagate a crack.
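A minimal sketch of the three T_0 estimates, assuming the post-propagation load trace is available as a 1-D array and using the trouser-tear relation T_0 = 2F/h stated in eqn (4); the synthetic trace and the numerical values below are illustrative only, not the paper's data.

import numpy as np
from scipy.signal import argrelextrema

h = 1.0e-3                                   # specimen thickness (m)
x = np.linspace(0, 50, 2000)
rng = np.random.default_rng(2)
F = 0.010 + 0.002 * np.sin(4 * x) + 5e-4 * rng.standard_normal(x.size)  # load (N)

i_max = argrelextrema(F, np.greater, order=10)[0]   # indices of local maxima
i_min = argrelextrema(F, np.less, order=10)[0]      # indices of local minima

for label, vals in [("max", F[i_max]), ("ave", F), ("min", F[i_min])]:
    print(f"T0({label}) = {2 * vals.mean() / h:.1f} J m^-2")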
C. p-tuned Tetra-PEG gels
Prior to the investigation of the conventional Tetra-PEG gels, we focus on the p-tuned Tetra-PEG gels.21 This is because connectivity defects of the network can influence the structural parameters, including n and N. When we accept the above-mentioned scaling relationships, G ∝ n and G ∝ N⁻¹, a decrease in p will decrease n, leading to an increase in N. If the value of N were influenced by p, we could not use the values of N directly calculated from the molecular weight of the prepolymers as

N = M_w/(2m_PEG) (5)

where m_PEG is the molecular weight of PEG per monomeric unit.
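As a worked example of eqn (5), taking m_PEG = 44 g mol⁻¹ for the -CH2CH2O- repeat unit (consistent with the N = 85.2 computed for the 5k-10k hetero gel below):

\[
N(\text{20k}) = \frac{M_w}{2m_{\mathrm{PEG}}} = \frac{20\,000}{2 \times 44} \approx 227,
\qquad
N(\text{5k}) = \frac{5000}{2 \times 44} \approx 57
\]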
In addition, the p-tuned Tetra-PEG gels allow us to investigate the pure effect of p on T_0 while keeping f_0 and M_w unchanged.
By hydrolyzing the activated ester prior to the reaction, the values of p were successfully tuned from 0.55 to 0.92 (Fig. 4). The effect of p on n can be directly predicted according to the tree-like approximation,24,25 where ρ (= 1.129 g cm⁻³) is the density of PEG, P_N is the probability that an arm does not lead to an infinite network, and the binomial coefficient denotes the usual number of combinations of x items taken y at a time: x!/[y!(x − y)!]. Notably, in previous reports, we checked the validity of n calculated by the tree-like approximation through an experimental study of the elastic modulus of the p-tuned Tetra-PEG gels.26 Fig. 5 shows the values of T_0 against n. As mentioned above, the change in n originated purely from the change in p. The dashed line represents the scaling prediction of the Lake-Thomas model, T_0 ∝ n. As clearly shown in Fig. 5, the experimental data obeyed the Lake-Thomas prediction in the region n > 4.0 (p > 0.65). The agreement in this region indicates that n calculated by the tree-like approximation is applicable to the Lake-Thomas model, and that the term LNU does not depend on p. In the region p > 0.65, we can use the values of N and U calculated from eqn (5) and (8), respectively.
On the other hand, in the region n < 4.0, T_0 deviated upward from the guideline. This region corresponds to the region where the elastic modulus cannot be predicted by n calculated under the tree-like approximation. A massive amount of dangling chains may inhibit the mean-field-like treatment in this region. In the following analyses, we use N and U calculated from eqn (5) and (8), because all the values of p shown in Fig. 2 are higher than 0.65.
D. Conventional Tetra-PEG gels and hetero Tetra-PEG gels
In order to investigate the effects of N and f_0 on T_0, we evaluated T_0 of the conventional Tetra-PEG gels with different N and f_0. Fig. 6 (open symbols) shows the values of T_0 as a function of n; the values of T_0 for the same N fall on the same line proportional to n. Because the difference in n for Tetra-PEG gels with each N originated purely from f_0, these data indicate that the slope (LNU) is independent of f_0 but dependent on N. From the linear fit, the slopes were estimated to be 0.45, 1.28, 3.15 and 9.18 for the 5k, 10k, 20k and 40k Tetra-PEG gels, respectively. Here, we show the data of the hetero Tetra-PEG gels (Fig. 1c).22 Notably, each hetero Tetra-PEG gel has a monomodal network strand length defined by the molecular weights of the prepolymers (M_1 and M_2) as N = (M_1 + M_2)/(4m_PEG) (e.g., for the 5k-10k Tetra-PEG hetero gel, N = (5000 + 10 000)/4/44 = 85.2). As shown in Fig. 6 (filled symbols), we observe linear relationships for the hetero Tetra-PEG gels as well, and the slopes increased with an increase in N; the slopes were 0.88, 1.68 and 2.17 for the 5k-10k, 5k-20k and 10k-20k hetero Tetra-PEG gels, respectively. Using the constant values of N and U calculated from eqn (5) and (8), we estimated the values of L from the slopes according to eqn (1). Fig. 7 shows L of the conventional Tetra-PEG gels (open circles) and the Tetra-PEG hetero gels (filled circles) against N. L increased from 13 to 34 nm with an increase in N. In the original Lake-Thomas model, L corresponds to R_0 (≈ aN^(1/2)) of virtual network chains with a polymerization degree of N (dotted line in Fig. 7).28,29 The values of N_k are derived as N_k = 0.68N, where we assume that the bond angles are 109.5° and that the bond lengths of C-C and C-O are 0.154 and 0.145 nm, respectively. The values of L and R_0 have a similar magnitude and N-dependence, but are different from each other. According to eqn (2), we estimated k for each N and plotted it against N (Fig. 8). The values of k were almost constant and approximately 3 in the range examined. These data indicate that network strands within 3R_0 from the crack tip are extended at fracture, and that this length is determined only by N, regardless of p and f_0.
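The extraction of L and k from these fits reduces to two ratios; the sketch below shows the arithmetic implied by eqn (1), (2) and (5). The helper function is hypothetical, and U and a are placeholder values rather than the ones used in the paper, so only the structure of the calculation is meaningful here.

import numpy as np

def estimate_L_and_k(slope, N, U, a):
    # eqn (1): the slope of T0 vs. n equals L*N*U, so L = slope / (N * U).
    L = slope / (N * U)
    # eqn (2): k = L / R0, with R0 = a * N**0.5.
    R0 = a * np.sqrt(N)
    return L, L / R0

N_20k = 20000 / (2 * 44)                      # eqn (5) for the 20k gel
L, k = estimate_L_and_k(slope=3.15, N=N_20k,  # 3.15: fitted slope for 20k
                        U=1.0, a=1.0)         # U, a: placeholders only
print(f"N = {N_20k:.0f}, L (placeholder units) = {L:.4f}, k = {k:.4f}")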
E. Bimodal Tetra-PEG gels
Finally, we investigated the effect of another type of heterogeneity, i.e., a heterogeneous distribution in strand length.23 This heterogeneity was introduced by mixing three kinds of 5k and 20k Tetra-PEG prepolymers with a tuned molar ratio (Fig. 1d, r = (20k Tetra-PEG prepolymer)/(total prepolymer)). We tuned r while fixing the molar concentration of the prepolymers at 8.0 × 10⁻³ mol L⁻¹ (Fig. 1d). The molecular weight of the shorter strands is 2.5 kg mol⁻¹, whereas that of the longer strands is 10 kg mol⁻¹. Although the difference in molecular weight is not large, we can assess the effect of a heterogeneous distribution in strand length on T_0. Fig. 9 shows T_0 of the bimodal Tetra-PEG gels.
As shown in Fig. 9, T_0 increased with r. The calculated values of n were almost constant against r, reflecting the constant p.
Considering the constant n against r, the increase in T_0 originated from other parameters such as N and L. Here, we assume that the value of N in the bimodal Tetra-PEG gels is represented by the number-average degree of polymerization of the prepolymers (N_ave = (5000(1 − r) + 20 000r)/(2m_PEG)). Fig. 10 shows the N_ave-dependence of L and a guideline of R_0 calculated from N_ave (dotted line). The values of L and R_0 have a similar N_ave-dependence, but are different from each other. We also estimated k using eqn (2) and plotted it against N_ave in Fig. 11. Although the values of k were slightly smaller than those for the conventional and hetero Tetra-PEG gels, the values were similar to each other. These data indicate that a heterogeneous distribution in strand length does not significantly influence T_0 in the range of this study.
IV. Conclusions
In this paper, we discussed the fracture energy of polymer gels with controlled network structure. The fracture energies of Tetra-PEG gels with tuned structural parameters are fully explained by the Lake-Thomas model with k = 3. The value of k is universal in the range of this study: neither the changes in f_0 and N nor the connectivity heterogeneity and the heterogeneous distribution in strand length affected the value of k. These data suggest that the enhancement factor estimated in this study is applicable to conventional polymer gels in a similar concentration range regardless of the degree of heterogeneity, although there is a possibility that macroscopic heterogeneity (∼mm), which was not observed in the Tetra-PEG gel system, affects the fracture toughness. In other words, network homogeneity does not strongly contribute to the enhancement of fracture toughness. The constant k also suggests that it is difficult for conventional polymer gels to achieve enhanced fracture toughness. As proposed by Gong, chain entanglements may play an important role in enhancing k.19,20,30-32 Here, we must point out a limitation of this study: we investigated T_0 computed from the minimum values of F in the tearing measurements, and we ignored the effect of the tearing behavior, whether steady state or stick-slip. Because fracture starts at the maximum values of F, a molecular understanding of T_0 calculated from the maximum value of F is also important. Thus, to fully understand the fracture behavior, we also need to further investigate the tearing behavior.
Fig. 6: T_0 as a function of ν for the conventional Tetra-PEG gels (open symbols); the values of T_0 for the same N fall on the same line proportional to ν. Because the difference in ν for Tetra-PEG gels with each N originated purely from f_0, these data indicate that T_0 at fixed N is set by ν alone, independently of f_0.
Fig. 4: p as a function of t_deg. The dashed line is a guide showing the relationship p ~ exp(−t_deg). (Reproduced from Akagi et al., ref. 21, with permission from the American Institute of Physics.)
Fig. 5: T_0 as a function of ν in the p-tuned Tetra-PEG gel. (Reproduced from Akagi et al., ref. 21, with permission from the American Institute of Physics.)
Fig. 7: L as a function of N in the Tetra-PEG gels (open circles) and hetero Tetra-PEG gels (filled circles).
Fig. 8: k as a function of N in the Tetra-PEG gels (open circles) and hetero Tetra-PEG gels (filled circles).
Fig. 9: T_0 as a function of r in the bimodal Tetra-PEG gels.
Fig. 10: L as a function of N_ave in the bimodal Tetra-PEG gels.
Fig. 11: k as a function of N_ave in the bimodal Tetra-PEG gels.
Table 1: Experimental conditions for the conventional Tetra-PEG gels and p-tuned Tetra-PEG gels.
Table 2: Experimental conditions for the hetero Tetra-PEG gels (combinations of Tetra-PEG prepolymers, g mol⁻¹, and f_0).
Table 3: Experimental conditions for the bimodal Tetra-PEG gels | 2018-04-03T05:13:04.585Z | 2014-08-13T00:00:00.000 | {
"year": 2014,
"sha1": "8de6be5ab7b9b6f385671157856cbcd023c0802e",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2014/sm/c4sm00709c",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "efbc1cc7a9bec5e0e897d617370a8dcd97908af4",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
258241568 | pes2o/s2orc | v3-fos-license | New insight into the microbiome, resistome, and mobilome on the dental waste water in the context of heavy metal environment
Objective Hospital sewage has been associated with the incorporation of antibiotic resistance genes (ARGs) and mobile genetic elements (MGEs) into microbes, which is considered a key indicator for the spread of antimicrobial resistance (AMR). Dental waste water (DWW) contains heavy metals, yet the evolution of AMR and its effects on the water environment in the context of a heavy-metal environment have not been seriously investigated. Thus, our major aim was to elucidate the evolution of AMR in DWW. Methods DWW samples were collected from a major dental department. The presence of microbial communities, ARGs, and MGEs in untreated and treated (by filter membrane and ozone) samples was analyzed using metagenomic and bioinformatic methods. Results DWW-associated resistomes included 1,208 types of ARGs, belonging to 29 antibiotic types/subtypes. The most abundant types/subtypes were ARGs conferring multidrug resistance and resistance to antibiotics frequently used in clinical practice. Pseudomonas putida, Pseudomonas aeruginosa, Chryseobacterium indologenes, and Sphingomonas laterariae were the main bacteria hosting these ARGs. Mobilomes in DWW consisted of 93 MGE subtypes belonging to 8 MGE types. Transposases were the most frequently detected MGEs and formed networks of communication. For example, ISCrsp1 and tnpA.5/4/11 were the main transposases located at the central hubs of a network. These significant associations between ARGs and MGEs revealed a strong potential for ARG transmission towards the development of antimicrobial-resistant (AMR) bacteria. On the other hand, treatment of DWW using membranes and ozone was only effective in removing a minority of bacterial species and of ARG and MGE types. Conclusion DWW contained abundant ARGs and MGEs, which contributed to the occurrence and spread of AMR bacteria. Consequently, DWW would seriously increase environmental health concerns, which may differ from those already well documented for hospital waste waters.
Introduction
Hospital wastewater is a major "breeding ground" for various pathogens, antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARGs), and has generated continued environmental health concerns (Rizzo et al., 2013). The major reason for the concern is that the wastewater facilitates ARG-exchange events among bacteria and the generation of multidrug-resistant (MDR) bacteria (Bondarczuk et al., 2016). However, similar concerns for dental waste water (DWW) have not been specifically investigated.
DWW has some specific differences from hospital sewage. For example, certain oral bacteria (e.g., Porphyromonas gingivalis) have been associated with the development of oral and gastrointestinal cancers (Ahn et al., 2012; Olsen and Yilmaz, 2019). DWW contains non-infectious toxic wastes that include acrylic resin scraps, metal alloys, porcelain, gypsum and dental amalgam, as well as abundant heavy metals, such as mercury, silver, tin, zinc, and copper, which have toxic properties (Clarkson et al., 2003; Jones, 2004; Kao et al., 2004; Vandeven and McGinnis, 2004). Consequently, most investigations of health hazards from DWW have focused on amalgam and other metals (Al-Khatib and Darwish, 2004; Muhamedagic et al., 2009), and on acrylic resin filling materials (Binner et al., 2022). However, investigations using holistic and more sophisticated technologies have not yet been reported.
Metals and biocides may co-select for antimicrobial resistance (AMR; Gelalcha et al., 2017; Pal et al., 2017). Metal contamination has been reported to significantly influence the diversity, abundance and mobility potential of a broad spectrum of ARGs in urban soils (Song et al., 2017; Zhao et al., 2019). In addition, co-selection of antibiotic and metal resistance has been associated with arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), copper (Cu), mercury (Hg), nickel (Ni), lead (Pb), and zinc (Zn) (Pal et al., 2015; Song et al., 2017; Zhao et al., 2019). Another study showed that metal contamination in soil increased the potential for horizontal gene transfer (HGT) of ARGs via co-selection of ARGs and MGEs, thereby generating a pool of high-risk mobile ARGs (Martínez et al., 2015). Given the presence of heavy metals in DWW, as opposed to hospital waste water, DWW may involve novel mechanisms of ARG evolution, HGT development and AMR transmission. Unfortunately, no such investigation has been reported, especially one using resistome and mobilome analyses.
Standard handling and disposal of potentially infectious and toxic DWW has been implemented. Many dental clinics have chair-side primary and secondary filter traps which remove approximately 60% of large particles from discharges (Westman and Tuominen, 2000; Johnson and Pichay, 2001; Adegbembo et al., 2002). In addition, membrane bioreactors (MBR), combining biological degradation and membrane separation, have been used to remove infectious and non-infectious agents from effluents (Diehl and LaPara, 2010; Ju et al., 2016). To our knowledge, there have been no reports simultaneously identifying the bacterial communities, resistome and mobilome in DWW. Thus, the overall aim of our study was to investigate the abundance and composition of bacterial communities, ARGs and MGEs in treated and untreated DWW from a single dental department. The investigation utilized advanced metagenomic and bioinformatic methods to provide in-depth characterization of the DWW. Our investigation provides novel information on AMR evolution under high metal pressure and on environmental health concerns.
Dental waste water treatment and sample collection
The DWW from each washbasin or dental chair in the department was discharged via pipes with filters, to remove large particles, into a regulating pool in a tank outside the department. In the tank, the discharged water was homogenized; when the accumulated DWW reached a certain level, it triggered a high-voltage discharge which produced ozone and activated a lift pump which circulated the waste water. After the lift pump stopped working, ozone disinfection continued for another 20 min. In addition, the tank was regularly disinfected once a week by adding 5-10 chlorine dioxide tablets (chlorine content 10%) for 30 min.
DWW samples (untreated and treated), 1 liter each, were collected from the specific discharge of the dental department (without mixing with discharge from other sources), weekly from June to July 2021. A total of nine samples were collected in sterile bottles and delivered on ice to the diagnostic microbiology laboratory within 1 h. In the laboratory, each sample was centrifuged at 10,000 rpm for 5 min at 4°C. The sediments were stored at −80°C until further analysis.
Metagenome sequencing (DNA extraction and identification)
Microbial DNA from the sewage sediments was extracted using the E.Z.N.A.® Soil DNA Kit (Omega Bio-Tek, Norcross, GA, United States). DNA concentrations were measured using the Qubit® dsDNA Assay Kit in a Qubit® 2.0 Fluorometer (Life Technologies, CA, United States), and about 1 μg of DNA (OD: 1.8-2.0) from each sample was used to construct a library. Sequencing libraries were generated using the NEBNext® Ultra™ DNA Library Prep Kit for Illumina (NEB, United States), and libraries were analyzed using the Agilent 2100 Bioanalyzer and quantified using PCR. The thermal cycling conditions consisted of initial denaturation at 98°C for 30 s; 12 cycles of 98°C for 10 s and 65°C for 75 s; and a final extension of 5 min at 65°C. Clustering of the index-coded samples was performed on a cBot Cluster Generation System. After cluster generation, the library preparations were sequenced on an Illumina platform, and paired-end reads were generated. The bacterial genomic sequences were deposited in the NCBI Sequence Read Archive under accession number PRJNA869027, which can be shared with readers.
Raw sequence pre-processing
The raw data obtained from the Illumina sequencing platform contain a certain percentage of low-quality reads; in order to ensure accurate and reliable results for the subsequent analyses, these low-quality reads were filtered out before downstream processing.
Bioinformatics analyses
Short-read sequencing data were used to identify MGEs and ARGs with the Comprehensive Antibiotic Resistance Database protein homolog model version 1.1.2 (CARD; McArthur et al., 2013) and ResFinder version 2.1. The MGE database is available from https://github.com/KatariinaParnanen/MobileGeneticElementDatabase. Once ARGs and MGEs were identified within assembled contigs, the next step involved identifying which contigs contained both ARGs and MGEs (ARG-carrying contigs, ACCs). Co-occurring placements within a single contig were considered as evidence for putative genomic colocalization (Paetzold et al., 2019). Reads were assembled individually into contigs using MEGAHIT (v1.1.1) with the following parameters: --k-list 39,49,...,129,141 --min-contig-len 1000. The quality of the assemblies was evaluated using QUAST (v5.0.2; Gurevich et al., 2013). The ORFs on ACCs were annotated or retrieved from the CARD database using Bowtie2 (v2.2.9). According to the results of the CARD annotation, MGEs located on ACCs were identified in the MGE database using Bowtie2 (v2.2.9). Annotations were categorized as MGEs based on string matches to a predefined set of keywords (Langmead and Salzberg, 2012).
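As a rough illustration of how the assembly and screening steps above can be chained together, the following Python sketch drives the same tools via subprocess. The sample and index file names are placeholders, and the exact flags should be re-checked against the tool versions quoted in the text.

```python
# Minimal pipeline sketch (assumes megahit, quast.py, and bowtie2 are on PATH).
import subprocess

sample = "DWW_untreated_w1"  # hypothetical sample prefix

# 1) Assemble short reads into contigs with MEGAHIT; --k-list and --min-contig-len
#    mirror the parameters quoted in the text.
subprocess.run([
    "megahit", "-1", f"{sample}_R1.fq.gz", "-2", f"{sample}_R2.fq.gz",
    "--k-list", "39,49,59,69,79,89,99,109,119,129,141",
    "--min-contig-len", "1000", "-o", f"{sample}_megahit",
], check=True)

# 2) Evaluate assembly quality with QUAST.
subprocess.run(["quast.py", f"{sample}_megahit/final.contigs.fa",
                "-o", f"{sample}_quast"], check=True)

# 3) Screen reads against a CARD-derived Bowtie2 index (built beforehand with
#    bowtie2-build); the same pattern applies to the MGE database.
subprocess.run(["bowtie2", "-x", "card_index",
                "-1", f"{sample}_R1.fq.gz", "-2", f"{sample}_R2.fq.gz",
                "-S", f"{sample}_card.sam"], check=True)
```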
Network analyses
Network analyses were performed in R using the vegan and Hmisc packages, and visualizations were produced on the interactive platform Gephi 0.9.2. The ggplot2 and pheatmap packages were used to draw a clustering heatmap of ARG abundance in the samples, and the Hmisc package was used to calculate the correlation matrix for the network map (Feng et al., 2015). Spearman's rank correlations were used to construct co-occurrence networks between ARGs and MGEs, and between ARG subtypes and microbial communities, that occurred in at least 80% of all samples (Karaolia et al., 2021). A correlation between any two items was considered statistically significant if Spearman's correlation coefficient (ρ) was ≥0.7 and the p-value was <0.001.
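A minimal Python analogue of this network construction (the authors worked in R with Hmisc/vegan and visualized in Gephi) might look as follows; the abundance table here is random toy data, and the ρ ≥ 0.7, p < 0.001 thresholds are the ones stated above.

```python
# Build a co-occurrence network from pairwise Spearman correlations.
import numpy as np
from scipy.stats import spearmanr
import networkx as nx

rng = np.random.default_rng(0)
# Toy abundance table: rows = features (ARG subtypes, MGEs, taxa), cols = samples.
features = [f"feat{i}" for i in range(10)]
abund = rng.random((10, 9))  # 9 samples, as in the study

G = nx.Graph()
G.add_nodes_from(features)
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        rho, pval = spearmanr(abund[i], abund[j])
        if rho >= 0.7 and pval < 0.001:
            G.add_edge(features[i], features[j], weight=rho)

# High-degree nodes correspond to central hubs such as ISCrsp1 or tnpA.5/4/11 above.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5])
```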
Results
Diversity of bacterial communities, ARGs, and MGEs in the DWW
Characteristics of the bacterial communities in both the untreated and treated groups of DWW were determined. Alpha diversity, including the Shannon index/diversity, Simpson index/diversity, richness index and evenness index, showed similar trends between the two groups of DWW. Thus, the Shannon diversity was selected as representative of alpha diversity (Leiviska and Risteela, 2022); there was no significant difference between the two groups of DWW samples (p > 0.05). For example, the diversity of bacteria and ARGs was non-significantly higher in the treated sewage than in the untreated group, while the diversity of MGEs was non-significantly lower in the treated than in the untreated sewage (Figures 1A-C).
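For reference, the alpha-diversity indices compared here can be computed from per-taxon counts as in the sketch below; the formulas are the standard Shannon, Gini-Simpson, richness and Pielou definitions, which may differ in detail from the authors' implementation.

```python
# Alpha-diversity indices from a vector of per-taxon counts for one sample.
import numpy as np

def alpha_diversity(counts):
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    shannon = -np.sum(p * np.log(p))        # Shannon index H'
    simpson = 1.0 - np.sum(p ** 2)          # Gini-Simpson index
    richness = len(counts)                  # number of observed taxa
    evenness = shannon / np.log(richness)   # Pielou's evenness
    return shannon, simpson, richness, evenness

print(alpha_diversity([120, 80, 40, 10, 5]))
```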
Beta diversity was used to reveal differences in species complexity. Principal coordinate analysis (PCoA) based on Bray-Curtis distance was used to analyze the beta diversity of OTUs, ARGs and MGEs in both groups of DWW samples, and PERMANOVA was used to check whether there was a significant difference in community composition between the two groups. The results show that there was no significant difference in the composition of bacteria, ARGs or MGEs between the two groups. The PCoA analyses show that the standard treatment of DWW did not appear to have a significant impact on the microbial communities in the waste water (Figures 1D-F).
To determine more precisely the changes in bacteria between the untreated and treated groups of DWW, LEfSe analysis was used to identify taxa with differential abundance (LDA threshold >2). Our results revealed 29 taxa with significant differences between the two groups: 28 were in the untreated samples, mainly Alphaproteobacteria, Gammaproteobacteria, and Actinobacteria, while only Acidovorax avenae, in the class Betaproteobacteria, was enriched in the treated samples.
The relative abundance of 20 ARGs was compared between untreated and treated samples: ARGs of carbapenem and phenicol resistance were reduced, while ARGs of cephalosporin resistance were increased by treatment. The ranges of change were larger than those of the other types of ARGs, although these changes were not significant. Specifically, the subtypes IND, CGB-1, Paer-catB6, and catB8 had higher clearance rates (>70%) through treatment. On the contrary, the subtypes APH(6)-Id, APH(3″)-Ib, AAC(3)-IIa, and AAC(3)-IIc slightly increased in abundance after treatment (Figure 3B; Supplementary Tables 2-5).
Differential ARGs and MGEs were evaluated based on LEfSe analysis. The LDA histograms of ARGs and MGEs are presented in Figures 4 and 5. The lengths of the bars represent the contributions from different features (LDA score). The featured ARGs (LDA > 2) were mainly TEM (84 subtypes), tet39, tet41, AAC-3 (2 subtypes), AAC-2, VanB, and dfrC in the untreated samples, while only dfrA12, dfrA13, and OXA-209 were detected in the treated samples. Among MGEs, only tnpAa (LDA > 2) showed the biggest difference between the treated and untreated samples. The ARG classifications before and after treatment are shown in Figure 5.
Changes in the relative abundance of bacteria, ARGs and MGEs in the untreated and treated dental waste waters, and their clearance rates. Light blue and dark blue columns show the mean relative abundance of bacteria (A), ARGs (B), and MGEs (C) in the two groups; the gray columns show the corresponding clearance rates.
Discussion
Studies on hospital waste waters have shown strong associations between their contaminants (ARGs, MGEs, and antibiotic-resistant microbes) and environmental health problems (Su et al., 2017; Quintela-Baluja et al., 2019; Cai et al., 2021). DWW may contain the same types of contaminants as hospital waste, but also a substantial amount of heavy metals which may influence the interactions among ARGs, MGEs and microbes. Since metals increase the potential for ARG spread via co-selection of ARGs and MGEs, the co-existence of metals and ARGs makes DWW a novel niche for studying AMR emergence and environmental health concerns. However, there have been very limited reports on environmental health hazards associated with DWW, especially ones using approaches such as the resistome and mobilome analyses of our investigation. Using a metagenomics approach, DWW samples from one major dental department were found to have a resistome which included 1,208 types of ARGs belonging to 29 antibiotic types/subtypes. The most abundant were ARGs of multidrug resistance, followed by ARGs against aminoglycosides, cephalosporins, fluoroquinolones, tetracyclines, peptides, cephamycins, phenicols, glycopeptides, carbapenems, diaminopyrimidines, rifamycins, macrolides, penams, MLS, and fosfomycin. Importantly, all of the above resistance was to antibiotics commonly used in clinical practice in the hospital but less frequently used in the dental department where the waste water samples were collected. Our results are intriguing as well as meaningful, because DWW was thought to be rarely involved in the transmission of AMR. Furthermore, a wide variety of ARGs were unexpectedly found in DWW, possibly influenced by the abundant metals. These unique features need further investigation in order to better understand the underlying mechanisms and to develop more effective prevention strategies.
Microbiomes have been considered an important driver of ARG dissemination in the environment (Baym et al., 2016; Jia et al., 2017; Chen et al., 2019; Yu et al., 2020). The ARGs in the DWW may originate from the oral microbiome. Indeed, our DWW samples included 1,514 types of bacteria, 37 types of fungi, 11 types of phages, 7 types of Archaea, and 5 types of viruses. Among them, bacteria were the majority, and the most abundant phylum and genus were Proteobacteria (76.4%) and Pseudomonas (25.67%), particularly Pseudomonas putida, Pseudomonas sp. LTGT-11-2Z, and Pseudomonas aeruginosa. Importantly, these also belong to the important pathogens found in dental clinics.
Linear discriminant analyses of bacterial species. Each column represents a bacterium, and the length of the column corresponds to the LDA value; the larger the LDA value, the larger the difference. The color of the bar corresponds to the grouping of characteristic bacteria.
A previous study revealed that the composition of the microbial community in waste water was associated with 68.2% of the variation in ARGs. Using the association data from our metagenomic analyses, network and binning analyses were conducted as in other reports (Guo et al., 2017; Liu et al., 2019; Sun et al., 2021). Our analyses revealed a complicated co-occurrence network which contained 88 nodes and 101 edges, and which involved 27 bacteria and 95 ARG subtypes. Specifically, Pseudomonas putida, Pseudomonas aeruginosa, Chryseobacterium indologenes, and Sphingomonas laterariae were located at the central hubs of the network and were associated with abundant ARGs against various antibiotics. Some of the ARB associated with resistance to multiple drugs have been reported to contribute to increased morbidity and mortality among patients (Morrison and Zembower, 2020). Therefore, understanding the networks linking ARGs and bacteria would provide valuable information for predicting novel ARB and for designing prevention protocols against emerging AMR (Chng et al., 2020).
The mobilome is defined as all detectable HGT elements within a given metagenomic dataset; these elements include plasmids, integrative conjugative elements (ICEs), transposons, and insertion sequences (Pal et al., 2015; Ju et al., 2019; Yun et al., 2021). In this study, the mobilome of the DWW was composed of 93 MGE subtypes belonging to 8 MGE types. Among them, transposases were the most frequently detected MGEs. In our correlation analyses, most ARGs showed significant correlations with the total abundance of MGEs. The co-occurrence network consisted of 109 nodes and 175 edges. Among all MGEs, ISCrsp1 and tnpA.5/4/11 were located at the central hubs of the network and might serve as links between different ARG types. With the large number of ARGs connected to them, they take on an active role in ARG dissemination. Previous studies indicate that most ARGs co-occurring with metals also co-occurred with MGEs (Song et al., 2017; Zhao et al., 2019).
Heavy metals can promote resistance to antibiotics via cross-resistance (a single genetic unit confers resistance to both metals and antibiotics), co-resistance (both metal resistance genes (MRGs) and ARGs are associated with the same MGEs), or co-regulation (metal and antibiotic resistance share regulatory systems; Baker-Austin et al., 2006; Imran et al., 2019). Moreover, the relative abundances of MRGs and ARGs increase with increasing metal concentrations (Hu et al., 2017). The metal-driven selection of AMR is markedly greater when both MRGs and ARGs are situated on the same MGEs (e.g., plasmids, transposons, and integrons; Di Cesare et al., 2016; Hu et al., 2017). For example, int1 has been closely associated with the MRG czcA, coding for cobalt (Co), zinc (Zn), and cadmium (Cd) resistance, and with beta-lactamase resistance (Stokes et al., 2006; Gillings et al., 2008), indicating that MRGs and ARGs may be transferred simultaneously to host bacteria via int1 in the HGT process (Gupta et al., 2022). Transformation is a main pathway of HGT, which can take place in more than 80 naturally competent bacterial species with distant phylogenetic backgrounds, including human pathogens (Thomas and Nielsen, 2005; Maeusli et al., 2020). Due to the prevalence of extracellular ARGs, antibiotics and naturally competent bacteria, the environmental transformation of ARGs is estimated to be quite frequent and is one of the predominant pathways of AMR spread (de Aldecoa et al., 2017). Ag-, CuO- and ZnO-based nanoparticles/ions can promote the natural transformation of plasmids harboring ARGs, and this promoting effect can occur at clinically relevant concentrations (Hernandez-Sierra et al., 2008; Mohler et al., 2018) or at concentrations realistic for aquatic environments (Brunetti et al., 2015). On the other hand, heavy metal pollution alters bacterial diversity and abundance, as bacterial populations are sensitive to heavy metals (Chen et al., 2018), and the long-term presence of high concentrations of metals in polluted water may increase heavy metal resistance in a variety of bacteria (Gupta et al., 2022).
Linear discriminant analyses of ARGs. Each horizontal column represents an ARG, and the length of the column corresponds to the LDA value; the larger the LDA value, the larger the difference. The color of the bar corresponds to the grouping of featured ARGs.
These observations suggest an underlying metal-driven co-selection process linked with the existence of cross-resistance (Zhao et al., 2019). Furthermore, MGEs are actively involved in the HGT of ARGs within neighboring microbial communities (Gupta et al., 2018). Consequently, when DWW is released into the environment, it becomes very difficult to efficiently eliminate the generation of ARB (Rakita et al., 2020). The main limitation of our study is that metal concentrations were not determined, owing to limitations of our analytical techniques. In principle, the large amounts of heavy metals discharged daily from clinical practice will inevitably lead to high heavy-metal loads in DWW; indeed, previous reports indicate the presence of high levels of Cu, Zn, Hg and MeHg in DWW (Rani et al., 2015). Thus, co-selection and cross-resistance would be expected to occur in DWW. When an ARG and an MRG are found on the same MGE, this physical linkage results in co-resistance. Cross-resistance is another co-selection mechanism, which occurs when a single gene encodes resistance to both antibiotics and metals (Zhao et al., 2019). A better understanding of how metals influence the formation of ARGs and MGEs would provide insights into novel mechanisms of HGT and the emergence of ARB in the future.
The generation and mobility of clinically relevant ARGs in waste waters pose significant risks to human health (Carvalho and Santos, 2016; Liu et al., 2018b; Slizovskiy et al., 2020). Our data clearly show the abundance of ARGs and MGEs in the DWW; therefore, more effective treatment of the waste water is of great importance. Unfortunately, few existing processes have been designed to remove ARGs, and our data, as well as others', indicate that such processes are not highly effective (Mao et al., 2015; Bengtsson-Palme et al., 2016; Di Cesare et al., 2016; Pazda et al., 2019; Vinzelj et al., 2020; Li et al., 2022). The treatment process in our dental department utilized ozone, which reacts directly, or indirectly via a hydroxyl radical mechanism, to render organic and inorganic materials more biodegradable, and which efficiently inactivates a wide range of microorganisms (Tripathi and Tripathi, 2011). Our results show that only a few bacteria had clearance rates higher than 30%; on the contrary, some others increased. Considering that metagenomics detects only bacterial DNA, these data do not represent the activity and integrity of the bacteria. Thus, the clearance or abundance of bacteria should be confirmed via bacterial isolates.
As for the resistome and mobilome, our data show that ozone treatment had no obvious effect on the abundance of either, although a previous study revealed that antibiotic-resistant hosts and resistance genes were significantly inactivated by ozone treatment (Pei et al., 2016), as were a significant portion of MLS and tetracycline genes (Raza et al., 2022). On the other hand, beta-lactam ARGs were increased by UV-, chlorine-, and ozone-based treatment strategies (Guo et al., 2013; Alexander et al., 2016; Ferro et al., 2017; Liu et al., 2018a). These differing observations are likely due to the use of different experimental designs, sample sizes and technologies. More systematic studies are needed to establish the efficacy of waste water treatment protocols.
ARGs and MGEs have been listed as serious and emerging environmental pollutants and health problems arising from hospital waste waters (Gillings et al., 2008; Xu et al., 2016; Chen et al., 2019). Our data strengthen the case for adding DWW to this list of concerns. Our study reveals that DWW harbors a significant diversity of microbes, ARGs, and MGEs, providing a persistent selection pressure (in the presence of heavy metals) and possibly resulting in the occurrence or emergence of novel antimicrobial resistance determinants. Our observations underscore the need for improved disinfection methods and for monitoring the waste prior to disposal. Such efforts would help reduce the spread of drug-resistant bacteria into hospitals and communities.
FIGURE 6
Network analysis of co-occurrence patterns. Each node connection indicates strong (Spearman correlation coefficient ρ > 0.8) and significant (p < 0.01) correlation. The size of each node is proportional to the number of connections. (A) Co-occurrence network of bacteria and ARG. (B) Co-occurrence network of ARGs and MGEs.
Author's note
The proposal was approved by the institutional review board of the First Affiliated Hospital of Shantou University Medical College.
Data availability statement
The datasets presented in this study are deposited in an online repository. The name of the repository and accession number can be found below: NCBI repository - https://www.ncbi.nlm.nih.gov/, PRJNA869027.
Ethics statement
The studies involving human participants were reviewed and approved by First Affiliated Hospital of Shantou University Medical College. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
XJ and WG contributed equally to this manuscript. YX and WG carried out the sample collection, experimental studies, and drafted the manuscript. FYa, QX, LC, FYu, and PY participated in the innovative design of the study. XL, MZ, YY, XG, and MW, performed the statistical analysis. QZ and XJ conceived the study, participated in its design and coordination, and helped to revise the manuscript. All authors contributed to the article and approved the submitted version.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2023-04-21T13:11:43.269Z | 2023-04-20T00:00:00.000 | {
"year": 2023,
"sha1": "d7c07ba30924887d8606e5d683d9bdbd921fa95b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d7c07ba30924887d8606e5d683d9bdbd921fa95b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
1062823 | pes2o/s2orc | v3-fos-license | Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels
We study the problem of achieving average consensus between a group of agents over a network with erasure links. In the context of consensus problems, the unreliability of communication links between nodes has been traditionally modeled by allowing the underlying graph to vary with time. In other words, depending on the realization of the link erasures, the underlying graph at each time instant is assumed to be a subgraph of the original graph. Implicit in this model is the assumption that the erasures are symmetric: if at time t the packet from node i to node j is dropped, the same is true for the packet transmitted from node j to node i. However, in practical wireless communication systems this assumption is unreasonable and, due to the lack of symmetry, standard averaging protocols cannot guarantee that the network will reach consensus to the true average. In this paper we explore the use of channel coding to improve the performance of consensus algorithms. For symmetric erasures, we show that, for certain ranges of the system parameters, repetition codes can speed up the convergence rate. For asymmetric erasures we show that tree codes (which have recently been designed for erasure channels) can be used to simulate the performance of the original "unerased" graph. Thus, unlike conventional consensus methods, we can guarantee convergence to the average in the asymmetric case. The price is a slowdown in the convergence rate, relative to the unerased network, which is still often faster than the convergence rate of conventional consensus algorithms over noisy links.
I. INTRODUCTION
In a network of agents, consensus refers to the process of achieving agreement between the agents in a distributed manner. Consensus problems, and in particular the problem of reaching consensus on the average of the values of the agents, have been around for a while and are often used to serve as a test case for studying distributed computation and decision making between a group of nodes/processors/dynamical systems ([1]-[6]). Most of the work in this area assumes that the agents are connected via a fixed underlying graph or network. In many applications, however, the links in the underlying graph are noisy or unreliable. In the context of consensus problems, the unreliability of communication links between nodes has been traditionally modeled by allowing the underlying graph to vary with time.
In other words, at each time instant some of the links are allowed to be erased and, depending on the realization of the link erasures, the underlying graph at each time instant is assumed to be a subgraph of the original graph. Furthermore, the distributed algorithm for reaching consensus remains unchanged: the same distributed averaging algorithm is used, except that only the information received at each time is used. An important assumption that is implicitly made in this model is that the erasures are symmetric: if at time t the packet from node i to node j is dropped, the same is true for the packet transmitted from node j to node i. In practical wireless communication systems this assumption is patently unreasonable: the additive noises at the two nodes are independent and, furthermore, communication in the two directions occurs at either different times or over different frequency bands. If standard averaging protocols are performed, this loss of symmetry can prevent the network from reaching consensus to the true average (standard consensus protocols require that the "update" matrix be doubly stochastic, something that cannot be guaranteed in the asymmetric case).
The goal of this paper is to explore the use of channel coding to improve the performance of consensus algorithms, especially in the asymmetric case. A major impetus for this work is the recently designed tree codes for erasure channels [7], which, as we demonstrate, resolves the problem encountered in the asymmetric case.
For asymmetric erasures we show that tree codes can be used to simulate the performance of the original unerased graph. Thus, unlike conventional consensus methods, we can guarantee convergence to the average in the asymmetric case. As expected, the price is a slowdown in the convergence rate, relative to the convergence rate of the unerased network. Nonetheless, the slowdown is still often faster than the convergence rate of conventional consensus algorithms over erasure links.
II. PROBLEM SETUP
Consider a group of N nodes denoted by N = {1, 2, ..., N}. We assume that the nodes are connected by an undirected communication graph G = (N, E), often referred to as the interaction graph. Throughout the analysis G is assumed to be time-invariant. Let A = [a_ij] denote the adjacency matrix of G, i.e., a_ij = 1 if (i, j) ∈ E and 0 otherwise. Note that a_ii = 0. Let x^i_0 denote the initial value at node i. The objective is for the nodes to compute the global average r = (1/N) 1^T x_0, where 1 denotes an N-dimensional column of ones and x_0 is the column vector of the x^i_0's. We model the communication links between nodes as packet erasure links. Further, we ignore quantization effects due to packetization; the standard packet sizes used in practice justify this assumption. We denote the event of successful packet reception from node j to node i at time k by the Bernoulli random variable X^{ij}_k, i.e., X^{ij}_k = 1 if the packet is received successfully at time k and 0 otherwise. This notation is summarized in Table I.

TABLE I: NOTATION
- ||y||, y ∈ R^N: sqrt(Σ_{i=1}^N y_i^2), i.e., the two-norm of y
- N = {1, 2, ..., N}: the set of nodes
- G = (N, E): the underlying communication graph
- A = [a_ij]: the adjacency matrix of G, i.e., a_ij = 1 if (i, j) ∈ E and 0 otherwise
- ∆: largest degree of any vertex in G
- x^i_0: the initial value at node i
- x_0: column vector of the x^i_0's
- r: the initial average, i.e., (1/N) 1^T x_0
- p: packet erasure probability
- X^{ij}_k: 1 if the packet sent from node j to node i at time k is successfully received, and 0 otherwise
- ρ(·): spectral radius of a matrix
- A ∘ B: Hadamard (entrywise) product, i.e., (A ∘ B)_ij = A_ij B_ij
- D(p||q): Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q)
III. BACKGROUND
For a fixed communication graph G, a typical algorithm to achieve consensus is of the following form.
x^i_{k+1} = W_ii x^i_k + Σ_{j ∈ N_i} W_ij x^j_k,    (1)

where W obeys the underlying graph, i.e., for i ≠ j, W_ij = 0 if (i, j) ∉ E. In other words, each node updates its value by taking a weighted sum of its own previous value with those of its neighbors. In short, the recursion can be written as

x_{k+1} = W x_k.    (2)

Such an algorithm is said to achieve consensus if

lim_{k→∞} x_k = r 1.    (3)

In such a static setup, where the weights and the underlying interaction graph do not change with time, it is well known that consensus is achieved if and only if

lim_{k→∞} W^k = (1/N) 1 1^T.    (4)

Further, (4) holds if and only if the following conditions hold (e.g., see [8]):
1) W is doubly stochastic, i.e., W 1 = 1 and 1^T W = 1^T;    (5)
2) ρ(W − (1/N) 1 1^T) < 1.

Note that x_k = W^k x_0. Under the above conditions, x_k → (1/N) 1 1^T x_0 = r 1. The convergence rate, µ(W), of the above consensus algorithm is formally defined as

µ(W) = sup_{x_0 ≠ r1} lim_{k→∞} (||x_k − r1|| / ||x_0 − r1||)^{1/k},    (6)

and is given by µ(W) = ρ(W − (1/N) 1 1^T). There is a considerable amount of work that explores different choices of W and how they affect the rate of convergence of the consensus algorithm (e.g., [8]).
For the purpose of this paper, and for ease of exposition, we use a specific but natural choice of W (e.g., [1]): W = I − εL, where L = D − A is the graph Laplacian and ε > 0 is a step size. For such a choice of W, the spectral radius is given by ρ(W − (1/N) 1 1^T) = max{|1 − ε λ_{N−1}(L)|, |1 − ε λ_1(L)|}, where λ_1(L) ≥ ... ≥ λ_{N−1}(L) > λ_N(L) = 0 denote the eigenvalues of L. We state this as a Lemma for later reference.

Lemma 3.1: The convergence rate, µ, of (1) with W = I − εL is

µ = max{|1 − ε λ_{N−1}(L)|, |1 − ε λ_1(L)|}.    (7)

So, conditions 1) and 2) above are satisfied if and only if 0 < ε < 2/λ_1(L). Furthermore, the convergence rate µ is maximized when the two quantities in (7) coincide, i.e., when

ε* = 2 / (λ_1(L) + λ_{N−1}(L)).    (8)

In particular, any ε < 1/∆ will work, where ∆ = max_i ∆_i. We remark that the techniques presented in the paper are independent of the choice of the weight matrix W. Whenever we wish to write closed-form expressions for the convergence rates, we use the specific choice W = I − ε*L for simplicity.
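The following Python sketch illustrates Lemma 3.1 and the choice (8) numerically on a toy graph; the graph and values are arbitrary, and the code simply checks that ρ(W − (1/N)11^T) coincides with the expression in (7) and that the iteration converges to the average.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a toy 4-node connected graph
N = A.shape[0]
L = np.diag(A.sum(axis=1)) - A               # graph Laplacian

lam = np.sort(np.linalg.eigvalsh(L))         # 0 = lam[0] < lam[1] <= ... <= lam[-1]
eps = 2.0 / (lam[-1] + lam[1])               # rate-optimal step size, eqn (8)
W = np.eye(N) - eps * L

J = np.ones((N, N)) / N
mu = max(abs(np.linalg.eigvals(W - J)))      # rho(W - 11^T / N)
print("mu =", mu, " vs  max|1 - eps*lam| =",
      max(abs(1 - eps * lam[1]), abs(1 - eps * lam[-1])))

# Consensus: x_k -> r*1 at rate mu.
x0 = np.array([1.0, 2.0, 3.0, 4.0])
x = x0.copy()
for _ in range(100):
    x = W @ x
print("iterates:", x, " true average:", x0.mean())
```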
IV. COMMUNICATION MODEL
In practice, the communication links between nodes can be unreliable. Conventionally, this has been taken into account by allowing the interaction topology to change with time: at time k, the connectivity between nodes is described by a graph G_k which can vary with time. There is a considerable amount of literature on the problem of achieving consensus under such time-varying interaction topologies ([2], [6], [9]-[11]). We model unreliable communication as packet erasures. So, at each time k, the packet transmitted from node i to, say, node j is either received (X^{ji}_k = 1) or erased (X^{ji}_k = 0). Similarly, the packet sent from node j to node i is either received (X^{ij}_k = 1) or erased (X^{ij}_k = 0). We consider two erasure models:
1) Symmetric: X^{ij}_k = X^{ji}_k, and X^{ij}_k, X^{mℓ}_k are independent of each other whenever {i, j} ≠ {m, ℓ};
2) Asymmetric: X^{ij}_k, X^{mℓ}_k are independent of each other whenever (i, j) ≠ (m, ℓ); in particular, X^{ij}_k and X^{ji}_k are independent.
The literature on consensus over time-varying topologies only captures the symmetric case. Even though consensus under very general conditions has been established, not much appears to be available by way of the rate of convergence.
Under the asymmetric erasure model, the resulting interaction graph is effectively directed: an edge between nodes i and j is replaced by a pair of directed edges, and the effective graph at any time depends on the packets that were erased in that round. Under this setup, we define the adjacency matrix A = [a_ij] and the Laplacian L as follows: a_ij = 1 if (i ← j) ∈ E, and L = D − A with D = diag{∆_i} and ∆_i = Σ_j a_ij. The resulting adjacency matrix and Laplacian are not symmetric in general. As a result, the corresponding update matrices are not doubly stochastic either, i.e., 1^T L ≠ 0^T. When the graph G is directed, Olfati-Saber and Murray (2007) proved that average consensus is achieved using a fixed W = I − εL if and only if the interaction graph G is balanced, i.e., the in-degree of each node is equal to its out-degree. But when the link failures are random, the resulting interaction graph will generally not be balanced at every time step. With coding, however, one can overcome this problem, as we will show later.
V. DOES CODING HELP?
It turns out coding does help. In fact, to study the effect of coding we need to distinguish between the symmetric and asymmetric erasure models. When the erasures are symmetric, i.e., when X ij k = X ji k , this means that node i (respectively, node j) knows what node j (respectively, i) has received. For example, if node i successfully received a packet from node j, it knows that node j also successfully received the packet intended for it; alternately if node i receives an erasure from node j, it knows that the packet intended for node j was also erased. In this case, the links between the different nodes are erasure links with feedback (where the transmitter knows what the receiver receives).
For erasure links with feedback it is well known that the optimal coding scheme is retransmission, i.e., the transmitter retransmits its packet until it is received at the receiver.
When the erasures are not symmetric, one needs a more sophisticated coding scheme (tree codes). We shall further explain this below.
When there are erasures and there is no coding, an iteration of the consensus algorithm at node i is given by

x^i_{k+1} = x^i_k + ε Σ_{j ∈ N_i} a_ij X^{ij}_k (x^j_k − x^i_k).    (9)

The effective adjacency matrix at time k is then

A_k = A ∘ X_k, where X_k = [X^{ij}_k],    (10)

with the corresponding Laplacian L_k = D_k − A_k, so that (9) reads x_{k+1} = (I − ε L_k) x_k.
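A small Monte-Carlo sketch of recursion (9) under the two erasure models is given below. It is only illustrative (toy triangle graph, arbitrary ε and p), but it exhibits the dichotomy described next: with symmetric erasures the sample paths converge to the true average, while with asymmetric erasures they agree on a random value.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle graph
N, eps, p = 3, 0.3, 0.4
x0 = np.array([0.0, 0.0, 9.0])
r = x0.mean()

def run(symmetric, steps=2000):
    x = x0.copy()
    for _ in range(steps):
        X = (rng.random((N, N)) > p).astype(float)   # 1 = packet received
        if symmetric:
            X = np.triu(X, 1) + np.triu(X, 1).T      # force X^{ij} = X^{ji}
        Ak = A * X                                   # effective adjacency (10)
        Lk = np.diag(Ak.sum(axis=1)) - Ak            # effective Laplacian, Lk @ 1 = 0
        x = x - eps * (Lk @ x)                       # recursion (9)
    return x

print("true average:", r)
print("symmetric  :", run(True))    # close to r in every run
print("asymmetric :", run(False))   # consensus, but the value varies run to run
```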
1) Symmetric Erasures:
In this case, note that even without coding the nodes achieve average consensus, albeit at a slower rate depending on the erasure probability, say p. We show that coding (in this case, retransmitting until successful reception) results in faster convergence whenever there exists a constant R > 0 satisfying a condition, made precise in Theorem 8.2, that involves µ from (7), the binary entropy function H(·), and the matrix Γ defined in Lemma 8.1.
2) Asymmetric Erasures: Since X^{ij}_k and X^{ji}_k are independent, they are not equal in general. Note that L_k 1 = 0, but 1^T L_k ≠ 0^T in general, which violates (5). Furthermore, the associated graph is not balanced either: Σ_j a_ij X^{ij}_k ≠ Σ_j a_ji X^{ji}_k in general. In this case, the nodes will not achieve average consensus. However, under very mild conditions, it is well known that the nodes achieve agreement, i.e., x_k → Y 1 where Y is a random variable that does not necessarily concentrate around the initial average r. Tree codes allow us to simulate the original recursion (1), and hence guarantee asymptotic average consensus. Before proceeding further, we provide a brief introduction to tree codes.
VI. BACKGROUND ON TREE CODES
The problem of achieving consensus over erasure channels is an instance of the problem of simulating interactive communication protocols between a network of agents over unreliable links. In the specific case of consensus, the interactive communication protocol amounts to executing (1) at every node. In this context, Rajagopalan et al. in [12] use tree codes to simulate such protocols with exponentially vanishing probability of error in the length of the protocol (e.g., the length of the protocol is m if one needs to execute m iterations of (1)). Another very important instance of such interactive communication problems is that of stabilizing unstable dynamical systems over noisy communication channels (see, e.g., the anytime capacity framework of Sahai and Mitter). Even though the central role of tree codes in such problems has been identified, there were no practical constructions until very recently. In [7], [13], the authors proposed an explicit ensemble of linear tree codes with efficient decoding for the erasure channel. Equipped with this construction of tree codes, we can examine more closely how they can be used for specific problems such as consensus over erasure links, which is what we do here. Before proceeding further, we digress briefly to outline the codes proposed in [13] and list their relevant properties.
A. Linear time-invariant tree codes
A tree code is essentially a semi-infinite causal encoding scheme with a certain Hamming-distance-like property. When decoding using maximum-likelihood decoding over a discrete memoryless channel (DMC), such a tree code guarantees exponentially small error probability with delay. In other words, the probability of incorrectly decoding a symbol (or packet) d time steps in the past decays exponentially in d. If the rate of the code is R < 1, a causal encoding/decoding scheme with such an exponentially decaying probability of error (with exponent β, say) is said to be (R, β)-anytime reliable. We will make this more precise below. We describe the tree codes of [13] in terms of their anytime reliability rather than their distance properties, because ultimately it is the exponent and the rate that matter when communicating over DMCs. Since communication is packetized, let Λ denote the packet length; each packet can be viewed as a symbol from F_2^Λ. Suppose information is generated at the rate of nR packets per time instant at the encoder. Then a rate-R time-invariant causal linear code is given by

c_t = G_0 b_t + G_1 b_{t−1} + ... + G_{t−1} b_1,    (12)

where b_τ ∈ F_2^{nRΛ} collects the information packets at time τ, c_t ∈ F_2^{nΛ} collects the transmitted packets, and the G_i are binary matrices. So, at each time, the encoder receives nR packets and transmits n packets. Note that this is essentially a convolutional code with infinite memory. The decoder, at each time t, generates estimates b̂_{τ|t} for 1 ≤ τ ≤ t, where b̂_{τ|t} denotes the decoder's estimate of b_τ using the channel outputs received up to time t.
Definition 1 (Anytime Reliability): A causal code as in (12) is said to be (R, β)-anytime reliable if

P(b̂_{τ|t} ≠ b_τ) ≤ η 2^{−β(t−τ)}  for all t − τ ≥ d_o,    (13)

for some constant η and some fixed d_o independent of τ, t.
Let p̄ = p^{1/Λ}. In [13], the authors showed that if the entries of the G_i are drawn i.i.d. Bernoulli(1/2), then almost every code in this ensemble is (R, β)-anytime reliable for R < 1 − p̄ and β < nΛE(R), where E(R) is an exponent that depends on the DMC and can be computed explicitly; for the packet erasure channel with erasure probability p, a closed-form expression for E(R) is given in [13]. For the rest of the analysis, we will assume that we are given an (R, β)-anytime reliable code with d_o = 0.
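To make the ensemble concrete, here is a toy, bit-level Python implementation of the time-invariant encoding (12) with i.i.d. Bernoulli(1/2) generator matrices, together with erasure-channel decoding by Gaussian elimination over F_2. All parameters (k, n, p, horizon) are arbitrary illustrative choices, not those of [13].

```python
import numpy as np

rng = np.random.default_rng(2)
k, n, p, T = 2, 4, 0.3, 40               # rate R = k/n = 1/2, horizon T
G = rng.integers(0, 2, size=(T, n, k))   # G[d] multiplies b_{t-d} (i.i.d. Bern(1/2))

b = rng.integers(0, 2, size=(T, k))      # information bit-vectors ("packets")
c = np.zeros((T, n), dtype=int)
for t in range(T):
    for d in range(t + 1):
        c[t] = (c[t] + G[d] @ b[t - d]) % 2   # c_t = sum_d G_d b_{t-d} over F_2

erased = rng.random(T) < p               # whole-output-packet erasures

# Stack the received packets into one F_2 linear system H x = y, x = (b_0,...).
rows, rhs = [], []
for t in range(T):
    if erased[t]:
        continue
    Ht = np.zeros((n, T * k), dtype=int)
    for d in range(t + 1):
        Ht[:, (t - d) * k:(t - d) * k + k] = G[d]
    rows.append(Ht)
    rhs.append(c[t])
H, y = np.vstack(rows), np.concatenate(rhs)

# Gauss-Jordan elimination over F_2; for erasure channels, ML decoding reduces to
# this, and the decoder always knows which bits are already uniquely determined.
M = np.concatenate([H, y[:, None]], axis=1)
r, pivots = 0, {}
for col in range(T * k):
    pv = next((i for i in range(r, M.shape[0]) if M[i, col]), None)
    if pv is None:
        continue
    M[[r, pv]] = M[[pv, r]]
    for i in range(M.shape[0]):
        if i != r and M[i, col]:
            M[i] = (M[i] + M[r]) % 2
    pivots[col] = r
    r += 1

solved = np.zeros(T * k, dtype=bool)
truth = b.reshape(-1)
for col, row in pivots.items():
    if M[row, :T * k].sum() == 1:        # row reads "x_col = rhs": decoded
        solved[col] = True
        assert M[row, -1] == truth[col]  # a determined bit always matches the truth
print(f"{int(solved.sum())} of {T * k} information bits uniquely decoded")
```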
VII. MAIN RESULTS
We present the results separately for the case of symmetric and asymmetric erasures.
A. Symmetric Link Failures
Note that the underlying interaction graph G is fixed, while each link is modeled as a packet erasure channel. The graph G is assumed to be connected and the links are undirected. If all agents know that link failures are symmetric, then each link is effectively a packet erasure channel with feedback: in each communication round, node i knows that its packet transmission to node j was erased if it receives an erasure from node j in the same round. Recall that the consensus algorithm in the case where there are no erasures is given by

x_{k+1} = (I − ε L) x_k.    (16)

In particular, node i performs the iteration

x^i_{k+1} = x^i_k + ε Σ_{j ∈ N_i} a_ij (x^j_k − x^i_k).    (17)

We now define the communication protocol.
1) The Protocol: A communication round is defined as one in which every node in the graph transmits one packet to each of its neighbors. The nodes are said to have completed m iterations if all of them have successfully computed m iterations of (17); note that this will in general take more than m communication rounds. Since each link is effectively an erasure channel with feedback, the optimal communication scheme at each node is to retransmit until successful reception. We describe this more precisely as follows. Let e denote an erasure. For each edge j → i, we associate an input queue Q^{ij}_{in} and an output queue Q^{ij}_{out}. Q^{ij}_{in,t} contains the packets transmitted by node j to node i up to and including communication round t, while Q^{ij}_{out,t} contains the packets received by node i from node j.
Consider an instance of the queues at node i, and suppose its only neighbors are nodes 1 and 2. In round 2, node i receives an erasure from node 2 and infers that its own transmission to node 2 must also have been erased. As a result, node i re-transmits x^i_1 to node 2 in round 3. Similarly, in round 3 node i learns that its transmission to node 1 was erased. Since the erased symbol was only a 'wait', node i does not re-transmit it in round 4. Instead, it checks whether it can perform another iteration of (17). In this case it can, and hence it transmits the new data x^i_2 to node 1. In round 5, node i does not have any new data to transmit to node 2 and hence transmits a 'wait'.
Also let b^{ij}_t denote the packet transmitted by node j to node i in communication round t, and let z^{ij}_t denote the received packet. Then

z^{ij}_t = b^{ij}_t if X^{ij}_t = 1, and z^{ij}_t = e otherwise.

Now if z^{ji}_t = e, then node j infers that b^{ij}_t was erased and hence retransmits it in the next communication round, unless b^{ij}_t was a 'wait' symbol, which we describe as follows. We say that a node has 'new data' if it could compute one or more new iterations of (17). During communication rounds in which node j does not have any new data to transmit, it transmits a wait symbol, which we denote by w. The transmission from node i to node j in round t is described in Algorithm 1. Let N_i denote the neighbors of node i, i.e., N_i = {j | a_ij = 1}.
The algorithm is illustrated through an example in Fig 1. Using such an algorithm, we have the following bounds on the convergence rate of average consensus.
Theorem 7.1: Let P_{M,R} denote the probability that the network requires more than M communication rounds to compute MR iterations of (17). Further suppose that the packet erasure probability is p and that erasures are symmetric. Then

P_{M,R} ≤ N ((1 + ∆) 2^{H(R)} p^{1−R})^M.    (19)

In particular, whenever R satisfies

(1 − R) log(1/p) > log(1 + ∆) + H(R),    (20)

P_{M,R} decays exponentially fast in M. Recall that N is the number of nodes and ∆ the maximum degree.
Proof: See Appendix C.
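The following Python sketch simulates the retransmit/wait semantics of Algorithm 1 on a toy 4-cycle and reports the realized protocol rate min_v n_v(t)/t, which can be compared with the rate guaranteed by Theorem 7.1. The queues are abstracted into the counters n_vu(t) used in the analysis, and it is assumed, as the protocol implicitly does, that initial values are known to neighbors before round 1.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], float)   # 4-cycle
N, eps, p, T = 4, 0.3, 0.2, 500
nbrs = [np.flatnonzero(A[i]) for i in range(N)]
x = [[v] for v in [1.0, 2.0, 3.0, 4.0]]   # x[i][m] = node i's iterate m
# n[v][u] = latest iterate of u known at v (the n_vu(t) of the analysis)
n = [{u: 0 for u in nbrs[v]} for v in range(N)]

for t in range(1, T + 1):
    X = (rng.random((N, N)) > p)
    X = np.triu(X, 1) | np.triu(X, 1).T            # symmetric erasures
    for v in range(N):
        for u in nbrs[v]:
            # With feedback, u knows n[v][u]; it sends iterate n[v][u]+1 if it
            # has computed one, otherwise a 'wait' (which carries no new data).
            if X[v, u] and n[v][u] < len(x[u]) - 1:
                n[v][u] += 1
    for v in range(N):
        m = len(x[v]) - 1
        if all(n[v][u] >= m for u in nbrs[v]):     # all inputs for iteration m+1
            x[v].append(x[v][m] + eps * sum(x[u][m] - x[v][m] for u in nbrs[v]))

rate = min(len(xv) - 1 for xv in x) / T
print("realized rate R ~", rate)
print("current values:", [xv[-1] for xv in x], " true average = 2.5")
```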
Using Theorem 7.1 and (8), we can determine the convergence rate of Algorithm 1, µ^s_c; it is given by

µ^s_c = µ^R,    (21)

where R is the largest rate such that (20) is satisfied and µ is defined in (7). The superscript and subscript in µ^s_c denote that it is the convergence rate with coding under symmetric erasures. We will compare this with the convergence rate without coding in Section VIII. Let

R(p) = sup{R ≥ 0 : (20) holds}.

Then it is easy to see that R(p) > 0 if and only if p < 1/(1 + ∆). This means that the proof technique used here does not allow us to prove average consensus if the erasure probability is larger than 1/(1 + ∆). We can, however, overcome this; in fact, one can show that average consensus is achieved for all 0 ≤ p < 1. We state the result as follows.
Theorem 7.2: Let P_{M,R} denote the probability that the network requires more than M communication rounds to compute MR iterations of (17). Further suppose that the packet erasure probability is p and that erasures are symmetric. Then P_{M,R} admits an explicit upper bound (given in Appendix E) such that, for every 0 ≤ p < 1, there is an R > 0 for which P_{M,R} decays exponentially fast in M. Recall that N is the number of nodes and |E| is the number of edges in the network.
Proof: See Appendix E. Combining Theorems 7.1 and 7.2, we conclude that the convergence rate of Algorithm 1, µ^s_c, is given by µ^s_c = µ^R, where R is the largest rate guaranteed by Theorems 7.1 and 7.2.
B. Asymmetric Link Failures and Tree Codes
Now suppose packet erasures are not symmetric. Since information at each node is generated one packet at a time, and since the unit of communication is a packet, the rate of the code is R = 1/n. Here, one round of communication corresponds to every pair of neighbors exchanging n packets each. In any communication round, node i does not know which of the n transmitted packets have been received by each of its neighbors. In this case, we use the anytime reliable codes described in Section VI-A.
1) The protocol: Consider the pair of nodes i, j and let b^{ji}_t denote the t-th information packet destined to node j from node i. Then the data actually transmitted by node i is given by the causal encoding (12) applied to the stream {b^{ji}_τ}_{τ≤t}. Since the code is (R, β)-anytime reliable, we have

P(b̂^{ji}_{τ|t} ≠ b^{ji}_τ) ≤ η 2^{−β(t−τ)}.

Since the channel is an erasure channel, the maximum-likelihood decoder amounts to solving linear equations. This can be done recursively and efficiently, as shown in [13]. Whenever the equations admit a unique solution for some of the variables, those variables are correctly decoded; we leave the remaining variables as erasures and do not venture a guess about their value. As a result, the decoder always knows when it has decoded something correctly.
As in the case of repetition coding for symmetric erasures, for each link j → i we associate two queues, Q^{ij}_{in,t} and Q^{ij}_{out,t}, although with a slightly different meaning. The queue Q^{ij}_{in,t} contains all the information packets transmitted by node j to node i up to round t; in other words, Q^{ij}_{in,t} = {b^{ij}_τ}_{τ≤t}. On the other hand, Q^{ij}_{out,t} contains node i's estimates of the information packets transmitted by node j so far, i.e., Q^{ij}_{out,t} = {b̂^{ij}_{τ|t}}_{τ≤t}. Also, it will be evident from Algorithm 2 that Q^{ji}_{in,t} = Q^{j′i}_{in,t} for all j, j′ ∈ N_i.
With this setup, the mechanics of the protocol are very simple and are outlined in Algorithm 2. We can now compute the convergence rate of average consensus achieved by the above algorithm, and we state it as the following theorem.
Algorithm 2 (transmission from node i in round t):
1: For each j ∈ N_i, compute ℓ_{t,j} = max{ℓ | x^j_ℓ ∈ Q^{ij}_{out,t}} and let ℓ_t = min_{j∈N_i} ℓ_{t,j}.
2: Also compute m_{t,j} = max{m | x^i_m ∈ Q^{ji}_{in,t}} and let m_t = min_{j∈N_i} m_{t,j}.
3: if ℓ_t + 1 > m_{t−1} then
4:   Compute x^i_{m_{t−1}+1} using (17) and set b^{ji}_t = x^i_{m_{t−1}+1} for all j ∈ N_i.
5: else
6:   Set b^{ji}_t = w for all j ∈ N_i.
7: end if

Theorem 7.3: Let P_{M,R} denote the probability that the network requires more than M communication rounds to compute MR iterations of (17). Further suppose that the packet erasure probability is p, that erasures are asymmetric, and that each node uses an (R, β)-anytime reliable code. Then P_{M,R} admits a bound analogous to (19), with the decoding exponent β playing the role of log(1/p); in particular, whenever R satisfies the corresponding condition (28), P_{M,R} decays exponentially fast in M.
Proof: See Appendix D
As in the symmetric case, the convergence rate µ^a_c using tree codes is given by µ^a_c = µ^R, where R is the largest rate such that (28) is satisfied and µ is as in (7). Let R(β) = sup{R ≥ 0 : (28) holds}. Then, much as in Section VII-A, it is easy to see that R(β) > 0 if and only if β > 2 log(1 + ∆).
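Algorithm 2's control flow can be sketched as below. The tree code itself is replaced by a stand-in: each information packet from u is decoded at v after a random delay with a geometric tail, which is qualitatively what (13) guarantees (a decoded prefix that grows over time); everything else follows steps 1-7 above.

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
N, eps, T, q = 3, 0.3, 400, 0.5       # q: per-round decoding success (stand-in)
nbrs = [np.flatnonzero(A[i]) for i in range(N)]
x = [[v] for v in [0.0, 3.0, 6.0]]    # x[i][m] = iterate m at node i
known = [{u: 0 for u in nbrs[v]} for v in range(N)]  # iterates of u decoded at v

for t in range(T):
    # Stand-in decoder: each yet-undecoded iterate of u becomes available at v
    # with probability q per round, in order (a decoded prefix, as with tree codes).
    for v in range(N):
        for u in nbrs[v]:
            while known[v][u] < len(x[u]) - 1 and rng.random() < q:
                known[v][u] += 1
    # Steps 1-7: compute a new iterate of (17) as soon as all inputs are decoded,
    # otherwise (implicitly) send a wait.
    for v in range(N):
        m = len(x[v]) - 1                        # m_{t-1} for node v
        ell = min(known[v][u] for u in nbrs[v])  # ell_t for node v
        if ell + 1 > m:                          # i.e. ell >= m: inputs available
            x[v].append(x[v][m] + eps * sum(x[u][m] - x[v][m] for u in nbrs[v]))

print("iterations completed:", [len(xv) - 1 for xv in x])
print("values:", [xv[-1] for xv in x], " true average = 3.0")
```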
VIII. DISCUSSION -CODING VS NO CODING
When there is no coding, the consensus recursion is given by (9). We begin with the case of symmetric erasures.
A. Symmetric Erasures
The convergence rate of (9) when erasures are symmetric is given by the following lemma.

Lemma 8.1: The convergence rate of (9) under symmetric erasures is µ^s_nc = sqrt(λ_2(Γ_s)), where Γ_s = E[(I − εL_0) ⊗ (I − εL_0)] is a deterministic matrix that is a function of ε, p, and L, and can be computed explicitly in closed form. The subscript nc indicates that there is no coding, and the subscript s in Γ_s indicates that the erasures are symmetric.

Proof: See Appendix A.
Consider the case of coding in the presence of symmetric erasures. From Theorem 7.1 and (8), it is easy to see that the convergence rate is given by µ^s_c in (21). So, whenever µ^s_c < µ^s_nc, coding offers an advantage. We state this as a theorem.

Theorem 8.2: In the case of symmetric erasures, coding offers faster convergence than (9) whenever there is an R > 0 satisfying (20) such that µ^R < sqrt(λ_2(Γ_s)).
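The comparison in Theorem 8.2 can be explored numerically: the sketch below estimates Γ_s by Monte-Carlo averaging of (I − εL_0) ⊗ (I − εL_0) over symmetric erasure patterns and compares sqrt(λ_2(Γ_s)) with the coded per-round rate µ^R for a few illustrative values of R (a rigorous comparison would take R from condition (20)).

```python
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
N, eps, p = 3, 0.3, 0.2

def laplacian(Ak):
    return np.diag(Ak.sum(axis=1)) - Ak

# Gamma_s = E[(I - eps*L_0) (x) (I - eps*L_0)] over symmetric erasure patterns.
Gam = np.zeros((N * N, N * N))
trials = 20000
for _ in range(trials):
    X = (rng.random((N, N)) > p).astype(float)
    X = np.triu(X, 1) + np.triu(X, 1).T
    W0 = np.eye(N) - eps * laplacian(A * X)
    Gam += np.kron(W0, W0)
Gam /= trials

eig = np.sort(np.abs(np.linalg.eigvals(Gam)))[::-1]   # eig[0] ~ 1 (doubly stochastic)
mu_nc = np.sqrt(eig[1])                               # uncoded rate, Lemma 8.1

L = laplacian(A)
lam = np.sort(np.linalg.eigvalsh(L))
mu = max(abs(1 - eps * lam[1]), abs(1 - eps * lam[-1]))   # erasure-free rate (7)
for R in (0.25, 0.5, 0.75):
    print(f"R={R}: coded rate mu**R = {mu**R:.4f}  vs  uncoded {mu_nc:.4f}")
```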
B. Asymmetric Erasures
As mentioned in Section V, when link failures are asymmetric, the algorithm of (9) does not achieve average consensus. Nevertheless the nodes reach agreement and the rate of convergence to agreement has been characterized in [14]. Here, we characterize the mean squared error of the state from average consensus.
Here I is the N × N identity matrix, and Γ_a is a deterministic matrix that is a function of ε, p, and L, and can be computed explicitly in closed form.
Proof: See Appendix B.
Note that 1^T Γ_a = 1^T but Γ_a 1 ≠ 1. Let c, normalized so that 1^T c = N, be the right eigenvector of Γ_a corresponding to the eigenvalue 1, i.e., Γ_a c = c. Then it is easy to see that lim_{k→∞} Γ_a^k = (1/N) c 1^T. Using this in (34), we find that the limiting mean squared error is nonzero in general. This proves that one cannot achieve average consensus without coding when link failures are asymmetric. A major benefit of using tree codes in such cases is therefore the guarantee of average consensus. Furthermore, tree codes can be used to implement any distributed protocol over a network with erasure links.
APPENDIX
A. Proof of Lemma 8.1

Note that L_k 1 = 0 whether or not the erasures are symmetric. Recall that r = (1/N) 1^T x_0; let Y_k = x_k − r1 and P_k = E[Y_k Y_k^T], so that E||Y_k||^2 = tr(P_k). Recall that the erasure process is independent over time and across links. Then we have

P_{k+1} = E[(I − εL_0) P_k (I − εL_0)^T].    (38)

Since erasures are symmetric, L_0^T = L_0. Furthermore, we have

vec(P_k) = Γ_s^k vec(I),    (39)

where I is an N × N identity matrix and Γ_s = E[(I − εL_0) ⊗ (I − εL_0)]. Putting (38) and (39) together, we see that the rate of convergence of the consensus algorithm in the absence of coding is clearly determined by Γ_s. Observe that Γ_s is doubly stochastic, i.e., 1^T Γ_s = 1^T and Γ_s 1 = 1. It has one eigenvalue at 1 and all others are strictly smaller than 1 in magnitude. Let λ_2(Γ_s) denote the second largest eigenvalue in magnitude. Then clearly E||Y_k||^2 decays as λ_2(Γ_s)^k, and the rate of convergence of ||Y_k|| is given by sqrt(λ_2(Γ_s)). Recall that the random variable X^{ij}_0 is defined as X^{ij}_0 = 0 if the link j → i is erased at time 0 and X^{ij}_0 = 1 otherwise. For brevity, we will write X^{ij} instead of X^{ij}_0.
Then it is easy to verify that one can write L_0 as

L_0 = Σ_{(i,j)} a_ij X^{ij} e_i (e_i − e_j)^T,    (43)

where e_i is the i-th unit vector. In particular, the underlying Laplacian in the absence of any erasures can be written as L = Σ_{(i,j)} a_ij e_i (e_i − e_j)^T. For any x ∈ R^N, the bounds (44) and (45) on the relevant quadratic forms follow, where the last inequality follows from the fact that ρ(I − εL) = 1. Combining (44) and (45) completes the argument.

C. Proof of Theorem 7.1

We will begin by identifying the state of the protocol in Algorithm 1. For the sake of clarity, we will refer to nodes using letters u, v, etc., instead of i, j. Recall that N_v denotes the set of neighbors of v. For each node v at time t (i.e., after round t), we associate |N_v| variables {n_vu(t)}_{u∈N_v}, where n_vu(t) denotes the latest iterate of node u that is available to node v at time t; in other words, n_vu(t) is the largest integer τ such that x^u_τ is available to node v. We further define

n_v(t) = 1 + min_{u∈N_v} n_vu(t).

Note that n_v(t) is the latest iteration of (17) that node v can compute at time t; in other words, node v has computed {x^v_τ}_{τ≤n_v(t)} and no more. With this setup, it is clear that Algorithm 1 will have executed min_v n_v(t) iterations of (17) by time t. Note that the rate of the protocol is then given by R = lim_{t→∞} min_v n_v(t)/t, which is a random variable for a specific run of the protocol. We now state the evolution of n_vu(t) as a lemma.
Lemma 1.1: Define an indicator variable that equals 1 if the edge (v, u) is erased in round t and 0 otherwise. Then the evolution of n_vu(t) is given by a simple recursion. Proof: The proof follows from the following simple observations:
1) n_vu(t) increases by at most 1 in each step.
2) In any round, if node u receives an erasure on a link, it will infer that its transmission on that link was also erased. As a result, node u has knowledge of n_vu(t) at all times t.
3) In round t + 1, if either the edge (v, u) is erased or node u sends a 'wait' w to node v, then n_vu(t + 1) = n_vu(t).
4) Node u sends a 'wait' w to node v in round t + 1 if and only if n_vu(t) = n_u(t).
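The dynamics above can be simulated directly. The toy Python reconstruction below (ours; the exact update rule, the initialization, and the convention n_v(t) = min_{u∈N_v} n_vu(t) + 1 are our reading of the text, not the paper's code) estimates the protocol rate R on a small ring and compares it with the (1 − p)^{|E|} heuristic of Theorem 7.2 discussed later.

```python
import numpy as np

# Toy simulation of the wait protocol on a ring with i.i.d. symmetric edge
# erasures: nvu[(v, u)] tracks the latest iterate of u available at v, and
# node v can compute iteration min_u nvu[(v, u)] + 1.
rng = np.random.default_rng(1)
N, p, T = 6, 0.1, 5000
nbrs = {v: [(v - 1) % N, (v + 1) % N] for v in range(N)}
edges = [(v, u) for v in range(N) for u in nbrs[v] if v < u]
nvu = {(v, u): 0 for v in range(N) for u in nbrs[v]}

def n_node(v):
    # latest iteration of (17) that node v can compute
    return min(nvu[(v, u)] for u in nbrs[v]) + 1

for t in range(T):
    erased = {e for e in edges if rng.random() < p}   # symmetric erasures
    old_n = {v: n_node(v) for v in range(N)}
    for (v, u) in list(nvu):
        e = (min(v, u), max(v, u))
        # u sends its next iterate unless it has nothing new (a 'wait')
        if e not in erased and nvu[(v, u)] < old_n[u]:
            nvu[(v, u)] += 1

rate = min(n_node(v) for v in range(N)) / T
print("empirical rate R:      ", rate)
print("(1 - p)^|E| heuristic: ", (1 - p) ** len(edges))
```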
We say that round t got wasted at node v if n_v(t − 1) = n_v(t), i.e., node v could not perform a new iteration of (17) at time t. The proof idea is as follows: for each node v at time t, we will argue that there exists a sequence of t edges of which at least t − n_v(t) edges have failed. We then union bound over all possible choices of such t edges.
Before proceeding further, we define an object which we call the 'trellis', for lack of a better word. Associated to any undirected graph G = (V, E) represented by the adjacency matrix A, we define an infinite trellis T(G) = (V_T, E_T) as follows. Associated to each node v in V, there are countably infinitely many copies {v_k}_{k≥0} in V_T. Let I denote a |V| × |V| identity matrix. The nodes V_T and the edges E_T of T(G) are then defined accordingly; the edges in E_T are all undirected, i.e., (u_0, v_1) and (v_1, u_0) are treated as a single edge. The trellis for an example network is given in Fig 2. Definition 2 (time-like): Any sequence of edges (or a path), S_t, in the trellis T(G) of the type (u^(t)_t, u^(t−1)_{t−1}), (u^(t−1)_{t−1}, u^(t−2)_{t−2}), ..., (u^(1)_1, u^(0)_0) will be called 'time-like', ending in node u^(t)_t = v_t. An edge (u^(τ)_τ, u^(τ−1)_{τ−1}) ∈ E_T is said to be erased if there was an erasure on the edge (u^(τ), u^(τ−1)) ∈ E in round τ.
The time-like sequence S_t is said to have m erasures if m of the t edges in S_t were erased. We are now ready to state the key Lemma (Lemma 1.2 below) from which the proof of Theorem 7.1 follows easily: for every node v and time t, there is a time-like sequence ending in v_t with at least t − n_v(t) erasures. Doing a union bound over all these sequences yields the bound on P_{R,t} claimed in Theorem 7.1, where P_{R,t} is the probability that the network performed Rt or fewer iterations of (17) in t rounds and N is the number of nodes in the network. We will now prove the Lemma.
Proof of Lemma 1.2:
For ease of presentation, we will introduce the following notation in the rest of the proof.
a) We will refer to any time-like sequence of τ edges ending in v_τ that has τ − n_v(τ) or more erasures as a "witness" at v_τ.
The Lemma claims that there is a witness at v_t for all v ∈ V and t ≥ 0. We will prove this by induction. The hypothesis is clearly true for t = 0. Suppose it is true for all nodes v ∈ V and all τ ≤ t − 1. Recall that we say that round t at node v is wasted only if n_v(t − 1) = n_v(t). There are two broad cases: round t gets wasted at node v or it does not.
1) Suppose round t is not wasted, i.e., n_v(t) = n_v(t − 1) + 1. Then by the induction hypothesis, there is a witness at v_{t−1}. Appending the edge (v_{t−1}, v_t) to this witness gives us a witness for v_t.
2) It remains to consider the case where round t gets wasted at node v, i.e., n_v(t) = n_v(t − 1).
We will divide case 2) above into two sub-cases: a) there exists a u ∈ N_v such that n_u(t − 1) = n_v(t − 1) − 1, and b) such a neighbor does not exist. a) If there is a neighbor u ∈ N_v such that n_u(t − 1) = n_v(t − 1) − 1, then the witness for v_t is obtained by appending the edge (v_t, u_{t−1}) to the witness at u_{t−1}.
b) Here, n_u(t − 1) ≥ n_v(t − 1) for all u ∈ N_v. Since |n_u(τ) − n_v(τ)| ≤ 1 for any τ, we can partition the neighbors N_v into Y = {u ∈ N_v : n_u(t − 1) = n_v(t − 1)} and Z = {u ∈ N_v : n_u(t − 1) = n_v(t − 1) + 1}. Let B denote the set of bottleneck neighbors, i.e., those u ∈ N_v with n_vu(t − 1) = n_v(t − 1) − 1, from which v is still missing the iterate it needs. We will further divide case b) above into two sub-cases: i) B ∩ Z = ∅, i.e., there are no bottlenecks in the set of neighbors Z. Observe that a bottleneck neighbor will not send a wait w. So, the data transmitted by node u to node v in round t is x^u_{n_u(t)}, i.e., iteration n_u(t) of (17). Since round t at node v got wasted, at least one of the edges to a bottleneck neighbor must have been erased in round t. Otherwise, node v would have been able to compute a new iteration of (17) and the round would not have been wasted. Suppose the erasure happened on edge (v, u) for some u ∈ B ∩ Y. Then appending edge (v_t, u_{t−1}) to the witness at u_{t−1} will give us the witness at v_t.
ii) B ∩ Z ≠ ∅, i.e., there is a neighbor u ∈ B ∩ Z such that n_u(t − 1) = n_v(t − 1) + 1 and n_vu(t − 1) = n_v(t − 1) − 1 = n_u(t − 1) − 2. Furthermore, there must be a neighbor u ∈ B ∩ Z whose transmission to v in round t must have been erased (else there must be an edge to B ∩ Y which was erased, and we revert back to case i)). Note that n_u(t − 2) ≥ n_v(t − 1). It follows from Lemma 1.1 that node u must have transmitted iteration n_v(t − 1) in round t − 1 as well as round t and that both were erased, since n_vu(t) = n_vu(t − 1) = n_v(t − 1) − 1. Since this erasure model considers symmetric erasures, the transmission from v to u in round t − 2 is also erased. Appending the edges (v_t, u_{t−1}) and (u_{t−1}, v_{t−2}) to the witness at v_{t−2} gives us the witness for v_t.
This completes the proof of Lemma 1.2.
D. Proof of Theorem 7.3
We will begin the proof with three preliminary results before moving to the main argument. Recall that an (R, β)-anytime reliable code is one that guarantees P(b̂_{τ|t} ≠ b_τ) ≤ 2^{−β(t−τ+1)}, where b̂_{τ|t} denotes the estimate of b_τ at decoding instant t. For such a code that is linear, we can say the following.
Lemma 1.3: For i = 1, 2, let Y(τ_i, τ'_i) denote the event that at decoding instant τ_i the earliest erroneous position is τ'_i, and suppose the intervals [τ'_1, τ_1] and [τ'_2, τ_2] are disjoint. Then P(Y(τ_1, τ'_1) ∩ Y(τ_2, τ'_2)) ≤ 2^{−β(τ_1−τ'_1+1)} 2^{−β(τ_2−τ'_2+1)}. The probability above is only over the randomness of the channel.
Proof: Without loss of generality, assume that τ_2 > τ_1. Due to linearity, we can assume without losing generality that the input b_i = 0 for i ≥ 0. Let E_i denote the portion of the erasure pattern introduced by the channel during the interval [τ'_i, τ_i] that resulted in the event Y(τ_i, τ'_i). Then, we claim that P(E_i) ≤ 2^{−β(τ_i−τ'_i+1)}. This follows from the simple observation that if the encoder input in the first τ_i − τ'_i + 1 instants is all zero and the corresponding channel erasure pattern is E_i, then Y(τ_i, τ'_i) implies that at the decoding instant τ_i − τ'_i, the earliest error would have happened at time 0, the probability of which is at most 2^{−β(τ_i−τ'_i+1)}.
Since the intervals [τ'_1, τ_1] and [τ'_2, τ_2] are disjoint, the erasure patterns E_1 and E_2 correspond to independent channel uses. So we have P(E_1 ∩ E_2) = P(E_1)P(E_2) ≤ 2^{−β(τ_1−τ'_1+1)} 2^{−β(τ_2−τ'_2+1)}. The result now follows.
For ease of presentation, we introduce the following definition: the error interval at decoding instant τ on a given edge is the interval [τ', τ], where τ' is the earliest position that is in error at decoding instant τ. Before proceeding with the rest of the proof, we will recall a Lemma from [15] and state it here for easy reference.
Lemma 1.4 (Lemma 7, [15]): In any finite set of intervals on the real line whose union J is of total length s, there is a subset of disjoint intervals whose union is of total length at least s/2. We will now state a version of Lemma 1.3 for the case when the error intervals are not necessarily disjoint.
Lemma 1.5: If {b_i}_{i≥0} are encoded and decoded using a causal linear (R, β)-anytime reliable code, then the bound of Lemma 1.3 continues to hold with β replaced by β/2, even when the error intervals are not necessarily disjoint. Proof: The proof follows directly from Lemma 1.3 and Lemma 1.4.
We use an argument very similar to the one used in proving Theorem 7.1. We will define a directed trellis exactly the same way we defined T(G), except that the edges E_T are now directed and point forward in time, i.e., downwards with respect to Fig 2(b). In other words, for neighbors u, v ∈ V, the edge (v_t, u_{t−1}) is directed from node u_{t−1} to node v_t and represents the transmission from u to v in round t.
Recall the definition of a time-like sequence of edges, S_t, from Definition 2. Let B_τ be the error interval at decoding instant τ on the edge (u^(τ), u^(τ−1)) ∈ E; we alternately call B_τ the error interval on the edge (u^(τ)_τ, u^(τ−1)_{τ−1}) ∈ E_T. We then define |S_t| as the total length of the union of the error intervals, taken separately over each distinct edge of E appearing in S_t. This definition is motivated by the fact that the packet erasure events during an error interval on a given edge, say (v, u) ∈ E, are independent of those in an error interval on a different edge (v', u') ≠ (v, u) in any round of communication. So, intuitively, |S_t| captures the number of independent "bad" channel realizations seen by the edges in S_t. In what follows, we will show a connection between the number of wasted communication rounds at a node v and the number |S_t|.
A witness at node v_t is a time-like sequence of edges S_t such that |S_t| ≥ t − n_v(t). In Lemma 1.6, we will demonstrate a witness for v_t for all v ∈ V and t ≥ 0. The technique is very similar to the proof of Lemma 1.2 and hence we will only provide a sketch of the proof. After that, we will use Lemma 1.5 to prove that P(t − n_v(t) ≥ m) ≤ (∆ + 1)^t C(t, m) 2^{−mβ/2} for any v ∈ V, where C(t, m) is the binomial coefficient. Lemma 1.6: If after t rounds of communication, node v has performed n_v(t) iterations of (17), then there exists a time-like sequence S_t of t edges in E_T ending in node v_t with |S_t| > t − n_v(t). Proof: The proof is obtained by repeating the same argument as in the proof of Lemma 1.2 with the word 'erasure' replaced with the words 'tree code error'. The only case that needs a little bit of clarification is case 2-b-ii, i.e., round t is wasted at node v and B ∩ Z ≠ ∅, where B and Z retain the same meaning as before. In this case, like before, there is a neighbor u ∈ N_v such that n_vu(t) = n_u(t − 1) − 2. From Algorithm 2, it is clear that the information x^u_{n_u(t−1)−1} was encoded and transmitted by node u to node v in round t − 1 or before. Therefore, the error interval on the edge (v_t, u_{t−1}) ∈ E_T contains the interval [t − 1, t]. Let the witness at node u_{t−1} be S_{t−1,u}.
Append the edge (v_t, u_{t−1}) to S_{t−1,u} to get a new time-like sequence, which we call S_{t,v}. We claim that S_{t,v} is a witness at v_t. The proof of this claim follows from the following observations: 1) When applying Lemma 1.5, we only need to care about error intervals on the same edge at different times. 2) The edge (v, u) appears in the time-like sequence S_{t,v} for round t and hence, it can possibly appear again in S_{t,v} only in round t − 2 or earlier. So, the length of the union of the error intervals on the edge (v_τ, u_{τ−1}) ∈ S_{t−1,u} increases by at least 2 with the addition of the edge (v_t, u_{t−1}). Hence we have |S_{t,v}| ≥ |S_{t−1,u}| + 2 > t − n_v(t). This completes the proof of Lemma 1.6.
Putting together Lemma 1.6 and Lemma 1.5, we have P(t − n_v(t) ≥ m) ≤ (∆ + 1)^t C(t, m) 2^{−mβ/2}. The result now follows trivially.
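To get a feel for when this (reconstructed) bound is informative, the short sketch below evaluates it numerically; ∆, β, and t are arbitrary illustrative values. For small m the right-hand side exceeds 1 and says nothing, while for larger m it decays rapidly.

```python
from math import comb

# Evaluate (Delta + 1)^t * C(t, m) * 2^(-m * beta / 2) for a few values of m.
Delta, beta, t = 3, 40.0, 50
for m in (5, 10, 20, 40):
    bound = (Delta + 1) ** t * comb(t, m) * 2 ** (-m * beta / 2)
    print(m, f"{bound:.3e}")   # vacuous (>1) at m = 5, tiny for larger m
```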
E. Proof of Theorem 7.2
The bound (1 − p)^{|E|} is intuitively motivated by the following observation: in a given round of communication, (1 − p)^{|E|} is the probability that none of the edges are erased. As a result, one would expect the fraction of communication rounds in which nodes can perform an iteration of (17) to be approximately (1 − p)^{|E|}. The above observation alone would not render a proof, because successful communication could also mean that a node received only 'waits' from its neighbors and hence could not compute an iteration of (17). The proof idea is simple, but conveying it requires some notation. Let W^(t)_{uv} denote the event where node v transmits a 'wait' to node u in round t. We introduce the following definition. Definition 4: Consider nodes v, u, u' such that u ∈ N_v and u' ∈ N_u. Also suppose that node v transmits a 'wait' to node u in round τ and node u transmits a 'wait' to node u' in round τ + 1, i.e., the events W^(τ)_{uv} and W^(τ+1)_{u'u} occur. We say that W^(τ)_{uv} causes W^(τ+1)_{u'u} if two conditions, (a) and (b), hold. To understand the definition, observe that condition (a) implies that node v is a bottleneck node for node u in round τ, and condition (b) implies that node u' already knows n_u(τ) after round τ. Node u could not perform a new iteration in round τ since it received a 'wait' from a bottleneck node (in this case v) and hence sent a 'wait' to node u'. So, it is natural to blame W^(τ)_{uv} for W^(τ+1)_{u'u}. Note that Definition 4 is further justified by the observation that a 'wait' in round τ will either have an effect in round τ + 1 or never will. Also note that Definition 4 can be extended to more than two waits by having conditions (a) and (b) hold for every pair of successive 'wait' events.
One will similarly arrive at a contradiction if any other node repeats in the chain {u_i}_{i≥1}. This establishes Lemma 1.7: in a chain of causally related 'wait' events, no node repeats.
The implication of Lemma 1.7 is clear. If a node v sends a 'wait' in round τ to any of its neighbors, then this 'wait' will not by itself stop node v from performing an iteration of (17) in a future round.
We are now ready to provide the main argument. Let d(v, u) denote the length of the shortest path from node u to node v. So, if v ∈ N_u, then d(v, u) = 1 and d(v, v) = 0. Let the diameter of the graph be δ, i.e., δ = max_{u,v∈V} d(v, u).
For an edge e_{uu'} ≡ (u, u') ∈ E, we define the distance d(v, e_{uu'}) from node v to the edge accordingly, and we let E^(i)_v = {e ∈ E | d(v, e) = i}. In view of Lemma 1.7, it is not difficult to see that an erasure on an edge in E^(i)_v in round τ will have an effect (if any) at node v only in round τ + i. Let A_{i,τ} denote the event that there is an erasure on an edge in E^(i)_v in round τ. Then for τ ≥ δ, it is easy to see that ∩_{i=0}^{δ} A^c_{i,τ−i} implies that the round τ at node v is not wasted, i.e., node v can compute an iteration of (17). Due to the erasure model, note that the event A_{i,τ} is independent of A_{i',τ'} for (i, τ) ≠ (i', τ'). Let X_τ be the indicator that round τ at node v is wasted, and let Y_τ be the indicator of the event ∪_{i=0}^{δ} A_{i,τ−i}. Then, from the above argument, X_τ = 1 implies Y_τ = 1, and {Y_τ} are independent Bernoulli random variables. Note that P(Y_τ = 1) ≤ 1 − (1 − p)^{|E|}. Let R = n_v(t)/t. Since every wasted round forces Y_τ = 1, the probability that R falls below a target rate decays exponentially in t; the last inequality follows from a standard Chernoff bounding technique and is true whenever R < (1 − p)^{|E|}. Union bounding over all nodes v ∈ V completes the proof. | 2012-04-02T02:57:50.000Z | 2012-04-02T00:00:00.000 | {
"year": 2012,
"sha1": "389b80baf74cac06335bbf7deacad35ede2f99ee",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/43106/7/1204.0301v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b30bb823d7cae7f45ec574eb810b6c9ea2c1bbab",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
74871801 | pes2o/s2orc | v3-fos-license | Nickel, molybdenum, and tungsten nanoparticle-dispersed alkylalkoxysilane polymer for biomaterial coating: evaluation of effects on bacterial biofilm formation and biosafety
Biofilm formation on the surfaces of biomaterials can cause severe infectious diseases due to the inefficiency of antibiotics against biofilm-protected pathogens. On one hand, the prevention of biofilm formation is a critical issue in the development of biomaterials. On the other hand, biomaterials require biological compatibility. To achieve these two goals without compromising the original features of the biomaterial, we proposed metal nanoparticle (NP)-dispersed alkylalkoxysilane (AAS) coatings. Here, we chose nickel, molybdenum, and tungsten, components of stainless steel and other alloys for biomaterials, as prospective metals that could regulate biofilm formation without harming human health. To the best of our knowledge, this is the first report to describe the effect of tungsten on biofilm formation. We performed a biofilm formation test using an open laboratory biofilm reactor that utilizes tap water and environmental bacteria from the laboratory air, and we also performed a cytotoxicity test on the human monocyte-derived U937 cell line. Compared with the AAS polymer alone, nickel NP-dispersed AAS polymer exhibited an inhibitory activity against biofilm formation whereas tungsten and molybdenum NP-dispersed AAS polymers facilitated biofilm formation. The effect of tungsten was dose-dependent. At a low metal concentration (0.1 mol%), the NP-dispersed AAS polymers did not affect cell survival. These results showed that for nickel, molybdenum, and tungsten, the NP-dispersed AAS polymers exhibited biosafety and that the nickel NP-dispersed AAS polymer is suitable for inhibiting biofilm formation. To determine whether molybdenum (or tungsten) NP-dispersed AAS polymer is suitable for biomaterial application, further evaluation in an environment that mimics the human body with clinically relevant pathogens is necessary. Correspondence to: Akiko Ogawa, Department of Chemistry and Biochemistry, National Institute of Technology, Suzuka College, Suzuka 510-0294, Japan, Tel: +81-59-368-1768; E-mail: ogawa@chem.suzuka-ct.ac.jp
Introduction
Bacterial contamination sometimes causes severe infectious diseases when an invasive medical device, such as a catheter or an artificial bone, is inserted or transplanted into a patient. For example, 10-30% of patients with bladder catheters suffer from infection every year in the US [1]. Antibiotics and biocides are therapeutically used to inhibit bacterial growth; however, they are hardly effective against biofilms, thus presenting serious problems in the medical field [2].
A biofilm is defined as an architectural complex of aggregated microorganisms and the extracellular polymeric substances (EPSs) produced by the microorganisms [3]. Biofilm development can be divided into four processes: conditioning film formation, irreversible attachment of bacteria, bacterial growth and EPS secretion, and dispersion (https://www.cs.montana.edu/webworks/projects/stevesbook/index.html). Conditioning film formation is the initial state of biofilm development and enhances bacterial attachment to the material surfaces. Bacteria usually exist in a planktonic state or an attached state, and the latter is the most relevant to biofilm formation. When bacteria irreversibly attach to the surface of materials, their metabolism changes to become more suitable for an aggregated community. Then, bacteria proliferate and secrete EPSs to form a mature biofilm. Some populations of bacteria forming a biofilm can detach from the biofilm, disperse, and swim, occasionally finding other environments where they can attach and grow.
In general, bacteria with the biofilm phenotype are 10- to 1,000-fold less sensitive to antibiotics than planktonic ones [4]. The biofilm matrix is the key factor that leads to resistance to antibiotics. The biofilm matrix can not only interact with certain antibiotics directly but also delay the penetration of antibiotics. Hence, biofilm formation tends to cause severe and/or chronic infectious diseases. To inhibit biofilm formation on the surface of biomaterials, we propose a coating technique with metal nanoparticle (NP)-dispersed silane-based polymers.
Silane-based polymer is generally referred to as an organopolysiloxane, consisting of a siloxane bond as the main backbone and organic groups as side chains. In a previous study, we investigated an alkylalkoxysilane (AAS) polymer that consists of a siloxane bond as the main backbone, alkoxy groups, and partly methyl/ethyl groups as side chains [5]. This polymer is more adhesive and chemically stable than silicone, with a mild flexibility between that of silicone and silicate; this is likely due to the random location of side chains that creates free volume in which metal NPs are packed (Figure 1). Packed metal NPs leak out gradually from the site when the metal NP-dispersed AAS polymer is immersed in aqueous environments. Therefore, this polymer has some advantages: the effect of metal NPs can last longer than that of naked metal NPs, and its cytotoxic effect can also be suppressed. When we coated SUS304 stainless steel with silver (Ag) NP- or copper NP-dispersed AAS polymer before immersion in seawater, biofilm formation was significantly reduced compared with that of the control sample (AAS-coated SUS304) under the cooling water pipe model [6]. We found that both Ag NP- and copper NP-dispersed AAS polymer coatings were effective in inhibiting biofilm formation.
In this study, we investigated both antimicrobial and cytotoxic effects of several types of metal NP-dispersed AAS polymers for coating. We chose Ag, molybdenum (Mo), tungsten (W), and nickel (Ni) as metal NPs. Mo, W, and Ni are known as components of alloys used for biomaterials, and thus we tested our assumption that these metals might be biocompatible. We used Ag NPs as a positive control that inhibits biofilm formation.
Metal NP coating
Soda-lime glass was cut into 1 cm × 1 cm pieces (1 mm thickness) and used as basal plates. Ag, Mo, W, and Ni NPs were purchased from Sigma-Aldrich (St. Louis, MO, USA). The concentration of dispersed NPs was 0.1 mol% (for W, 0.1 and 1 mol% were tested). The coating procedure described by Ogawa et al. [6] was used. Two oligomers, Permeate (MW 360, D & D Co., Yokkaichi, Japan) and KBM-603 (MW 222, Shin-Etsu Chemical Co., Tokyo, Japan), were mixed in a 250-mL polypropylene bottle injected with nitrogen gas using an agitator (Toyobo, Osaka, Japan) for 30 min. Metal NPs were dispersed at the same time as the mixing of the oligomers. After mixing, the adjusted coating solution was filtered through a nylon mesh #110 (NBC Meshtec Inc., Hino, Japan) and then sprayed on the surface of each glass. Coated glasses were incubated at 20°C for 7 days to solidify the coating.
Biofilm formation
Biofilm formation was achieved using our in-house laboratory biofilm reactor (LBR). The LBR mainly consists of four parts: a column, a water bottle, a pump, and a fan (Figure 2). Coated samples were held on an acrylic board by an acrylic connecting pin, with the coated surface on the upper side. The acrylic board was inserted into an acrylic column connected via vinyl chloride pipes. One side of the vinyl chloride pipes was joined to a rubber hose connected to a T-shaped vinyl chloride pipe holding two-tier holed plates, and the other one was connected to a water bottle. A fan was placed between the T-shaped vinyl chloride pipe and the water bottle, and wind blew down from the fan to the water contained in the water bottle, which resulted in efficient trapping of airborne microbes into the water. Tap water (1 L) circulated in the LBR at 6 L/min and was kept at 25°C for 5 days. Fresh tap water (0.2 L) was added to the LBR every day.
Biofilm fixation
After 5 days of incubation in the LBR, each sample was ejected from the column and freeze-dried by the following steps:
(1) Each sample was immersed in a 30% ethanol solution for 15 min at 25°C;
(2) It was transferred into 50%, 60%, 70%, 80%, 90%, 95%, 98%, and 99.5% ethanol in sequence every 15 min;
(3) The dehydrated sample was preserved in an ethanol-t-butanol (7:3) solution for 15 min at 25°C;
(4) It was transferred to 5:5 and 3:7 ethanol-t-butanol solutions in sequence every 15 min;
(5) Each sample was soaked in t-butanol and preserved at 10°C overnight;
(6) Finally, the frozen samples were transferred to a desiccator and placed under vacuum until the frozen t-butanol disappeared completely.
Raman spectroscopy analysis
The freeze-dried samples were analyzed using a laser Raman spectrometer (NRS-3100; JASCO Co., Tokyo, Japan). Five points (the vicinity of the center and the vicinities of the four corners) were randomly chosen on each sample, observed at 100× magnification with the attached microscope, and irradiated with laser light, and the Raman reflection was measured at approximately 649 cm −1 (500-4000 cm −1 ) for 10 s. The procedure was repeated three times, and the data were combined. We confirmed that all Raman peaks from the same sample were similar in wavenumbers and that the trend of the relative intensity of each peak was comparable.
Quantitative analysis of biofilm formation
After Raman spectroscopy analysis, each sample was soaked in a 0.1% crystal violet solution for 30 min at 25°C. Treated samples were washed with tap water to remove the nonspecifically absorbed dye from the surface. After drying for 10 min on wipe paper, Scotch® mending tape (3M Japan, Tokyo, Japan) was affixed to the polymer-coated side. Thirty minutes later, the tape was removed and affixed to a glass slide. The stained area was measured using a color reader (CR-13; KONICA MINOLTA, Inc., Tokyo, Japan) from the opposite side of the tape-affixed glass. White paper was used for calibration. Measured data were described in the L*a*b* color system: L* represents lightness (calibration value was 100); a*, the red/green coordinate (calibration value was zero); and b*, the yellow/blue coordinate (calibration value was zero). If the color is violet, a* assumes a positive value and b* a negative value. First, we calculated the chroma √(a*² + b*²) as the x-axis value and (100 − L*) as the y-axis one. Finally, we compared the vector values, i.e., √((a*² + b*²) + (100 − L*)²), to infer the extent of biofilm formation.
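A small Python sketch of this quantification, as we read it from the partly garbled original (the chroma √(a*² + b*²) as the x-axis value and the Euclidean norm of chroma and darkness as the final score are our reconstruction, and the example readings are hypothetical):

```python
import math

def biofilm_score(L_star, a_star, b_star):
    chroma = math.hypot(a_star, b_star)   # color intensity of the violet stain
    darkness = 100.0 - L_star             # deviation from the white calibration
    return math.hypot(chroma, darkness)   # vector value used for ranking

# Hypothetical readings: a faint stain vs. a dark violet stain
print(biofilm_score(85.0, 3.0, -2.0))     # low score -> little biofilm
print(biofilm_score(55.0, 12.0, -9.0))    # high score -> more biofilm
```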
Cytotoxicity assay
U937, a human monocyte precursor cell line, was kindly donated by Prof. Hidekazu Tamauchi of Ehime Prefectural University of Health Science. U937 cells were cultured in RPMI 1640 medium (Nissui Pharmaceutical, Tokyo, Japan) containing 0.1 M HEPES buffer, a 0.2% sodium hydrogencarbonate solution, 5 mM glutamine, and 5% fetal bovine serum. Upon reaching 80 to 90% confluence, U937 cells were transferred to fresh medium at 2-4 × 10^5 cells/mL every third day. We used U937 cells cultured for 2 days (40-50% confluence) for the cytotoxicity assay. First, the coated samples were sterilized in a 70% ethanol solution overnight and then rinsed with the culture medium. Next, the samples were transferred to a 12-well plate (Sumitomo Bakelite, Tokyo, Japan), and U937 cells were inoculated into each well at 4.11 × 10^4 cells/mL (the culture volume was 2 mL/well) and incubated in a CO2 incubator under 5% CO2 at 36.5°C. After a 46.6-hour culture, the cells were resuspended by pipetting, mixed with a 0.4% trypan blue solution (Nacalai Tesque, Kyoto, Japan), and transferred to a hemocytometer (Hirschmann Laborgeräte GmbH & Co, Eberstadt, Germany) to count the numbers of viable and dead cells under a phase-contrast microscope (Carl Zeiss, Oberkochen, Germany).
Crystal violet staining
We compared the amount of biofilm formation by a crystal violet staining assay on the Ag, Ni, Mo, and W NP-dispersed AAS polymers. To quantify the biofilm precisely, we measured the color and brightness of the debris transferred onto Scotch® mending tape using a color reader. Because crystal violet pigment tends to nonspecifically remain on rough surfaces such as metal NP-dispersed AAS coatings (with an estimated roughness of 0.1-1 µm), some overestimation of biofilm was unavoidable. Compared with the AAS polymer coating, dispersion of Ni, Ag, and Mo NPs allowed a similar biofilm formation, but W NPs gave rise to a darker biofilm based on brightness. In addition, the biofilms formed on the Ni and Ag samples were fainter than the biofilms on AAS, whereas the Mo and W samples were darker than those with AAS based on color (Figure 3). Table 1 shows the color and brightness data of all samples after treatment in the LBR. Integrating the results of color with those of brightness and arranging the values of the vectors in ascending order, we obtained the following order: Ni < Ag < AAS < Mo < W (0.1 mol%) < W (1 mol%).
Confirmation of biofilm formation by Raman spectroscopic analysis
We performed Raman spectroscopy to identify whether the sediments on the surface of the samples were biofilms. Raman spectroscopic analysis has been applied to detect organic chemical bonds derived from EPSs, such as the C=O and C-N bonds of proteins, the ribose ring of nucleic acids, and the C-C bond of lipids. We inspected the debris accumulated at the analysis points of each LBR-treated sample by microscopy and subjected it to Raman spectroscopic analysis. Figure 4a and Figure 4b show images of the analyzed spots on the LBR-untreated controls and the LBR-treated samples, respectively. For the LBR-untreated controls, the surface of Ag was flat, whereas the others (AAS, Mo, Ni, and W) were rough (Figure 4a). For the LBR-treated samples, the surface of Ag was partly covered with rod-shaped debris, most likely microorganisms. The other surfaces (AAS, Mo, Ni, and W) looked more uneven than the LBR-untreated ones and were filled with translucent debris, most likely EPSs.
The Raman shifts are summarized in Figure 4c. For the W samples, data of the 1 mol% sample are shown. Regardless of whether the sample was subjected to treatment in the LBR, several specific peaks were detected in all samples at 998-1001 cm −1 (strong peak), 1021-1029 cm −1 (weak peak), 1582-1595 cm −1 (weak peak), 2904-2911 cm −1 (strong peak), 2965-2969 cm −1 (medium peak) and 3051-3052 cm −1 (strong peak). The 998-1001 cm −1 peak and 1021-1029-cm −1 peak were assigned to the Si-O bond of the main structure of the silane coating [7][8][9]. The 1582-1592-cm −1 peak was assigned to the aromatic C-C stretching of the phenyl group of the silane coating. The following three peaks: 2904-2911 cm −1 , 2965-2969 cm −1 , and 3051-3052 cm −1 were attributed to the silane-based coating [6]. We thought that the three strong common Raman peaks derived from the AAS coating (998-1001, 2904-2911, and 3051-3052 cm −1 ) could be used to quantify biofilm formation, and we calculated the ratio of relative intensity of samples before and after treatment in the LBR. Unfortunately, we failed to quantify biofilm formation using these data because there was no correlation between the ratio of relative intensity among these three common Raman peaks. Therefore, we used crystal violet staining to quantify biofilm formation, while Raman spectroscopic data were used for identification of the biofilms. For the LBR-untreated Mo sample and LBR-treated W sample, one small peak was detected at 1112-1117 cm −1 , which derived from the Si-O bond of the silane coating main structure [9]. Several other Raman peaks were detected in LBR-treated samples.
Several peaks were detected in LBR-treated AAS sample as follows: strong lipid-assigned peaks (at 1151 cm −1 related to C-C stretching vibration, 1289 cm −1 related to -CH 3 scissoring and twisting vibration, and 1438 cm −1 related to -CH 2 scissoring and twisting vibration) [10], some protein-assigned peaks (647 cm −1 peak related to C-S stretching with H and 1684-cm −1 peak related to C=O stretching vibration of peptide linkages) [11], and other peaks assigned to polysaccharides or lipids (620 cm −1 and 718-724 cm −1 peaks related to C-H wag vibration) [12] and cyclic carbonyl compounds (1834 cm −1 peak related to C=O out-of-phase stretch vibration) [12]. In addition, several cumulated double bond alkene compounds were detected (at 1986 cm −1 and 2021-2092 cm −1 related to >C=C=CH 2 out-of-phase stretching and -N=C=S out-of-phase stretching, respectively) [12].
For the LBR-treated Mo sample, several lipid-assigned peaks were detected (at 1101-1202 cm −1 related to C-C stretching vibration and twisting vibration, and 1441 cm −1 related to -CH 2 scissoring and twisting vibration). Two protein-assigned peaks were detected at 1624 cm −1 and 1677 cm −1 related to C=O stretching vibration of peptide linkages. At 1320-1360 cm −1 , complex peaks were detected, which were a mixture of -CH 3 scissoring and twisting vibration (assigned to lipid) and tryptophan (assigned to amino acid or part of protein) [10]. A γ-lactone-assigned peak was detected at 1794 cm −1 (related to C=O stretching). In addition, two cumulated double bond alkene compounds were detected at 896 cm −1 and 1950 cm −1 (related to =CH 2 wag and >C=C=CH 2 out-of-phase stretching, respectively) [12].
For the LBR-treated W sample, several lipid-assigned peaks were detected (at 1289 cm −1 related to -CH 3 scissoring and twisting vibration, at 1460 cm −1 related to -CH 2 scissoring and twisting vibration, and at 3252-3280, 3316, and 3580 cm −1 related to C-C stretching vibration and twisting vibration). Two small protein-assigned peaks were detected at 666 cm −1 and 747 cm −1 related to C-S with H and C-S with C, respectively. Amide A (part of protein main chain) assigned peaks were at 3366-3280 cm −1 . DNA-derived C-O bond was at 711 cm −1 , and polysaccharide or lipid-derived CH wag vibration was at 892 cm −1 [12]. The other peaks were related to γ-(CCC) of benzene (at 619 cm −1 ), semi-circle stretching of 2-substituted thiophene (at 619 cm −1 ), cyanuric acid (at 1781 cm −1 ), cyclic C=C stretching bond (at 1883 cm −1 ), >C=C=CH 2 out-of-phase stretching of cumulated double bond alkene compounds (at 1979-1986 cm −1 ) and --N=C=S out-of-phase stretching of cumulated double bonds alkene compounds (at 2073 cm −1 ).
For the LBR-treated Ni sample, one small peak (at 1112 cm −1 ) and one medium peak (at 1356 cm −1 ) were assigned to C-C stretching vibration of lipids and -CH 2 scissoring and twisting vibration of lipids, respectively. Protein-related peaks were detected at 673-755 cm −1 (mixture of C-S stretching with H, C-S stretching with C, and CH wag vibration) and at 3420-3497 cm −1 (amide A). Several other peaks were detected as follows: the 1359 cm −1 peak was semi-circle ring stretching vibration, the 1801-1840 cm −1 peaks were C=O in-phase stretching of R-CO-O-CO-R, the 1976-1977 cm −1 peaks were >C=C=CH 2 out-of-phase stretching of cumulated double bond alkene compounds, the 2092 cm −1 peak was -N=C=S out-of-phase stretching of cumulated double bond alkene compounds, and the 2121 cm −1 peak was -OH stretching of phosphorus compounds. One unique sharp peak was detected both before (at 620 cm −1 ) and after (at 621 cm −1 ) LBR treatment. This peak was assigned to Ni oxide (NiO) from the dispersed Ni NPs because Larkin reported that various metal oxides have strong Raman bands from the metal-O-metal group at 200-800 cm −1 [12].
EPS components were detected on all LBR-treated samples; therefore, the sediments on their surfaces were confirmed as biofilms. When the detected EPS components were compared among these samples, lipids were detected in all samples, suggesting that the outer domain of the biofilms mainly consisted of lipids. We reasoned that during the LBR treatment, the samples were exposed to water flow that could possibly have detached or flushed out the biofilm; however, lipids tended to remain on the surface of the samples because of their hydrophobic nature. Some other complex compounds, such as cumulated double bond alkene compounds and γ-lactones, were also detected in the biofilms; these are quorum sensing-related compounds that are crucial factors contributing to the creation of biofilms [13].
Compared with AAS, Ag and Ni similarly inhibited biofilm formation, but Mo and W enhanced it. Several researchers reported that Ag NPs were effective anti-biofilm agents [6,[14][15][16], thus supporting the results of the current study. In our previous study, we reported that organic Ni compound-conjugated AAS enhanced biofilm formation [17]. However, the current study showed contrasting results: Ni NP-dispersed AAS inhibited biofilm formation. We reasoned that these differences were caused by different concentrations of Ni leaked from the AAS polymer. It is presumed that organic Ni compounds are dissociated into Ni ions and anionic counterparts and held tightly in the free volume of the AAS polymer, and that as a consequence, low concentrations of Ni ions are leaked from the polymer. On the other hand, Ni NPs exist on the surface or in the free volume of the AAS polymer, which can permeate water [5]; therefore, more Ni ions will be leaked from the polymer compared with organic Ni compounds conjugated to the AAS polymer. Perrin et al. reported that at a moderate concentration (100 µM), Ni enhanced biofilm formation of E. coli K-12, and at a high Ni concentration (>200 µM), cell growth was inhibited [18]. These results indicate that Ni concentration influences bacterial proliferation and biofilm formation. In our current study, dispersed Ni NPs at 0.1 mol% corresponded to approximately 1 M, and Raman spectroscopic analysis showed that the surfaces of the Ni NPs were oxidized partly/totally because a NiO-derived peak was detected (Figure 4c). Baek and An examined the microbial toxicity of NiO NPs against three different bacteria; they found that NiO NPs damaged the bacteria and that the dissolution rate of the NiO NPs was 2-5% [19]. We estimated the amount of Ni ions leaking from the surface of Ni NP-dispersed AAS polymer based on these two reports to reach 20-50 mM if the total Ni NPs were exposed to water, a concentration much higher (>100-fold) than the concentration that has a significant effect on biofilm formation. Even if only 1% of the total Ni NP-dispersed AAS polymer was dissolved, the estimated Ni ion concentration is 200-500 µM, which is equal to the Ni concentration that is sufficient to regulate cell growth.
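The order-of-magnitude estimates above can be checked with a few lines of arithmetic; all inputs are taken from the text and the cited reports, and the 1% dissolved fraction is the text's own what-if assumption.

```python
# Back-of-envelope check of the Ni ion estimates in the text.
ni_total_molar = 1.0                # 0.1 mol% dispersed Ni NPs ~ 1 M (text)
dissolution = (0.02, 0.05)          # NiO NP dissolution rate of 2-5% [19]
leached = [ni_total_molar * d for d in dissolution]
print([f"{c * 1e3:.0f} mM" for c in leached])      # -> ['20 mM', '50 mM']

dissolved_fraction = 0.01           # only 1% of the polymer dissolves
print([f"{c * dissolved_fraction * 1e6:.0f} uM" for c in leached])
# -> ['200 uM', '500 uM'], matching the 200-500 uM estimate in the text
```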
The Mo NP-dispersed AAS polymer enhanced biofilm formation. Mo is known as a trace element for microbes, plants, and animals, and over 50 enzymes are Mo-dependent [20], catalyzing various redox reactions. In prokaryotes, there are four families of molybdoenzymes [21]. Some environmental bacteria are known to use Mo, such as Rhodopseudomonas palustris [22] and Azotobacter vinelandii [23]. Percival reported that Mo affected the growth rate of Acinetobacter sp. involved in initial biofilm formation in potable water [24]. In the present study, the biofilm formation test was performed using an open LBR system in which tap water and environmental bacteria from the laboratory were used. Therefore, we supposed that some Mo-utilizing bacteria were present in the system, grew actively, and were involved in biofilm formation; the report by Percival [24] strongly supports this presumption.
We tested 0.1 and 1 mol% W NP-dispersed AAS polymers, both of which enhanced biofilm formation in a dose-dependent manner. To the best of our knowledge, this is the first study to report that W NPs are effective for biofilm formation. Syed et al. reported that W NPs inhibited the cell growth of two different species of bacteria (Staphylococcus aureus and antibiotic-tolerant E. coli) [25]. Their results indicate a completely opposite physiological effect of W on microorganisms. We reasoned that this is due to the usage of different bacterial species; we used an environmental bacterial community, but Syed's group used pure cultures of specific bacterial strains. W has chemical properties very similar to those of Mo, and it works as an antagonist of Mo in most eukaryotes and prokaryotes [21,26]. Therefore, a similar presumption applies: some W-utilizing bacteria were present in the LBR system, grew actively, and were involved in biofilm formation.
Cytotoxicity assay of metal NP-dispersed AAS polymers
U937 cells were co-cultured with each metal NP-dispersed AAS polymer for 2 days. Compared with the cell viability under the control culture condition (85%), the AAS polymer and the 0.1 mol% metal NP-dispersed AAS polymers yielded similar or higher cell viability; however, the 1 mol% W NP-dispersed AAS polymer induced a much lower cell viability (53%). These results show that the AAS polymer and the 0.1 mol% Ni, Ag, Mo, or W NP-dispersed AAS polymers do not cause acute cytotoxicity, and they further raise the possibility that the amount of W ions influences cellular functions and that high ionic concentrations of W may be harmful to human cells. Assuming that the dissociation rate of W was the same regardless of its concentration as a constituent, the concentration of leaked W ions from the 1 mol% W sample is estimated to be 10 times higher than that of the 0.1 mol% W sample. Peuster et al. examined W cytotoxicity on three types of human cells (pulmonary arterial endothelial cells, smooth muscle cells, and dermal fibroblasts) and found that at very high concentrations (>0.3 mM), W impaired the cells (of note, the W concentration in normal serum is about 1 nM) [27]. This shows that control of W leakage is important to prevent cytotoxicity and that optimization of the percentage of metal NPs is needed.
Conclusion
We investigated the anti-microbial effects and cytotoxicity of Ag, Mo, Ni or W NP-dispersed AAS polymer coatings. Mo, Ni, and W are known components of some stainless steels such as SUS. Compared with AAS polymer coating, both Ni-and Ag-containing samples inhibited biofilm formation, but Mo-and W-containing samples facilitated biofilm formation. The effect of W on biofilm formation was dose-dependent. At 0.1 mol%, NP-dispersed AAS polymers did not damage the functions of U937 cells, but the W sample was toxic at 1 mol%. Metal NP-dispersed AAS polymer can prolong the effect of the metal on biofilm formation compared with that of naked metal NPs because water exchange speed inside a polymer is slower than that in normal solution, resulting in a decreased ionic dissociation of metal NPs. However, optimizing the concentration of metal NPs dispersed into polymers is needed to inhibit biofilm formation, as well as to prevent cytotoxicity against host cells. | 2019-04-09T13:02:34.931Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "910447794899a3919d79a162294fb1ddd33176d8",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/BRCP-2-138.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e276acc355015a1bc40eda68264f7b15258519cb",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
244868031 | pes2o/s2orc | v3-fos-license | Energy-Oriented Production Planning in Industry: A Systematic Literature Review and Classification Scheme
Scarcity of resources, structural change during the further development of renewable energy sources, and the corresponding costs, such as increasing resource costs or penalties due to dirty production, lead industrial firms to adopt ecological actions. In this regard, research on energy utilization in production planning has received increased attention in recent years, resulting in a large number of research articles so far. With the paper at hand, we review the literature on energy-oriented production planning. The aim of this study is to derive similar core issues and related properties along energy-oriented models within hierarchical production planning. For this, we carry out a systematic literature review and analyze and synthesize 375 research articles. We classify the underlying literature with a novel two-dimensional classification scheme and identify three key topics and five frequently found characteristics, which are presented in detail throughout this article. Based on these results, we state several potentials for further research.
Introduction
Aside from sustainable production in general, one major concern is the use of energy in industrial production. To address this, research differentiates between two ways of increasing energy efficiency within industrial production. One possibility lies in the investment in new energy-efficient production machines, as well as in the design of new production processes. The other way lies in energy-oriented production planning (EOPP). While the former approaches related to technology investments usually go along with high costs, production planning allows improvements regarding energy utilization in the short term and at low investment costs, making it especially interesting for practice and research (see [1][2][3][4]). A consideration of risk management associated with energy-oriented production planning can be found, for instance, in [5].
For industrial production planning, the concept of hierarchical production planning, as stated in [6] and further proposed by [7], is well known in research and industrial practice. Along different planning levels, i.e., aggregate production planning, master production scheduling, lot sizing, and scheduling, the main goal is a harmonization between long-term decisions based on rather aggregated information and short-term decisions that are precise to the second while limited capacities of resources are considered. A detailed description can be found in [8]. In summary, hierarchical production planning addresses various issues in production for time horizons of several years to only a few hours.
Previous research identified over 350 published research papers in the context of energy-oriented production planning dealing with various subjects and circumstances across the different levels of the planning hierarchy (see [9]).
Based on these considerations, in the present work, we look at the following two research questions (RQ):
RQ1: Is it possible to narrow down the existing articles on energy-oriented production planning to a small number of key topics that are addressed?
RQ2: Can similar characteristics be found in the literature regarding the planning problems that enable improved energy efficiency?
In answering these two questions, the article at hand provides an overview on the state of the art of energy-oriented production planning. Through topic bundling and a detailed analysis on elementary characteristics of planning problems used to improve energy efficiency, a completely new classification scheme is presented. In addition, a large share of the research articles on EOPP found are cited throughout the article and open research questions are discussed.
The remainder of this article is structured as follows. In the next section, the scope of this work is presented and distinguished from existing literature articles. In Section 3, we present the review methodology outlined for the literature search and analysis. Section 4 presents a classification scheme derived on the basis of 375 research papers. Thereby, we describe 171 of these articles and link them to the classification scheme. After introducing key topics in Section 4.1, commonly found characteristics within energy-oriented production planning models are discussed in Section 4.2. In Section 5, we provide numerical analyses of the examined literature and provide an outlook on further research potential. This article ends with a conclusion.
Review of Existing Works
The research proposed in this article aims to present an overview of energy-oriented production planning. In the past, several review articles were published on production planning approaches that integrated energy aspects. Ref. [2] presented a review on energy-efficient production scheduling. In their article, the authors classified 87 scheduling approaches published between 1990 and 2014. For this, the authors developed a research framework for analyzing and categorizing scheduling models in terms of energetic coverage, energy supply, and energy demand. They discussed applications along the energy conversion chain that could be influenced by scheduling and aligned scheduling approaches to three interacting systems, namely, external conversion systems (by the energy provider), internal conversion systems, and the production system of a manufacturing company (as the energy user).
The work from [10] focused on sustainable manufacturing and discussed ecological aspects not only of energy in production, but also regarding emissions and waste. After defining sustainable manufacturing operation scheduling with respect to input and output, the literature was classified by means of input (e.g., energy) and output (e.g., waste and pollution), optimization criteria, and scheduling methods (proactive, reactive, or hybrid). Thereby, 33 energy-oriented articles on short-term production planning were included. The authors concluded that mainly energy was addressed in sustainable production planning and suggested, among other points, linking different planning levels and considering multidisciplinary approaches for an improvement in terms of sustainability. The authors of [11] reviewed the literature on sustainable manufacturing and took into account the triple-bottom-line pillars of sustainability (economic, ecological, and social) in production. In total, they analyzed 50 articles that addressed sustainability in production scheduling and classified them in terms of manufacturing model, sustainable objectives, and constraints, as well as model type and solution methods. They defined six pairings that served to evaluate if a scheduling model was sustainable: economic-oriented objective and environment-oriented objective, economic-oriented objective and social-oriented objective, economic-oriented objective and environment-oriented constraint, economic-oriented objective and social-oriented constraint, environment-oriented objective and economic-oriented constraint, and social-oriented objective and economic-oriented constraint. Thereby, a sustainable scheduling model must include at least one of these pairings. Information on every article, as well as on those articles not cited in the following, can be found in the online appendix (to obtain the appendix, please contact the corresponding author).
Due to the growing interest in sustainability in academia and industry, the aim of this article is to provide both researchers and practitioners with a general understanding of the underlying topics and characteristics for energy integration into production planning. This article should help researchers to familiarize themselves with the state of the art in energy-oriented production planning and future research directions. For practitioners, the aggregated illustration of energy aspects along with their various specifications is intended to foster the further implementation of energy-oriented production planning in industry.
In summary, the paper at hand gives the reader four major insights. By analyzing and synthesizing the various specifications of energy integration in production planning approaches, we present a comprehensive overview of how production planning can address energy efficiency in terms of objective criteria and constraints. In addition, we describe different requirements and capabilities assumed in the reviewed literature that allow improvement regarding energy utilization. As a third benefit, this article serves as a summary of several open research questions related to energy-oriented production planning. Moreover, we offer an extensive online appendix that contains further information on each of the 375 analyzed articles described throughout this paper.
Methodology
In [9], 923 articles were identified as relevant in the context of ecologically oriented production planning and stored in an online database. These articles were catalogued in terms of planning level, objective function and constraints, modeling and solution approach, shopfloor characteristics, and numerical examples. The authors followed the review methodology published in [14] (summarized in Figure 1) and carried out steps (I) to (III). In the present article, we continue the work from [9] by performing steps (IV), literature analysis and synthesis, and (V), research agenda. We narrow down the focus to articles that discuss energy in the context of industrial production planning. This leads to a total of 375 articles published between 1983 and 2021 that are taken into account for our review work. A PRISMA flow diagram summarizing the literature search process can be found in Appendix A (Figure A1).
In step (IV), the literature is analyzed and synthesized with regard to the research questions. First, we examine which various energy-specific goals are presented in the objective functions and how energy is modeled in the underlying constraints. Every specification is studied separately. Throughout an iterative process, we group similar specifications together and derive three key topics:
1. Energy consumption;
2. Load management;
3. Supply orientation.
For each key topic, a distinction is made between ecologically and economically motivated integration. We define and differentiate between these topics because each of these classes can be addressed independently in production planning without necessarily taking into account one of the other topics. Secondly, each article is evaluated in terms of how the modeling approach allows improvement regarding the energy topic under consideration. Here as well, the procedure of analyzing and synthesizing every research paper is repeated several times until we are able to group similar conditions together into a small number of frequently found characteristics within energy-oriented production planning. As such, we propose the following five: (a) Various energy utilization factors, (b) Alternative production resources, (c) Heat integration, (d) Multiple energy sources, and (e) Energy storage systems.
As a result, in step (V), we present a two-dimensional classification scheme that is linked to our research questions. Due to the high number of relevant articles, the proposed classification scheme offers the benefit of structuring the literature regarding key topics and commonly found characteristics within EOPP. Although we analyze a very high number of articles and proceed in considerable detail in this work, we cannot exclude the possibility that there are further classes.
In the following, the two dimensions and each individual class are outlined. For every key topic and characteristic, as well as their corresponding specifications, several articles are mentioned as representative examples. However, for a comprehensive listing of all articles assigned to a class, the reader is referred to the online appendix of this article.
Key Topics
To answer research question 1, we develop key topics within energy-oriented production planning through inductive examination of the literature. The key topics state which criterion related to energy is taken into account, either as an optimization goal or as constraints within a production planning approach. Note that the key topics are not mutually exclusive.
The first key topic, energy consumption, represents the consideration of consumed energy in the production planning context.
As the second key topic within energy-oriented production planning, we identify load management and assign approaches that focus on the energy demand in production to this key topic. While the first key topic considers energy consumption-the amount of energy used over a given period of time-energy demand equals the energy use at a single point of time; in other words, the energetic load. Typically, the energy demand is considered in production planning to stabilize the power grid, to avoid additional generation in peak periods, and to reduce associated costs for balancing energy demand and supply.
The third group of research articles within EOPP considers approaches regarding different energy sources and energy storage systems in the context of production planning. The two key topics of energy consumption and load management mainly address the stage at which energy is used in production and, in the case of economic orientation, take into account predetermined energy prices. This third key topic expands the scope to energy supply, generation, and energy storage. However, a large share of the articles grouped into this key topic consider energy consumption or load management with respect to the two previous key topics as well.
This distinction is graphically represented in Figure 2, which summarizes the relevant fields and derived key topics on energy-oriented production planning. For this, the two core elements, the energy market and the production environment within a manufacturing company (defined as a 'system boundary'), and their linkages are shown (see [13]).
Energy Consumption
Table 1 contains the different specifications that are assigned to key topic 1 and described in the following. Ecologically oriented approaches grouped into this key topic either discuss optimization regarding the energy consumed in a specific time interval or treat energy consumption as a restriction in production. Due to environmental concerns, energy consumption is considered in such research articles with the goal of improving ecology in production. A total of 172 of the analyzed articles address this topic in an ecological manner, while a large share of these articles (158) integrate energy consumption as an optimization criterion in decision making. These approaches can be further differentiated in terms of the extent to which and when energy consumption is considered. While in most of the articles, the total decision-related energy consumption within the planning horizon is addressed (e.g., [15][16][17]), other approaches focus on the energy consumption in specific periods or upon certain events, for example, in demand response events, as in [18].
Other works indirectly consider energy consumption in production planning. By taking into account energy-related emissions as an optimization criterion while a constant emission-to-energy factor is assumed, energy consumption is addressed in several articles, such as in [19][20][21]. Consequently, the minimization of emission quantity with a constant emission-to-energy factor leads to a minimization of energy consumption. Energy consumption is also indirectly taken into account in approaches that maximize time in power-saving mode (such as in [22]), minimize unnecessary heating time (e.g., [23]), or minimize unnecessary waiting time on machines to reduce storage energy consumption (e.g., for holding the temperature of hot products, such as in [24]).
In contrast to these contributions, Ref. [25] minimizes the deviation of the energy consumption per period from the average energy consumption along the planning horizon in order to avoid inefficient conversion processes due to frequent load alternations.
Aside from integrating energy consumption as an optimization criterion, energy consumption is considered in terms of restrictions as well. We find 19 articles that do so, while a large share of these assume a consumption threshold for energy, either for the whole planning horizon (e.g., [26]) or per period (e.g., [27]). Generally, one can distinguish between hard and soft thresholds. The former restricts the quantity under consideration, here energy consumption, to a specific value, while the latter allows a violation of the threshold, typically leading to additional costs. For example, in [28], a soft energy consumption threshold per period is assumed, and the costs for energy consumption above this threshold are minimized.
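The difference between hard and soft thresholds can be illustrated with a short sketch; all numbers are hypothetical, and the linear penalty mirrors the cost structure described for [28].

```python
# Toy per-period energy profile with a hard and a soft consumption threshold.
energy = [40, 55, 70, 35]    # planned energy consumption per period
cap, penalty = 50, 2.0       # threshold and cost per unit of excess energy

hard_ok = all(e <= cap for e in energy)                      # hard threshold
soft_cost = sum(penalty * max(0, e - cap) for e in energy)   # soft threshold
print(hard_ok, soft_cost)    # -> False 50.0 (penalty on 5 + 20 excess units)
```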
Similarly to approaches that integrate energy consumption as an optimization criterion by focusing on emissions (with a constant ratio between energy consumption and emission quantity), the authors of [29] considered an emission quantity threshold per period that can be interpreted as an energy consumption threshold per period. While the approaches just outlined assumed an upper limit for energy consumption, Ref. [30] added a lower limit to an energy consumption threshold, leading to a minimum and maximum energy consumption per period. Unlike approaches that constrain energy consumption with lower or upper limits, some articles take the deviation of energy consumption among different periods into account. For example, the authors of [31] integrated a safety threshold for the deviation between long-term planned and actual energy consumption into their optimization approach. A similar example was presented in [32]: The authors assumed an energy consumption deviation threshold whereby rescheduling was triggered as soon as the deviation between expected and actual energy consumption exceeded this threshold. Rescheduling serves to adapt to dynamic events, while the threshold is used to avoid overly frequent rescheduling.
In 14 out of the 19 articles that integrated energy consumption as a restriction, energy was considered in the objective function as well. In most cases, the integration of energy consumption as a restriction thus served as an additional energy-related factor to be taken into account. The five articles that considered energy consumption solely as a restricting consumption threshold optimized other, economically oriented criteria instead, such as makespan or tardiness (e.g., [33][34][35]), as well as costs, such as setup costs, inventory costs, idle costs, or production costs (e.g., [36,37]).
Economic Consideration of Energy Consumption
Aside from the ecological consideration of energy consumption within production planning, 191 of the analyzed articles integrated energy consumption in terms of costs or savings. Not surprisingly, manufacturing companies are motivated to reduce the costs of energy consumption, since utility costs can represent a large share of production costs (see [38]). Thereby, not only energy-intensive companies, but also non-energy-intensive enterprises offer considerable potential for cost savings (see [39]). In [40], for example, it was assumed that production costs are constant due to fixed production, but energy costs can be minimized through the presented production planning approach.
The most commonly found monetary representation of energy consumption is the consideration of the costs for total energy consumption. Typically, the overall energy consumption that is assumed to be influenced by production planning is taken into account and multiplied by the energy price. As long as the energy price is constant, an optimization regarding energy consumption costs equals an optimization regarding energy consumption. We found 47 articles that assumed a constant energy price while energy consumption costs were minimized or included as constraints. For example, the authors of [41] presented a single-machine scheduling approach for minimizing energy consumption costs and tardiness costs and assumed a constant energy price. Minimization of energy consumption costs was achieved by minimizing the total energy consumption within the planning horizon. In [42], net revenue was maximized in parallel-machine scheduling while a specific budget limited energy consumption costs. Due to the constant energy price, this budget achieved the same effect as an energy consumption threshold.
In contrast to constant energy prices, more articles that focused on energy consumption costs within production planning assumed varying energy prices that were either time-dependent or quantity-dependent. However, by considering varying energy prices, an optimization regarding energy consumption costs does not necessarily correspond to optimizing production planning in terms of consumed energy. This link strengthens the need for the outlined distinction between ecological and economic consideration of energy consumption. Similarly to the optimization goal of minimizing energy consumption costs, Ref. [43], as well as [44], included the maximization of cost-related energy consumption savings in their approaches.
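The effect of time-dependent prices can be illustrated with a minimal Python sketch; this is a toy example of our own with assumed tariff and load values, not a reproduction of any cited model:

```python
# Toy example: two schedules with identical total energy consumption but
# different costs under a time-of-use (TOU) tariff. All values are
# illustrative assumptions, not data from the reviewed articles.

PRICE = [0.10, 0.10, 0.30, 0.30, 0.10, 0.10]  # EUR/kWh per period

def energy_cost(consumption_per_period):
    """Energy cost of a schedule given as per-period consumption in kWh."""
    return sum(e * p for e, p in zip(consumption_per_period, PRICE))

peak_schedule = [10, 10, 10, 10, 0, 0]     # job runs into the peak periods
shifted_schedule = [10, 10, 0, 0, 10, 10]  # job shifted to off-peak periods

print(energy_cost(peak_schedule))     # 8.0 EUR
print(energy_cost(shifted_schedule))  # 4.0 EUR, same 40 kWh consumed
```

Shifting the job out of the peak periods leaves the total consumption of 40 kWh unchanged but halves the energy costs, which is precisely why cost-oriented and consumption-oriented optimization can diverge under varying prices.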
A second way in which energy consumption is economically addressed is the avoidance of possible penalty costs that occur due to violation of specific energy consumption thresholds. In these approaches, additional energy costs occur due to penalty costs every time a consumption threshold is violated. Regarding this, for example, the authors of [28] presented a generalized critical peak price concept in which additional costs occurred as soon as the energy consumption in production and maintenance exceeded a critical value. With this, the manufacturer is encouraged to shift energy consumption to avoid consumption peaks. In another manner, penalty costs related to energy consumption were indirectly addressed, such as in [45]. The authors discussed a reheating furnace scheduling problem in steelmaking. The presented approach involved the minimization of penalty costs for inefficient heating, as well as the minimization of changeover costs and penalties that arose from the deviation of the actual residence time of slabs in a furnace from the desirable residence time.
Aside from costs and penalties regarding the consumption of energy in production, a further way to address energy consumption monetarily was found in [46]. The authors considered additional postponement costs that occurred when a job and, therefore, the necessary energy consumption were shifted to later periods due to limited capacity in a period. For every shifted energy consumption unit (kWh), a cost rate was assumed.
In addition to [42], one further article was found that integrated energy consumption costs as a restriction. Ref. [47] outlined parallel-machine scheduling and focused on the minimization of makespan and total completion time, while a threshold for energy consumption costs was given.
Load Management
Table 2 summarizes the different specifications that are assigned to key topic 2 and described in the following.
Ecological Consideration of Load Management
A total of 51 of the analyzed articles on EOPP took load management into account from a non-monetary perspective, whereby 11 articles considered load management in terms of objective criteria, 38 articles as constraints, and two articles as both an objective criterion and constraint simultaneously.
Those articles that addressed load management in their objective functions pursued this in three different ways. First, the total energy demand was minimized in production planning, such as in [48].
Second, in other approaches, it was not the entire energy demand that was minimized, but the maximum energy demand or the maximum average demand during the planning horizon. For example, in [49], the maximum energy demand was minimized in a single-objective scheduling approach. In a similar way, Ref. [50] assumed an energy demand threshold and presented an approach that minimized this energy demand threshold, as well as the total completion time.
Third, a predefined load curve was assumed, and the deviation between actual energy demand and prescribed energy demand was minimized (e.g., in [51,52]). As long as the contracted load curve was followed, the energy supplier gained planning certainty and cost savings, leading to possible discounts on the energy price for the manufacturing company. The manufacturer was encouraged to avoid energy demand above or below the load curve due to possible penalty costs.
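In a stylized form (our notation, not taken from [51,52]), such a load-tracking objective penalizes any deviation of the actual demand D_t from the contracted load curve L_t:

```latex
\min \; \sum_{t} \lvert D_t - L_t \rvert
% Asymmetric penalty rates for over- and under-consumption are a common
% refinement of this symmetric formulation.
```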
As mentioned before, a larger share of the analyzed articles integrated load management in terms of one or more constraints in production planning. The most frequently integrated constraint regarding load management was an energy demand threshold (found in 34 articles). Here, too, similarly to energy consumption thresholds, a distinction can be made between hard and soft energy demand thresholds. For instance, Ref. [53] provided a flow shop scheduling approach that minimized makespan while a hard energy demand threshold had to be kept during the entire planning horizon. Apart from that, for example, in [54], a soft energy demand threshold was assumed in job shop scheduling, and exceeding it resulted in additional energy costs.
Mostly, a fixed value was specified for the energy demand threshold in every period (e.g., [55][56][57]). However, some articles assumed a variable, period-specific energy demand threshold, such as in [58,59]. While these thresholds typically have to be met throughout the entire planning horizon, Ref. [60] presented a different case. The authors discussed the commitment of a manufacturer to a demand response program. In this program, a certain power demand threshold had to be met not over the whole planning horizon, but during demand response events that were announced one day before they began.
The majority of the analyzed load management articles considered energy demand in an aggregated manner by adding up the energy demand of each individual energy-consuming component (mostly the production machines). In contrast, some articles considered machine-specific energy demand individually in load management. For example, the authors of [61] assumed a specific minimum and maximum energy demand value for each furnace, whereby the power demand in processing had to lie within this range. Similarly, Ref. [62] presented a flow shop scheduling approach to minimize total completion time in steel production while each furnace had a maximum power limit that could be provided without damaging the furnace wall.
Aside from limiting the energy demand in production to specific values or in specific periods, two further types of constraints were found in the analyzed literature on load management, both taking the possibility of pre-agreed load reduction into account. The first of these constraints addresses the amount by which the load has to be reduced at certain times. In that respect, Refs. [63,64] assumed a minimum value for load reduction during peak periods. Ref. [40] not only included a minimum value, but defined a range with a lower and upper bound for the provided amount of load reduction. The second type considered the number of times the load was reduced or interrupted. In the article by [65], a robust optimization approach was presented in which uncertain power interruptions were included in aggregate production planning. A constraint was integrated that limited the number of such power interruptions to a certain value.
The relation between the first two key topics, energy consumption and load management, was illustrated in the approach by [35]. In this approach, a power demand threshold was imposed for every metering interval. By multiplying the average power demand in every interval by the length of each interval, the power demand threshold was formulated equivalently as an energy consumption threshold for every metering interval.
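This equivalence rests on a simple identity (notation ours): with average power demand \bar{P}_t and metering interval length Δt,

```latex
E_t = \bar{P}_t \cdot \Delta t
% Hence, a power demand threshold \bar{P}_t \le P_{\max} per interval is
% equivalent to an energy consumption threshold E_t \le P_{\max} \cdot \Delta t.
```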
Economic Consideration of Load Management
Likewise, several articles (48 out of the 375 analyzed articles) considered load management in monetary terms. In these articles, not only (or not exclusively) the costs for energy consumption, but also the costs regarding the energetic load were included. Basically, energetic demand is charged by the energy supplier in order to cover its costs for infrastructure and grid balancing (see [66]). However, some articles argued that costs regarding energy demand are driven by longer-term considerations and are therefore not relevant in short-term planning (e.g., [67,68]).
In only a few articles, the value of the energetic load was directly priced, such as in [63] or in [64]. A demand charge was assumed, and each individual instantaneous energetic load resulted in demand costs.
More frequently, the load price and the corresponding demand costs refer to the maximum energy demand (found in 22 articles) or to the maximum average energy demand (found in five articles) within a planning or billing period. For example, Ref. [69] presented a scheduling approach in order to minimize energy consumption costs and energy demand costs in the context of additive manufacturing. In addition to time-varying electricity consumption prices, they assumed a demand charge depending on the maximum energy demand, i.e., peak demand. In [70], both energy consumption costs and energy demand costs were considered for a parallel-machine environment. The energy demand costs were calculated by multiplying the maximum average electricity demand within a rolling time window with a demand charge rate.
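In generic terms (our notation, not taken from [69,70]), such a two-part tariff combines a consumption charge p_e with a demand charge p_d on the peak; for mixed-integer linear models, the maximum is commonly linearized with an auxiliary variable:

```latex
\min \; p_e \sum_t E_t \; + \; p_d \, P_{\mathrm{peak}}
% Linearization of the peak: since p_d \, P_{\mathrm{peak}} is minimized,
% P_{\mathrm{peak}} is pushed down to \max_t P_t at the optimum.
P_{\mathrm{peak}} \ge P_t \quad \forall t
```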
Aside from the introduction of demand charges for each individual or the maximum energetic load, another way to include load management in a monetary view lies in the consideration of penalty costs that occur when exceeding specific demand thresholds or deviating from a desired load level. In this manner, Refs. [61,71,72] minimized costs in different machine scheduling problems that depend on energy consumption costs, as well as penalties for exceeding a given power value. Penalty costs for load deviation were integrated in some places, such as in [73][74][75][76][77]. For example, [75] assumed that a production company predicts its load curve one day ahead and sends the forecast to the energy supplier. Then, the plant commits itself to this load curve and both over-consumption and under-consumption are penalized. The authors presented a scheduling approach in order to minimize the net electricity costs, lead times of product delivery, and the load deviation penalty costs in steel production. A similar method of addressing the costs regarding the energetic demand was described in [78]. The authors considered a manufacturing company's possibility to participate in a critical peak pricing program. In addition to peak periods and off-peak periods, in the scheduling approach of [78], the company needed to identify a reservation capacity for energy demand when signing a contract with the energy supplier. The company would benefit from lower energy prices during off-peak periods while the energy price was extremely high in peak periods as soon as the energy demand was above the reservation capacity. The resulting energy costs consisted of energy consumption costs and the charge of the reservation capacity (in USD per kW).
A further expression of load management in monetary terms was stated in the research articles [30,79-81]. Aside from the costs for energy consumption, energy demand, or energy generation, they considered costs that occurred through load adjustment actions as well. In this respect, Refs. [79,80] included any additional operating costs for implementing a load management program. Ref. [30] listed these costs in very general terms as "additional operating costs due to shifting of loads". In [81], start-up and shut-down costs due to load reduction by demand response actions were taken into account.
In a small share of the analyzed articles, it was not (or not exclusively) costs for energy demand and load management that were considered, but incentive payments and revenue regarding energy demand. In addition to costs for energy consumption, power demand, load deviation, and energy generation, Ref. [77] presented a scheduling approach that covered an incentive rate paid to the manufacturing company for meeting the desired load curve per interval. In a similar manner, Ref. [27] introduced a flow shop scheduling approach for maximizing profit. Energy was addressed in terms of energy consumption costs and an incentive rate for load reduction within a demand bidding program. Another type of payment related to industrial load management was presented in [82]. In their research, a chemical plant received revenue for offering power reserves to the grid on the basis of a demand response program. Their scheduling approach followed the goal of maximizing the revenue from the power reserve and minimizing electricity consumption costs.
Among the 375 analyzed articles, we did not find a single approach that integrated costs or revenue regarding load management as a constraint within a production planning approach.
Supply Orientation
Regarding supply orientation, the different specifications assigned to key topic 3 and described in the following are listed in Table 3. In total, we found 28 articles that addressed an ecological viewpoint of supply orientation. In five of these [66,83-86], this topic was addressed in the respective objective functions in an ecological manner. These articles have in common that onsite generation (OSG) of energy through renewable energy sources (RESs) is assumed and the objective lies in the minimization of energy consumption or energy demand from other energy sources, e.g., the power grid. For example, in [84], energy-related emission quantity and total weighted flow time were minimized through production scheduling. A constant ratio between energy procured from the power grid and emission quantity was assumed, while OSG energy had zero emissions. Consequently, the approach amounted to minimizing energy consumption from the grid and increasing the use of OSG energy. In a similar manner, the research in [86] aimed at minimizing emission quantity and makespan. The consumption of renewable energy was emission-free, and only energy consumption from non-renewable energy sources resulted in emissions. Here as well, a constant emission factor per unit of energy consumption was assumed for every non-renewable energy source. Therefore, the objective function could be reformulated to minimize energy consumption from non-renewable energy sources and makespan. Regarding load management, Ref. [66] presented a scheduling model to minimize the net energy demand as the difference between the energy demand of production and the energy supply of an onsite photovoltaic power system.
In most articles that extend the view on energy in the production environment towards supply orientation, an energy supply-demand balance is modeled with one or more constraints. Basically, this balance ensures that the sum of the demanded energy of every energy consumer is not higher than the sum of the energy supply of every energy source (whereby the standard energy source is the macrogrid/the procurement from energy suppliers).
Among the analyzed articles, we determined different specifications of this supply-demand balance. Regarding the further utilization of unused energy, on the one hand, one can differentiate between approaches that allow the feed-in of energy surplus (back) to the grid and those that include no feed-in possibility. For example, Ref. [87] minimized energy-related costs and makespan in scheduling in an iron-steel plant. The amount of self-generated electricity not demanded in production was fed into the grid by selling the surplus to electricity providers. Similarly, in [76], both the self-generated energy surplus and electricity purchased through long-term and short-term contracts could be fed into the grid while energy costs and production costs were minimized. In addition, in the lot-sizing and scheduling approach of [88], the feed-in possibility of energy surplus was assumed, and the costs for setup, inventory, production, and energy were minimized. However, the amount of energy that could be fed into the power grid per period was limited to a certain value. Ref. [89] also allowed the feed-in of unused energy to the grid but limited the selling amount by a threshold. In contrast, for example, Ref. [90] presented a scheduling model in which no feed-in of energy surplus was possible. With this in mind, the authors restricted the output of the considered onsite combined heat and power (CHP) system to the energy demanded in production.
On the other hand, several articles took energy storage systems (ESSs) into account to further utilize unused energy. With these, energy surplus can be stored in the ESSs and used later on. So, in periods with a higher energy supply than demanded, the ESS serves as an energy consumer and is charged with the unused energy. When charged, the ESS operates as a source of energy. In this respect, Ref. [91] described a job shop scheduling model to maximize profit, including energy costs. They discussed different energy pricing schemes and considered renewable onsite generation with no feed-in possibility, but with batteries as an energy storage system. Ref. [83] assumed a microgrid consisting of renewable energy sources, a rechargeable battery, and the possibility to procure electricity from the grid. In a lot-sizing and scheduling approach, the required energy for production was provided by supply through renewable energy, the battery, and procurement from the macrogrid, while the latter was to be kept minimal.
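A stylized per-period balance covering all of these components might read as follows; the notation is ours, and individual models use only subsets of these terms:

```latex
\sum_{m} d_{m,t} \;=\; g_t + r_t + \delta_t - \chi_t - f_t \quad \forall t
% d_{m,t}: demand of consumer m; g_t: grid procurement; r_t: onsite generation;
% \delta_t / \chi_t: ESS discharge / charge; f_t: feed-in to the grid.
% Dropping \delta_t and \chi_t, or f_t, yields the variants without an energy
% storage system or without a feed-in possibility described above.
```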
In summary, onsite generation with and without a feed-in possibility, as well as energy storage systems, can be part of the energy supply-demand balance. We state these components, OSG and ESSs, as characteristics with respect to energy orientation in production planning and discuss further details in Section 4.2.
Regarding energy procurement from the macrogrid while other energy sources exist, in some production planning models, the amount of energy that can be drawn from the grid is limited. For example, [89] considered a total consumption threshold that limited the amount of energy bought from the energy supplier. Similarly, in a number of production planning models, a limitation of energy supply was taken into account due to facility design issues or regulations by the energy supply side. Thereby, the limitation could apply in every period with a time-variable limitation value (e.g., [92]) or had to be kept only in some periods along the planning horizon. In this context, Ref. [93] addressed the challenges that resulted from a governmental blackout policy. Due to environmental concerns, the governmental electricity supply to manufacturing companies was interrupted on some days of the week. To overcome this, companies relied on onsite energy generation with higher costs and pollution. The authors presented a scheduling approach that minimized energy consumption, energy costs, and tardiness with respect to the limited energy supply by the macrogrid. Refs. [94,95] combined such energy supply limits with load management and introduced maximum energy demand values for each energy source. Based on these, the contracted load curve was calculated, and load deviation resulted in penalty costs.
With an optimal choice of energy source contracts, the energy costs, as well as costs for setup and inventory, were minimized in lot sizing.
As a further constraint, the consideration of a minimum share of renewable energy in the overall production-related energy consumption was identified, as in [96]. In their article, the authors dealt with an aggregate production planning problem and introduced a "green energy coefficient ρ" for each plant. With this criterion, it was ensured that at least ρ% of total energy consumption was drawn from renewable energy sources, while energy-related costs and costs for production, inventory, backorder, and transportation were minimized.
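In the spirit of [96] (the notation is ours), such a constraint can be written as:

```latex
\sum_t E^{\mathrm{RES}}_t \;\ge\; \rho \sum_t E_t
% At least a share \rho of the total energy consumption must be drawn from
% renewable energy sources.
```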
Economic Consideration of Supply Orientation
As mentioned above, the key topics and their specifications are not disjoint: 39 articles integrated certain aspects regarding the energy supply side in monetary terms, out of which 18 approaches considered ecological matters as well. Nonetheless, the approaches analyzed with regard to the third key topic largely pursued economic objectives, as described in the following.
In contrast to the economic consideration of energy consumption and load management (the first and second key topics), in approaches with onsite generation combined with external procurement through the macrogrid, often not the entire amount of consumed energy is charged. Instead, direct costs for energy consumption and energy demand only relate to external procurement, while the usage of onsite-generated energy is not associated with direct costs. For example, [97] proposed a flow shop scheduling approach with a grid-connected onsite wind turbine. In this approach, only the costs for energy consumption from the macrogrid are minimized, and costs regarding the self-generated energy are not included in the objective function. Similarly, Ref. [98] presented a stochastic optimization model that minimizes total weighted completion time and energy costs for flow shop scheduling with onsite generation. In terms of energy consumption costs, only the energy drawn from the grid is charged.
Additionally, as outlined before, unused energy (either from the onsite energy source or from contracted energy supply) can be fed into the macrogrid, resulting in selling revenue for the manufacturer. In addition to other articles, both Refs. [97,98] assumed this possibility and took into account the revenue from selling unused energy to the grid.
Furthermore, in some articles, it was considered possible to procure energy from the grid to charge the ESSs (e.g., [91,99,100]). By storing energy in periods with low energy prices, energy procurement in periods with higher energy prices could be decreased, leading to a reduction in overall energy costs.
Then again, it was not solely the costs related to energy consumption and demand that were addressed as objectives; the costs for energy generation or energy storage were also included by several articles on EOPP. In total, 27 articles included energy generation costs in their objective functions. Usually, these costs were considered as investment costs, operating costs, and maintenance costs for the generation system. In this manner, for example, Ref. [101] presented an aggregate production planning approach and took into account energy generation costs in terms of investment, operating, and maintenance costs. For multiple production sites, the total costs for energy generation, emissions, production, shipping, inventory holding, and backorder were minimized. Similarly, in [102], master production scheduling was outlined in the context of semiconductor manufacturing in a microgrid. In addition to minimizing energy consumption, energy consumption costs, and energy storage costs, the authors included energy generation costs as costs for equipment and capacity planning, as well as operations and maintenance expenses. In a lot-sizing approach, Ref. [103] integrated the average costs of generating electricity from an organic Rankine cycle that was used to recover waste heat generated in production. These costs included investment costs, operation and maintenance costs, and further operating costs, such as fuel costs and insurance expenses, regarding the OSG system. Aside from energy generation costs, in some EOPP articles, costs for energy storage were also considered. In this respect, Ref. [100] included battery operation costs in flexible job shop scheduling. Assuming a microgrid consisting of macrogrid procurement, OSG, and an ESS, they minimized makespan costs and energy costs in terms of costs for consumption from the macrogrid, generation costs, and storage costs. In addition, the authors of [104] addressed energy storage costs in short-term planning. Within a serial multi-stage production system combined with OSG and ESSs, they integrated the costs of discharging energy from the energy storage system. These costs represent the investment costs for the ESS divided by the overall amount of energy that can be discharged throughout the lifecycle of the ESS, multiplied by the amount of energy discharged along the planning horizon. Ref. [105] took into account energy storage costs in a mid-term master production scheduling approach for multipurpose batch plants. In this approach, capital costs for heat storage vessels are included in the objective function, and profit, calculated as product revenue minus energy consumption costs and energy storage costs, is maximized.
Consequently, integrating onsite energy generation and energy storage, as well as the consideration of multiple energy users in production planning, can lead to cost savings and increased revenue for the manufacturing company.
In the next chapter, we present five classes of frequently found characteristics in energy-oriented production planning. These characteristics are identified through analysis and synthesis of the literature; they represent the assumptions and circumstances by which the different key topics are addressed in production planning and through which energy efficiency can be increased.
Frequently Found Characteristics within Energy-Oriented Production Planning Problems
Energy efficiency in production planning can be addressed by using several optimization objectives and constraints, as outlined in the previous chapter. In addition, different assumptions, circumstances, and capabilities can be found in the analyzed literature that allow an improvement in terms of energy efficiency. Basically, by including these characteristics in production planning approaches, the flexibility for energy-oriented planning is increased, and improvements in energy usage and the resulting costs can be achieved.
An overview of the characteristics and corresponding attributes found is given in Table 4. In the following, these characteristics and their various specifications are described.
Table 4. Specifications of the five derived characteristics within energy-oriented production planning problems.
Various Energy Utilization Factors
One frequently found characteristic is the consideration of various energy utilization factors. In the majority of the approaches to EOPP, the energy of the working production machines is addressed. A total of 170 of the examined research articles solely take the processing energy into account; 172 approaches address processing energy together with other energy states.
Several works argue that processing energy can be neglected in the optimization, since it is unaffected by production plans as long as the power level and process durations are assumed to be fixed values for each operation (see, for example, [106][107][108][109]). With this in mind, multiple articles depart from the simplified assumption of constant energy utilization and operation time in job processing (i.e., constant power demand represented as one energy block). One approach in this regard is the consideration of variations in an operation's power profile, as in [33,50,110]. In order to provide more realistic modeling, all three articles divide an operation's power profile into two blocks with a power peak at the beginning of each process and a lower demand for the remaining part of the processing time. For example, in the job shop scheduling approach in [110], this variable power demand in job processing allows operations to be scheduled simultaneously to minimize makespan while still maintaining a power demand threshold. The authors argue that if, instead, a constant power demand equal to the peak value were assumed for the whole operation, the energy demand threshold would not make it possible to schedule operations earlier.
In other works, deterioration effects regarding machine lifetime and machine reliability are considered and set in relation to processing energy. With increasing machine age and machine failures, a machine exhibits a higher energy demand during processing or longer processing times, leading to higher energy consumption (e.g., [41,111-113]). Closely related to machine reliability, maintenance scheduling is also considered in some of the analyzed EOPP approaches. In this manner, Ref. [41] presented a single-machine scheduling problem. While taking into account an increased energy consumption rate due to decreased machine reliability, tardiness costs and energy costs were minimized by production scheduling combined with preventive maintenance. Other articles that address maintenance in the context of energy-oriented production planning include maintenance-related energy consumption and minimize total energy consumption (such as in [114,115]) or energy costs (such as in [28]) for both production and maintenance.

Furthermore, with respect to processing energy, speed-scaling strategies are included in 88 articles. Basically, these approaches assume more than one possible processing speed or rate in production, and each speed results in a different energy demand or energy consumption. Thereby, a higher speed goes along with higher energy demand and shorter processing time. In contrast to articles that discuss machine deterioration effects, in speed scaling, the production speed represents a decision variable, and an optimal speed can be chosen for job processing.

In the analyzed articles, speed scaling is included in different forms. Often, a discrete set of processing speeds for each machine is assumed, and the processing speed cannot be changed while a job is executed on a machine (e.g., [116][117][118]). Another form of speed scaling is temperature adjustment in furnaces, as in [119]. The authors consider a furnace in glass ceramization that can be set to different temperatures. A high temperature level results in a shorter processing time, while a lower temperature increases the processing duration. In a similar way, a variable processing time based on supplied energy is assumed in articles such as [61,62,71,72]. In these, a specific amount of energy has to be provided to the machine. An operation is finished as soon as the machine or the product receives the required amount of energy. By increasing the energy supply rate, the processing time can be reduced. Likewise, we assign different process techniques (e.g., for a single machine, as in [120]), different parameter settings (for example, in additive manufacturing, as in [69]), and different energy types for power supply (e.g., a distinction between renewable and non-renewable energy sources leading to variable machine processing times, as in [86]) to speed scaling, since the corresponding appropriate choice is related to the underlying energy utilization and processing time. In addition, the possibility of carrying out the same task with several machines simultaneously addresses the idea of speed scaling. For example, the authors of [121] presented a multi-objective approach to welding shop scheduling and allowed multiple welders to conduct one operation simultaneously. In job processing, the loading power for each welder is multiplied by the number of welders used, and makespan, noise pollution, and total energy consumption are minimized.
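The core trade-off of speed scaling, lower energy at slow speeds versus a processing deadline, can be illustrated with a small Python sketch; the speed levels, energy values, and the deadline are purely illustrative assumptions of our own:

```python
from itertools import product

# Discrete speed levels as (processing time in h, energy in kWh) per job.
# Values are purely illustrative assumptions.
SPEED_LEVELS = [(4.0, 8.0), (3.0, 10.0), (2.0, 15.0)]  # slow, normal, fast
N_JOBS = 3
DEADLINE = 9.0  # h; jobs run sequentially on a single machine

best = None
for choice in product(range(len(SPEED_LEVELS)), repeat=N_JOBS):
    time = sum(SPEED_LEVELS[c][0] for c in choice)
    energy = sum(SPEED_LEVELS[c][1] for c in choice)
    if time <= DEADLINE and (best is None or energy < best[0]):
        best = (energy, time, choice)

print(best)  # lowest-energy speed assignment that still meets the deadline
```

In this toy instance, running every job at the slowest, most energy-efficient speed would violate the deadline, so the search settles on the normal speed for all three jobs.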
In summary, as long as there is a linear relation between processing speed and power demand, the processing energy consumption remains the same, but the corresponding energy demand can be adjusted through speed selection. Based on this, either energy demand can be reduced, or energy consumption in other machine states is taken into account, which can be minimized by varying the processing time (e.g., idle energy can be decreased through reduced idle time). In contrast, a non-linear relation between production speed and energy consumption or energy consumption costs was considered in several works, such as in [122][123][124]. Under this assumption, processing energy consumption and the associated costs can be reduced by an optimal speed selection as well.
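The underlying arithmetic can be made explicit (notation ours): with a processing load L, speed v, and a linear power demand P(v) = kv, the processing energy is

```latex
e(v) \;=\; P(v) \cdot \frac{L}{v} \;=\; k v \cdot \frac{L}{v} \;=\; kL
% Energy consumption is then independent of v; only the demand level P(v) and
% the processing time L/v shift. With a non-linear P(v), e(v) varies with v,
% and speed selection also affects consumption itself.
```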
However, in addition to the optimization potential associated with speed scaling in production planning, several works address limitations that go along with this possibility of varying energy demand and processing times. With regard to machine lifetime, Ref. [115] assumed a linear relationship between production speed and machine deterioration: A higher speed results in a faster machine deterioration. Based on that, speed scaling is combined with maintenance scheduling, and total weighted tardiness and energy consumption are minimized in a job shop environment. Similarly, due to potential efficiency breakdowns and corresponding costs, Ref. [125] included the minimization of production rate changes in their objective function. In single-machine scheduling, Ref. [126] took into account earliness, tardiness, and energy consumption. In addition, they minimized penalty costs for compression and expansion, which relate to processing time reduction (compression) and increase (expansion) through speed scaling.
Aside from processing energy, the analyzed articles cover a wide range of different settings in which energy usage occurs instead of or combined with job processing. For example, the authors of [127] pointed out a "massive idle energy waste" in production, which reveals potential for improvement by taking into account energy related to idle time. Assuming constant energy demand and machining time during processing, several articles focused solely on the minimization of non-processing energy and addressed idle energy (e.g., [106,108,128]). In contrast to this, some authors argued that idle energy can be neglected, since it is trivial compared to the overall energy consumption (e.g., [129]). Then again, production planning approaches were presented that did not allow idle time and, therefore, did not address idle energy; for example, the "no-idle flow shop scheduling" described in [130]. However, in 164 out of 375 articles, energy usage associated with idle time was considered in production planning, mostly stated as idle energy or standby energy.

The reduction of idle energy was achieved either by higher machine utilization along with reduced idle time (e.g., [107]) or by powering down the machines during idle periods (e.g., [131][132][133]). The latter, the power-down strategy, was outlined in different forms in the analyzed literature. For example, the authors of [134] assumed different idle states for production machines: a hot idle mode with a high energy demand and a cold idle mode with a lower energy demand. A machine in the hot idle mode is able to start processing directly. When in the cold idle mode, warm-up time is needed until the machine can begin job processing. This trade-off between idle energy and transition time for state changes is addressed and optimized with regard to flow shop scheduling. In this respect, different idle states combined with different energy rates were considered as "energy hibernation states" in [18]. Similarly, this power-down strategy was assumed in several articles, and energy utilization is reduced by turning production machines off and on. Thereby, the energy for transition changes (i.e., turning a machine off, turning a machine on) was also taken into account in some approaches (e.g., [135][136][137]), while other researchers argued that this energy can be neglected (e.g., [60,63,64]). As soon as the idle time exceeds a breakeven duration (at least equal to the sum of all necessary transition times) or the idle energy is greater than the energy needed for turning off and restarting, the machine is powered down rather than remaining in idle mode.

Related to this, in [27], the power-down strategy depends on the buffer content level. A machine is powered down when either starved (due to an empty buffer) or blocked (by a full buffer). Both turning the machine off and a power-saving idle mode were integrated in [44]. By considering cold shutdowns (complete shutdown of the machine with high energy savings but high re-setup costs) and hot shutdowns (partial shutdown with lower energy savings and lower re-setup costs) in a parallel-machine environment, the presented scheduling approach minimizes total costs, including energy costs. In addition to energy savings, the power-down strategy can also offer benefits regarding social sustainability: With respect to energy consumption and noise pollution in flow shop scheduling, Ref. [132] integrated this possibility as a noise reduction strategy as well.
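The breakeven logic behind this power-down decision can be condensed into a few lines of Python; the parameter values below are illustrative assumptions of our own, not data from the cited studies:

```python
# Power-down decision for an idle interval: remain idle vs. switch off and on.
# All parameter values are illustrative assumptions.
P_IDLE = 2.0            # kW, power demand in idle mode
E_OFF, E_ON = 0.5, 3.5  # kWh, energy needed for shutdown and restart
T_OFF, T_ON = 0.1, 0.4  # h, transition times for shutdown and restart

def power_down_pays_off(idle_time_h):
    """True if switching off saves energy and fits into the idle interval."""
    breakeven_h = (E_OFF + E_ON) / P_IDLE  # idle energy equals switching energy
    return idle_time_h >= max(breakeven_h, T_OFF + T_ON)

print(power_down_pays_off(1.0))  # False: 2 kWh idle energy < 4 kWh switching
print(power_down_pays_off(3.0))  # True:  6 kWh idle energy > 4 kWh switching
```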
Although the power-down strategy was addressed in many approaches to EOPP, some articles stated that this method of powering down production machines as an energy-saving option is not suitable for every shopfloor without restrictions. Ref. [134] specified that, besides the necessary warm-up time after machine shutdowns and the additional energy consumption for machine state transitions, frequent switching may degrade machine lifetime. With respect to machine deterioration, a maximum allowable number of times that a machine is switched off and on is included in some research approaches that integrate power-down strategies, e.g., in [108,138]. Likewise, some of these works took the number of machine state switches into account in the objective function and minimized it. For example, the authors of [139] presented a flexible job shop scheduling approach that included a power-down strategy. In this, the total number of times that the machines were turned off/on, as well as makespan and energy consumption, was minimized.
Other works addressed energy utilization associated with machine setup (e.g., [140][141][142]). In this sense, the authors of [143] pointed out that production planning should also consider setup energy due to its significance in practice. Ref. [98] stated that the literature on energy-cost-aware scheduling had so far ignored sequence-dependent setups and the corresponding energy consumption. Therefore, the authors presented a flow shop scheduling model that took into account the power demand related to machine setup in addition to processing and idle energy.
As a further energy-consuming machine state, loading and unloading activities were considered in [144,145]. Regarding the energy balance of a heating furnace, Ref. [144] included the energy consumption that occurred during furnace door opening. In [145], an electrical load for the idle, operation, and basic states was assumed, whereby "basic" refers to workpiece loading and unloading, positioning, and fixing.

In a similar way, some articles took into account energy utilization for storage in terms of heat preservation and influenced these processes, along with the associated energy use, through production planning. For example, the authors of [61,71,72] included "holding energy", which equals the energy needed for holding the temperature in induction furnaces until the finished job (i.e., melted metal) is removed. Comparable approaches can be found in [16,23,146]. Ref. [147] outlined that energy loss occurs when high-temperature products undergo a temperature drop during non-processing time, leading to an increase in total energy consumption in production. They addressed this problem in production scheduling and minimized this extra energy consumption due to temperature drops, as well as earliness and tardiness penalties. To avoid such cooling effects in transportation between production stages, Ref. [68] included transfer time constraints. As a result, quality losses and expensive reheating were prevented. As already described in Section 4.1, Ref. [24] solely considered this energy state and minimized waiting times between two production stages to reduce energy consumption. Similarly to storage energy consumption, in research articles such as [148,149], energy utilization for the "occupied" machine state was included. A machine is occupied when an operation is finished, but the finished part cannot be moved on to the next machine because that machine is not yet available. A further variant of storage energy was considered in articles that integrated cold stores. For example, in [150], the total energy consumption due to cold storage activity throughout the planning horizon was minimized in terms of operational costs.
In addition to energy utilization in different production states and inventory holding, energy related to the transportation phase was taken into account in several articles. For instance, the authors of [151] outlined that energy consumption in the transportation phase causes a significant share of the total energy consumption in production, especially in heavy-duty industrial manufacturing. Therefore, the authors considered the energy consumption of crane transportation in job shop scheduling and differentiated between four states in which the crane demands energy: unloaded and standby, unloaded and operating, loaded and standby, and loaded and operating. Ref. [111] included energy consumption for job transportation between different production machines by a forklift. In [121], transportation energy depended on the power demand of the transporter and the task-specific transportation time between production stations and the warehouse. Other articles, such as [108,138], addressed the energy consumption of a transmission belt that transported the jobs between the processing machines. Thereby, speed scaling was assumed for this transmission belt: The transmission speed could be chosen from a set of speeds. Similarly to speed scaling regarding processing energy, a higher speed results in higher energy consumption but shorter transportation time, and vice versa.
In addition to energy utilization that is directly related to production and transportation, indirect energy usage was also included in several production planning approaches. On the one hand, articles that included such indirect energy in production planning took into account auxiliary energy, e.g., energy for HVAC (heating, ventilation, air conditioning) systems and lighting. In this context, Ref. [43] stated that the two largest energy consumers in a manufacturing plant are the shopfloor and the HVAC system. Production machines and HVAC were scheduled jointly in some approaches. Refs. [60,152], for example, considered manufacturing operations as a heat source, and therefore coordinated production scheduling and HVAC scheduling with each other. Other articles assumed a relationship between auxiliary energy and makespan (e.g., [111,121,153,154]) and included auxiliary energy, such as lighting and HVAC, as a constant energy demand multiplied by makespan. Thus, by reducing the makespan, auxiliary energy was decreased. In contrast to this, several articles took into account auxiliary energy as a constant value that could not be influenced by production planning (e.g., [30,155,156]). However, this energy was considered in decision making, for example, regarding a minimization of total energy consumption costs and energy demand costs, as in [157].
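In its simplest form (our notation), this makespan-dependent modeling of auxiliary energy reads:

```latex
E_{\mathrm{aux}} = P_{\mathrm{aux}} \cdot C_{\max}
% with a constant auxiliary power demand P_aux (e.g., for HVAC and lighting)
% and makespan C_max; reducing the makespan reduces auxiliary energy linearly.
```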
On the other hand, in addition to indirect energy users within production or within a manufacturing company, some articles considered different energy users within a microgrid and took into account these consumers in production planning. In these research approaches, energy usage that is not related to production was additionally included. For example, in [158,159], a microgrid consisting of manufacturing factories and residential and commercial buildings was assumed. The energy demand from residential and commercial buildings is non-shiftable, but influences the electricity price, which is determined by energy supply and demand in the microgrid. Therefore, the manufacturer takes into account the load from residential and commercial users while minimizing energy costs through flow shop scheduling. Similarly, Ref. [160] presented a scheduling problem in a microgrid and incorporated onsite energy generation, energy storage systems, macrogrid procurement, and different industrial, residential, and commercial energy users. Based on these parts within the microgrid, an energy demand threshold has to be met, and energy costs are minimized. By taking into account the overall energy consumption or demand, price mechanisms in terms of energy supply and demand are given more attention, and the associated energy costs are minimized (e.g., [158][159][160][161]).
Consequently, by addressing energy usage in various settings (different machine states, production-related activities, or microgrids with other energy consumers), the flexibility to improve energy efficiency through production planning can be increased. This allows the minimization of energy consumption, energy demand, or the associated costs through appropriate production planning.
Alternative Production Resources
Similarly to speed scaling with respect to processing energy, a second characteristic commonly found in articles on energy-oriented production planning involves differences in energy usage for processing the same job by assuming alternative production resources. Basically, the same operation can be performed on more than one machine, and the processing time and energy demand depend on the chosen production resource. Taking such differences between parallel production resources into account is more realistic than assuming identical parallel machines or factories (see [142]). Typically, both old and new machines are part of the shopfloor and differ in operating speed and energy utilization (see [162]). Furthermore, Refs. [127,163] described that even machines of the same type and size or with the same process parameters can vary significantly in energy usage, e.g., by up to 50%, as outlined in [163]. In addition to processing energy, other energy utilization factors can vary as well (e.g., setup energy in [15]) but are rather rarely considered. Through appropriate planning, production quantities and jobs are assigned to the most energy-efficient resource, leading to reduced energy usage.
We identified this characteristic of alternative production resources in 96 articles on EOPP, with minor differences in appearance. Note that several of the analyzed articles did not describe the underlying production layout in detail. We therefore cannot verify, but do assume, that more than 96 approaches considered heterogeneous alternative production resources in production planning. The most frequently found variant was the assumption of parallel machines of the same kind within at least one production stage. This was taken into account in parallel-machine scheduling approaches (e.g., [164][165][166]), in flow shop scheduling (e.g., [17,129]), and in articles on job shop scheduling (e.g., [167][168][169]). Thereby, non-identical parallel machines can be included in one single production stage (e.g., in a steel manufacturing plant, as in [68]), in more than one production stage (e.g., in a flow shop environment, as in [143]), or in every production stage in job shop (e.g., [153]) or flow shop scheduling (e.g., [170]). In addition to considering heterogeneous parallel machines within one production site, some articles addressed multiple factories, whereby production resources differed between the production sites. In this manner, for example, the authors of [158,171] presented scheduling approaches in which jobs could be processed in different factories, each with different production conditions in terms of processing time and energy utilization.
Heat Integration
As a further characteristic within energy-oriented production planning, we identified the possibility for heat integration. In total, we found 16 articles in the analyzed literature that included the possibility of heat integration in production planning. As outlined in [172], the main idea of heat integration lies in exchanging heat energy between hot and cold processes. By using hot process streams to heat up other processes or using cold process streams that cool down other processes, the consumption of external utilities (such as steam and cooling water) can be lowered, resulting in energy savings. In addition to direct heat integration between two tasks, heat exchange can also be executed indirectly by including thermal storages. Heat is exchanged between tasks and storages, and an exchange between hot and cold tasks can be performed in different time intervals. A further advantage of heat integration lies in the reduction of processing time, since heating and cooling times can be eliminated (see [173]).
Articles that allowed heat integration in production planning did this by modeling one or more heat exchange constraints. Similarly to an energy supply-demand balance, the input energy and output energy of every task or production unit were linked and balanced, e.g., in terms of temperature. Regarding indirect heat integration, thermal storages were also taken into account in these constraints.
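A stylized version of such a heat exchange balance (our notation, not a reproduction of any cited model) links the heat absorbed by cold tasks to the heat recovered from hot tasks, thermal storage, and external utility:

```latex
\sum_{c} q^{\mathrm{cold}}_{c,t} \;=\; \sum_{h} q^{\mathrm{hot}}_{h,t} + s^{-}_t - s^{+}_t + u_t \quad \forall t
% q: exchanged heat; s^- / s^+: thermal storage discharge / charge;
% u_t: external utility. Minimizing \sum_t u_t reduces the consumption of
% external utilities; temperature feasibility (hot streams must be hotter
% than the cold targets they serve) imposes additional constraints.
```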
Typically, heat integration is addressed in approaches related to the process industry, especially with respect to chemical processes. For example, Ref. [174] presented a scheduling approach for an agrochemical plant and considered indirect heat exchange between production units through storage tanks in order to reduce costs for external utility. As outlined in the previous chapter, the article from [105] considered energy in mid-term capacity planning through energy consumption costs and capital costs for energy storage systems. In this manner, heat integration was addressed by linking heat exchange (between different tasks and between tasks and storages) with capacity planning for heat storage vessels. Other works, such as [175,176], included direct heat integration in production and, due to savings through heat integration, optimized profit-related objectives, as well as energy consumption.
In [177], heat integration within a process plant was combined with onsite energy generation, a specification of what we identify as a further characteristic of energy-oriented production planning and describe in the following: multiple energy sources in production.
Multiple Energy Sources
In contrast to articles that assumed one single energy source (e.g., energy procurement solely from the energy supplier), 41 articles in the analyzed literature took into account different energy sources. A large share of these (40) addressed onsite energy generation as a second energy procurement option, such as in [177]. In particular, Ref. [177] considered surplus heat arising in production that was not integrated into other processes through heat integration. This heat could be used to generate steam. As a result, energy efficiency was increased and costs related to energy consumption were reduced.
Generally, onsite energy generation can be divided into schedulable and non-schedulable generation, and articles on EOPP include schedulable, non-schedulable, or both schedulable and non-schedulable energy generation. In articles that address the former type, energy generation can be controlled through appropriate planning and coordinated with production. Examples for schedulable OSG sources are diesel generators (such as in [93,99]), natural gas power plants (such as in [99]), boilers and CHP engines (such as in [178]), or fuel cells (such as in [100]). In terms of non-schedulable onsite energy generation, typically, renewable energy sources are considered in production planning. In this manner, for example, Ref. [179] presented a flow shop scheduling model to minimize energy consumption costs and included onsite photovoltaic systems combined with ESSs and external energy procurement in decision making. In addition to the integration of solar energy in production planning (e.g., [66,89,91,180]), other articles on EOPP also assumed wind energy as an onsite energy source (e.g., [96,97,181]). Similarly to [177], the authors of [99] considered waste heat for energy generation in their approach. As further non-schedulable energy sources, the authors included solar and wind energy. In addition, they took into account schedulable diesel generators, energy storage systems, and an energy-selling option, and they minimized energy costs regarding consumption and generation.
With onsite energy sources, a manufacturing company is able to minimize energy procurement from the grid and, in the case of renewable energy sources, to reduce energy-related emissions as well. As outlined in Section 4.1, costs for onsite energy generation and feed-in constraints for unused energy were addressed in several production planning models. As a result, the utilization of energy self-generation may be limited. In addition to such costs and trading limitations, some approaches considered further generation constraints regarding the output of energy generation systems. For example, Ref. [77] assumed that onsite generation cannot exceed the minimum of the given capacity of the onsite generation system and the energy demand of the manufacturing system. In addition to such upper limits, Ref. [182] considered coal-fired thermal power generation in a scheduling context and included a minimum power output per period. Similarly, Ref. [87] considered minimum and maximum values for the generator output due to technological constraints of the self-generation equipment.
Regarding the utilization of such onsite energy sources, the hurdles associated with the different types of energy must also be taken into account. On the one hand, conventional energy sources, such as natural gas and coal, cause harmful emissions. On the other hand, the further expansion of renewable energies also faces challenges, such as acceptance in local communities or with respect to efficient installation [183][184][185].
In total, we only found one article assigned to this class of characteristics that did not include onsite generation, but different external procurement options for energy. In a job shop scheduling environment, Ref. [186] considered different energy sources with source-specific emission factors and presented a multi-objective optimization approach that aimed to minimize energy-related emissions among other conflicting economic, environmental, and social objectives.
Both onsite generation and different procurement options were addressed in the approach by [76]. The authors considered a schedulable energy generation system and included long-term (base load) contracts, as well as short-term (time of use and day ahead) purchasing of electric energy. Their model minimized production-related and energy-related costs through combined production scheduling and energy procurement optimization.
Energy Storage Systems
In 27 of the analyzed articles, energy storage systems were assumed. As outlined in Section 4.1.3, energy storage systems can serve as both the energy supply side and energy demand side. Thus, they provide the flexibility to utilize procured or generated energy in a different time period. Based on this, we identify energy storage systems as a further characteristic with considerable potential for energy-efficient production planning. Apart from various cost rates related to ESSs (i.e., investment costs and wear costs), we derived four main attributes of how ESSs are modeled in the literature: storage capacity, charging (discharging) efficiency, charging (discharging) rate, and time of charging (discharging).
In considering ESSs in production planning, several of the analyzed articles took into account storage capacity as an underlying restriction. For instance, Ref. [84] assumed a fixed battery capacity in a single-machine scheduling model. In this, the battery is charged through a renewable energy source and electricity is not procured from the macrogrid as long as the battery has power. Similarly, Ref. [187] included energy storage in a lot-sizing and scheduling problem and considered a microgrid consisting of external energy procurement, onsite energy generation, and an ESS with limited capacity. Due to discharging issues, Ref. [100], for example, not only assumed a maximum storage capacity, but defined a range for the amount of energy stored in the ESS.
Considerations regarding the charging efficiency and charging rates of energy storage systems were also given in several other papers in a similar manner. For example, the scheduling approach in [188] included charging and discharging efficiencies, charging and discharging rates, and storage capacity for the ESS. Basically, efficiency rates of less than 100% indicate energy loss in charging and discharging periods, while finite charging and discharging rates address the fact that the energy storage is not immediately fully charged or discharged. Ref. [188] illustrated these considerations in their case study on steel powder manufacturing, in which both the charging and discharging efficiency were equal to 90%, the charging and discharging rates were 50 kWh within one hour, and the ESS capacity was equal to 300 kWh. In addition, the authors assumed that the ESS could not be charged and discharged in a single period. In the article by [181], energy was provided by the power grid and onsite generation through renewable energy sources, and surplus energy could be stored in an ESS or fed into the grid. In their flow shop scheduling model, they assumed charging and discharging efficiencies equal to 90%, while complete charging or discharging of the ESS took one hour.
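To make these four attributes concrete, the following minimal Python sketch simulates the state of charge of an ESS using the parameter values reported for the case study in [188] (90% charging/discharging efficiency, 50 kWh per period rate, 300 kWh capacity); the function, variable names, and sample schedule are ours and purely illustrative.

```python
# Minimal state-of-charge simulation for an energy storage system (ESS).
# Parameter values follow the case study reported for [188]; the code
# structure and the sample schedule are illustrative assumptions.

CAPACITY = 300.0  # kWh, maximum storage capacity
RATE = 50.0       # kWh per period, maximum charging/discharging amount
ETA_C = 0.9       # charging efficiency (<100% means energy is lost)
ETA_D = 0.9       # discharging efficiency

def simulate_soc(schedule, soc0=0.0):
    """schedule: list of (charge_kwh, discharge_kwh) tuples per period.
    Returns the state of charge (SoC) after each period."""
    soc, trace = soc0, []
    for t, (c, d) in enumerate(schedule):
        # the ESS cannot be charged and discharged in one period ([188])
        assert c == 0.0 or d == 0.0, f"period {t}: simultaneous charge/discharge"
        assert 0.0 <= c <= RATE and 0.0 <= d <= RATE, f"period {t}: rate violated"
        # only ETA_C of the charged energy is stored; serving d kWh of
        # demand drains d / ETA_D from storage
        soc = soc + ETA_C * c - d / ETA_D
        assert 0.0 <= soc <= CAPACITY, f"period {t}: capacity violated"
        trace.append(soc)
    return trace

# Charge for three periods, then serve 40 kWh of demand once:
print(simulate_soc([(50, 0), (50, 0), (50, 0), (0, 40)]))
# -> [45.0, 90.0, 135.0, ~90.6]
```

In an optimization model, the assertions above become linear constraints and the charge/discharge quantities become decision variables.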
Numerical Insights and Further Research Potential
Throughout the previous chapter, we illustrated our classification scheme and presented different specifications and attributes for every class. The large number of research papers and the precise procedure for analyzing the articles allow us to give some numerical insights in the following. Based on these, we state several possible subjects for future studies.
One possible direction for further research lies in an increased integration of energy into mid-term production planning. So far, 31 out of the 375 articles addressed energy in aggregate production planning or master production scheduling. Because mid-term aggregate production planning and master production scheduling set capacity restrictions and production quantities of product types and final products, the succeeding planning levels, i.e., lot sizing and scheduling, are subject to these determinations. By dedicating medium-term production planning to energy issues, the flexibility for energy orientation in short-term planning could be increased. Furthermore, in the 31 existing articles, the energy-related objectives were almost entirely economic (mostly energy consumption costs). Only four approaches took into account energy consumption as an ecologically oriented optimization goal. It would be desirable for future work to depart from a purely economic perspective towards an extension in terms of ecological components. This could be linked to capacity planning for onsite energy generation or energy storage systems in mid-term planning, which is not yet widely practiced. Moreover, a stronger consideration of other characteristics, such as various energy utilization factors or alternative production resources, in the upper planning levels could bring additional benefits in terms of energy efficiency.
In addition to the further integration of energy into mid-term production planning, especially regarding ecological issues, approaches that address energy along multiple planning levels or the entire planning hierarchy should be part of future investigations. Taking into account the interaction of medium-term and short-term planning could enable a better understanding of energy utilization in production planning. In the analyzed literature, 27 approaches to energy-oriented production planning combined different planning levels, out of which 23 jointly pursued lot sizing and scheduling. To the knowledge of the authors, only four articles linked mid-term planning with short-term planning with respect to energy orientation so far, and these are briefly summarized below. Ref. [189] presented a hierarchical production planning approach in which production quantities and capacities were first considered in aggregate production planning. Then, lot sizing and scheduling were outlined for the current planning period. This two-step approach aimed at minimizing inventory holding costs and energy costs. In [161], aggregate production planning was combined with lot sizing and scheduling. Assuming a microgrid consisting of a job shop manufacturing system, schedulable and non-schedulable onsite generation, energy storage, non-manufacturing buildings, electric vehicles, and macrogrid procurement, the system's total operation costs (capital costs, energy consumption costs, energy generation costs, energy-selling revenue, power demand costs, maintenance costs, production costs) and emission costs were minimized. Ref. [120] carried out master production scheduling and short-term scheduling in a single machine environment. Quantities for inventory, backlog, and production, as well as the production sequences, were determined in order to minimize the resulting operation (inventory/backlog and change-over costs) and energy costs. Thereby, different process techniques, as a form of speed scaling, allowed a reduction of energy consumption costs. In the work from [102], the sizing of a renewable microgrid in terms of master production scheduling was linked to flow shop scheduling. In performing both investment decisions on microgrid capacity and production scheduling, the minimization of costs for energy consumption and energy generation was proposed.
As a third area for further research, the combination of different dimensions regarding energy utilization (i.e., key topics) and different circumstances for improved energy efficiency (i.e., characteristics) should be explored in more detail. Possible economies of scale, as well as competing dependencies, could occur when taking into account several key topics or characteristics of EOPP at once. An example of the latter is the consideration of energy price fluctuations and the simultaneous consideration of a power demand threshold, as presented in [190] as the energy cost dilemma. In a similar way, future studies could analyze possible interdependencies among the different specifications of energy-oriented production planning and provide valuable findings for an efficient consideration of energy in production.
In accordance with [13], the authors see additional research potential in the growing consideration of onsite energy generation and storage systems in production planning as well. Among the 375 articles, 48 articles included OSG and/or ESSs, of which 39 articles presented lot-sizing or scheduling approaches. With regard to the above-mentioned aspects of further research fields, the same applies to the potential for integrating supply-side considerations into mid-term planning.
In general, the authors assume that research approaches belonging to key topic 3 (in other words, energy-supply-oriented production planning models) will have to, and will, receive more interest in future studies due to the ongoing shift toward renewable energies and the awareness of resource scarcity. In particular, it is our hope that the economic consideration of energy will not remain the sole focus of future research on EOPP at the expense of ecological improvements.
Conclusions
In this literature review, we were able to derive three key topics and five frequently found characteristics within energy-oriented hierarchical production planning. For this, we analyzed and synthesized 375 research articles published between 1983 and 2021 that take into account energy consumption, load management, and energy supply orientation along four different planning levels. Across the reviewed literature, the consideration of various energy utilization factors, alternative production resources, heat integration, multiple energy sources, and energy storage systems was identified as a frequent model property that enables improvements regarding energy efficiency. Based on this two-dimensional classification scheme, a large body of the existing literature on energy-oriented production planning was systematically structured. Throughout the article, we linked our findings with 171 research papers and described different specifications related to each class in detail. In addition, we outlined four main areas for future research. By providing this work to the scientific community, we intend to foster further research in the context of production planning and energy efficiency. For practice-oriented readers, this article can give a sufficient summary of objectives and opportunities to improve energy utilization through production planning, and it will hopefully serve as a further step towards green production.
Data Availability Statement:
The dataset analyzed in this study can be found here: www.oth-regensburg.de/sustainable-production-planning (see [9]). The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "8b3f8f26fc59c278fbb20eb043747e24ad23ec59",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/23/13317/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7ea02f94107aebf740360edbe9b6eebb59c8eb0e",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Multidimensional insights into the repeated electromagnetic field stimulation and biosystems interaction in aging and age-related diseases
We provide a multidimensional sequence of events that describe the electromagnetic field (EMF) stimulation and biological system interaction. We describe this process from the quantum to the molecular, cellular, and organismal levels. We hypothesized that the sequence of events of these interactions starts with the oscillatory effect of the repeated electromagnetic stimulation (REMFS). These oscillations affect the interfacial water of an RNA causing changes at the quantum and molecular levels that release protons by quantum tunneling. Then protonation of RNA produces conformational changes that allow it to bind and activate Heat Shock Transcription Factor 1 (HSF1). Activated HSF1 binds to the DNA expressing chaperones that help regulate autophagy and degradation of abnormal proteins. This action helps to prevent and treat diseases such as Alzheimer’s and Parkinson’s disease (PD) by increasing clearance of pathologic proteins. This framework is based on multiple mathematical models, computer simulations, biophysical experiments, and cellular and animal studies. Results of the literature review and our research point towards the capacity of REMFS to manipulate various networks altered in aging (Reale et al. PloS one 9, e104973, 2014), including delay of cellular senescence (Perez et al. 2008, Exp Gerontol 43, 307-316) and reduction in levels of amyloid-β peptides (Aβ) (Perez et al. 2021, Sci Rep 11, 621). Results of these experiments using REMFS at low frequencies can be applied to the treatment of patients with age-related diseases. The use of EMF as a non-invasive therapeutic modality for Alzheimer’s disease, specifically, holds promise. It is also necessary to consider the complicated and interconnected genetic and epigenetic effects of the REMFS-biological system’s interaction while avoiding any possible adverse effects.
REMFS in current literature
The massive proliferation of EMF devices has awakened great curiosity to understand the mechanism of their interaction with biological systems. Recently, numerous researchers have evaluated this interaction [1][2][3][4]. Their results suggest that specific conditions of experimental and clinical RF exposure may lead to multi-target effects through activation of several biological pathways [5]. Relevant effects of EMF exposures on the pathways known to be involved in the aging process have been identified by in vitro studies (Table 1) and in vivo studies (Table 2). These studies have looked at specific techniques involving various field strengths and exposure times (dosimetry). The results listed in Tables 1 and 2 demonstrate that the effects of biological mechanisms influenced by REMFS are likely extensive and may act in multiple distinct pathways. These data support possible therapeutic implications of REMFS on the aging process and age-related diseases, such as Late Onset Alzheimer's disease (LOAD), which is also supported by the results of our prior experiments and theories [39]. In later sections of this paper, we will specify these potential mechanisms.
The consistencies and inconsistencies between the in vivo and in vitro data
As DNA damage is frequently a prerequisite for cancerous diseases, reviews on this topic provide an experimental body of evidence on the effect of EMF on genetic material. Diab's studies on the matter showed contradictory data in in vivo and in vitro experiments [40]. Several studies of both in vivo and in vitro samples showed a detrimental effect on DNA when exposed to EMF. Some reports showed no injury to DNA in either in vivo or in vitro experiments. Results from other experiments were inconclusive [40]. These conflicting findings were probably caused by variability in the EMF generators, different experimental methods including time of exposure, and characteristics of the specific samples (age, genetic differences, size, tissue penetration, anatomical differences [41], etc.). It is important to understand that in vitro results cannot easily be extrapolated to in vivo results. This is because the energy absorbed by an object depends on the way the EMF penetrates the object [42]. The physiological characteristics of in vivo specimens differ significantly from those of in vitro cells, and exposure to the same external field would result in an entirely different internal field. Hence, it becomes important to determine what external fields would produce similar internal fields inside both in vitro and in vivo specimens before reaching any conclusions about the biological effects of a specific EMF frequency or field intensity.
Another concern is tissue penetration, which is inversely related to the frequency of the EMF. With in vitro specimens there is no need to calculate tissue penetration, but for in vivo samples the tissue penetration calculations are complex due to the presence of multiple tissue layers, the geometry of the tissues, and their specific dielectric properties (conductivity and permittivity). As the frequency is increased, the penetration depths of the tissue layers change such that the largest part of the incident energy may be transmitted at one frequency and absorbed at another. Sufficiently high frequencies result in a small penetration depth, i.e., only superficial exposure.
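As a rough illustration of this frequency dependence, the plane-wave penetration depth in a lossy dielectric can be computed from the standard attenuation constant. In the Python sketch below, the tissue parameter values are illustrative placeholders (real dielectric properties vary strongly with tissue type and frequency) and are not taken from the cited studies.

```python
import math

def penetration_depth(freq_hz, eps_r, sigma):
    """Plane-wave penetration depth (1/e field depth) in a lossy
    dielectric with relative permittivity eps_r and conductivity
    sigma (S/m), using the standard attenuation constant."""
    eps0 = 8.854e-12          # vacuum permittivity, F/m
    mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
    omega = 2 * math.pi * freq_hz
    loss_tangent = sigma / (omega * eps_r * eps0)
    alpha = omega * math.sqrt(mu0 * eps_r * eps0 / 2.0) \
            * math.sqrt(math.sqrt(1.0 + loss_tangent**2) - 1.0)
    return 1.0 / alpha  # meters

# Fixed, tissue-like placeholder parameters (eps_r = 60, sigma = 0.8 S/m);
# with them, the computed depth shrinks as the frequency rises.
for f in (50e6, 915e6, 2.45e9):
    print(f"{f/1e6:8.0f} MHz -> penetration depth ~ "
          f"{100 * penetration_depth(f, 60, 0.8):.1f} cm")
```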
REMFS delays cell death
In our initial REMFS study, we exposed T-cell and lymphoblast cultures to a frequency of 50 MHz and a power of 0.5 W [43]. We found a 20% reduction (p < 0.05) in lactate dehydrogenase (LDH) release over 3 weeks of REMFS treatments. The 30-min REMFS exposures produced the most stable results across treatment times, so this duration was selected as the minimal optimal treatment regimen. We demonstrated a 34% reduction in LDH release in REMFS-treated quiescent T-cells compared to control cells following treatment with 30 min of REMFS for 7 consecutive days (p < 0.01), suggesting a significant protective effect of REMFS. We corroborated the protective effects of REMFS with a Trypan blue exclusion study. Results showed that REMFS decreased T-cell death for 7 days, with maximal benefit using 30 min of daily treatments. These data demonstrated that REMFS of low energy and intensity (50 MHz / 0.5 W) can play a cytoprotective role [43].
REMFS delays cellular senescence
REMFS treatments produced effects on features associated with cellular senescence [43]. We treated knockout (KO) and control mouse fibroblasts at 100% of lifespan completed, or 23 cell population doublings (CPDL), with REMFS at 50 MHz / 0.5 W every 3 days for 14 days. We found that when cells passed from CPDL 3 to 23, they became larger, vacuolated cells with more diverse morphotypes than cells at earlier CPDL. Interestingly, REMFS reversed and delayed the senescent morphology, enlargement, and variation of HSF1+/+ mouse fibroblasts, but not HSF1−/− mouse fibroblasts. These results emphasize the importance of REMFS effects on HSF1. REMFS-treated HSF1+/+ mouse fibroblasts remained smaller in size and more spindle-shaped, with more parallel positioning of the cells. In addition, there were fewer multinucleated cells. We also observed that REMFS prolonged the replicative lifespan of HSF1+/+ murine fibroblasts to 29 CPDL, compared to 23 CPDL in non-REMFS-treated fibroblasts [43].
We also compared the CPDL curves of HSF1+/+ and HSF1-knockout mouse fibroblasts in treated vs. non-treated cultures [43]. Both groups grew at similar rates until CPDL 18, after which REMFS exposure appeared to prevent the decline in cell proliferation rates observed in untreated cells. REMFS-treated fibroblasts demonstrated 138 days of proliferative lifespan, compared to 118 days in non-treated cultures. This represented an increase of 17% in lifespan. Similar to prior experiments, HSF1-knockout cultures did not respond to the treatments, achieving only 23 CPDL with 100 days of replicative lifespan.
REMFS in aging
Several short-term exposure studies have shown that REMFS increases lifespan in mice, worms, and flies. In a recent study, mice had an increased average lifespan when exposed to REMFS with an alternating magnetic field of 100 nT and 60 μT [44]. In another study with a rotating magnetic field (0.2 T, 4 Hz), REMFS exposure slowed the aging process and prolonged the lifespan of C. elegans and of Human Umbilical Vein Endothelial Cells (HUVECs). REMFS also improved activity, reduced pigment accumulation, and delayed paralysis induced by Aβ, as well as increased heat tolerance and oxidative stress resistance [45]. A higher frequency study (10 GHz) extended the lifespan of Basc female fruit flies (Drosophila melanogaster) [46]. In another high-frequency (1-10 THz) study, there was no increase in survival in early life but increased survival in later life [47].
An interesting study of Drosophila melanogaster lifespan showed that population was decreased or increased depending on parameters of the REMFS exposures [48]. There are, however, many other studies that show that prolonged EMF exposures do not increase lifespan in multiple organisms. This is most likely due to the use of different frequencies, powers, times or cell types [49][50][51][52].
REMFS in Alzheimer's disease
Alzheimer's disease (AD) and Lewy body dementia (LBD) usually emerge during aging, when proteostasis quality control is unable to prevent the aggregation of misfolded proteins. AD is characterized by Aβ peptides, amyloid precursor protein (APP) fragments of 39-43 amino acids [53]. Efficacy and safety of REMFS have been demonstrated in transgenic (Tg) AD mouse models in vivo. An initial REMFS study prevented or reversed memory loss in a Tg AD mouse model (AβPPsw) when a pulsed and modulated RF-EMF at 918 MHz with a specific absorption rate (SAR) of 0.25-1.05 W/kg was applied over a 7 to 9 month period [54]. REMFS-exposed Tg mice preserved good cognitive function, whereas control Tg mice showed cognitive decline. Tg mice of advanced age (21-27 months) with daily REMFS exposure for 2 months showed improved memory in the Y-maze task, although not in more complex tasks [36]. The older Tg controls showed high levels of Aβ aggregates, with treated mice showing a 24-30% decrease in Aβ deposits. These data suggest a degradation of Aβ deposits with REMFS exposure. In addition, these long-term treatments were found to be safe (daily for up to 9 months) without any toxic effects on multiple health parameters, including oxidative stress, brain histology, brain heating, damage to DNA, or cancer in peripheral tissues [55].
A higher frequency study (1950 MHz) showed decreased AD pathology in Tg-5xFAD transgenic mice, which overexpress APP, and in wild-type (WT) mice treated with REMFS at 1950 MHz with a SAR of 5 W/kg for 2 h per day, 5 days per week [56]. This long-term exposure to REMFS decreased Aβ plaques, APP, and APP carboxyl-terminal fragments in the brain. REMFS also decreased the expression of β-secretase 1 (BACE1) to prevent inflammation.
Additionally, REMFS reverses cognitive decline in AD mice. When compared to WT mice, five genes implicated in Aβ processing (Tshz2, Gm12695, St3gal1, Isx, and Tll1) were affected in Tg-5xFAD mice treated with REMFS. Specifically, WT mice showed the same genetic profile as non-REMFS-treated Tg mice, while REMFS-treated Tg mice demonstrated different patterns. Therefore, these data suggest that chronic REMFS treatment influences Aβ processing in AD mice, but not in wild-type or Tg controls [56].
Altogether, AD mouse studies and human brain cell studies revealed that REMFS exposures reduce Aβ. REMFS also prevents and decreases brain Aβ aggregation without causing the inflammatory reaction seen in passive immunity treatment trials [57,58]. This represents a potential therapeutic strategy for the treatment of AD patients who already have large amounts of Aβ deposits. Other investigators have demonstrated improved cognitive function accompanying the reduction of Aβ in AD mouse models [36,[54][55][56]. Taken together, these data suggest a potential therapeutic role of REMFS in human diseases, such as LOAD.
Other studies suggested that EMF exposures enhance pathways involved in Aβ degradation through upregulation of the HSF1 pathway [43], the autophagy-lysosome system [26], and the ubiquitin-proteasome system [23,25], as well as through a reduction in β-secretase activity following REMFS, thus producing a protective effect through reduction of Aβ [56]. Furthermore, REMFS also targets multiple aging and cell defense pathways that are involved in AD, including oxidative stress [19], cytoprotection [20], inflammation [27], mitochondrial enhancement, and neuronal activity [55], thereby making REMFS a potential multi-target therapeutic strategy for AD and other age-related diseases.
Low energy challenges
There are two challenges that underlie any explanation of the REMFS and bio-system interaction: (1) the energy in these EMFs is several orders of magnitude lower than k_BT (k_B: the Boltzmann constant, T: room temperature). This leads to information being transmitted via a phenomenon known as the k_BT paradox, in which information is hidden in thermal noise [59]. (2) Classical models of molecular dynamics would hold that the excited state produced by the EMF should promptly dissolve due to the thermal excitations that resume when the EMF exposure is eliminated, which is not observed [60].
REMFS exposure transmits a very low energy that is insufficient to excite electrons and is thereby considered non-ionizing. The photon energy of REMFS at 50 MHz is 2.0678 × 10⁻⁷ eV, at 64 MHz it is 2.64 × 10⁻⁷ eV, and at 915 MHz it is 3.7841 × 10⁻⁶ eV. Additionally, protein conformational changes cannot occur under direct electric field magnitudes lower than 10⁸ V/m [61], and REMFS only produces 16.22 V/m [62]. REMFS energies are incapable of directly causing the dissociation of chemical bonds such as the H-O-H covalent bond of a water molecule (H₂O), because this type of reaction would require 493.4 kJ/mol or 5.1138 eV [63,64], many orders of magnitude more energy.
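The arithmetic behind these photon energies is simply E = hf. The short sketch below (ours, for illustration) reproduces the quoted values and compares them with thermal energy and with the H-O-H bond energy quoted in the text.

```python
# Photon energy E = h * f (h in eV*s), compared with thermal energy
# k_B*T at room temperature and the quoted H-O-H bond energy.
H_EV_S = 4.135667e-15  # Planck's constant, eV*s
KT_EV = 0.0257         # k_B*T at ~298 K, eV
BOND_EV = 5.1138       # H-O-H dissociation energy per bond (from the text)

for label, f_hz in [("50 MHz", 50e6), ("64 MHz", 64e6), ("915 MHz", 915e6)]:
    e_ev = H_EV_S * f_hz
    print(f"{label:>8}: E = {e_ev:.4e} eV | "
          f"k_B*T / E ~ {KT_EV / e_ev:.1e} | "
          f"E_bond / E ~ {BOND_EV / e_ev:.1e}")
```

At 50 MHz this gives E ≈ 2.07 × 10⁻⁷ eV, roughly five orders of magnitude below k_BT, which is the k_BT paradox described above.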
Thus, classical physics is unable to explain the biological responses to REMFS. Nevertheless, quantum physics provides an explanation of how this reaction occurs. Here we consider low energy EMF with frequencies below the THz range. Interestingly, high energy EMF are not able to produce the biological effects of the low energy EMF [65]. In addition, Panagopoulos found that oscillating EMF with frequencies lower than 1.6 × 10⁴ Hz produce biological effects even at very low intensities. Conversely, as the frequency of the EMF increases above 1.6 × 10⁴ Hz, a higher field intensity is required to produce biological effects [66].
One plausible explanation for why high EMF frequencies are less likely to produce biological effects could be the reduced hydrogen bonding seen at higher temperatures, which correlate with higher energies [67]. In an interesting study, THz-exposed cells exhibited some biological responses, such as an increase in heat shock protein expression. The results suggest that the biological effects imposed by THz radiation are primarily thermal in nature [68]. Conversely, the effects of RF and microwaves are primarily non-thermal, suggesting a different mechanism. This difference may arise because the RF and microwave range couples to the rotation of polyatomic molecules, whereas higher frequencies couple to the vibrations of flexible bonds [68].
Another possible explanation is the effect of the REMFS oscillation on the H-bond at the quantum level. Because the REMFS frequency (Hz to GHz) is much slower than the H-bond frequency (74 THz), it induces the H-bond to act as a driven quantum harmonic oscillator under REMFS exposures [69,70] (Eq. 1) in a time-dependent adiabatic perturbation [71]. REMFS is a continuous field exposure (perturbation) that acts slowly enough to allow the quantum system sufficient time for the functional form to adapt [71] (an adiabatic process), and consequently it is able to cause changes in the probability density and amplitude. Under faster excitatory frequencies, the driven harmonic oscillator has no time to follow the excitatory frequency, as in the classical solution [72].
Changes in water depend on EMF properties
EMFs produce their effects through the electric field [67], rather than the magnetic field. Depending on the frequency and intensity of the EMF, it can change water structure through different influences. Consequently, there are some inconsistencies in the reported effects of EMF on the H-bond network (HBN) [73] and on the biological response. De Ninno demonstrated a decreasing coherent population accompanied by an increase of the intermediate population under a high field amplitude (0.15) [74]. Conversely, Shen found stronger polarization and a higher degree of association in water exposed to low-frequency EMF at a low field amplitude (0.15 T) [75]. These data suggest that the effects of EMF on the H-bond and its biological effects are determined by the strength of the external field. The values and distribution of the internal fields depend on the frequency, polarization, field strength, and field distribution of the external fields, the configuration of the tissue, and its dielectric properties [76].
Potential mechanisms
There is no generally accepted mechanism to explain the role of low frequency EMF in biological systems, though multiple mechanisms have been proposed. There is inconsistency amongst these theories, which can be explained by a lack of consideration for varying energy levels, tissues, or the quantum effects of these fields on water molecules. Below are examples of these proposed mechanisms:

1) RF-EMF alters the structure of the water surrounding some biomolecules, which allows water to store and release a greater amount of energy under EMF [77]. The theory is that RF-EMF exposure induces water auto-ionization to produce hydronium, which in turn protonates biomolecules to activate biological pathways. The energy that would be generated by this proposed mechanism is so high that it would cause an increase in temperature that has not been observed in REMFS exposures.

2) Proteins and protein complexes (HSF1-Hsp90) [43], as well as elements of RNA and DNA [78], are EMF-sensitive and can behave as EMF sensors that operate by disruption of their conformation to form secondary structures in response to EMF variations. Structural transitions can uncover or obscure important regions of RNA, such as binding sites, or lead to dissociation of protein complexes, which can release active transcription factors. These changes then affect the translation rate of nearby protein-coding genes to activate biological pathways. This is an unlikely mechanism because we observed that the initial event in our experiments is not DNA activation; heat stress is required to initiate translation.

3) Cells (e.g., neurons) possess the ability to yield constructive interference effects that enhance field intensities at several points [79][80][81], so that an applied EMF could be amplified to produce conformational changes in some proteins, transcription factors, and RNA. This hypothesis also requires very high energy to break bonds and would cause a thermal response.

4) Second-harmonic generation increases the energy of photons whereby water molecules align during EMF exposures [82]. In this mechanism, two photons with the same frequency interact with a nonlinear material to create a new photon with twice the energy of the initial photons. This hypothesis is unlikely to be the mechanism because even the doubled energy of the new photon would still be too low to break any chemical bonds.

5) EMF forces affect electrons in a way that weakens H-bonds. This destabilization can act on H-bonds holding DNA strands together, thereby affecting transcription. The low electron affinity of the bases, previously identified in electromagnetic response elements (EMREs), is needed for EM field interaction with DNA. This theory is also less likely because we demonstrated that DNA activation is not the initial event in our experiments.

6) Another hypothesis is that of a resonant frequency as the mechanism of this interaction [83]. However, the effective exposure frequencies differ by factors of billions [78,[84][85][86][87], creating a wide range of frequencies capable of causing the same biological effects. This makes the hypothesis of a resonant frequency very unlikely and difficult to substantiate.

7) High-energy vibrations of ions are less likely the cause of the RF and biological systems interaction because the mobility of ions is low under these exposures [88].
Summary of our hypothesis
In this paper, we do not intend to produce a systematic review of all EMF bio-effects; instead, we are trying to develop a theoretical framework for our experimental results. The mechanism postulated here explains the activation of Heat Shock Factor 1 (HSF1) via REMFS. We believe that different EMF frequencies produce different mechanisms of action on biological systems. For example, ionizing or thermal mechanisms produce the high energy required to remove electrons and break covalent bonds. Also, high (THz) frequencies produce their effects mainly by a thermal mechanism [68].
Here, we concentrate on studies performed on the low energy EMF spectrum (Hz to GHz) to describe mechanisms by which non-ionizing, non-thermal, non-modulated, continuous EM waves induce biological effects. We use results collected through our research on human cell cultures and other researchers' recent results on mouse AD models to support these theories.
Our initial REMFS experiment used an EMF frequency of 50 MHz and a SAR of 0.5 W/kg. We determined that REMFS activated HSF1 in cell cultures of lymphocytes and fibroblasts [43], increasing 70-kDa heat shock protein (Hsp70) chaperone levels and ultimately postponing aging and death in cell cultures. Recently, we demonstrated that REMFS at 64 MHz with a SAR of 0.6 W/kg for 14 days reduced potentially toxic amyloid-β peptide (Aβ) levels by 46% in primary human brain (PHB) cultures when compared to non-exposed controls. A decrease of Aβ levels in PHB cultures also appeared with different duration and power protocols. Of note, APP levels and non-APP processing pathway products were not altered by the treatments, suggesting enhancement of Aβ degradation as the possible mechanism of Aβ reduction.
We hypothesized that a multidimensional sequence of events explains the REMFS and biological system interaction (Fig. 1), from the quantum to the molecular, cellular, and organismal levels. The REMFS mechanism is a combination of the oscillatory quantum [89] and molecular [67] effects on the interfacial water HBN surrounding biomolecules; specifically, in REMFS, those H-bonds confined to the first layer of the interfacial water in the vicinity of the non-coding RNA Heat Shock RNA-1 (HSR1) [90]. This EMF oscillation causes the H-bond to behave like a driven quantum harmonic oscillator [91], thereby increasing the amplitude of the H-bond vibration [92] within the interfacial water that naturally surrounds nucleic acids. This shortens the length of the H-bond, increasing the probability of proton tunneling [93] and protonation of the nucleic acids [94]. This leads to the formation of tautomers [95] that produce conformational changes in HSR1 [90] to allow binding and activation of HSF1. Subsequently, HSF1 binds to DNA to express chaperones that initiate chaperone autophagy and degradation of abnormal proteins such as Aβ, with consequent clinical improvement in AD.
The main question in the EMF and biological systems interaction is what the target of these fields is and how the target is affected to produce a response. We hypothesized that REMFS oscillations on the interfacial water cause a combination of quantum and molecular vibrations responsible for the biological effects under these fields. The mechanism of the EMF and biological systems interaction could be thermal or non-thermal. Previous studies showed that REMFS acts by a non-thermal mechanism [43,96]. A temperature-dependent (i.e., thermal) mechanism produces changes in the rates of biochemical reactions as a result of heat energy transfer to the target receptor. In contrast, non-thermal mechanisms are not associated with a change in temperature, but rather with oscillations of the RF which cause vibrational energy transfer and ultimately a biological response [97]. We will discuss the changes at the quantum and molecular levels in more detail in the following subsections.
Exposure times and regimens
Exposure time is a very important factor in achieving an effective dose that produces biological effects. To determine the possible advantages and mechanisms of REMFS, it would be much more valuable to perform experimental studies that measure the effects accumulated over time and across repetition profiles. In comparison, computational studies are limited to very short exposure times. The exposure time and repetition regimen parameters differ according to the case under study and depend on the physical and biological conditions of the exposed target.
In the particular case of REMFS and biological pathways associated with protein degradation systems in humans and mice, we observed that the minimum exposure time to produce biological effects was between 15 and 30 min, with a peak response at one hour. We also observed that the minimum regimen was 3 times per week. Marchesi found that briefer EMF exposures of 15 and 30 min did not show significant differences compared to the untreated control when measuring miR-30a expression levels. However, lengthier EMF exposures of 1 h, and to a lesser extent 3-24 h, produced biological effects.
This minimum exposure time is necessary to activate and recruit enough HSR1 molecules to initiate protein degradation. A repeated exposure regimen maintains high chaperone levels and degradation of abnormal proteins; otherwise, cells would continue to accumulate the abnormal proteins that form during cellular metabolism [26]. There should also be a balanced process of degradation and protein synthesis, which is obtained with an intermittent regimen. There is evidence that effective exposure times can vary according to the type of cell or organism, the biological pathway affected, and the physical conditions of the exposure. For example, one study showed that longer exposure times are needed to obtain the maximal biological effect: a minimal effect was found in mammalian stem cells after 2 h of EMF exposure and a maximum effect after 9 h of radiation [98].
Therefore, studies that focus on the minimum exposure times (MET) and minimum exposure regimens (MER) needed to produce biological effects can shed light on thresholds of cell capabilities. In addition, lowering the exposure times and regimens is likely to reduce the complexity of the EMF interaction targets in cell cultures and organisms, and at a minimum reduces the overall rise in temperature.
REMFS exposures perturb intracellular quantum systems
The interaction of REMFS with biological systems composed of multiple layers, which themselves radiate different EM frequencies, is a complex subject due to the scattering problem and many-body perturbation theory (MBPT) for layered structures [55,99]. The challenge owes its complexity to the mathematical difficulties in describing electromagnetic interactions within intracellular structures, which can be exceedingly complex due to the extremely heterogeneous nature of the physical and biological processes. The interactions between multiple biomolecules, atoms, ions, rough-interface scattering, and the applied EMF exposures pose strong limitations on the development of comprehensive models of the effects of REMFS on biological systems. Herein, we will describe the REMFS receptor and biological systems interaction as a consequence of quantum confinement of the interfacial water HBN by electronegative biomolecules and the electropositive nucleus [100,101].
REMFS receptor
The main reason we hypothesized that there is one receptor for REMFS instead of multiple receptors for each frequency is the fact that a wide range of frequencies (Hz to GHz) can induce similar biological responses. One well recognized example of this phenomenon is observed in the heat shock response [24,[49][50][51][52]. The fact that billions of frequencies (from Hz to GHz) produce the same biological effect makes it improbable that there is a receptor for each type of electromagnetic frequency. Rather, it is more likely that some common receptor mediates this effect [102]. This common receptor would have to be capable of responding to the oscillating energy of EMF exposures within a wide range of frequencies to induce the activation of biological pathways [102]. If so, this receptor must be able to induce conformational changes, specifically in biomolecules, that result in secondary structure formation that regulates transcriptional or post-transcriptional mechanisms.
Interestingly, several studies have found that interfacial water is an integral partner of biomolecules and a modulator of their activity [103]. In addition, interfacial water plays an important role in the structure of DNA and RNA, forming bridges between bases within the same strand or between two strands, and in proteins, stabilizing β-sheets and α-helices [104]. In RNA, the first layer of the interfacial water is considered part of the nucleic acid structure because it defines structure, folding, and intra-molecular interactions [105,106]. The role of interfacial water as a modulator of biological activity was also proposed by Mentré. He suggested that the key properties of interfacial water in the cell are cooperative H-bonds, proton currents, osmosis, hydrostatic pressure, density variations, and selective exclusion of ions. These properties make the H-bonds in the interfacial water stronger and shorter, giving it a higher heat capacity than bulk water because more energy is necessary to break its H-bonds [103].
Experimental studies have demonstrated that EMF interfacial effects can produce biological effects [107][108][109]. Beruto et al. grew Chlorella vulgaris with and without the application of a low intensity, low frequency EMF of about 3 mT for 30 days. These exposures produced a significant effect on the exothermic clusterization step. The investigators proposed that this effect is produced by the interfacial water of the glycocalyx on the external region of the microalgal membrane because it can interact with chemical species present in the environment [109].
The organization of the interfacial water depends on the type of bio-surface electrostatic forces [103]. The first layer of the interfacial water is under special quantum confinement, since the polar and charged groups of the nucleic acid interact with the surrounding H-bonds [110] of the interfacial water. This interaction actively affects the structure, function, and H-bonds of biomolecules [111]. Such polar and charged groups are sources of electric fields [112], which are very important for proton transfer [113]. This proton transfer is driven by the electropositive applied electric field (EF) from REMFS (16.22 V/m) on the interfacial water and by the intracellular EF rearrangements arising from the electronegative EF of the RNA HSR1 (−30 to 100 kT/e) and the electropositive EF of the cytoplasm (38 × 10⁶ V/m).
Data indicate that intracellular EF rearrangements cause an electrostatic confinement of the first layer of the interfacial water, which gives it quantum properties [114]. These interfacial water molecules are 'pseudo-immobilized' and, therefore, confined to a sub-diffraction volume. Of special note, this electrokinetically confined or trapped water exhibits quantum tunneling behavior [115]. These data suggest that the interfacial water is a potential target for the effects of REMFS, which is supported by other investigators who proposed that water is a sensor for low energy EMF [116].
H bond under REMFS
The pico- to sub-picosecond lifetimes of H-bonds are too short for the time windows of experimental techniques such as nuclear magnetic resonance (NMR) and dielectric spectroscopy, so it is hard to perform experiments on water under EMF [117]. However, in a recent study, applying an intense THz pulse (peak electric field strength of 14.9 MV/cm) to liquid water increased H-bond stretching and bending vibrations [118]. Similar evidence that polarized REMFS radiation causes its biological effects comes from the fact that it induces the dissociation of water into its constituent elements [119]. Rao examined distilled water under a polarized 2.45 GHz exposure. The Raman spectra of the treated water showed significant changes in the O-H stretch bond, which predisposes to proton tunneling and protonation of the surrounding molecules. Interestingly, despite different experimental conditions such as frequency (THz versus GHz), most of the conclusions are consistent with the fact that very different REMFS can produce water dissociation [120].
Furthermore, EMF exposures strengthen and shorten the H-bonds on the surface of biomolecules. For example, when hemoglobin and bovine serum albumin in water solutions were exposed to 50 Hz, the samples revealed a significant increase in the absorbance signal of the Amide II band and an up-shift toward higher energies after exposure. These results suggested that EMF exposures strengthened the H-bonds of the secondary structures of these proteins [121].
Although REMFS increases the rotational kinetic energy of bulk water, the effects on the water of the first layer surrounding biomolecules are most prominent. The water of the first layer in the vicinity of biomolecules has a forced orientation and cannot rotate easily. However, under REMFS, it can undergo large-amplitude librational motions [122,123]. REMFS oscillations at the molecular level produce rotation of water molecules within the first layer of the interfacial water as they try to "flip" their polar directions to match the polarity of the radio wave radiation. As a result, the oscillating electric field from the REMFS forces the water dipole moments to reorient themselves [67], which affects the H-bonds that connect the first layer of the interfacial water.
This effect can be observed in biological tissues, where all polar molecules, such as water, are forced to oscillate in phase with the field and on planes parallel to its polarization [66,124]. This is one of the most important factors for the quantum effects of REMFS: the man-made polarization of the excitatory oscillation of REMFS on contact with the interfacial water of the bio-system. These oscillations have a lower frequency relative to the exposed quantum system and will change the frequency of the system to the excitatory frequency [65], like a driven harmonic oscillator [91,125]. The system is driven by energy imparted to the harmonic oscillator continuously by an external force [126]. If the excitatory frequencies are slower, the oscillator frequency is pulled towards the excitatory frequency [126].
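This frequency-pulling behavior can be illustrated with even a toy classical model. The sketch below (ours, with arbitrary parameters; it is not the quantum model of Eq. 1) numerically integrates a damped harmonic oscillator driven well below its natural frequency and shows that, after the transients decay, the motion follows the slower driving frequency.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy classical model: x'' + gamma*x' + w0^2 * x = (F0/m) * cos(w*t),
# driven well below the natural frequency w0 (arbitrary units).
w0, gamma, F0_m = 10.0, 0.5, 1.0  # natural frequency, damping, force/mass
w = 2.0                           # slower excitatory (driving) frequency

def rhs(t, y):
    x, v = y
    return [v, F0_m * np.cos(w * t) - gamma * v - w0**2 * x]

t = np.linspace(0, 100, 20000)
sol = solve_ivp(rhs, (0, 100), [0.0, 0.0], t_eval=t, max_step=0.01)

# Dominant oscillation frequency after the transients have decayed:
tail = sol.y[0][t > 50]
freqs = np.fft.rfftfreq(tail.size, d=t[1] - t[0]) * 2 * np.pi
dominant = freqs[np.argmax(np.abs(np.fft.rfft(tail - tail.mean())))]
print(f"driving w = {w}, dominant response w ~ {dominant:.2f}")  # ~2, not 10
```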
REMFS produces proton tunneling
Proton tunneling is a type of quantum tunneling in which a proton transfers from one site to the closest site, separated by a potential barrier. Proton tunneling is commonly related to H-bonds. The hydrogen atom is linked to two non-hydrogen atoms, via an H-bond at one end and a covalent bond at the other.
H-bonds are classified based on energy or on geometry [127]. Several studies have identified the transitions from weak to moderate to strong H-bonds and the physical bases of the main geometry-based H-bond strength classifications. In this study, we use the geometric classification where the hydrogen bond is very strong when the distance between donor and acceptor atoms is in the range 2.2-2.5 Å, strong if it is in the range 2.5-3.2 Å, and weak if it is in the range 3.2-4.0 Å [128].
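For illustration, this geometric classification maps directly onto a small helper function (ours, using only the distance ranges quoted above):

```python
def classify_h_bond(donor_acceptor_dist_angstrom: float) -> str:
    """Geometric H-bond strength classification from the text [128]:
    donor-acceptor distance (Angstrom) -> strength category."""
    d = donor_acceptor_dist_angstrom
    if 2.2 <= d < 2.5:
        return "very strong"
    if 2.5 <= d < 3.2:
        return "strong"
    if 3.2 <= d <= 4.0:
        return "weak"
    return "outside classified range"

print(classify_h_bond(2.8))  # -> "strong"
```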
As mentioned above, EMFs decrease the water H-bond length and increase the H-bond angles as a function of the large-amplitude motions [129] (Eq. 2). These large-amplitude vibrations decrease the H-bond length from the first layer of the interfacial water to the oxygen of the nucleic acid (O−H···O) to values below 1.85 Å [130], which are ideal lengths for raising the probability of proton tunneling [131,132]. This effect depends on the spatial location of the molecule with respect to the field [133], affecting the HBN [134] in an anisotropic manner. Thus, as the front water molecule is rotated [135] and hydrogen pairs acquire a net dipole moment, the new configuration changes the hydrogen-bonding energy and distance. This water reorientation is of importance in multiple quantum processes, including proton transfer [136], proton transport [137,138], and hydration of RNA or proteins for their function [139].
The amplitude of the H-bond vibration varies between 0.18 and 0.22 Å for liquid water at 25 °C [129,140]. In ab initio calculations, large-amplitude motions caused by EMF exposures affect the bond distances and decrease the barrier distance from the H of the interfacial water to the O of the carbonyl group of the nucleic acid, consequently increasing the probability of proton tunneling [141] (Eq. 3).
Evidence that EMF causes tunneling comes from a combined experimental and computer-simulation study, which demonstrated that the hydrogen-oxygen (H···O) distance is critical for tunneling and that the rotation of the hydrogen from the confined water toward the O modulates the H···O distance [142]. Transitions arising from both pure rotation and rotation-tunneling can occur [143]. These data indicate thermodynamically balanced motions that control the donor-acceptor distance and the site of active electrostatics, developing conformations apt for proton tunneling [144].
Additionally, a mathematical model found that changing the H-bond length by the radius of a hydrogen atom (0.05 nm) changes the transmission coefficient, or tunneling current, by 210%, suggesting an extreme sensitivity of tunneling to distance changes on the scale of atomic dimensions [145]. Furthermore, dynamical complexity increases with exposure to the REMFS frequency during barrier penetration in the tunneling process [141]. For example, studies have found evidence for coherent proton tunneling in an HBN at a tunneling frequency of 35 MHz, a frequency somewhat close to the REMFS exposures (64 MHz) used in our experiments [146].
Further evidence that supports tunneling in REMFS is a quantum calculation study that found that the flipping processes of water under quasi-one-dimensional (1D) confinement produce quantum tunneling effects [147]. The confinement in that study is very similar to the electrostatic confinement of the interfacial water of the HSR1, mentioned above, during REMFS exposures.
In addition, several studies have found that EMF causes proton tunneling that produces tautomers in nucleic acids. Using quantum mechanical (QM) calculations, Cerón-Carrasco found that electric fields induce proton transfer that produces tautomers in nucleic acids [148]. The electric field decreased the potential barrier leading to the tautomer by 20-55 kJ mol⁻¹. The study concluded that, in the presence of EFs, only the Guanine-Cytosine pair fits the necessary kinetic criteria to be considered a viable route to the formation of tautomers.
In a more recent study, Cerón-Carrasco investigated a more realistic DNA fragment in a simulation. The study found that at higher electric fields, tautomers are more stable than the canonical bases [93]. In a classical molecular dynamics study, Cerón-Carrasco found that a continuous electric field exposure produces conformational changes in nucleic acids within 10 picoseconds [149].
Furthermore, Gheorghiu found that EM effects occur when the electric field is parallel to the H-bond axis [150]. Parallel electric fields were found to have a great influence on the energetics of the Guanine-Cytosine proton transfer tautomerism. It is important to consider that these effects occur under high electric fields like those found in the intracellular interfacial water of the HSR1, as mentioned above.
All these data suggest that REMFS promotes proton tunneling by oscillations that increase the amplitude of the H-bond vibrations of the interfacial water and modulate the proton-acceptor distance, which increases the probability of tunneling in proportion to the amplitude and the proton-acceptor distance [141,151]. This protonation creates tautomers in RNA and DNA that produce biological changes.
Mathematical model
Our challenge was to explain why a low energy wave causes biological effects. Therefore, we hypothesized quantum effects of REMFS. We established a numerical model for the interaction between the REMFS exposures and biological systems at the quantum level [152]. For simplicity, we divided it into three stages, each with its own equation (see Fig. 2).

Stage 1. REMFS vibrating energy produces a time-dependent adiabatic perturbation on the H-bond of the first layer of the interfacial water (FLIFW) [152]. During the REMFS exposures that increase the amplitude of the H-bond vibrations, the H-bond of the FLIFW changes state to a driven quantum harmonic oscillator. REMFS affects the H-bond situated near the oxygen (O) of the guanine of the RNA (GRNA). The following formula (Eq. 1) estimates the amplitude increment of the H-bond oscillation, treated as a driven quantum harmonic oscillator (see Fig. 3), under REMFS [70].
Equation 1:

First, we obtain the time-dependent periodic force:

$$F(t) = F_0 \cos(\omega t + \phi) \quad (1.1)$$

Then we obtain the amplitude:

$$A = \frac{F_0}{2\hbar}\sqrt{\frac{\hbar}{2 m \omega_0}}\;\frac{\sin\left[(\omega_0 - \omega)\,t/2\right]}{(\omega_0 - \omega)/2} \quad (1.2)$$

where: A = amplitude, ω = vibration frequency of the periodic force, ω₀ = frequency of the oscillator, m = mass of the oscillator, φ = phase of the driving field, t = time of the exposure, F(t) = time-dependent periodic force, and ħ = reduced Planck's constant [70]. In our experiments, F = q(E + v × B), with q = 1.602 × 10⁻¹⁹ C (proton charge) and v = 3 × 10⁸ m/s [153].
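As a rough numerical check (ours; the field value, H-bond frequency, and proton charge come from the text, while treating the H-bond proton as the oscillating mass is an illustrative assumption), the driving force and the oscillator's characteristic length scale can be evaluated as follows:

```python
import math

Q = 1.602e-19             # C, proton charge
E_FIELD = 16.22           # V/m, electric field produced by REMFS [62]
HBAR = 1.0546e-34         # J*s, reduced Planck's constant
M_PROTON = 1.6726e-27     # kg, illustrative oscillator mass (assumption)
W0 = 2 * math.pi * 74e12  # rad/s, H-bond frequency (74 THz, from the text)

F0 = Q * E_FIELD  # electric part of F = q(E + v x B)
x0 = math.sqrt(HBAR / (2 * M_PROTON * W0))  # sqrt(hbar / (2*m*w0))

print(f"driving force F0 ~ {F0:.2e} N")                       # ~2.6e-18 N
print(f"oscillator length scale ~ {x0 * 1e10:.3f} Angstrom")  # ~0.08 A
```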
Stage 2. REMFS shortens the distance between the H of the FLIFW and the O of the GRNA by increasing the amplitude of the H-bond vibration. The change in the H-bond distance as a function of the amplitude of the oscillation is calculated in Eq. 2 [129].

Equation 2:
First, we find the average over the inter-nuclear configurations of the interfacial water and RNA nucleic acids:

$$r_a^{exp} = \int F_r(q)\, P(q)\, dq \quad (2.1)$$

where: r_a^{exp} = the average over the inter-nuclear configurations of the interfacial water and RNA nucleic acids, F_r(q) = variation of the bond distance, and P(q) = probability function.

The equation for P(q) in the classical Boltzmann approximation is:

$$P(q) = \frac{e^{-V(q)/k_B T}}{\int e^{-V(q')/k_B T}\, dq'} \quad (2.2)$$

Then, the dynamical correction term δ_dyn is:

$$\delta_{dyn} = r_a^{exp} - F_r(q_{min}) \quad (2.3)$$

where: r_a^{exp} = an operational parameter determined from the least-squares fit to the experimental electron diffraction intensity curves, and F_r(q_min) = the bond distance at the minimum of the potential function V(q).
This equation predicts the shortening of the H-bond of the interfacial water under the time dependent perturbation caused by REMFS.
Stage 3. REMFS oscillation increases the amplitude of the H-bond and decreases the distance of the H from the FLIFW to the O of the GRNA, predisposing to H tunneling [70]. The probability of tunneling is proportional to the square of the amplitude. The barrier thickness, i.e., the decreased distance, determines the quantum tunneling probability via Eq. (3) [141].
Equation 3:

$$T \approx \exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V_0 - E)}\right) \quad (3)$$

where: T = tunneling probability, L = barrier thickness (the H···O distance), m = mass of the proton, V₀ = barrier height, and E = hydrogen energy.

These three stages and their respective equations have allowed us to develop a numerical model that predicts how the time-dependent perturbation produced by REMFS alters the H-bond of the water of the first layer that surrounds biomolecules. This H-bond acts as a driven quantum harmonic oscillator, increasing the probability of tunneling and allowing for the protonation of the nucleic acids of the surrounding RNA, which in turn activates biological pathways.
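The exponential form of Eq. (3) is what makes tunneling so sensitive to small distance changes (cf. the 210% figure quoted above from [145]). The sketch below (ours; the 0.3 eV effective barrier height is an illustrative assumption, not a value from the text) evaluates the expression for a proton at two barrier widths differing by a fraction of an Angstrom.

```python
import math

HBAR = 1.0546e-34      # J*s, reduced Planck's constant
M_PROTON = 1.6726e-27  # kg
EV = 1.602e-19         # J per eV

def tunneling_probability(width_m, barrier_ev):
    """Eq. (3): T = exp(-2L*sqrt(2m(V0 - E))/hbar) for a proton, with
    the effective barrier (V0 - E) in eV and the width L in meters."""
    kappa = math.sqrt(2 * M_PROTON * barrier_ev * EV) / HBAR
    return math.exp(-2 * width_m * kappa)

V0_E = 0.3  # eV, illustrative effective barrier height (assumption)
for L_angstrom in (0.30, 0.25):
    T = tunneling_probability(L_angstrom * 1e-10, V0_E)
    print(f"L = {L_angstrom:.2f} A -> T ~ {T:.1e}")
# Shrinking the barrier by only 0.05 A raises T severalfold here,
# illustrating the extreme distance sensitivity noted in the text.
```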
Nucleic acid tautomers cause conformational changes in RNA and DNA
In RNA, nucleic acid bases occur in several tautomeric forms due to protonation of the nucleobases [95]. Tautomers [154] are used by multiple RNAs to produce their functions [95,155]. Furthermore, it is well known that tautomeric equilibria are affected by several chemical and physical factors, such as metals, temperature [156], pH [156], and, as shown recently, EMF exposures [93,157]. Also, tautomer inter-conversions can adopt various secondary structures responsible for a variety of functions during biological processes, like RNA conformational changes, DNA replication, packaging, and transcription [158,159]. Often, such conformational changes promote binding to activating factors that in turn affect transcription and translation of proteins.
Tautomerism causes conformational changes in the catalysis of self-cleaving RNA [95] to produce a wide array of biological functions [160][161][162]. The importance of tautomerism in the function of a biomolecule is highlighted in the crystal structure changes of TPP-dependent riboswitches. TPP is in an extended conformation, with the thiazolium and pyrimidine rings oriented within hydrogen-bonding distance [163]. Here, a glutamic acid protonates and stabilizes the imino tautomer of the pyrimidine ring, leading to the ylide conformation bound in either the TPP-dependent riboswitch or a TPP-dependent enzyme [164].
Investigators using quantum chemical computations detected all possible protonated base pairs in RNA crystal structures. Their data showed 18 different protonated base pair combinations in RNA, and the authors proposed a theoretical model for base pair combination [165].
Guanine and cytosine protonation affects RNA structure and function. An interesting QM analysis suggested that guanine protonation can be a crucial factor in the structure and function of RNAs [166]. The different RNA structures come from the changes of single and double bonds in the ring systems of purines and pyrimidines [167]. In a theoretical study, Chaudry calculated the quantum source of the tautomerism in DNA [168]; this tautomerism is enhanced under EMF exposures.
Evidence suggests that REMFS protonates biomolecules [60], resulting in important tautomeric interconversions and conformational changes [95,169]. Experimental and theoretical studies show that externally applied electric fields and EMF produce biologically relevant tautomers and conformational changes in RNA. An experimental study showed the effects of electric fields on RNA conformational changes and orientation by ultraviolet absorbance and electric dichroism [170]. In another study, long-lived conformational changes in RNA were induced by electrical impulses. The researchers applied an electric field of about 20 kV/cm to induce large dipole moments by shifting the ionic atmosphere, which caused strand repulsion and conformational changes in the RNA. Interestingly, the fields used were of the same intensity as those found in nerve transmission.
Furthermore, other biomolecule structures are affected by REMFS. De Ninno observed structural changes in glutamic acid induced by exposure to REMFS at 50 Hz. The IR spectra contained stretching and bending bands of the protonated COOH, attributable to the coupled C-O stretch and O-H bend of the COOH group [171]. Another study, combining theoretical calculations and practical experiments, determined the tautomeric protonation of N-methyl piperazine. The results suggest that proton relocation occurs either by solvent assistance in water or by a proton jump, with a predicted activation free energy of about 10 kcal/mol based on variable-temperature nuclear magnetic resonance experiments [172].
All these data suggest that REMFS can cause tautomerism and conformational changes in RNA. This mechanism is similar to the regulation of HSR by RNA thermometers [173] in bacteria [174].
REMFS activates HSF1 and chaperone expression
REMFS, heat, alcohol, hypoxia, metal ions, peroxide, amino acid analogs, and other stressors activate HSF1 and the HSR [175]. Most of these factors cause denaturation and accumulation of abnormal proteins, which induce the HSR [176,177]. However, REMFS exposures are not likely to produce protein denaturation, so the mechanism must be related to an EMF-sensitive biomolecule such as HSR1. EMF exposure also increases HSF1-heat shock element binding activity, thereby directly contributing to the activation of HSF1 and the stress-induced Hsp70 [175] transcription and translation in cells exposed to REMFS [178,179].
HSF1 is a transcription factor that is a master regulator of stress gene expression (molecular chaperones) [180]. Recently, accumulating evidence indicates multiple additional functions for HSF1 beyond chaperone production. HSF1 acts in diverse stress-induced cellular processes and molecular mechanisms, including the endoplasmic reticulum unfolded protein response and ubiquitin-proteasome system, multidrug resistance, autophagy, apoptosis, immune response, cell growth arrest, differentiation underlying developmental diapause, chromatin remodeling, cancer development, and aging [181].
REMFS produces biological effects through HSR1 [182], which activates HSF1. HSR1 employs a mechanism similar to that of bacterial RNA thermometers to sense temperature and energy changes in the cell and ultimately regulate the translational machinery [183]. HSR1 is a long non-coding RNA that undergoes conformational changes from a closed to an open structure under thermal radiation exposure (THz to GHz frequencies). These conformational changes in HSR1 are required for the binding and activation of HSF1 [90]. Computer simulations reveal that HSR1 is composed of an extensive secondary structure that changes predictably within a physiological range of temperatures [90] and under EMF exposures without heating [184].
Another important co-factor in the activation of HSF1 is the translation elongation factor eEF1A, which is a key component regulating the actin cytoskeleton architecture in the cell [185]. Full HSF1 activation in vitro requires a combination of purified HSR1 and eEF1A at physiological concentrations [186]. Under normal conditions, HSR1 is present in an inactive "closed" conformation. During heat shock or EMF exposures, HSR1 "switches" to the "open" conformation that activates HSF1 and releases it from its repressor Hsp90, while after a stress, a massive release of eEF1A from cytoskeleton collapse (from misfolded proteins) can then fully activate the newly freed HSF1 [90]. In contrast, under REMFS exposure alone, there is no cytoskeleton collapse [43]. The role of REMFS in this process is to promote binding of HSR1 to HSF1, with a subsequent release of HSF1 from its repressor Hsp90 [43].
Triggering the HSR by stressors after REMFS treatment produces a fast and vigorous expression of heat shock proteins (Hsps) [187]. Protein aggregation is an important factor in the progression of aging and age-related diseases such as AD [188]. Several pathways are associated with abnormal protein clearance, including molecular chaperones, the ubiquitin-proteasome system, and autophagy pathways [189]. The production of these chaperones depends on the activation of HSF1, an event attenuated by the aging process [190]. HSF1 is repressed by the Hsp90 complex and is released and activated under several cellular stresses [191].
Similarly, REMFS exposure releases HSF1 from the Hsp90 complex [43]. Once released from the Hsp90 complex, HSF1 trimerizes spontaneously to bind DNA, an event that produces increased amounts of heat shock proteins such as Hsp70 and Hsp90. Once chaperones are produced, they bind abnormal proteins. Excess Hsp90 binds to HSF1 trimers and causes them to dissociate and revert once again to the inactive, monomeric state [192].
The HSF1 effects on Hsps play a role in aging [193] and protein accumulation diseases [188]. The role of HSF1 in the aging process and age-related diseases such as AD suggests a deeper relationship between the molecular mechanisms of these two processes. HSF1 activation prevents the decline in proteostasis, the primary contributor to aging, thus delaying the aging process [39,194,195]. This suggests a potential role for HSF1-based therapeutic tools, such as REMFS, in the treatment of a wide array of age-related diseases [196].
Hence, it is here that molecular chaperones such as Hsp70 take on an essential role by deterring protein aggregation [197]. Two protein degradation pathways, macroautophagy and chaperone-mediated autophagy (CMA) [198], undergo age-dependent decline, probably subsequent to the age-related attenuation of HSF1 [199,200], which is an early molecular event in the aging process [201].
Hsp70 binds to APP by the KFERQ motif (see Fig. 1). Hsp70 transports APP to lysosomes for degradation by CMA or endosomal microautophagy (eMI) to reduce Aβ oligomer levels [215]. Additionally, many pathogenic proteins including tau, α-synuclein, and huntingtin are degraded by CMA [216][217][218]. Hsps bind to these proteins, which are then degraded through the CMA or proteasome system [219]. In an interesting study, Aβ was modified as a substrate for CMA and eMI (termed Hsc70-based autophagy) by tagging its oligomers with multiple CMA motifs. This method significantly reduced Aβ oligomers in induced pluripotent stem cell (iPSC)-derived cortical neurons generated from AD patient fibroblasts [220].
Hsp70 also suppresses oligomerization of Aβ by binding to the hydrophobic region to modify their conformation [219]. Structural changes in oligomers occurred when Hsps interacted with oligomers and fibrils. However, Hsps did not cause any direct effect on fibrils, suggesting that Hsps suppress the early stages of self-assembly [221][222][223][224]. Taken together, these studies confirm that the activation of HSF1 and the subsequent increase in chaperone levels, especially Hsp70, by EMF exposures influence Aβ degradation pathways [225][226][227] by autophagy [228,229] to lower Aβ levels.
REMFS decreases Aβ levels in primary human brain cultures
We recently utilized REMFS to lower Aβ levels in cell cultures of primary human mixed brain (PHB). REMFS treatment decreased Aβ40 and Aβ42 levels without evidence of toxicity. Treatment started on day 7 in vitro (DIV 7). After 14 days of REMFS, we measured levels of Aβ40 peptide in exposed and non-exposed cells. The REMFS parameters were a frequency of 64 MHz with a SAR of 0.6 W/kg for 1 h daily; this treatment achieved a 46% reduction in Aβ40 levels (p = 0.001, g = 0.798), compared to the non-treated cultures. The same REMFS parameters achieved a 36% decrease in Aβ42 levels. Subsequently, we demonstrated that REMFS at 64 or 100 MHz with a lower SAR of 0.4 W/kg for 14 days achieved a comparable reduction in Aβ40 and Aβ42 levels. Furthermore, when we increased the exposure time from 1 to 2 h, there was a similar reduction in the Aβ levels. Additionally, when we increased the frequency from 64 to 100 MHz, we found a comparable difference in Aβ levels. The results of our experiments suggest that REMFS at 64 MHz with a SAR of 0.4 W/kg for 1 h (typical of that already utilized in clinical MRI contexts) would be the minimum energy needed to produce bio-effects in human neurons, specifically a reduction in levels of toxic Aβ peptides.
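For readers who wish to reproduce an effect size such as the g = 0.798 reported above from group summary statistics, the following sketch implements the standard Hedges' g formula. The means, standard deviations, and group sizes below are placeholders for illustration, not our experimental data.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: bias-corrected standardized mean difference."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                 # Cohen's d
    J = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    return d * J

# Placeholder summary statistics: untreated vs. REMFS-treated Abeta40 levels
g = hedges_g(m1=100.0, sd1=55.0, n1=6, m2=54.0, sd2=55.0, n2=6)
print(f"g = {g:.3f}")
```

With these illustrative inputs, a 46% mean reduction against substantial culture-to-culture variability yields a g in the same general range as the value reported above, showing how the percent change and the standardized effect size convey complementary information.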
REMFS safety
In contrast to our experiments, some studies have shown possible harmful effects of REMFS on biological systems, which resulted from longer exposure times (minutes vs. days), higher power (0.5 vs. 5 W/m), and higher SAR (0.5 vs. 10 W/kg) [230]. The energy produced by REMFS is extremely low, making it unlikely that our studied REMFS exposures would lead to adverse health effects, especially as we did not observe any evidence of cellular toxicity or morphological changes in our cell culture experiments [43]. Furthermore, long-term mouse AD experiments (daily for up to 9 months) and a recent phase 1 clinical trial with REMFS were safe, with no toxic effects observed on multiple safety factors evaluated.
Specifically, there were no toxic effects on brain oxidative stress, brain histology, brain heating, DNA in circulating blood cells, and changes in peripheral tissues [35,36,55]. Additionally, our experiments use the same REMFS frequency that has been used by MRI machines for decades. Since their implementation for clinical imaging, MRI exposures have had no demonstrable negative health impacts [231]. Lastly, a recent phase one clinical trial in AD did not find any behavioral side-effect, pain, tumor growth, hemorrhage, or abnormal physiological responses after 2 months of treatment with REMFS.
Future perspectives
Any organ that shows functional decline, including the brain, kidneys, joints, liver, or heart, may benefit from engineered REMFS exposures to induce protein disaggregation by activation of the HSF1 pathway and autophagy. Therefore, we plan to initiate human head exposures to treat the protein aggregation caused by AD. The major technical difficulties in developing an exposure system are the human head geometry, the multiple tissue layers of the head, and the development of an antenna that produces a homogeneous SAR over the whole brain.
Therefore, before clinical trials are considered, we must determine the best electromagnetic settings for human exposures, such as power output, power deposition, far field, antenna type, distance from the antenna, electric field, and magnetic field, that will produce homogeneous internal fields when applied to a human brain with a target SAR of 0.4-0.9 W/kg. We will start by determining, through mathematical and computer modeling, the REMFS exposures from our biological studies that deliver safe thermal and SAR measurements to the human head [62].
Using these results, we will develop a virtual exposure system by numerical modeling and computer simulation. We will design a virtual antenna that delivers a SAR of around 0.6 W/kg to a simulated phantom of a human brain. With these simulations, we will find the REMFS parameters that will deliver homogeneous radiation to a human head in clinical trials [232]. In the near future, we will experimentally confirm these results using an appropriate practical antenna to expose a Specific Anthropomorphic Mannequin (SAM) human head phantom [233] with internal and external probes oriented vertically to determine the EMF parameters that will provide an effective and safe SAR for future AD treatment. Data suggest that the ideal environment for these treatments should be an anechoic chamber to prevent RF wave reflections and provide a uniform exposure to the subjects [234]. The final step will be to initiate phase 1 clinical trials in patients with early AD to determine the safety and efficacy of this new potential therapeutic strategy.
Conclusion
The current study proposes a multidimensional mechanism at the quantum, molecular, cellular, and organismal level inside a theoretical framework that may explain the results of our experiments, and those of other investigators. The proposed quantum tunneling mechanism here is the first to provide an explanation of how low energy radio-frequency radiation may induce a biological response. Quantum tunneling allows for an understanding of events occurring between single photons and biomolecules that would otherwise be extremely difficult to visualize in experimental studies. Hence, it is by way of quantum tunneling that we finally understand the intimate relationship between REMFS and the HBN of the interfacial water of biomolecules. The process is a time dependent adiabatic perturbation of the HBN that is set into motion as a photon carried along an EM wave (with a frequency lower than the H bond frequency) that forces the H bond to change its frequency to that of the EM wave, thereby increasing the amplitude of the H bond vibrations in a process similar to a driven quantum oscillator [42]. The increased amplitude will decrease the H bond donor-acceptor distance and result in an increased probability of proton tunneling [235]. Consequently, interfacial water will donate its hydrogen toward protonation of nucleic acids, and the tautomeric interconversions that ensue result in structural changes in biomolecules and RNA, namely HSR1. The secondary structure produced will then bind to HSF1 and cause its dissociation from the multi-chaperone complex, freeing it from inhibition. Once activated, the HSF1 monomer undergoes trimerization and accumulation, inducing the expression of Hsp70 and thereby activating Aβ-clearance pathways to delay cellular senescence.
Data suggest that HSR attenuation goes hand-in-hand with aging, and may even be the initial event in the aging process [201], presumably due to a decrease in HSF1-DNA binding [199]. Hence, the failure of proteostasis associated with aging may be the initial event in the development of AD [39]. Consequently, it is plausible that a treatment that enhances or restores HSF1-DNA binding would improve the loss of proteostasis observed with aging. Therapeutic implications could also be expanded to involve other age-related diseases associated with protein accumulation, such as PD, LBD, and frontotemporal diseases.
The theory herein comprises an important framework that lays the foundation for understanding the interactions between EMF and organisms and provides a valuable contribution to the foundational principles that should underlie any discussion of the biological effects of EMF stimulation. Importantly, the results of our literature review and research also point to the capacity of REMFS to influence various networks within known biological systems dysregulated in AD. The potential implications of REMFS as a therapeutic modality are likely to be far in the future, but the ability of RF-EMF to significantly reduce Aβ40 and Aβ42 levels in human neurons, coupled with animal model results, indicates a pathway worth further exploration. These results in cell and animal systems are likely achieved through a combination of efficient Aβ degradation, autophagy-lysosome system [26] and proteasome system activation [23], as well as the reduction of β-secretase activity [56]. Yet, we must note that the quantum tunneling-based model could explain the conformational changes of molecules involved in other biological pathways not mentioned here.
The use of REMFS as a non-invasive therapy for the management of AD holds promise, and the results of a recent phase 1 clinical trial confirm its safety in humans [236]. Nevertheless, it remains necessary to take into account the complex network of genetic [237][238][239] and epigenetic [240] effects occurring under REMFS. As the regulatory pathways triggered by REMFS have yet to be clearly elucidated, current knowledge of EMF-biological system interactions and possible adverse effects remains limited. Quantum and classical molecular computer simulations, complemented by in vitro and in vivo laboratory studies as well as clinical trials, are needed to investigate the initial and late effects of REMFS. These studies will help develop the conditions useful for its therapeutic use while avoiding any possible adverse effects. Finally, there is a need to perform mathematical modeling and computer simulation to elucidate the appropriate EMF settings for human treatments [62,232]. Regardless, the theories that we have proposed provide a framework for the outcomes observed in several cellular and animal studies that point to the potential therapeutic implications of REMFS for age-related diseases in humans.
"year": 2022,
"sha1": "3d463f1e2f844d04afbd62b2e3575c186a657cef",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9fd5cd0a09c0470d6be25cfe10c42213d314bb3b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Potential Role of Dyslipidemia in COVID-19 Severity: an Umbrella Review of Systematic Reviews
Objective: The aim of this study was to analyze the available knowledge about the potential association between dyslipidemia and the severity of coronavirus disease 2019 (COVID-19) as reported in previous published systematic reviews.

Methods: In this umbrella review (an overview of systematic reviews), we investigated the association between dyslipidemia and COVID-19 severity. A systematic search was performed of 4 main electronic databases (MEDLINE, Embase, Scopus, and the Cochrane Library databases) from inception until August 2020. We evaluated the methodological quality of the included studies using the A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 2 tool and used the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system to assess the quality of evidence for the outcome. In addition, we evaluated the strengths and limitations of the evidence and the methodological quality of the available studies.

Results: Out of 35 articles identified, 2 systematic reviews were included in the umbrella review. A total of 7,951 COVID-19-positive patients were included. According to the AMSTAR 2 criteria and GRADE system, the quality of the included studies was not high. A history of dyslipidemia is likely to be associated with the severity of COVID-19 infection, but the contrary is the case for cholesterol levels at hospitalization.

Conclusions: Although existing research on dyslipidemia and COVID-19 is limited, our findings suggest that dyslipidemia may play a role in the severity of COVID-19 infection. More adequately powered studies are needed.

Trial Registration: PROSPERO Identifier: CRD42020205979
INTRODUCTION
The recent coronavirus disease 2019 (COVID-19) outbreak has spread rapidly and has affected the world for almost a year, causing immense economic and social difficulties. Despite efforts to develop vaccines and new treatments, preventing COVID-19 remains challenging and no clear treatment options exist. As of September 11, 2020, roughly 28 million people worldwide have been infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes COVID-19, and more than 900,000 people have died. With some exceptions, most deaths are thought to be related to underlying comorbidities. 1 Therefore, identifying the risk factors related to severe COVID-19 is important to enable stratification of risk in advance, to optimize the reallocation of medical resources, and to improve patients' overall prognoses.
As SARS-CoV-2 primarily attacks the respiratory tract, several studies have investigated the relationship between chronic obstructive pulmonary disease, 2 asthma, 3 or smoking, 4 and the severity of COVID-19. However, it has also been found that patients with underlying cardiovascular disease or cardiovascular disease risk factors have a high risk of a severe course of illness or mortality due to COVID-19. 5-7 COVID-19 can also have various cardiovascular manifestations such as myocardial injury, arrhythmias, acute coronary syndrome, and venous thromboembolism. 8 Several observational studies and meta-analyses have shown that underlying cardiovascular disease, diabetes mellitus, and hypertension clearly increase the severity and mortality of COVID-19. [9][10][11] However, unlike diabetes and hypertension, relatively few studies have been conducted on dyslipidemia, one of the most important risk factors of cardiovascular disease. Several observational studies have reported an association between high-density lipoprotein (HDL) cholesterol levels and the severity of COVID-19 12,13 ; however, the results are inconsistent.
To date, multiple systematic reviews and meta-analyses have been published analyzing the potential link between the presence of dyslipidemia and the severity of COVID-19. However, to our knowledge, no attempt has been made to summarize the evidence from these systematic reviews. Therefore, we systematically and comprehensively re-evaluated the evidence to provide an overview of the association between dyslipidemia and COVID-19 severity. Specifically, we conducted an umbrella review to evaluate the findings of systematic reviews and/or meta-analyses that investigated the relationship between dyslipidemia and the severity of COVID-19 infection and to assess the evidence regarding potential limitations and the consistency of findings.
MATERIALS AND METHODS
An umbrella review was performed in this study. An umbrella review provides a summary of existing published meta-analyses and systematic reviews and determines whether authors addressing similar review questions have independently reported similar results and arrived at similar conclusions. 14 We applied the Cochrane Collaboration methodology 15 and available methodological guidelines for overviews of reviews. 16,17 The study protocol was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO Identifier: CRD42020205979).
Search strategy
The literature search aimed to identify systematic reviews that evaluated the association between dyslipidemia and COVID-19. To identify relevant systematic reviews and metaanalyses, an electronic search was conducted of 4 databases (MEDLINE, Embase, Scopus, and the Cochrane Library) from inception until August 2020. These databases are frequently updated when new research is disseminated in peer-reviewed publications and archive services become available.
The search strategies were developed by H.K., who has expertise in systematic reviews.
The search was conducted using index terms (e.g., MeSH and Emtree terms) and free text words and word variants. The titles and abstracts from the literature search were screened by 2 independent reviewers (G.J.C. and H.M.K.) to identify whether they contained relevant content, and duplicate studies were excluded. Additionally, the same reviewers conducted citation tracking or manual searches of all references of all included studies and all included systematic reviews. Only English-language publications that presented a quantitative or qualitative review regarding the relationship between dyslipidemia and COVID-19 were retrieved.
Study selection criteria
The following criteria were applied to identify the articles to be included in the present umbrella review: (1) systematic reviews and/or meta-analyses; (2) studies involving adults who tested positive for COVID-19; and (3) studies reporting the association between dyslipidemia and COVID-19 infection. Two authors (G.J.C. and H.M.K.) screened the titles and abstracts of the articles independently to evaluate eligibility for inclusion. If a consensus was reached, articles were either excluded or moved to the next stage for full-text review. If a consensus was not reached, the article was moved to the full-text review stage. The full texts of the selected articles were critically appraised to determine their eligibility for inclusion in the umbrella review. Disagreements were resolved by discussion with a third author (H.K.) until consensus was reached.
Data extraction
Two authors (G.J.C. and H.M.K.) independently identified the studies to be included in this umbrella review according to the pre-specified inclusion criteria. Discrepancies in assessment were resolved after discussion with a third author (H.K.). The following information was extracted from eligible articles: (1) authors, journal details, and year of publication; (2) descriptive information, including the databases searched, the number of studies included, the outcomes of the studies included, the total number of patients, and patients' age range; and (3) the results of the data synthesis.
Quality assessment
Two authors (G.J.C. and H.M.K.) independently evaluated the methodological quality of the included studies using the A MeaSurement Tool to Assess systematic Reviews (AMSTAR 2) tool. 18 Inconsistencies were resolved through a discussion with a third author (H.K.). AMSTAR 2 is a reliable, valid, and critical assessment tool developed from the initial AMSTAR in 2017. [18][19][20] It contains 16 checklists (7 critical checklists and 9 non-critical checklists) for assessing systematic reviews and meta-analyses, including randomized controlled trials, observational studies on exposure, or both (Table 1). The rating criteria of AMSTAR 2 are as follows: the presence of 0-1 non-critical weakness is defined as high quality; more than 1 non-critical weakness is defined as moderate quality; 1 critical flaw with or without non-critical weaknesses is defined as low quality; and the presence of more than 1 critical flaw with or without non-critical weaknesses is defined as critically low quality. The author responsible for the methodology of this study (H.K.) completed the online AMSTAR 2 checklist available on the AMSTAR website (https://amstar.ca/Amstar_Checklist.php), and a final categorization of each systematic review was generated to classify them as high, moderate, low, or critically low quality.
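The AMSTAR 2 rating rules described above reduce to a small decision function; the sketch below is our own encoding of those published criteria, not an official AMSTAR tool.

```python
def amstar2_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Overall confidence rating per the AMSTAR 2 criteria described above."""
    if critical_flaws > 1:
        return "critically low"
    if critical_flaws == 1:
        return "low"
    # No critical flaws: rating depends on non-critical weaknesses only
    if noncritical_weaknesses <= 1:
        return "high"
    return "moderate"

# Both reviews in this umbrella review had more than 1 critical flaw:
print(amstar2_rating(critical_flaws=2, noncritical_weaknesses=0))  # critically low
```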
Data analysis
Two authors (G.J.C. and H.M.K.) independently extracted the outcomes on the relationship of dyslipidemia or non-dyslipidemia and lipid profile with COVID-19 infection severity from the identified systematic reviews and meta-analyses. We recalculated the weighted mean difference (WMD) or risk ratio (RR) and the corresponding 95% confidence intervals (CIs) using the data of the primary studies included in the published meta-analyses. We used the chi-square test for homogeneity and the I² test for heterogeneity. A level of 10% significance (p<0.1) for the χ² statistic or an I² greater than 50% was considered to indicate considerable heterogeneity. A fixed-effects model was selected if the p-value for the χ² test was >0.10 and the I² value was <50%. If the I² value was >50%, a random-effects model was used. We conducted this meta-analysis using the RevMan 5.3 software provided by the Cochrane Collaboration Network.
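As a concrete illustration of the model-selection rule described above (fixed effect when p for the χ² test is >0.10 and I² is <50%, random effects otherwise), here is a minimal inverse-variance sketch for log risk ratios. It is a generic implementation rather than the RevMan 5.3 procedure, and the study-level inputs are placeholders.

```python
import numpy as np
from scipy import stats

def pool_log_rr(log_rr, var):
    """Inverse-variance pooling with a DerSimonian-Laird random-effects fallback."""
    log_rr, var = np.asarray(log_rr), np.asarray(var)
    w = 1.0 / var
    fixed = np.sum(w * log_rr) / np.sum(w)
    Q = np.sum(w * (log_rr - fixed) ** 2)                 # Cochran's Q
    df = len(log_rr) - 1
    p_het = stats.chi2.sf(Q, df)                          # heterogeneity p-value
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0   # I^2 statistic
    if p_het > 0.10 and I2 < 50:                          # fixed-effects model
        est, se = fixed, np.sqrt(1.0 / np.sum(w))
    else:                                                 # DL random effects
        tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1.0 / (var + tau2)
        est = np.sum(w_star * log_rr) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
    lo, hi = est - 1.96 * se, est + 1.96 * se
    return np.exp(est), np.exp(lo), np.exp(hi), I2

# Placeholder study-level log risk ratios and their variances
rr, lo, hi, I2 = pool_log_rr([0.30, 0.45, 0.25], [0.04, 0.06, 0.05])
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {I2:.0f}%")
```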
Assessment of the quality of evidence
In this umbrella review, we used the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system to evaluate the quality of evidence for each outcome. 21 The GRADE system includes 5 factors for downgrading and 3 factors for upgrading the quality of evidence. The baseline quality of evidence of health outcomes depends on the design of the primary study. When a serious or very serious defect could occur because of downgrading factors, the evidence quality is downgraded by 1 or 2 levels, respectively. If the effect is large (RR/odds ratio [OR] either >2.0 or <0.5) or very large (RR/OR either >5.0 or <0.2), the evidence quality is upgraded by 1 level or 2 levels, respectively. If there is evidence that the influence of all plausible confounding would reduce a demonstrated effect or suggest a spurious effect when the results showed no effect, the evidence quality is upgraded by 1 level. The rating criteria of GRADE are as follows: the primary evidence quality of an observational study is considered low, and the evidence quality is downgraded to very low if it is downgraded by 1 level, upgraded to moderate if it is increased by 1 level, and upgraded to high if it is increased by 2 levels. As a result, the GRADE system classifies the evidence quality of outcomes from eligible articles as high, moderate, low, or very low (a minimal encoding of this rating arithmetic is sketched after Table 1). The GRADE classification was independently performed by 2 authors (G.J.C. and H.M.K.). Any discrepancy was resolved via a discussion, and all discrepancies that could not be resolved through a discussion were arbitrated by a third author (H.K.).

Table 1. Checklists for assessing systematic reviews and meta-analyses according to the AMSTAR 2 tool (items marked with an asterisk are the 7 critical checklists; the remaining 9 are non-critical)
1. Did the research questions and inclusion criteria for the review include the components of PICO?
2.* Did the report of the review contain an explicit statement that the review methods were established prior to conduct of the review and did the report justify any significant deviations from the protocol?
3. Did the review authors explain their selection of the study designs for inclusion in the review?
4.* Did the review authors use a comprehensive literature search strategy?
5. Did the review authors perform study selection in duplicate?
6. Did the review authors perform data extraction in duplicate?
7.* Did the review authors provide a list of excluded studies and justify the exclusions?
8. Did the review authors describe the included studies in adequate detail?
9.* Did the review authors use a satisfactory technique for assessing the RoB in individual studies that were included in the review?
10. Did the review authors report on the sources of funding for the studies included in the review?
11.* If meta-analysis was justified, did the review authors use appropriate methods for statistical combination of results?
12. If meta-analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta-analysis or other evidence synthesis?
13.* Did the review authors account for RoB in individual studies when interpreting/discussing the results of the review?
14. Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review?
15.* If they performed quantitative synthesis, did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?
16. Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review?
AMSTAR, A MeaSurement Tool to Assess systematic Reviews; PICO, population, intervention, comparison, and outcome; RoB, risk of bias.
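Likewise, the GRADE up- and downgrading arithmetic described before Table 1 can be sketched as a simple level counter (our paraphrase of the published rules, not official GRADE software).

```python
GRADE_LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(observational: bool, downgrades: int, upgrades: int) -> str:
    """Start observational evidence at 'low' (RCTs at 'high'), then shift levels
    by the number of downgrading and upgrading factors, clamped to the scale."""
    start = 1 if observational else 3
    level = min(3, max(0, start - downgrades + upgrades))
    return GRADE_LEVELS[level]

# Observational studies with no downgrading or upgrading factors -> "low"
print(grade_quality(observational=True, downgrades=0, upgrades=0))
```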
RESULTS

Search results
The literature search performed using the keywords shown in Table 2 yielded several studies. Of the 35 articles identified, 3 duplicates were removed. After exclusion of 14 articles through title and abstract screening, the full texts of 18 articles were assessed. Fourteen studies that did not address our topic of interest 22-35 were excluded. Two studies were excluded because their focuses were on lipoprotein (a) 36 and obesity, 37 respectively. Two studies remained after full-text screening using the eligibility criteria; these were the studies by Hariyanto and Kurniawan 38 and Zaki et al. 39 Finally, 2 studies including a total of 7,951 COVID-19-positive participants were included in the current overview. The process followed in the selection of eligible studies is summarized in Fig. 1, and the characteristics of the included studies are presented in Table 3.
Quality assessment
Both of the included reviews had critical weaknesses according to the AMSTAR 2 criteria. They had in common 2 critical flaws in terms of assessing the risk of bias (RoB) in individual studies that were included in the review and accounting for RoB in individual studies when interpreting/discussing the results of the review. Additionally, the study of Hariyanto and Kurniawan 38 showed critical flaws in terms of: using a comprehensive literature search strategy; providing a list of excluded studies and justifying the exclusions; using appropriate methods for the statistical combination of results if a meta-analysis was performed; performing an adequate investigation of publication bias (small study bias); and discussing its likely impact on the results of the review if a quantitative synthesis was carried out. The latter 2 factors were not considered in the study of Zaki et al. 39 because it did not contain a meta-analysis. According to the rating criteria of AMSTAR 2, both of these studies showed critically low quality.
In terms of the GRADE system, we evaluated only the study of Hariyanto and Kurniawan, 38 which conducted a meta-analysis regarding severe outcomes of COVID-19 infection between non-dyslipidemia and dyslipidemia groups. Zaki et al. 39 performed a systematic review without a meta-analysis and included various types of studies such as clinical studies, in vitro cell studies, and a review; therefore, it was difficult to apply the GRADE system to each outcome in their study. Consequently, as the primary evidence quality of observational studies is considered to be low in the GRADE system, the systematic review was initially graded as low. Since there were no issues requiring down-grading, the study of Hariyanto and Kurniawan 38 regarding the relationship between dyslipidemia and the severity of COVID-19 infection was finally graded as providing a low quality of evidence.
Findings
There were no overlapping studies in the systematic reviews included in this umbrella review. Since data were available from 1 study 41 included in the review of Zaki et al. 39 on the relationship of dyslipidemia with COVID-19 severity, we performed a meta-analysis combining data from the 2 reviews. Our meta-analysis showed that dyslipidemia was associated with severe COVID-19 infection (Fig. 3).
Regarding the other issues, since the limited number of eligible reviews restricted our ability to synthesize and interpret their findings, a narrative synthesis of the evidence is presented in the Discussion section. The 2 reviews included in current umbrella review were more suitable for providing qualitative summaries of the evidence than for conducting a meta-analysis.
DISCUSSION
Considerable evidence has been amassed that a history of cardiovascular disease or cardiovascular risk factors such as hypertension and diabetes is closely related to COVID-19 infection and severity. 5,6,11 Since dyslipidemia is one of the most important cardiovascular risk factors, dyslipidemia is also likely to be closely related to COVID-19. However, research on the role of dyslipidemia in the risk and severity of COVID-19 is relatively lacking. One cause may be that the definition of dyslipidemia itself is rather complicated, unlike diabetes mellitus, hypertension, and obesity. Furthermore, numerous reports have stated that rapid changes in the lipid profile appear to occur in response to COVID-19 infection and the progression of the disease. 46 Therefore, there may be a major difference between defining the presence of dyslipidemia before being diagnosed with COVID-19, and defining dyslipidemia at the time of hospitalization after a patient is diagnosed with COVID-19.
In the meta-analysis conducted by Hariyanto and Kurniawan, 38 the presence of dyslipidemia was ascertained from patients' records, laboratory testing, or the use of lipid-lowering drugs. Four of the 7 studies were conducted in China, and 1 study each from the US, France, and Korea was included. The prevalence of dyslipidemia in these studies differed dramatically. In particular, in the studies conducted in China, Hong Kong, and Korea, 47,48,50-52 the prevalence of dyslipidemia was much lower than that in the other studies (1%-10%). In the US study, 45 the number of patients with dyslipidemia was 1,714, with a prevalence of 32.5%, and in the French study, 49 34 patients had dyslipidemia, with a prevalence of 28%. Since there were no overlapping studies in each systematic review included in this umbrella review, we performed a new meta-analysis to analyze the relationship between dyslipidemia and the severity of COVID-19 quantitatively. However, of the 3 studies included in the review of Zaki et al., 39 only 1 study 41 described the presence of underlying dyslipidemia. In our new meta-analysis of a total of 8 studies, combining the 7 studies 45,47-52 included in the study of Hariyanto and Kurniawan 38 and the single study 41 included in the review of Zaki et al., 39 the relationship between dyslipidemia and severe COVID-19 was confirmed. However, a relevant limitation may be that not all studies included in the 2 systematic analyses could be analyzed. Nevertheless, given the results so far, the prognosis of COVID-19 seems poor in patients with dyslipidemia, so dyslipidemia should be regarded as another important factor in risk stratification models for COVID-19. Prior to the COVID-19 pandemic, extensive research explored the link between cholesterol and viral infections. In general, cholesterol in the cell membrane plays an important role when a virus enters the host cell, 43 and the efficiency of viral infection is significantly reduced when cholesterol deficiency is induced in the cell membrane. 44 After a viral infection has already occurred, increased levels of LDL cholesterol itself can interact with macrophages in atherosclerotic plaques or engage in inflammasome activation and increase the secretion of proinflammatory cytokines. 53,54 Furthermore, low HDL cholesterol may cause dysregulation in the innate immune response, a first-line defense mechanism against COVID-19 infection. 55 Lastly, LDL cholesterol or triglyceride accumulation may cause endothelial dysfunction, increasing the risk of cardiovascular complications, leading to severe outcomes. 56 However, the systematic review of Zaki et al. 39 presented completely different results. In their study, total cholesterol, HDL cholesterol, and LDL cholesterol levels were consistently lower in the COVID-19-infected group than in the control group, and HDL cholesterol was significantly lower in patients with a high severity of disease. Zaki et al. 39 performed a systematic review of the associations of various comorbidities, such as hypertension, diabetes, high cholesterol, and cancer, with COVID-19 severity. A total of 54 articles were included, and the authors stated that 8 analyzed high cholesterol levels, but only 3 observational studies were finally included. All 3 of those studies were conducted in China. [40][41][42] Two studies did not confirm the presence or absence of dyslipidemia from patients' medical records, 40,42 and in 1 study, the presence of dyslipidemia was described, but its relationship with severity was not investigated. 41
Instead, in all studies, various lipid profiles were tested at the time of admission, and in 1 study, 40 tests were performed serially at intervals of 2 days for more than 2 weeks to analyze the relationship between lipid parameters and COVID-19 severity. Among the 7 studies included in the analysis of Hariyanto and Kurniawan 38 described earlier, 1 study 45 specified that patients' baseline lipid profile was analyzed. We found that higher total, LDL, and HDL cholesterol levels showed an inverse association with severe COVID-19 in our new meta-analysis. Consistently with our results, Fan et al. 57 and Wei et al. 58 reported similar findings from 2 small observational studies. In studies of 21 and 597 patients, patients infected with COVID-19 had lower total cholesterol and LDL cholesterol levels than healthy controls. Severe cases had lower total cholesterol, LDL cholesterol, and HDL cholesterol levels. The authors suggested that their findings may be explained by changes in cholesterol metabolism due to COVID-19 infection, changes in lipid metabolism due to hyperinflammation, leakage of LDL cholesterol due to increased vascular permeability, and accelerated lipid degradation. Changes in lipid metabolism are an early step in atherogenesis and can cause vessel injury through coagulopathy and endothelial dysfunction. Thus, the authors emphasized the role of cholesterol in vasculopathy caused by COVID-19. Related to this topic, Zhang et al. 60 reported that in a retrospective study of 13,981 patients, statin therapy (the main treatment for dyslipidemia) significantly reduced the severity and mortality of COVID-19. In an analysis of all participants' baseline characteristics, subjects with statin therapy had less dyslipidemia, defined as an LDL cholesterol level above the upper limit of the normal range according to the criteria at each hospital, and even after propensity score matching, there was a slight statistical difference between the two groups. However, even after correcting for various risk factors including LDL cholesterol, the mortality rate within 28 days was 5.2% and 9.4% in the statin and non-statin groups, respectively, with an adjusted hazard ratio of 0.58. The possibility that these results may reflect the pleiotropic effects of statins, including their anti-inflammatory and antioxidant effects, rather than their LDL cholesterol-lowering effect itself, has attracted attention. In particular, protease inhibitor-based antiretroviral and immunosuppressive drugs are used to treat COVID-19, 61 and these drugs may exacerbate hyperlipidemia, so statins may be particularly effective in patients receiving these medications. 62 Furthermore, in a study in which proteomic analysis was performed on various tissues obtained from autopsy samples of patients who died from COVID-19, 63 Niemann-Pick C1 (NPC1) was significantly upregulated in most organs, including the lung, spleen, and heart. Considering that lipids and lipid metabolism play an important role in the process of viral replication, 64 NPC1 may be a potential drug target for COVID-19. Prospective randomized clinical studies are needed to obtain an accurate answer.
An important limitation of this overview is that the number of studies included-reflecting the number of reviews that included data regarding the association between dyslipidemia and COVID-19 infection-was limited, precluding a stringent analysis of this end point. Moreover, the methodological quality assessment showed that both reviews had multiple critical flaws, making the current umbrella review limited. The situation regarding the COVID-19 pandemic is changing so rapidly, with such serious consequences, that numerous reports are published without a sufficient evaluation of the data quality. This may explain the low methodological quality of the reviews included in the present study. Consequently, we conducted this overview as a narrative synthesis of the evidence. In addition, we only analyzed articles in English. If the search had been done in other languages, more studies could have been included. Despite these limitations, however, this is the first umbrella review investigating the association between dyslipidemia and COVID-19 infection. Furthermore, we applied a rigorous methodology to conduct an umbrella review of systematic reviews. Hence, the evidence from this study is important from the perspective of aggressive treatment and disease prevention.
In conclusion, we conducted an umbrella review to identify the findings and contents of systematic reviews and/or meta-analyses of the relationship between dyslipidemia and COVID-19 infection severity. According to the AMSTAR 2 criteria and GRADE system, the studies did not show high quality, and the primary evidence quality of both the observational studies and the systematic reviews was considered low. Although it is difficult to draw a definitive conclusion, patients with a history of dyslipidemia are likely to be at risk for severe COVID-19 infections, but the contrary finding was shown for cholesterol levels at hospitalization. These findings suggest that dyslipidemia may potentially play a role in the severity of COVID-19 infection. These results provide clinically valuable evidence in this pandemic situation because existing research on dyslipidemia and COVID-19 is limited. More adequately powered studies are needed to obtain reliable results on this issue. It is very important to ensure that future research is well-designed, since conflicting results may occur depending on when dyslipidemia is defined or laboratory tests are performed.
"year": 2020,
"sha1": "ebd3560062c23900ae0067b48f0723f422f57ffb",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.12997/jla.2020.9.3.435",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebd3560062c23900ae0067b48f0723f422f57ffb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Host Presidents' Address: A Discussion on Ways Catholic Higher Education Institutions Can Assist Catholic Elementary and Secondary Schools
As part of the third Catholic Higher Education Collaborative Conference (CHEC), an event cosponsored by Boston College and Fordham University, the host university presidents, Rev. William P. Leahy, S.J., and Rev. Joseph M. McShane, S.J., addressed conference attendees and discussed ways higher education institutions can assist Catholic elementary and secondary schools. This article contains a transcript of their remarks. Fr. Leahy, the 25th president of Boston College, has a keen interest in Catholic schools, understanding their importance for the nation and for handing on the Catholic tradition to the next generation. Through his efforts, the Center for Catholic Education at Boston College, now the Roche Center for Catholic Education, was established. Fr. McShane, the 32nd president of Fordham University, shares a strong commitment to Catholic education. He has become one of the most powerful voices in New York City speaking on behalf of the need for Catholic schools. The work of Fordham’s Graduate School of Education and Center for Catholic School Leadership with Catholic schools throughout the metropolitan region has received support from Fr. McShane.
meet all needs and respond to all requests.
But there are things that we can do and are doing. Critical to any collaboration is an assessment of needs and capacities, and that applies both to the Catholic colleges and universities and to Catholic elementary and secondary schools. I think we have to be honest with one another, not overpromising and certainly not underdelivering once a promise has been made. In my experience, most Catholic college and university leaders are willing to offer expertise and assistance to parochial schools, but there has to be a willingness on the part of the diocese or archdiocese to involve them. There is still a level of suspicion in some quarters about Catholic institutions of higher education that they are not orthodox, or that somehow they are not really part of the Catholic community.
We can help with strategic planning and with fundraising, but it is important to acknowledge at the outset that the Catholic community has not always invested sufficiently and wisely in Catholic education. In too many instances, we have been running off our past educational reputation. Potential donors increasingly have to be convinced that a gift to a Catholic school will be worthwhile. Those who have given money over the years see a number of Catholic grade schools closing. So they ask, is their support a good use of resources?
I also think we can do a better job at planning the scope and timing of assistance, especially if there are a number of Catholic colleges in a metropolitan area. Curriculum workshops and coaching of teachers can be done, but need scheduling. Someone has to take the initiative to deliver teachers to these workshops. Catholic colleges and universities have the expertise to provide programs on language arts, math, and sciences, and to acquaint teachers with new curriculum developments and models of instruction.
We all know there is a huge need for the next generation of principals in Catholic elementary and secondary schools. We do have teachers today in our Catholic schools who could be very effective principals. We first have to identify them, encourage them to think about becoming leaders at the assistant principal and principal level, and then help them with the education and mentoring that will allow them to assume leadership roles. I was talking with the head of school at a local Catholic elementary school recently, asking him about his faculty. He has roughly 20 teachers and has just been there 3 months, yet in that time he has seen leadership potential in his faculty. He said, "I have my eye on three teachers that I think could become principals." We at the Catholic colleges and universities can create training programs for these future leaders. There would have to be some tuition assistance from the diocesan or archdiocesan office, and I think the individual getting the degree should invest as well. All involved should have a stake in mentoring and leadership development programs.
In addition, I think we overlook in many instances the willingness of our students, faculty, and alumni to volunteer their talent and services for Catholic schools. It is possible to have MBA students supervised by a faculty member to review the financial operations of schools, devise needed systems, and help standardize financial reporting mechanisms. Alumni could also do facilities assessments and help develop long-term plans for the physical plants of our grade schools and high schools. I believe there are numerous retired nurse alumni from Catholic colleges and universities who would be willing to serve as volunteer nurses 1 or 2 days a week in our grade schools. Finally, I think graduates of our schools of social work could help families in our grade schools that need assistance with immigration status or just coping with American life.
These are numerous practical ways in which Catholic higher education institutions can assist Catholic elementary and secondary schools. We should also remind ourselves of the importance of Catholic education and why all members of the Catholic community should be involved in supporting Catholic schools. We are all part of the Catholic culture, Catholic life; whatever we can do to assist schools at any level redounds to the good of all of us in the Catholic community and to wider society. It is part of our mission and the mission of the Church to evangelize and improve the quality of life not only in the United States but throughout the world.
We also must take on the question of why we do not have more Catholic grade schools in suburban communities. We have pastors who are not keen on being involved with grade schools. We also have a Catholic community that says unless the schools are really great, we are not going to send our children. I am convinced that Catholics will support quality elementary and secondary schools. It is interesting-the Jewish community here in Newton, Massachusetts, has been starting new day schools and expanding the ones that it currently has. A member of the Jewish community said to me, "Isn't it ironic? You Catholics are giving up your parochial school system, and we Jews are starting ours." They are doing that because it is a way of handing on faith, of sustaining a culture.
No doubt, we have a lot to do, but these challenges can be met with vision, hard work, and the grace of God.
Father McShane:
Where Fr. Leahy ended is really where I am going to be speaking; I am not going to be talking about practicalities. I am really going to be talking about more fundamental things. As we all know, the Catholic school system, or more precisely the Catholic school systems, in the United States face tremendous challenges at this time. Among the challenges they face are the following: declining numbers of religious women and men to staff them; increased cost of attendance; a seemingly lessening commitment to the schools on the part of the Catholic community, parishes, and bishops; nostalgia for what was remembered as the "golden age" for Catholic education; the suburbanization of the Catholic American life, which is one of the great triumphs and one of the great crosses of the American Church; increased competition that our schools face in the suburbs; and the acceleration of secularization in the American life. Now, faced with all of these challenges-faced with these challenges and all the ones that you know, lesser souls retire from the field and declare that the American Church has outgrown its need for the schools, the schools that have been such a distinguishing feature of the Catholic American life. One of the bragging points of the American Church is that there is no national church outside the United States that has had such an extraordinarily large and effective system of schools. In other places you have academies for boys or girls run by churches and congregations. But here in the United States, it is a unique system, or series of systems, and it is one that we should be proud of. In spite of the challenges, you are convinced that the schools are worth preserving. I admire you for that, and I share your sense of conviction. I take heart from the fact that this conference is taking place, as I am sure someone this morning has told you, on the 27th of September, which is the anniversary of the papal approval of the Society of Jesus by Pope Paul III in 1540. So it is, I think, a fitting day for us to gather. Why? The fact that we are gathered on this anniversary date reminds us that a small group with focused energy and a vision-a vision of service to the Church-can make a difference, can change worlds. Filled with that sense of optimism that comes from the anniversary-and I am told it is also the 350th anniversary of the Birth to Eternal Life of St. Vincent de Paul-let me address a few points.
First is the need to convince the Catholic community to support the schools into the future. Now, you may think that this is a largely marginal topic, or tangential one, given that you intend to talk over the course of this day about, to your minds, more pressing things. I would contend that unless we are able to rally the Catholic community-and this is what Fr. Leahy was just saying-to support the schools, all the high-flown plans that we have drawn up will be for naught. Second, in order to rally the support of the Church around the schools, we have to address the primary challenge: articulating quite clearly a compelling vision for Catholic education, a vision that will both set Catholic schools apart from all others, both public and private, and invite people to invest in them, and to invest quite heavily in them. I am going to suggest some items that I think are important to this vision: First, you have got to be distinctive; and second, you have got to convince people to invest, and you have to say that this is not a cost that you are paying this year-this is an investment for life. We used to be able to do that. There was an assumption that sending a child to a Catholic school was not just this year's expense; it was an investment for life. We have to do that again. Third, on the basis of the shared vision that we articulate, we have to convince all segments of the Catholic family to create or to renew a community of shared concern devoted to the survival and renewal of American Catholic education. Finally, we have to embrace the need to think globally, as we look at what the schools can be in the future, but act locally when it comes to nurturing the communities of concern that will, in turn, nurture the schools. Now, I know that sounds very 1960s, but in point of fact, I think that is the way we have to proceed, with a general vision-a global vision-which is compellingly beautiful, and that is shared nationally, but then locally you have to listen to the local community. Now, with your permission, I would like to address the four points in brief. I want to begin with the second point: articulating a vision. In light of the fact that our schools exist and must survive in a very competitive environment-and I know all of you experience that, I know Fr. Leahy and I experience that-we have to be savvy. It is essential that we wrestle with the question of what makes or what can make Catholic schools attractive to families and students. I would suggest the following characteristics, which would have to figure prominently in the vision statement or in the mission statement that should guide the schools: first, creating and sustaining a student-centered, nurturing environment in which students are cherished and challenged at the same time. Because we are faith-based schools, I think it would be essential for us-and the only honest thing for us to do-to trace the impetus behind the creation of such an environment back to the Gospel. Second is an honored and unwavering commitment to academic excellence. This is essential for all of our schools, whether they are in the inner city, in the suburbs, or in the most affluent areas of our cities, suburbs, towns, and villages. After all, no family is going to invest in Catholic education unless the schools will be able to prepare their students, their sons and daughters, for success in life. It is, therefore, important that we make it clear that our schools do not insult their sons and daughters with low expectations. That is true
no matter where you are. It is one of the great things about the NativityMiguel and Cristo Rey schools. It is part of their mantra: We will not insult you with low expectations. The same can be said for the schools in the most affluent suburbs. Why would I send my child to this Catholic school in Newton, Massachusetts? Because we are not going to insult them with low expectations. We are going to push them. Excellence has got to be there.
Third is a deep commitment to values education. This is also essential. If we do not have a value-added dimension to the education that we offer, there is once again very little reason for families to invest in sending their children to our schools. Again, it would be important to trace the impetus for our commitment to values education back to the Gospel, where we really get our guidance and our inspiration. It is the impetus for all that we do. Of course, what I have not said is that I believe that we should be quite up front about claiming that our schools are devoted to character development. Now, I have gone from values education to character development. These are, I think, two things that are linked, and I think that they really do set us apart from all other schools, public and private. There is a sense that you are investing in something that is going to form character; you are investing in something that is, therefore, going to set your son or daughter on a good road for life and for life everlasting. I really do not hesitate to say that. It is an important part of who we are. Finally, we must include an unstinting devotion to faith education. This devotion, of course, is alluded to in all the other characteristics that I have just outlined. We have to be quite clear about the fact that we believe that our schools have a transcendent dimension that makes them worthy of support. It is not just a transactional dimension: pay your money, get an education. There is a transcendent dimension. It would be easy and nice to say that this transcendent dimension is the foundation, but that is not all that it is. It is both the foundation and the spirit that suffuses everything. That is something that we have to be up front about.
The second point I wanted to make is the case for the schools. If schools are marked by commitments to personal care, excellence and rigor in the classroom, values education, and formation in both faith and character, I think that they will be what we want them to be: schools of distinction that are worthy of support, indeed, enthusiastic support. That is, of course, easy for me to say. But let me be honest and admit that it may not be easy to rally all the segments of the Catholic community to embrace the cause of the schools at this point in our history. Why? There are lots of reasons why. You have 90 grammar schools in the Archdiocese of Boston. New York has 220 schools, educating over 115,000 students per year. The Archdiocese of Brooklyn, up until 2 years ago, had 46,000 students. Together they would probably be the seventh largest school district in the United States. But let us be honest: bishops have other things to worry about. There is competition for the attention of the bishop. Without broad support, the schools will fail. Therefore, what can and should be done to convince the entire community to support the schools? Quite bluntly, we have to engage in a sophisticated and sustained public relations campaign. It is as simple as that. We call it capital campaigns in admissions work and in higher education, but it is public relations. The campaign has to be about convincing bishops, pastors, parish councils, families, and students that the schools are governed by the vision that I referred to a little while ago and guided by the characteristics contained in that vision-that they are investments that last a lifetime, indeed, that last more than a lifetime if they transmit the faith. We have to sell the schools as a new evangelization. In the process, we have to make it clear that the schools will prepare students for success in life, and prepare them with the faith that will enable them both to make sense of life, in the light of transcendent beliefs, and to live lives filled with purpose. It is a daunting task, but one that I believe we have to give ourselves over to. If we do not, the schools will fail. If the schools fail-and this goes to Fr. Leahy's point-the Church will be immeasurably impoverished. If you want to see what the American Catholic Church will look like without the schools, which prepare and have prepared generations of faith-filled leaders of the Church-and I say this as a historian and with objectivity-look at mainstream Protestantism. These are churches that are crying out for life and looking for ways to bring generations along in the faith. I think we have to learn from them, from their sense of loss and poverty.
If we are able to sell the schools, we can and will create communities of concern that will enable the schools to survive. Now, who would the members of these communities of concern be? Well, the usual suspects: pastors, parish councils, lay leaders, religious communities, but, most importantly, you must have the families. These are the communities of concern. There are two different kinds of communities of concern. One is institutional-bishops, chanceries, religious communities-and the second is the neighborhood. I am not talking nostalgically here, although when I grew up my life was completely defined by my neighborhood, which was completely defined and dominated by the parish, which had the church and the school, the convent, the brothers' house, and the rectory-and my parents. That is the way my parents felt-that they, at one and the same time, were part of it. They listened to it, but they owned it, too. Ownership is a big thing here. Tantalizingly, we cannot just cultivate the institution or the neighborhood. We have to do them both, and we have to cultivate them both at the same time. We have to convince both the institutional and the familial neighborhood communities that their futures depend on the schools. Therefore, we return to the need for public relations.
Moreover, these campaigns have to be bilingual, but not in the sense that you think. Rather, to the Church, the institutional community, we have to speak in terms of religious faith and point to the need to use the schools to ensure the faith will survive. To the neighborhood communities we have to speak of the more worldly value of the schools, as instruments that will enable children to succeed in life, to make it up the ladder, to achieve the American dream. Let us be honest: this is what American Catholicism has always been about in the work of its schools-two things at once: preserving the faith and making it possible for children to succeed. The Church never shied away from connecting the two. I do not shy away from it either. I sell Fordham by basically saying to prospective students and their parents: Come here! And what will happen? You will be prepared for life. These are the alumni that prove that we prepare people for life, and for success in life. And you will also be bothered for life by the sense that there is injustice in the world, and that there are things that you do not know. We have always put the two together. If we can make the case that the schools will serve the self-interest of these communities, then the communities of concern will grow up, and they-and they will-in turn, take responsibility. Ownership is big; responsibility is big. But you have to convince people it is worth it.
The last thing is thinking globally and acting locally. Here, I take a page from history. The schools have to be responsive to the needs and interests of the communities they serve. If you look back at our history, this was one of the things that was terrific about the Church. The Church pursued a centripetal strategy: it sought to bring all the ethnic communities together by pursuing a centrifugal local strategy. That is to say, if you had a German parish, you did not bring in Irish nuns. You brought in German nuns. You listened to the needs of the local community, you catered to the needs of the local community, and what did you do? You bonded the local community more deeply to the Church. Over time the Church was one. As we go forward, the same is true into the future. We must have a general vision of what Catholic education is all about, but we have to recognize that you cannot have a cookie-cutter pattern. American Catholic education in the inner city is going to look very different in the South Bronx than it is going to look in El Paso, Texas. It is going to look very different in downtown St. Louis than it does in Southie (South Boston). That is not bad, as long as the basic vision and characteristics are present in all the schools: student-centered care, a nurturing environment, rigor and excellence, values education, and the transcendent devotion to the faith. That is what made our schools successful: they have been neighborhood schools, owned by the neighborhood, and they listened to the neighborhood. That is the way my life was structured when I was a kid. I am sure many of you had the same structure. The ownership is important.
You will notice that I did not mention anything about the biggest problem, namely money. You will think that I am a coward for not addressing that problem. Well, I am and I am not. The American Catholic Church has always lived by the seat of its pants. It has always addressed challenges as they have arisen, and it has always been able-and Fr. Leahy was hinting at this-to convince the people in the pews that the work of the Church, because it was work that was done in response to need, was not only worthy of support; it really spoke to their hearts, and money came. I think that if we are smart enough we will be able to move forward. Also, I have to say, as a Jesuit on this Jesuit anniversary, I really do remind myself regularly that by any sane notion or measure the Society of Jesus should have gone out of existence shortly after it was founded. What saved us was the fact that our founder was visionary, given over to discernment; and, therefore, he was committed to reading the signs of the times and responding appropriately. That is, I think, what the Church is always called to do, and what the American Church has done remarkably well. The American Catholic Church has been successful precisely because it has been able to change; it has been able to answer new challenges. Right now I think what we have to do is use our assets-sometimes sell them-so that the central mission, preserving the faith and making it possible for kids to succeed in the schools, continues. | 2018-12-02T01:43:43.361Z | 2011-08-22T00:00:00.000 | {
"year": 2011,
"sha1": "5b52bb393116ffd5726666be94e2e2ad8cd43469",
"oa_license": "CCBY",
"oa_url": "https://digitalcommons.lmu.edu/cgi/viewcontent.cgi?article=1710&context=ce",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a16f5e6b4b85ce6dc2267730b7949256ed8c37fb",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
227251655 | pes2o/s2orc | v3-fos-license | Foam/Film Alternating Multilayer Structure with High Toughness and Low Thermal Conductivity Prepared via Microlayer Coextrusion
Multilayer membranes prepared via microlayer coextrusion have attracted wide attention due to their unique properties and broad applications. In the present study, foam/film alternating multilayer sheets based on ethylene-vinyl acetate copolymer (EVA) and high-density polyethylene are successfully prepared via microlayer coextrusion. The cells in the sheets form a single-cell array along the foamed EVA layers, with uniform cell size. In addition, the effects of layer number and foam relative thickness on morphology, mechanical properties, damping, and heat-insulation properties are investigated. The cell size decreases significantly with increasing layer number due to the enhanced confinement effect. The tensile strength, elongation at break, and heat insulation also increase significantly. However, the mechanical damping properties change little in the observed frequency range. Meanwhile, with a higher relative thickness of EVA foam, the sheets have lower tensile strength and lower thermal conductivity, while the damping properties are enhanced in a specific frequency scope. The elongation at break of the optimized sample reaches 800% and the thermal conductivity decreases to 61 mW·m−1·K−1, showing high toughness and low thermal conductivity and indicating a possible method for preparing materials with high toughness and heat-insulating properties.
INTRODUCTION
Polymeric foams are widely used in the packaging, insulation, aeronautic, automotive, and construction industries due to their light weight, high specific strength, low thermal conductivity, and high-performance cushioning compared to non-foamed polymers. [1−6] However, the conventional bulky foams used for thermal insulation usually have poor mechanical properties and are seldom used alone. In daily applications, polymeric foams often need to be glued or bonded to other high-strength materials in order to distribute loads over broad areas. The process is time-consuming, complex, and costly. [7] A technique based on the coextrusion method to produce a foam-sandwiched structural composite with polymeric foam as the core layer and solid plastic as the skin layers was proposed, which shows higher strength, better tear resistance, and improved impact resistance. However, the foam-sandwiched structure obtained by this technique only contains a single foam layer or a few foam layers due to the limited number of extruders. [8,9] Microlayer coextrusion refers to two or more polymer melts extruded in dies to form multiple layers by superimposing, splitting, and converging. The number of layers can reach as high as thousands with two or three extruders, [10,11] and the thickness of a single layer can even reach the nanoscale, close to the lamella thickness of polymeric crystals. [12−14] This technique allows the desirable properties of different polymers to be combined into one structure, and it has been widely used to produce high-barrier packaging, electromagnetic shielding, high-efficiency sound absorption, high-capacity information storage media, advanced capacitors, biomedical scaffolds, microporous filtration, and lithium-ion battery separators. [15−17] When the thickness of a single layer comes to the nanoscale, a nanoconfinement effect is generated, and the molecular chains and crystals are organized differently from the normal state. [18,19] In terms of performance, multilayer materials show completely different behavior in the fields of mechanical, [20,21] optical, [22,23] insulation, [24−26] and electromagnetic properties. [27,28] Microlayer coextrusion has also been employed to prepare multilayer alternating foamed structures. [29−31] Baer et al. first reported a multilayer foam/film structure made by microlayer coextrusion. [29] The structure consisted of alternating soft foam layers and solid film layers. The comprehensive behavior of 8−64 layer polypropylene (PP) foam/PP film structures was similar to that of cork; the tensile and compressive properties of the material increased with an increasing number of layers, which greatly improved the mechanical properties of the foamed materials. The optimal sample broke at 8 MPa and 12% strain. Rahman et al. developed low-density polyethylene (LDPE) multilayer foam/film systems with a viscosity contrast between the film and foam layer polymers. [30] They found that a high viscosity contrast contributed to layer integrity and intact closed cells. With a higher layer number, the confinement effect on cell growth increased significantly, resulting in single-cell alignment in the foam layers. Sun et al. [31] used a linear low-density polyethylene (LLDPE)/poly(ethylene-octene) elastomer (POE) blend as the film layers and cross-linked POE as the foam layers. They found the sound absorption coefficient of the multilayer foam/film structure was 2−3 times higher than that of the conventional single-layer material in the selected frequency range.
Moreover, they optimized the single-layer thickness and the ratio of foam/film layers to obtain the most efficient sound absorption at specific frequencies.
As mentioned before, most work on foam/film multilayer structures in recent years has mainly focused on the acoustic and mechanical properties of such structures, while other physical properties have rarely been paid attention. In the present study, the effects of layer number and foam relative thickness on the heat-insulating and mechanical damping properties of a foam/film alternating multilayer structure are investigated. In addition, most previous studies chose polyolefins as the foam material, which limits property exploration and potential applications. Controlling the foam stability in the polyolefin melt is the most important issue. Other than the method of cross-linking, using the higher-viscosity melt as the confining film layer and the lower-viscosity melt as the foam layer in microlayer coextrusion may become a useful method for preparing alternately multilayered foam/film structures. In this work, an ethylene-vinyl acetate copolymer (EVA)/high-density polyethylene (HDPE) foam/film alternating multilayer structure was prepared via microlayer coextrusion, which may greatly improve the elongation at break of the material and overcome the disadvantage of easy breaking of conventional foam structures, providing a new direction for preparing materials with high toughness and low thermal conductivity.
Preparation of EVA Foam/HDPE Film Alternating Multilayer Structure
EVA foam/HDPE film alternating multilayer structures were prepared on a home-made microlayer coextrusion device. [13,32] The previously prepared masterbatch of EVA, nucleating agent (CaCO3), and chemical blowing agent (azodicarbonamide) was fed into one extruder, while HDPE was fed into the other extruder. These two melt streams were merged in the feed block to form two parallel layers, and then the multilayer structure was achieved after flowing through the section of layer multipliers. During extrusion, the temperature of the extruders and multiplier elements was set at 180 °C, and that of the foaming die was set at 200 °C. The screw rotation rate of the extruder for HDPE was fixed at 10 r·min−1, while that of the extruder for EVA was decided by the volume ratio of foam/film. The schematic diagram of the preparation of the foam/film alternating multilayer material is shown in Fig. 1. The formulations with different layer numbers and volume ratios of foam/film are listed in Table 1. For comparison, EVA/HDPE blends were prepared in a mixer. The neat EVA, neat HDPE, solid blend sheets, and blend foams were prepared on a hot press. The proportions of the EVA/HDPE blends and the blend foams are shown in Table 2.
Characterization
The foam/film layered structure of each sample was observed by optical microscopy (KEL-XMT-3100 from Nanjing Kyle Instrument Co.). The samples were cut to approximately 50 μm thickness with sharp blades perpendicular to the extrusion direction at room temperature. The cell distribution and the layer stratification in the samples were observed in transmission mode, and more than 30 cells were counted for each sample to obtain the average cell size. Damping properties of the samples were evaluated using dynamic thermomechanical analysis (Q800, TA Instruments). Rectangular samples of 60 mm × 10 mm × 1.2 mm were used in all tests. The measurement was set to dual cantilever mode, the amplitude was 25 μm, and the frequency was set to 1−50 Hz. All tests were performed at 30 °C. The loss tangent (tanδ) is used to measure the energy loss during vibration.
Thermal conductivity of the samples was measured on a Lambda heat flow thermal conductivity analyzer (HFM446, Netzsch Co.) under steady-state heat flow conditions. The upper plate temperature was set at 10 °C as the hot plate and the lower plate temperature was set at −10 °C as the cold plate, giving a temperature difference of 20 °C. All tests were repeated three times to get the average value. The thermal conductivity (λ) can be calculated using Eq. (1) as follows:

λ = N·qT·L/ΔTm (1)

where N is a calibration factor, L is the sample thickness, qT is the heat flux, and ΔTm is the temperature difference.
To measure the apparent density (ρ) of the samples, three specimens for each sample were cut to a width of 120 mm and a length of 120 mm. The weighing method was used, and all tests were repeated three times to get the average value. The apparent density can be calculated by Eq. (2) as follows:

ρ = m/V (2)

where m is the mass of the sample and V is the volume.
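As a quick illustration of how Eqs. (1) and (2) are applied in practice, a minimal sketch is given below; all numerical inputs (calibration factor, heat flux, specimen mass) are hypothetical placeholders, not measured values from this study.

```python
# Minimal sketch of Eqs. (1) and (2); all inputs are illustrative placeholders.

def thermal_conductivity(N, q_T, L, dT_m):
    """Eq. (1): lambda = N * q_T * L / dT_m.
    N: calibration factor, q_T: heat flux (W/m^2),
    L: sample thickness (m), dT_m: plate temperature difference (K)."""
    return N * q_T * L / dT_m

def apparent_density(m, V):
    """Eq. (2): rho = m / V (kg/m^3 for m in kg and V in m^3)."""
    return m / V

# Hypothetical 1.2 mm thick, 120 mm x 120 mm specimen with a 20 K difference.
lam = thermal_conductivity(N=1.0, q_T=1000.0, L=1.2e-3, dT_m=20.0)
rho = apparent_density(m=5.0e-3, V=0.120 * 0.120 * 1.2e-3)
print(f"lambda = {lam * 1e3:.0f} mW/(m K), rho = {rho:.0f} kg/m^3")
```

With these placeholder inputs the script prints a conductivity of 60 mW/(m K), i.e., the same order of magnitude as the values reported for the optimized sheets, which is only meant to show that the units work out.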
RESULTS AND DISCUSSION
Preparation and Morphology of EVA Foam/HDPE Film Alternating Multilayer Sheets
EVA foam/HDPE film alternating multilayer materials were prepared via microlayer coextrusion, as shown in Fig. 1. EVA, nucleating agent (CaCO3), and chemical blowing agent were previously prepared into a masterbatch, which was then fed into one extruder, while HDPE was fed into the other one. The multilayer structure was obtained after flowing through the section of layer multipliers, in which the blowing agent started to decompose and produced a large amount of gas to develop the foam layers at the same time. The effects of the content of AC blowing agent on the morphology of the foam/film alternating multilayer materials are shown in Fig. 2. The dark regions are EVA foam layers and the bright parts are HDPE film layers. In the EVA/HDPE multilayer system, adjacent layers show obvious stratification and good interlayer adhesion; the peel strength between layers of sample 8L, 0/1 is shown in Fig. S1 (in the electronic supplementary information, ESI). From Figs. 2(a)−2(c), the cells are distributed in a single-cell array along the EVA layers. The surface EVA layer has no cell distribution because there is only one single HDPE film layer adjacent to the surface EVA layer, which cannot provide a confinement effect to limit cell growth and prevent gas escape. With the AC blowing agent increasing from 0.5 wt% to 1 wt%, the number and size of the cells increase. When the blowing agent content comes to 0.8 wt%, the number and size of the cells are moderate, and the multilayer structure is not damaged. Therefore, the content of blowing agent was fixed at 0.8 wt% in the following foaming samples. Fig. 3 shows the cross-sections of EVA foam/HDPE film alternating multilayer sheets with different foam thicknesses. The foam/film alternating multilayer sheets exhibit a completely different multilayer structure from the EVA/HDPE blend foam or the unfoamed EVA/HDPE multilayer sheet. The relative thickness of the EVA foam also has a significant influence on the size of the foam cells. Fig. 4 shows the cell diameters of different samples counted from Figs. 2 and 3. As shown in Fig. 4, with the thickness of a single layer decreasing, the average diameter of the cells decreases significantly from 235 μm at 8 layers to 89 μm at 32 layers. The reason is that the closed environment between the film layers has a strong confinement effect on cell growth. The higher the layer number is, the more enhanced the confinement effect will be, and the cell diameter decreases obviously. At the same time, the cell diameter increases from 89 μm under the Vfoam:Vfilm = 1:1 condition to 120 μm under the Vfoam:Vfilm = 3:1 condition. More gas generated by the decomposition of the blowing agent and a weaker inter-layer confinement effect of the films both allow the cells to grow more easily, which contributes to a larger cell diameter. The tensile behaviors of the samples are summarized in Table 3. In EVA foam/HDPE film alternating multilayer sheets, the solid HDPE films play the major role in mechanical load bearing. Therefore, the tensile behavior of the system as a whole is mainly contributed by the tensile properties of the solid HDPE. In each stress-strain curve, the sample initially shows a short linear elastic deformation stage, and then the stress gradually increases with stretching due to the work hardening effect. In the stretching process, the cells bend and stretch, and densification appears after the cells collapse. Compared with the 8-layer unfoamed sample, the tensile strength and elongation at break of the 8-layer foamed sample decrease significantly.
With the increase of the layer number from 8 to 32, the elongation at break improves from 600% to 800% and the tensile strength at break increases from 6.5 MPa to 10 MPa (Fig. 5, tensile behaviors of EVA foam/HDPE film alternating multilayer sheets). With an increasing number of layers, the orientation of the molecular chains along the extrusion direction is enhanced during the extrusion process. [13] Moreover, more layers lead to smaller cells in the material, which means fewer and smaller defects, so the material is harder to break when stretched. With higher EVA foam content in the samples, the yielding stress decreases, while the elongation at break is almost unchanged. This should be attributed to the fact that the mechanical properties of the multilayer structure along the extrusion direction are mainly provided by the HDPE layers, so the tensile strength decreases on account of the lower HDPE film content but the elongation at break remains constant.
Tensile and Compressive Properties of EVA Foam/HDPE Film Alternating Multilayer Sheets
The compressive stress-strain curves of various samples are displayed in Fig. 6. Each compressive stress-strain curve initially shows a short linear region and then a long plateau; finally the stress rises rapidly as compression proceeds. In the initial stage of compression, the cells bend and deform, reflecting the linear elastic modulus. When the cells begin to collapse, a long plateau appears in the stress-strain curve. After the cells collapse, densification appears and the curve rises sharply. As seen in Fig. 6(a), the compressive modulus and densification strain increase with increasing layer number. Fig. 6(b) shows that a higher foam content causes a lower compressive modulus but a higher densification strain.
Dynamic Thermomechanical Analysis of EVA Foam/HDPE Film Alternating Multilayer Sheets
The storage modulus represents the ability to store elastic deformation energy and is an index of rigidity. Fig. 7 displays the dynamic mechanical analysis of the EVA foam/HDPE film alternating multilayer sheets. As shown in Fig. 7(a), compared with the 8-layer unfoamed EVA/HDPE alternating multilayer sheet, all multilayer foamed samples have much lower storage moduli in the frequency range of 1−50 Hz. As the layer number increases and the EVA foam relative thickness decreases, the storage modulus increases. The reason is that a higher layer number leads to enhanced polymer chain orientation, which means higher rigidity and storage modulus. A thinner foam layer also causes higher strength and storage modulus. These results are consistent with the tensile and compression results above.
The ratio between the loss modulus and the storage modulus is tanδ, which can be used to measure the energy loss during vibration. Fig. 7(b) shows that all foam/film alternating multilayer sheets have higher tanδ than the 8-layer unfoamed EVA/HDPE alternating multilayer sheet. The reason is that the energy dissipation mainly depends on the friction between the polymer segments when an unfoamed sample is deformed by external forces. The alternating multilayer foaming structure introduces a large number of cells. When the structure is deformed, apart from polymer chain friction, the deformation and recovery of the cells transform a large amount of energy into heat, resulting in a higher loss tangent. [33] Theoretically, differences in interface properties during vibration will also cause more energy consumption with increasing layer number. [34] However, the difference in tanδ between 8 and 32 layers is not significant in this work. The reason may be that the EVA and HDPE layers have good interfacial adherence, resulting in little energy loss caused by the layer interfaces during vibration; hence, the effect of layer number on tanδ is not obvious in the EVA/HDPE system.
Effects of Layer Number and Foam Relative Thickness on Thermal Conductivity
Thermal conductivity is an important index to measure the thermal insulation capability of foamed materials. As shown in Fig. 8(a), the thermal conductivity decreases with increasing layer number and with higher foam relative thickness. The results can be explained by the heat transfer mechanism, which includes three modes (heat conduction, heat radiation, and heat convection). When the environment temperature is lower than 200 °C and the cell size in a porous material is less than 4 mm, heat radiation and heat convection can be ignored, and heat conduction becomes the major heat transfer mode. [35] The heat conduction in foamed materials is composed of gas and polymer heat conduction, and the thermal conductivity of the gas is much lower than that of the polymer. This explains why the samples with higher foam relative thickness show lower thermal conductivity. [36] When the layer number increases, the sample density does not change greatly, as displayed in Fig. 8(b), while the thermal conductivity decreases significantly. The reason is that heat energy in the polymer is mainly transmitted through the diffusion of phonons, and at the interfaces between the EVA foam and HDPE film layers the scattering of phonons is intensified, which impedes the transmission of phonons and effectively reduces the thermal conductivity. [37,38]

CONCLUSIONS

EVA foam/HDPE film alternating multilayer materials were successfully prepared via microlayer coextrusion. The cells, with uniform size, show a single-cell arrangement along the EVA foaming layers. The cell diameter decreases from 245 μm at 8 layers to 89 μm at 32 layers due to the significant confinement effect. The tensile strength and the heat insulation increase with increasing layer number, while the damping properties change little. A higher foam relative thickness causes weaker tensile strength, lower thermal conductivity, and higher damping properties in a specific frequency range, while the elongation at break basically keeps constant. After optimization, the elongation at break reaches up to 800% with a thermal conductivity as low as 60 mW·m−1·K−1, showing high toughness and good heat insulation.
Electronic Supplementary Information
Electronic supplementary information (ESI) is available free of charge in the online version of this article at https://doi.org/10.1007/s10118-021-2524-0. | 2020-12-03T09:03:09.063Z | 2020-11-30T00:00:00.000 | {
"year": 2020,
"sha1": "f059a7e2ebc5b20396e9cf9d8b3c04c696f78f30",
"oa_license": "CCBY",
"oa_url": "http://www.cjps.org/article/doi/10.1007/s10118-021-2524-0",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "8c1ac29484c139b041c2e2c290d528a5fa9b4b95",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
241408686 | pes2o/s2orc | v3-fos-license | Facilitators and barriers to advancing binational health coverage strategies for undocumented Mexican migrants in the United States of America
Background: Within the framework of a new national health program with emphasis on universal coverage strategies, and in the context of the revision/adjustments to the North American Free Trade Agreement (NAFTA/TEMEC), the present study aimed to identify barriers, facilitators, and challenges for the development of strategies on social protection in the health of migrants and their families. Material and methods: Evaluative research based on a qualitative analysis with a cross-sectional design. The techniques of documentary analysis, applied political analysis (mapping of actors), in-depth interviews, and case studies were used. In the first stage, key actors were mapped at the federal level, and senior executives and health officials, federal deputies, senators, and members of the Mexican foreign service were interviewed. In the second stage, field work was carried out in the state of Guanajuato and in California; state health service officials, state government officials, municipal officials, health unit workers, representatives of CSOs, and relatives of migrants were interviewed. The interviews were analyzed with the ATLAS-Ti software, and the mapping of actors and the feasibility analysis were carried out with the POLICY MAKER software. Results: The main results made it possible to identify indicators of barriers and facilitators regarding social actors, binational agreements under NAFTA/TEMEC, institutional spaces, and interaction between social actors, as well as the impact and type of relations for greater advances in binational health policies. Several obstacles were reported, including the fear among undocumented emigrants in the U.S. of being arrested and deported if they use public health services there. The stakeholders also believed that many Mexican emigrants do not have a culture that values health insurance. Conclusions: In the context of the reforms and adjustments of health systems that are being discussed in parallel with the revision and adjustments of NAFTA/TEMEC (United States-Mexico-Canada Agreement).
The issue of the lack of protection in social security and health takes on high relevance because more than half of Mexican undocumented migrants between 18 and 64 years of age do not have health insurance in Mexico or California (4). In this context, within the strategy of developing new coverage schemes supported by lines of action for greater access to health services, the approach to the health needs of Mexican migrants has allowed various risk factors and health care needs to be documented. Situations suggesting barriers to access to services and medical insurance in places of origin and destination have been analyzed (5). Various analyses have shown that the provision of social health protection services to migrants and their families is the subject of a broad debate in Mexico and the United States.
The available information on the supply of health services shows that more than half of Mexican migrants aged 18 to 64 did not have health insurance (6)(7). Based on this, the creation of binational health programs and public or private medical insurance has been promoted, but their design, implementation, and possible implementation scenarios have been poorly documented (8)(9)(10).
There are initiatives aimed at carrying out events that promote access to and improvement of health among Mexican residents who live and work in the United States and who do not have health insurance, such as the Binational Health Week (see: http://hia.berkeley.edu) and the Vete Sano, Regresa Sano program, in which the Institute of Mexicans Abroad, a decentralized body of the Ministry of Foreign Affairs, participates (11)(12)(13)(14).
However, some studies have shown that since the implementation of NAFTA, all these programs have faced resistance ranging from the cultural to the political, legal, and administrative (15)(16)(17)(18).
Initiatives have also been developed to expand the supply of health services for this population, particularly in the border area. Such is the case of organizations such as Health Net (19) and the Health Window of Mexican consulates in various US cities (20)(21).
These mechanisms for offering health services represent an opportunity to characterize governance mechanisms related to access to health care, especially for undocumented immigrants in California. This approach seeks to identify opportunities that favor health governance through social protection and the provision of health services to a vulnerable population in a scenario of high feasibility.
In recent years, the issue of actions on social protection in health addressed to Mexican emigrants has been raised in multiple forums, while it has been used for various purposes in the political agendas of Mexico and the United States, favorably in one case and subject to the circumstances of the political and economic environment in the other (22)(23)(24). However, more information, new indicators, and the construction of high-feasibility scenarios are required to identify the agreements and arrangements necessary to advance in the creation of a binational social protection system in health, as well as to establish its scope in the current and future scenario (25)(26)(27)(28).
In order to gauge the problems related to governance and the social protection in health of migrants, it is considered relevant to establish the map of actors (the normative frameworks, the processes, their interactions, etc.) linked to the creation and promotion of health services within the framework of the national health program 2018-2024 (29)(30)(31). To this end, trade agreements on programs, services, and medical insurance were reviewed, together with an analysis of the legal frameworks that support the supply of and access to health services in both countries, incorporating economic indicators of equity and access to health (32)(33). On the other hand, it is also very important to identify the key social actors involved, as well as the types of interactions and the interaction spaces between the different actors of the health system. This information is strategic for the identification of governance indicators and their relationship with facilitators, barriers, and challenges for a public policy of greater social protection in health for the benefit of migrants (34)(35)(36).
In summary, the objective of this manuscript is to present the main indicators on facilitators and barriers for the implementation of binational strategies in the area of social protection in the health of migrants.
Methodology
An evaluative research design was developed based on qualitative analysis of key documents, in-depth interviews with key actors, and case studies in localities of Guanajuato, Mexico, and California, US. Both states were selected by convenience, in response to a binational call that proposed involving academic institutions from Mexico and the United States in areas with a large influx of migrants. Under a collaborative scheme with colleagues from the University of California, Los Angeles, contact was made in the field with Mexican families from Guanajuato whose destination was the state of California, which further strengthened the collaboration and the political mapping. The "snowball" method was used to carry out the interviews, combining two strategies: in the first case, the government actors were contacted directly by sending them letters and emails.
With the state actors, personal recommendations were used with families and members of migrant federations, who in turn recommended other contacts, from which the interviews were conducted. The methodological procedures comprised three stages. The main integrated facilitators are highlighted below based on the recommendations, suggestions, and findings of the sources of information selected and reviewed for that purpose. The NAFTA documents, government reform projects, as well as national and international Civil Society Organizations (CSOs), make recommendations for a binational health policy. In this context, they establish that the challenges of the social protection system in health tend to coincide with the challenges of the national economy, in the sense of increasing access, coverage, and quality at the lowest possible cost and improving administration to increase efficiency. To address these challenges, the following aspects stand out as opportunities or facilitators to develop:

• New mechanisms to standardize and certify medical care units, licensing and certification of professionals, technology assessment and financial equity, and adjustments of the regulatory framework based on recommendations issued by the World Health Organization, which establish the expansion of the public offer through the participation of public and private providers.
• In the design of reform programs in social protection in health for different population groups, take advantage of the wide range of commercial and investment transactions that operate within the framework of NAFTA/TEMEC through trilateral governmental and non-governmental institutions (including health aspects).
• There is already a normative framework that establishes that there must be trinational or binational cooperation (Mexico-US-Canada) regarding the migration issue and collateral problems such as social protection. This opening includes the development of new and innovative legal migration routes, new and effective mechanisms for law enforcement with rights and obligations in the workplace, and a series of welfare support mechanisms to strengthen the supply of jobs and social protection programs.
• Strengthen the binational dialogue to find beneficial solutions to the migratory phenomenon in general, and to the resolution of social protection actions in particular. This will allow the improvement of policy coordination and management within each government, at the federal, state, and municipal levels.
• Binational agreements have been signed for cooperation in health and education. Such agreements have been promoted between the health and labor authorities of Mexico and the US. These agreements make it easier for Mexicans in the United States to access information about the health and education services available to them. In a context of more legal and less illegal migration, the bilateral cooperation under these agreements proposes to develop binational health coverage schemes, including schemes that give attention to undocumented workers.
• From the perspective of some US analysts, it was promising that the reform of the
• There are limitations regarding the regulation of health insurers in Mexico, which limit a foreign company to controlling a certain percentage of the domestic insurance market.
• The transnational scope of action of the United States health system in Mexico is generally limited to preventing the spread of infectious-contagious diseases originating in Mexico in their transit to North American territory.
• Various documents express that, on the Mexican side, there are conceptual and normative differences that act as barriers to bilateral cooperation in matters of public health and social protection in health.
• Unlike in situations related to public health, Mexican private doctors expressed more interest than their counterparts in the development of coordination mechanisms for medical care, pointing out various forms of lack of reciprocity from doctors in the United States.
• The opinions of American doctors in this regard coincide with some problems referred to in recent years, such as the perception that Mexican doctors are not adequately trained.
• Since the implementation of NAFTA, there has been a lack of definition of frameworks for the implementation of binational programs in the field of health systems.

Eight facilitators were identified, of which four were qualified with a high prospect and four with a medium prospect. Table 1 presents the facilitators with high impact prospects, linking them with groups of potential actors and coalitions to exploit them.
Facilitators with a medium impact prospect are described in Table 2 and include actors from the social and government sectors, as well as CSOs. This pattern suggests that the greatest opportunities were concentrated in political and governmental actors at the federal level, while, to the extent that the analysis is directed at the local level, greater barriers and feasibility challenges emerge.

Type of barrier - Barrier:

Financial. 2 - Paradoxical effects of remittances in terms of financial protection in health. The increase in remittances from migrants leads families to invest more in private institutions than in the public institutions where the health coverage program is offered.

Financial. 3 - Problems of financial sustainability in guaranteeing, free of charge, all the inputs required for all interventions and full coverage of medicines under the new effective universal coverage scheme.

Organizational. 4 - Little interest and motivation in the new insurance schemes for major medical expenses, mainly due to the expectation of free services in all public sector institutions.

Cultural-organizational. 5 - Uncertainty generated by users' lack of knowledge about which problems can be addressed by the basic service package of popular health insurance and which cannot.

Organizational. 6 - High tendency to limit migrant programs to basic promotion and prevention activities, with little attention to treatment and rehabilitation.

Cultural-organizational. 7 - Perception of poor quality of care, especially in the treatment of chronic diseases such as diabetes and hypertension.

Geographical. 8 - Geographical access problems due to lack of transport or topographic difficulty in traveling from home to the assigned health center.

Cultural-organizational. 9 - Users, migrant family networks, and community groups continue to have a passive role in decisions about the balance between what they need and what insurance coverage offers.

Organizational. 10 - Absence of reliable indicators and targets to assess performance and effective coverage in addressing the health problems of migrants and their families.
Type of facilitator - Facilitator:

Political. 1 - Broad opportunity to strengthen social protection in populations of origin for community actors through the offer of government programs (Popular Insurance, "3 x 1").

Organizational. 6 - There are new schemes for greater interaction between committees of the legislative branch (migration, health, and social security commissions), community leaders, and civil society organizations for the development and monitoring of welfare programs, including health needs.
Cultural-organizational. 7 - The new national health program 2018-2024 includes the design of strategies for continuous improvement in the quality of care.
Geographical. 8 - From the Ministry of Communication and Transportation and Social Welfare, a proposal is being developed to implement a road infrastructure development plan that allows greater access of marginalized communities to public services, including health services.
Cultural-organizational. 9 - The current coverage program contemplates that by 2024 a promotion, detection, and prevention plan will have been implemented in all the states of the country, with broad participation of users in public health activities.
Organizational. 10 - The National Council of Humanities, Science and Technology is rethinking the allocation of financial resources to emphasize and promote the development of research and the generation of knowledge on evaluation indicators that guarantee the effective implementation of the different programs and policies of the new development plan, among them the new health programs and policies. | 2019-10-17T08:56:23.693Z | 2019-10-15T00:00:00.000 | {
"year": 2019,
"sha1": "22a3b56bc9866d9d1447bd247801792967232cda",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-6785/v1.pdf?c=1585612603000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "3b605b8464fc1486d7aa62ca4d110c6f0902d1ec",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": []
} |
55120175 | pes2o/s2orc | v3-fos-license | Size Controlled Synthesis of FeCo Alloy Nanoparticles and Study of the Particle Size and Distribution Effects on Magnetic Properties
In this research, the size-controlled synthesis of FeCo nanoparticles was carried out using a quaternary microemulsion system. X-ray diffraction and high-resolution transmission electron microscopy of the as-synthesized nanoparticles confirm the formation of FeCo alloy nanoparticles. The effects of two process parameters, namely, the water to surfactant molar ratio and the molar concentration of metal salts, on the size and size distribution of the nanoparticles are discussed with the aid of transmission electron microscopy. The size dependency of the magnetic properties was also investigated using a room-temperature vibrating sample magnetometer. The superparamagnetic-ferromagnetic and single domain-multidomain transition sizes were determined. Then the specific absorption rates at the transition sizes were calculated and the best sample for magnetic hyperthermia treatment was identified.
Introduction
Magnetic nanoparticles are a topic of growing interest because of their versatile applications such as ultrahigh-density data storage, drug delivery, magnetic separation, and MRI contrast enhancement [1][2][3][4][5][6][7][8][9][10][11][12][13]. Among those, magnetic hyperthermia is a novel therapeutic method in which magnetic nanoparticles are subjected to an alternating magnetic field to generate a specific amount of heat. The generated heat then raises the temperature of the tumor to about 42 °C, at which certain mechanisms of cell damage are activated [14]. The heat-producing mechanisms under A.C. magnetic fields are (1) hysteresis, (2) Neel or Brownian relaxation, and (3) viscous losses [15].
The generated heat is quantitatively described by the specific absorption rate (SAR) of the nanoparticles, which is related to the specific loss per cycle of the hysteresis loop (A) by the equation SAR = A × f, where f is the frequency of the applied field.
Up to now several models have been proposed to predict the behavior of magnetic nanoparticles under alternating magnetic fields [16]. For superparamagnetic nanoparticles the equilibrium functions are used. The Langevin function L(ξ) = coth(ξ) − 1/ξ, which is valid at zero anisotropy, is an example of the equilibrium functions, where ξ = μ0HmaxMsV/(kBT), in which μ0Hmax is the external applied field, Ms is the spontaneous magnetization of the nanoparticle, V is the volume of the nanoparticle, kB is the Boltzmann constant, and T is the temperature. Linear response theory is valid for nanoparticles at the superparamagnetic transition size. Based on this theory the area of the hysteresis loop is calculated by [16]

A = π(μ0Hmax)²Ms²V/(3kBT) × ωτ/(1 + (ωτ)²)

where Ms is the saturation magnetization, ω = 2πf, τ = τN = τ0 exp(KeffV/(kBT)) is the relaxation time of the magnetization, equal to the Neel relaxation time (τN), and τ0 is the intrawell relaxation time. The Stoner-Wohlfarth model predicts the magnetic response of single-domain ferromagnetic nanoparticles. This model neglects thermal activation and assumes a square hysteresis loop, which is relevant for T = 0 or f → ∞. For magnetic nanoparticles with their easy axes randomly oriented in space the hysteresis area is calculated by [16]

A = 2μ0Hc0Ms, with μ0Hc0 ≈ 0.96Keff/Ms

where Hc0 is the coercive field and Keff is the effective uniaxial anisotropy of the nanoparticle. The key factor in obtaining the maximum SAR in conventional clinical hyperthermia treatments (f = 100 kHz, μ0Hmax = 20 mT, and T = 300 K) is the anisotropy of the nanoparticles.
Calculations of SAR as a function of anisotropy in the above-mentioned size regimes reveal that the maximal SAR would be obtained at the single domain-multidomain transition size. So producing nanoparticles in this range for use in hyperthermia treatment is of high value from both technical and clinical aspects.
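To make the comparison between the two regimes concrete, the sketch below evaluates the loss expressions reconstructed above at the clinical conditions quoted in the text (f = 100 kHz, μ0Hmax = 20 mT, T = 300 K). The material parameters (Ms, Keff, τ0, and the density used to convert J/m³ into W/kg) are round illustrative assumptions rather than fitted values from this work, and the Stoner-Wohlfarth line is only an upper-bound estimate for randomly oriented particles with full switching.

```python
import math

kB = 1.380649e-23                    # Boltzmann constant, J/K
f, mu0_Hmax, T = 1e5, 0.02, 300.0    # clinical conditions cited in the text
Ms = 1.9e6                           # A/m, illustrative FeCo-like magnetization
Keff = 1.5e4                         # J/m^3, assumed effective anisotropy
tau0 = 1e-9                          # s, assumed intrawell relaxation time
rho = 8000.0                         # kg/m^3, approximate FeCo density (assumed)

def lrt_sar(d):
    """Linear-response-theory SAR (W/kg) for particle diameter d (m).
    Only meaningful while xi = mu0*Hmax*Ms*V/(kB*T) stays small."""
    V = math.pi * d**3 / 6
    tau = tau0 * math.exp(Keff * V / (kB * T))    # Neel relaxation time
    omega = 2 * math.pi * f
    A = (math.pi * (mu0_Hmax * Ms)**2 * V / (3 * kB * T)
         * omega * tau / (1 + (omega * tau)**2))  # loop area, J/m^3
    return A * f / rho

def sw_sar():
    """Stoner-Wohlfarth ceiling for randomly oriented easy axes:
    A = 2*mu0*Hc0*Ms with mu0*Hc0 = 0.96*Keff/Ms (full switching assumed)."""
    return 2 * (0.96 * Keff / Ms) * Ms * f / rho

for d in (2e-9, 4e-9, 9e-9):
    xi = mu0_Hmax * Ms * (math.pi * d**3 / 6) / (kB * T)
    print(f"d = {d*1e9:.0f} nm: xi = {xi:.2f}, LRT SAR ~ {lrt_sar(d):.1f} W/kg")
print(f"SW ceiling: SAR ~ {sw_sar():.0f} W/kg")
```

Under these assumptions the superparamagnetic (LRT) losses grow steeply with diameter while the Stoner-Wohlfarth estimate sets the ferromagnetic ceiling, which illustrates numerically why the maximal SAR is expected near the single domain transition rather than deep in either regime.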
FeCo alloy has the highest saturation magnetization among all binary magnetic alloys [1]. Several methods have been used to synthesize FeCo alloy nanoparticles, including arc discharge [2], polyol [3][4][5][6][7], hydrothermal [8], reaction under autogenic pressure at elevated temperature (RAPET) [9], thermal decomposition [10], wet chemical [11,12], and coprecipitation [13,17,18]. The morphology and size distribution of the as-synthesized nanoparticles are not well controlled in most of these processes. To obtain the best properties for magnetic hyperthermia treatments, the size distribution is an important parameter. Research shows a loss of SAR due to broad nanoparticle size distributions. Therefore, employing a method capable of producing monodisperse nanoparticles is of high value.
The microemulsion technique is a method capable of controlling the shape, size, and size distribution of nanoparticles [19]. In this process nanoparticles precipitate inside micelles. The micelle is in the form of a sphere of oil in water (normal micelle) or water in oil (reverse micelle), which is surrounded by a layer of surfactant molecules [20]. The technique can be used to synthesize inorganic [21] or organic compounds [22].
There are few works on the synthesis of FeCo alloy nanoparticles by this route. The novel quaternary system of water/cetyltrimethylammonium bromide (CTAB)/1-butanol/isooctane was employed for the synthesis of FeCo alloy nanoparticles, which has not been used before. Unlike other synthetic methods, the proposed route is capable of controlling the size of the nanoparticles in the range of 1-10 nm. This is achieved by controlling the water to surfactant molar ratio (w) and the molar concentration of the metal salts. This capability is of vital importance for investigating the heating effect of magnetic nanoparticles under A.C. magnetic fields.
In the present research, the shape and size controlled synthesis of iron cobalt alloy nanoparticles was carried out in the reverse micelles of water in isooctane and the magnetic properties of as-synthesized nanoparticles were studied to investigate their potential usefulness in magnetic hyperthermia treatment.
The key to the formation of a microemulsion is obtaining a transparent and thermodynamically stable solution, which forms at certain ratios of aqueous phase/surfactant/oil phase. Microemulsion 1 (ME1) and microemulsion 2 (ME2) were prepared on the basis of the quaternary phase diagram of water/CTAB/1-butanol/isooctane, which is described elsewhere [23].
Fe0.65Co0.35 alloy nanoparticles were prepared by mixing equal volumes of ME1 and ME2, containing the metal salts and the precipitating agent, respectively. The [NaBH4]/[metal salts] molar ratio was kept at 2 to ensure that all of the precursors were reduced to zerovalent metal. First ME1 was transferred into a three-necked round-bottom flask, and then ME2 was added using a dropping funnel to the vigorously stirred ME1 under an N2 atmosphere. Black precipitates of FeCo alloy nanoparticles appeared immediately after mixing of the two microemulsions.
After 10 minutes of reaction the solution was centrifuged and washed with chloroform, ethanol, and acetone several times to remove all residual elements. Some of the as-synthesized powders were annealed in a tube furnace at 350 °C and 550 °C for 20 minutes under an H2 atmosphere.
Characterization of the samples was done using X-ray diffraction (XRD) (PANalytical X'Pert Pro MPD with Cu Kα radiation), a scanning transmission electron microscope (STEM) (ZEISS EM10-C at 100 kV), and a high-resolution transmission electron microscope (HRTEM) (JEOL JEM-2100 at 200 kV). Elemental analysis was done using an energy dispersive X-ray spectroscopy (EDS) detector attached to the HRTEM. The magnetic properties of the samples were analyzed using a room-temperature (300 K) vibrating sample magnetometer. The samples and process conditions are summarized in Table 1.
Microstructural Characterization.
Figure 1 shows the XRD patterns of the as-synthesized W3 and annealed samples. As seen from Figure 1(a), there is no major peak. The approximate nanoparticle size could be derived from the Scherrer formula (D = 0.94λ/(β cos θ)) based on the Cu Kα radiation and the FWHM of the peak. But for very small sizes (<5 nm), considering the drastic peak widening and intensity decrease, there is no distinguishable peak. Also, the poor crystallinity of the nanoparticles (Figure 2(b)) resulting from the fast borohydride reduction aggravates this problem. Figure 1(b) shows the diffraction pattern of the W3 sample after annealing at 550 °C for 20 minutes under an H2 atmosphere. The XRD patterns reveal the formation of α-bcc structured FeCo alloy (at 2θ = 44.83°, 65.32°, and 84°). These values agree with the JCPDS file for FeCo. Also, a small quantity of CoFe2O4 (at 2θ = 35.4°, 62.4°) is observed due to partial oxidation of the sample by exposure to air after the annealing procedure. It has been found that FeCo is a substitutional alloy with a bcc structure from pure Fe to about 80 at.% Co and fcc for 90 at.% Co [18]. This agrees well with our result.
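For reference, the Scherrer estimate can be evaluated directly, as in the short sketch below; the Cu Kα wavelength is standard, but the 2° peak width is only an assumed illustrative value, not one extracted from Figure 1.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.94):
    """Scherrer formula D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta half the 2-theta angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha (0.15406 nm), bcc FeCo (110) reflection near 2theta = 44.83 deg;
# a 2 deg FWHM (assumed) already corresponds to a crystallite below 5 nm.
print(f"D = {scherrer_size(0.15406, 2.0, 44.83):.1f} nm")
```

The result (about 4.5 nm for the assumed width) is consistent with the statement that sub-5 nm crystallites broaden the reflections beyond recognition.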
Figure 2 shows conventional (Figure 2(a)) and high-resolution (Figure 2(b)) TEM images of the W3 sample. The electron diffraction pattern (inset of Figure 2(a)) and the HRTEM image also confirm the formation of bcc-structured iron cobalt alloy. EDS analysis shows Fe and Co peaks, in which the Fe peak is stronger than that of Co, indicating a higher Fe content. The Cl peak comes from the residual chloroform which was used to wash the as-synthesized nanoparticles. Also an oxygen peak is observed due to the partial oxidation of FeCo.
Figure 3 shows the effect of the water to surfactant molar ratio (w) on the morphology, size, and size distribution of the as-synthesized nanoparticles. The mean size and size distribution of each specimen were determined by inspecting about 50 TEM micrographs. It is evident that all samples have a spherical shape due to the nature of the surfactant and cosurfactant used. Cetyltrimethylammonium bromide (CTAB), which was used by Schulman for the first time (Hoar and Schulman, 1943), has a hydrophilic head and a lipophilic tail, which makes it soluble in both polar and nonpolar solvents. In this quaternary system the polar cosurfactant (1-butanol) forms ion-dipole interactions with the surfactant, and spherical aggregates form in which the polar (ionic) ends of the surfactant molecules are oriented towards the center. We observed that without the addition of 1-butanol the transparent microemulsion would not form. In fact the role of 1-butanol is to act as an electronegative spacer which minimizes the repulsive forces between the polar heads of the CTAB molecules and allows them to aggregate in the form of spherical micelles.
Effect of Water to Surfactant Molar Ratio.
Figure 3 indicates the increase of the mean size of nanoparticles with R. As the R value decreases, the relative amount of water reduces and a smaller micelle would be obtained. Therefore the limiting stability of the nanoreactors increases, leading to smaller nanoparticles. It was also observed that the spherical shape of the nanoparticles would not be affected by changing the R value, unlike with surfactants such as polyvinylpyrrolidone (PVP) [24].
The W series of samples evidences a very narrow (about 3 nm) size distribution, which is related to the nature of the surfactant. In fact the surfactant has a double influence on the particle formation process: (1) particle stabilization and (2) growth control. The R value affects the former by determining the micellar core size, but the latter is influenced by the nature of the surfactant. The growth mechanism in microemulsions is based on intermicellar exchange [25]. A rigid surfactant surface layer tends to resist opening, thus the reaction is slowed down and simultaneous nucleation and growth occur. This in turn results in the formation of large nanoparticles with a broad size distribution. But CTAB provides a very flexible film [25] which facilitates the coalescence exchange between micellar cores. The high exchange rate leads to uniform nucleation and growth, resulting in a narrow size distribution. As noted by Carrey et al., a broad size distribution decreases the maximum achievable SAR [16]. Therefore, for the as-synthesized FeCo nanoparticles, the negative effect of a broad size distribution is not expected.
Effect of Molar Concentration of Metal Precursors.
Figure 4 demonstrates the effect of molar concentration of metal salts on the size and size distribution of nanoparticles.
It is evident that increasing the molar concentration of metal precursors inversely affects the nanoparticle size. Since the borohydride reduction of metal precursors is almost instantaneous, a huge number of nuclei will form at the first stage of the process, followed by nanoparticle growth via intermicellar exchange. Since at high concentrations of metal salts the supersaturation factor (S) is higher, according to the Adamson equation for homogeneous nucleation the nucleation rate is higher [26]:

J ∝ exp(−16πγ³v²/(3(kBT)³(ln S)²)),

where γ is the interfacial tension between the solid nucleus and the surrounding drop liquid, v is the volume of one precipitate molecule, S is the supersaturation ratio of liquid product molecules, and n* is the critical number of liquid product molecules in a nucleus. Thus at higher concentrations of metal precursors the number of stable nuclei is higher and consequently the number of remaining product molecules is smaller. Therefore the growth of nanoparticles, which proceeds by the addition of product molecules, is slowed down and the terminal nanoparticle size reduces. Similarly, at lower concentrations of metal precursors the number of formed nuclei is lower, providing more product molecules to contribute to the growth stage and leading to larger terminal nanoparticles.
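A minimal sketch of this argument, keeping only the exp(−ΔG*/kBT) factor of the classical homogeneous nucleation rate; the interfacial tension, molecular volume, and supersaturation ratios below are assumed order-of-magnitude values chosen for illustration.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def relative_nucleation_rate(supersaturation: float, gamma: float = 0.1,
                             v: float = 1e-29, temperature: float = 300.0) -> float:
    """exp(-Delta G*/kT) part of the homogeneous nucleation rate.
    gamma: nucleus/drop interfacial tension (J/m^2), v: molecular volume (m^3);
    both are assumed illustrative values, not fitted parameters."""
    kt = K_B * temperature
    barrier = 16 * math.pi * gamma**3 * v**2 / (3 * kt**3 * math.log(supersaturation)**2)
    return math.exp(-barrier)

# Higher precursor concentration -> higher S -> far more nuclei, so less solute
# is left per nucleus and the terminal particles come out smaller.
print(relative_nucleation_rate(100.0) / relative_nucleation_rate(10.0))  # ~28x faster
```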
Magnetic Studies.
Magnetization curves for the W and M series of samples are outlined in Figures 5(a) and 5(b). The Ms and Hc values are seen to be size dependent. For the W1 sample with a mean size of 2 nm, Ms equals 8 emu/g. For W2, W3, and W4 (with mean sizes of 2.5 nm, 4 nm, and 9 nm) the Ms values reach 22, 36 and 65 emu/g, respectively. This is also observed for M1, M2, and M3 with sizes of 6 nm, 3 nm, and 1.5 nm, and corresponding Ms of 49, 23, and 6 emu/g. Also, as expected, some of the particles (W1, W2, M1, and M2) show superparamagnetic behavior with zero coercivity.
In ferromagnetic metals like Fe, Co, and Ni the exchange interaction is positive, favoring the parallel alignment of spins. But when the particle size decreases, the majority of atoms, and consequently spins, are located at the nanoparticle surface. Beyond the lower spin density at the surface, the structural changes at the surface should be considered. It has been shown that the average lattice parameter in nanoparticles is less than in the corresponding bulk materials, mainly due to bond length reduction [27]. This bond length contraction induces the overlapping of atomic orbitals, which reduces the atomic dipole moment. Also, due to the lower coordination number of surface atoms, the exchange coupling between dipoles is weaker than for internal atoms, and therefore the magnetic moments tend to fluctuate. The sum of these effects disorders the spins at the surface to form a magnetic dead layer. The effect of this layer is to reduce the total magnetic moment of the nanoparticle [28]. This effect is important mainly at sizes below 5 nm, at which about 50% of the total atoms are located at the surface. By increasing the nanoparticle size the thickness of this dead layer reduces and the magnetization of the nanoparticle increases [28].
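As a rough check on the claim that about 50% of atoms sit at the surface below 5 nm, the sketch below computes the volume fraction of a sphere lying within a thin surface shell; the 0.5 nm shell thickness is an assumed, lattice-parameter-scale value, not a measured dead-layer thickness.

```python
def surface_shell_fraction(diameter_nm: float, shell_nm: float = 0.5) -> float:
    """Fraction of a sphere's volume inside a surface shell of thickness shell_nm.
    shell_nm ~ 0.5 nm is an assumed, lattice-parameter-scale dead-layer thickness."""
    core = max(diameter_nm - 2 * shell_nm, 0.0)
    return 1.0 - (core / diameter_nm) ** 3

for d in (2, 4, 5, 9, 25, 60):
    print(f"d = {d:2d} nm -> {surface_shell_fraction(d):.0%} of volume in the shell")
# ~49% at 5 nm but only ~5% at 60 nm: the dead layer matters far less after annealing.
```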
It is also seen from Figures 5(a) and 5(b) that the coercivity is size dependent. In fact, by increasing the nanoparticle size the coercivity increases, such that for the W4 sample the high coercive field of 100 Oe is achieved. The reduced coercive force in terms of nanoparticle size at constant temperature is described as

h = 1 − (dc/d)^(3/2),

where h is the reduced coercive field (Hc/Hc,(T=0)), d is the nanoparticle size, and dc is the critical nanoparticle size at which the anisotropy energy (KV) dominates the thermal energy (kBT) and the magnetic properties change from superparamagnetic to ferromagnetic. The critical size depends on the composition of the nanoparticles: for iron oxide nanoparticles it has the value of 12 nm [29], while for Fe74.5−xCuxNb3Si22.5−yBy alloy nanoparticles it depends on x and y and changes from 10 nm for 1 at.% Cu and 9 at.% B to 15 nm for 1 at.% Cu and 6 at.% B [30]. When d ≤ dc, the coercive field is zero, and a further increase in nanoparticle size beyond dc increases the coercivity. This is why for both the W and M series of samples the coercivity increases with size. The only exception is observed in the case of the M3 specimen, in which, regardless of its greater size, the coercivity is less than that of the W3 sample. The reason could be the partial oxidation of the M3 sample, but more investigation is needed. Figure 6 shows TEM images and corresponding hysteresis curves for the W3 annealed samples. It is seen from Figures 6(a) and 6(b) that the nanoparticles have grown by fusion-fission to mean sizes of 60 and 25 nm, respectively. As expected, by increasing the size of the nanoparticles the corresponding saturation magnetizations have increased (to 128 emu/g and 78 emu/g) but are still smaller than the bulk value (240 emu/g).
Figure 7 shows the coercivity as a function of particle size. It is seen that the coercivity has decreased from 100 Oe for as-synthesized W3 to 60 and 40 Oe for the annealed samples. The reason lies in the mechanism of magnetization. In multi-domain particles magnetization reversal takes place by the motion of domain walls, but in single domain particles the magnetization reversal occurs by coherent reversal of the magnetic moment, which requires the anisotropy energy to be overcome. Since the anisotropy energy is much higher than the domain wall energy, the coercivity increases upon the transition from the multi-domain to the single domain size regime. But with further decrease in size, the coercivity diminishes as a result of the decreasing anisotropy energy (KV), such that below d = 4 nm the coercivity reaches zero, exhibiting superparamagnetic behavior. Therefore, for as-synthesized FeCo nanoparticles dc is about 4 nm. So it is inferred that the transition from superparamagnetic to ferromagnetic is at 4 nm and the transition between the single domain and multi-domain size regimes is at 9 nm.
Calculation of SAR.
Some studies of the inductive magnetic properties of nanoparticles report that the superparamagnetic-ferromagnetic transition gives the maximum SAR [15], while other works show that the maximum SAR is achieved at the single domain-multi domain boundary [16]. For the single domain-multi domain transition size the Stoner-Wohlfarth model is used to estimate the magnetic response. Therefore, for the W4 sample, we apply (5) for randomly oriented nanoparticles, where Hc,(T=0 K) is the coercive field at T = 0 K; this could be calculated from Hc,(T=300 K) using (6) [16], where kB is the Boltzmann constant, V is the volume of the nanoparticle, f is the measurement frequency 5.5 × 10⁻⁴ Hz, τ0 = 10⁻¹⁰ s, T1 = 0 K, and T2 = 300 K. By solving the system of (5) and (6) simultaneously, the effective anisotropy is calculated as Keff = 3.8 × 10⁴ J/m³ and μ0Hc,(T=0 K) ≈ 145 mT. Then the coercive field at T = 300 K and at the frequency of hyperthermia treatment, f = 100 kHz, could be calculated from (7) [16].
For the above calculations to be valid, the condition μ0Hmax ≥ μ0Hc,(T=300 K, f=100 kHz) ≈ 70 mT should be satisfied, such that in each reversal of the applied field the full area of the hysteresis loop could be placed within the limits of the applied alternating field.
The obtained value of SAR is comparable to the highest value of 1700 W/g reported by Mehdaoui et al. for iron based nanoparticles [32] and larger than the values of 450 W/g and 198 W/g for magnetite nanoparticles reported by Bakoglidis et al. [29] and Wang et al. [33], respectively. It should be noted that a broad range of SAR was calculated by Carrey et al. [16] depending on the anisotropy of the nanoparticles, ranging from 1 to 2000 W/g.
For the superparamagnetic-ferromagnetic transition the LRT approximation should be applied. For the sake of simplicity it is assumed that Keff = 3.8 × 10⁴ J/m³. Considering τ = τN = τ0 exp(KeffV/(kBT)), the relaxation time could be obtained as τ = 1.171 × 10⁻⁹ s. Finally, from (1), the area of the dynamic hysteresis loop would be A = 2.62 × 10⁻⁸ mJ/g with the corresponding SAR = 0.262 × 10⁻² W/g. This is a rough estimation for the W3 sample because the anisotropy field was assumed to be equal to that of the W4 sample. But qualitatively it could be deduced that for the W4 sample at the single domain-multi domain boundary the SAR is much higher (about 10⁵ times) than for the W3 sample at the ferromagnetic-superparamagnetic transition. Indeed, the reason lies in the loss mechanism: hysteresis loss versus Néel-Brown relaxation. The former is described by the Stoner-Wohlfarth model, which assumes null thermal activation and predicts a square hysteresis area (W4 sample), while the latter assumes thermal equilibrium and a linear response of the magnetic system to the applied magnetic field (W3 sample).
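A sketch of the Néel relaxation-time estimate behind this comparison; it reuses the Keff = 3.8 × 10⁴ J/m³ obtained above, but the diameters are free inputs, so the printed numbers are illustrative rather than a reproduction of the paper's exact τ.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(diameter_nm: float, k_eff: float = 3.8e4,
                         tau0: float = 1e-10, temperature: float = 300.0) -> float:
    """tau_N = tau0 * exp(K_eff * V / (k_B * T)) for a spherical particle."""
    volume = math.pi / 6 * (diameter_nm * 1e-9) ** 3  # sphere volume in m^3
    return tau0 * math.exp(k_eff * volume / (K_B * temperature))

for d in (4.0, 9.0):
    print(f"d = {d} nm: tau_N = {neel_relaxation_time(d):.2e} s")
# In the LRT picture the dissipated energy per cycle A, and hence SAR = A * f,
# is controlled by how tau_N compares with the field period 1/f (here f = 100 kHz).
```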
Theoretically it is expected that the higher the frequency of the applied field, the higher the losses. But experimental results indicate that by increasing the frequency of the applied field the generated heat reaches a frequency independent maximum. This could be due to nanoparticle interaction, which makes the hysteresis area independent of frequency [31]. In fact, the above calculations are based on the hypothesis that the nanoparticles do not have considerable interactions with each other. In order to take the interactions into account, Monte Carlo simulations should be used. But there is a severe problem in matching the time in simulations with the real time and, to the best of the authors' knowledge, there is no theoretical expression for this problem [31].
Another important point to be highlighted is that the above calculations neglect the Brownian relaxations (physical motion of nanoparticles under the influence of the magnetic field) in colloidal solution. By considering the effect of this type of relaxation, some of the applied field will be dissipated due to the physical rotation of nanoparticles aligning them along the applied field, and the generated heat will be different from the above values.
Based on the large obtained theoretical SAR it could be inferred that the W4 sample (with a mean size of 9 nm) has a great advantage for use in magnetic hyperthermia treatment. But to assay these theoretical results, which are based on the simple SW model, real experiments measuring the values of A and SAR and comparing them with these theoretical results will be the subject of future work.
Conclusions
Size controlled synthesis of FeCo nanoparticles was done using the microemulsion method. The main parameters determining the particle size are the water to surfactant molar ratio (R) and the molar concentration of metal salts. By increasing the former the size of the micelles increases, leading to larger terminal nanoparticles, and by raising the latter the number of formed nuclei increases, leading to smaller terminal nanoparticles.
The size dependency of the magnetic properties, including Ms and Hc, was investigated. The observed increase of Ms with size is due to the disappearance of the magnetic dead layer in larger nanoparticles. But the observed change in coercivity is due to the transition between the various size regimes and consequently the magnetization reversal mechanisms. Based on the variations of coercivity, the superparamagnetic-ferromagnetic and single domain-multi domain transition sizes were determined.
Then the magnetic losses were calculated at the transition points based on the Stoner-Wohlfarth and LRT models. For the W4 sample, which is at the single domain-multi domain ferromagnetic transition point, the anisotropy field, Hc,(T=0 K), and SAR were calculated using the Stoner-Wohlfarth model. The results are comparable to the highest reported in the literature. It should be noted that the results are applicable only when μ0Hmax ≥ μ0Hc,(T=300 K, f=100 kHz) ≈ 70 mT.
But for the W3 sample at the superparamagnetic-single domain ferromagnetic transition, the approximate SAR obtained is far lower than that of the W4 sample. Based on the large obtained theoretical SAR it could be concluded that the W4 sample (with a mean size of 9 nm) has a high potential for use in magnetic hyperthermia treatment. Future experiments such as calorimetry could assay the theoretical results of the present research.
Figure 1:
Figure 1: XRD patterns of the W3 sample (a) in the as-synthesized state and (b) annealed at 550°C for 20 minutes.
Figure 2:
Figure 2: (a) TEM micrograph of the W3 sample (inset: selected area diffraction pattern). (b) High resolution TEM image of the W3 sample. (c) EDS spectra of the W3 sample showing Fe, Co, Cl, and O peaks.
Figure 5:
Figure 5: Magnetization curves for as-synthesized nanoparticles showing the effect of (a) the water to surfactant molar ratio (R) and (b) the concentration of metal salts.
Table 1:
Samples and process conditions.
CROTON: an automated and variant-aware deep learning framework for predicting CRISPR/Cas9 editing outcomes
Abstract

Motivation: CRISPR/Cas9 is a revolutionary gene-editing technology that has been widely utilized in biology, biotechnology and medicine. CRISPR/Cas9 editing outcomes depend on local DNA sequences at the target site and are thus predictable. However, existing prediction methods are dependent on both feature and model engineering, which restricts their performance to existing knowledge about CRISPR/Cas9 editing.

Results: Herein, deep multi-task convolutional neural networks (CNNs) and neural architecture search (NAS) were used to automate both feature and model engineering and create an end-to-end deep-learning framework, CROTON (CRISPR Outcomes Through cONvolutional neural networks). The CROTON model architecture was tuned automatically with NAS on a synthetic large-scale construct-based dataset and then tested on an independent primary T cell genomic editing dataset. CROTON outperformed existing expert-designed models and non-NAS CNNs in predicting 1 base pair insertion and deletion probability as well as deletion and frameshift frequency. Interpretation of CROTON revealed local sequence determinants for diverse editing outcomes. Finally, CROTON was utilized to assess how single nucleotide variants (SNVs) affect the genome editing outcomes of four clinically relevant target genes: the viral receptors ACE2 and CCR5 and the immune checkpoint inhibitors CTLA4 and PDCD1. Large SNV-induced differences in CROTON predictions in these target genes suggest that SNVs should be taken into consideration when designing widely applicable gRNAs.

Availability and implementation: https://github.com/vli31/CROTON.

Supplementary information: Supplementary data are available at Bioinformatics online.
Introduction
Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) is a revolutionary gene-editing technology that has broad applications in basic biology, biotechnology and medicine (Hsu et al., 2014). CRISPR/Cas9-mediated genome editing follows two major steps: (1) the induction of a double-stranded break (DSB) in a target DNA sequence and (2) the activation of cellular DNA-repair pathways. CRISPR/Cas9 is a ribonucleoprotein that consists of a guide RNA (gRNA) that defines a target DNA sequence and the dual DNA endonuclease Cas9, which induces a DSB around 3 base pairs (bps) upstream of an 'NGG' protospacer adjacent motif (PAM). Following DNA cleavage, a DSB can be repaired by three DNA repair pathways: template-free nonhomologous end-joining (NHEJ) and microhomology-mediated end joining (MMEJ), as well as template-directed homology-directed repair (HDR). HDR can be used to introduce precise DNA modifications, but it is inefficient, especially in non-mitotic cells, and often generates unwanted byproducts. In contrast, NHEJ and MMEJ were believed to trigger random repair outcomes. However, recent research has shown that NHEJ and MMEJ repair outcomes are dependent on features of target DNA sequences (Molla and Yang, 2020).
Since a range of DNA sequence factors, such as GC content and microhomology length and position, may contribute to repair outcomes, accurate prediction of template-free CRISPR/Cas9 editing outcomes is a challenging bioinformatics question. Three machine learning (ML) models, inDelphi, FORECasT and SPROUT, which utilize neural networks and k-nearest neighbors, multinomial logistic regression, as well as gradient-boosting decision trees, respectively, have been designed to tackle this question. However, these ML methods require both feature and model engineering and are thus restricted by existing knowledge about CRISPR/Cas9 editing.
A potential alternative ML framework is deep convolutional neural networks (CNNs), which have attracted attention in computational biology because they excel at pattern recognition. Indeed, many state-of-the-art ML models capable of predicting specific molecular phenotypes from raw DNA sequences utilize deep CNNs (Eraslan et al., 2019; Zhang et al., 2021). Since deep CNNs process raw sequences, manual feature engineering is not required for CNN generation, which can expedite model creation. Furthermore, CNNs can detect and process important, but not well-understood, parts of an input, rendering them potentially more effective and versatile relative to other ML methods. Effective model architectures are essential for CNN performance, but CNN architecture design requires a substantial amount of ML knowledge and time. Recently, neural architecture search (NAS), a state-of-the-art method for finding good neural network architectures, has been developed to automate model engineering. NAS is a form of automated machine learning (AutoML) that has been shown to generate CNNs with comparable efficacy to manually engineered models (Zhang et al., 2021; Zoph and Le, 2017).
Herein, CROTON (CRISPR Outcomes Through cONvolutional neural networks), a novel deep learning framework based on deep CNNs and NAS, has been created to predict CRISPR/Cas9 editing outcomes. By leveraging CNNs and NAS, CROTON fully automates the tasks of predicting 1 bp insertion and deletion probability as well as deletion and frameshift frequency from raw sequences alone and without any prior knowledge (Fig. 1). We demonstrate that CROTON, which was trained on a synthetic construct-based dataset, outperforms existing approaches on a held-out, independent endogenous T-cell dataset. CROTON was then utilized to evaluate the effect of single nucleotide variants (SNVs) on the CRISPR/Cas9-mediated genome editing outcomes of four clinically relevant target genes: ACE2, CCR5, CTLA4 and PDCD1. The differences in predicted SNV-induced editing outcomes suggest that SNVs should be considered when designing widely applicable gRNAs.
Acquisition and pre-processing of CRISPR/Cas9 editing outcome datasets
The datasets used to train CROTON were acquired from two previous works that produced the models FORECasT and SPROUT (Allen et al., 2019; Leenay et al., 2019). To reconcile the two datasets, we compiled 60 bp genomic sequences as the model inputs. Specifically, for each gRNA in the FORECasT dataset, we aligned the PAM sites at 33 nt so the cut site was at the center (30 nt) of all input sequences. The pseudo-letter 'N' was padded to the FORECasT sequences if they were shorter than 60 bp after PAM realignment. To obtain DNA sequences for SPROUT, retrieved genomic coordinates were mapped to the human genome build 38 (hg38). Subsequently, sequences from FORECasT and SPROUT were one hot encoded to 4 × n matrices for each DNA sequence, where n = 60 was the sequence length, the nucleotide 'A' was represented by the array [1, 0, 0, 0], 'C' was represented by [0, 1, 0, 0], 'G' was represented by [0, 0, 1, 0], 'T' was represented by [0, 0, 0, 1] and 'N' was represented by [0.25, 0.25, 0.25, 0.25].
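The encoding convention above, including the uniform 0.25 treatment of the padding letter 'N', fits in a few lines of NumPy; the helper below is a sketch, not CROTON's actual preprocessing code.

```python
import numpy as np

BASE_ROWS = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a 4 x len(seq) matrix; 'N' becomes uniform 0.25."""
    mat = np.zeros((4, len(seq)), dtype=np.float32)
    for j, base in enumerate(seq.upper()):
        if base == "N":
            mat[:, j] = 0.25           # padding character: uniform over A/C/G/T
        else:
            mat[BASE_ROWS[base], j] = 1.0
    return mat

x = one_hot("ACGTN")
print(x.shape)   # (4, 5)
print(x[:, 4])   # [0.25 0.25 0.25 0.25]
```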
To compile the editing outcomes, CIGAR (Compact Idiosyncratic Gapped Alignment Report) strings were processed for the FORECasT and SPROUT datasets. For each gRNA, we computed the following editing outcome statistics: (1) 1 bp insertion frequency, (2) 1 bp deletion frequency, (3) deletion frequency, (4) 1 bp frameshift frequency, (5) 2 bp frameshift frequency and (6) total frameshift frequency. (Given that I is the total number of insertions, D is the total number of deletions and I + D is the total number of insertions or deletions (indels), the first three metrics were defined as (i) I1bp/(I + D), (ii) D1bp/(I + D) and (iii) D/(I + D), where I1bp and D1bp denote the numbers of 1 bp insertions and 1 bp deletions. The next three frameshift frequency statistics were defined as the proportion of indel outcomes that induced a frameshift of 1 bp, 2 bp or the union of both.)
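A minimal sketch of these six statistics computed from per-read signed indel lengths (+k for a k bp insertion, −k for a k bp deletion); extracting such lengths from the CIGAR strings is assumed to have happened upstream, and the function name is ours.

```python
from typing import Iterable

def editing_outcome_stats(indel_lengths: Iterable[int]) -> dict:
    """indel_lengths: one signed length per edited read (+k insertion, -k deletion)."""
    lengths = list(indel_lengths)
    assert lengths, "need at least one indel outcome"
    n = len(lengths)  # total number of indel outcomes, I + D
    ins1 = sum(1 for k in lengths if k == 1)
    del1 = sum(1 for k in lengths if k == -1)
    dels = sum(1 for k in lengths if k < 0)
    fs1 = sum(1 for k in lengths if abs(k) % 3 == 1)  # net shift of 1 bp mod 3
    fs2 = sum(1 for k in lengths if abs(k) % 3 == 2)  # net shift of 2 bp mod 3
    return {
        "ins1_freq": ins1 / n, "del1_freq": del1 / n, "del_freq": dels / n,
        "frameshift1": fs1 / n, "frameshift2": fs2 / n,
        "frameshift": (fs1 + fs2) / n,
    }

print(editing_outcome_stats([1, 1, -1, -2, -3, 4]))
```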
We leveraged the large-scale FORECasT data to train the model and held out the SPROUT dataset as an independent dataset for performance evaluation. Within the FORECasT data, samples were randomly split into training, validation and testing datasets in an 8:1:1 ratio. The FORECasT training dataset had 28 105 datapoints, and both the FORECasT test and validation datasets had 3512 datapoints. In addition, the SPROUT dataset, which we utilized for cross-cellular testing, had 1603 datapoints. The validation dataset was used to monitor model training convergence and early-stopping, while the testing datasets were held out as independent, unseen datasets to evaluate the trained model performance.
Automated deep learning interface for CRISPR/Cas9 editing outcome prediction
CROTON is a deep CNN that predicts CRISPR/Cas9 editing outcomes from raw one hot encoded DNA sequences. Given a one-hot encoded input sequence of shape x_i ∈ R^(4×60), the task for CROTON was to learn a function f_(W,a)(·) with trainable parameters W under a fixed architecture a that mapped a sequence x_i to a vector of six indel and frameshift-related probabilities y_i = {y_ij ∈ [0, 1] | j = 1, 2, ..., 6}, such that y_i = f_(W,a)(x_i). To search for expressive architectures a and learn f_(W,a), AMBER (v0.1.0), a framework for CNN architecture design for genomic sequence processing, was utilized to automatically design the CROTON model architecture. In AMBER, CROTON's input and output stems were fixed to fit the input sequences and output labels, while its middle eight convolution layers were searched (Zhang et al., 2021).
We first describe the fixed input and output stems for CROTON. The input layer contained a 4 × 60 matrix x_i, followed by a linear stem convolution layer with kernel size 8 that expanded the 4-channel DNA sequence into 32 channels. The top of the model employed global average pooling that flattened convolution layers to a fully connected layer with 32 hidden units, and the final outputs of the model were multi-tasking predictions for each of the six editing outcome statistics. We used binary cross-entropy as the loss function to update W for predictions of the six indel and frameshift-related probabilities on a set of N training datapoints:

L = −(1/N) Σ_i Σ_j [y_ij log ŷ_ij + (1 − y_ij) log(1 − ŷ_ij)].

Next, we describe the model search space for the variable layers of CROTON. Specifically, the middle eight convolution layers were variable, and their computational operations and residual connections were searched by AMBER (Zhang et al., 2021) to build an optimal CNN architecture. For each layer, AMBER searched over six candidate computational operations: four convolution layers with Rectified Linear Unit (ReLU) activation, kernel size ∈ {4, 8} and dilation rate ∈ {1, 4}, as well as maximum and average pooling layers with pooling size = 4 and stride size = 1. Convolution operations across all layers had 32 filters, consistent with the input stem linear convolution. A special operation, identity mapping, was also added at each layer to potentially reduce model complexity. For any layer t, the computation operation was sparsely encoded by a^o_t ∈ [1, 7]. Residual connections for the t-th layer were encoded as binary tokens a^r_t ∈ {0, 1} from each of the preceding layers 1, 2, ..., t − 1. For brevity, we let a = {a^o_t, a^r_t | t = 1, 2, ..., T} be CROTON's model architecture tokens for both computation operations and residual connections in the T = 8 model space, such that a set of architecture tokens a fully specifies a model architecture for CROTON. In total, this eight-layer model space hosted 1.54 × 10^15 viable model architectures.
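The quoted size of the search space can be checked directly: seven candidate tokens (six operations plus identity) at each of the eight layers, and one binary residual token per (earlier layer, current layer) pair.

```python
T = 8                            # searched layers
ops_per_layer = 7                # 6 candidate operations + identity mapping
residual_tokens = sum(range(T))  # t-1 binary tokens at layer t: 0+1+...+7 = 28

n_architectures = ops_per_layer ** T * 2 ** residual_tokens
print(f"{n_architectures:.2e}")  # 1.55e+15, i.e. the quoted 1.54e15 up to rounding
```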
Therefore, the architecture search problem was formulated as a sparse classification for the selection of computation operations, and binary classifications for residual connections, respectively. AMBER leverages a recurrent neural network (RNN) with parameters θ as a controller model to generate CROTON's model architectures a with log-likelihood p(a; θ). A detailed explanation of the AMBER workflow can be found in our published work (Zhang et al., 2021).
Formally, let a^o_t denote the computation operation, and a^r_t denote residual connections for the t-th layer; let h_t denote the hidden states of the controller model at the t-th layer. At each layer t, a^o_t and a^r_t were sampled probabilistically from multinomial and binomial distributions, respectively; subsequently, the sampled tokens were fed as inputs to the next layer t + 1. In particular, the controller model predicted a^o_t by first updating the hidden state through a long short-term memory (LSTM) cell h_t = f_(θo)(a^o_(t−1), h_(t−1)), then sampling from the multinomial distribution of the softmax function σ(·) applied to h_t transformed by weight W_o:

P(a^o_t) = σ(W_o h_t).

The residual connection for the t-th layer from the r-th layer, 1 ≤ r < t, was sampled from the binomial distribution whose probability was determined by an attention mechanism between the query layer's hidden state h_t and the previous layer's hidden state h_r, with trainable weights v, W_r1 and W_r2:

P(a^r_(t,r) = 1) = sigmoid(v^T tanh(W_r1 h_r + W_r2 h_t)).

Thus, the total trainable parameters for the controller model were θ = {θ_o, W_o, W_r1, W_r2, v}, and the log-likelihood for selecting a set of architecture tokens a under the parameters θ was p(a; θ). We employed reinforcement learning to optimize θ. Following the previously established REINFORCE rule (Williams, 1992), the policy gradient for θ was obtained to maximize the average multi-tasking Spearman's correlation coefficient R on the validation dataset over a batch of m sampled architectures, with an exponential moving average of rewards b to stabilize the reward signals:

∇_θ ≈ (1/m) Σ_(k=1..m) ∇_θ log p(a_k; θ) (R_k − b).

Finally, the optimal AMBER-searched CNN architecture, which was defined as the best-reward architecture in the last controller step, was scaled in width by a scaling factor. Dropouts were then added after each searched layer before the architecture was retrained from scratch. We performed a simple grid search for these two additional hyperparameters and report the best performing model with width scaling factor = 6 and dropout rate = 0.4. Training convergence was defined as validation loss not decreasing for at least 50 epochs.
Performance comparisons
We sampled CNN architectures from the same model space without training the AMBER controller model to benchmark the quality of the automatically designed model architecture. In particular, computational operations were sampled uniformly from the model space; residual connections were sampled at the same density as CROTON. A total of n = 50 models with sampled architectures were trained with identical width-scale factor and optimization configurations to robustly evaluate an uninformed, null distribution of performance in the model space. Subsequently, the testing performance for every CROTON prediction task was compared to that of the sampled model cohort.
We also evaluated CROTON's predictions by classifying each individual task's predicted probability as high (larger than the observed median value) versus low (lower than the observed median value), and calculated the area under the receiver operating characteristic curve (AUC-ROC) for this binary classification problem.
Furthermore, we applied the trained CROTON model to the held-out SPROUT T-cell dataset as an independent, cross-cellular benchmark. Existing methods were benchmarked against CROTON, including inDelphi (Shen et al., 2018), FORECasT (Allen et al., 2019) and SPROUT (Leenay et al., 2019). For inDelphi and FORECasT, we used the publicly available trained models (https://github.com/maxwshen/inDelphi-model and https://github.com/felicityallen/SelfTarget) to generate predictions for all sequences in the SPROUT dataset. The Pearson's correlation between predicted and observed values was then utilized to compare the performance of inDelphi and FORECasT to that of CROTON. Since inDelphi had different models for different cell lines, we reported values from the best performing inDelphi cell-line/model. For the SPROUT model trained on the SPROUT dataset, we compared CROTON's performance to the published metrics (Leenay et al., 2019) under the criteria defined by SPROUT (i.e. Kendall's tau for 1 bp insertion and deletion probabilities, and Pearson's correlation for deletion frequency).
In silico saturated mutagenesis analysis for model interpretation
To interpret how the CNNs made their predictions, in silico saturated mutagenesis was performed using the Selene framework (Chen et al., 2019). In silico saturated mutagenesis is a perturbation-based base importance analysis method in which CNNs evaluate DNA sequences with single nucleotide polymorphisms (SNPs). In an SNP, a nucleotide at a specific position along a DNA sequence is changed to another; for instance, 'ACC' is a perturbed sequence of 'GCC'. In in silico saturated mutagenesis, the model runs on every possible one hot encoded sequence that can be perturbed from the original sequence. The final interpretation output is a matrix with the same shape as the input (4 × 60) in which every matrix entry represents a base importance score calculated as the difference between the predictions for the reference sequence and the altered sequence. In summary, in silico saturated mutagenesis evaluates how important every base pair position is to a CNN by computing the deviation of its predictions for sequences with SNPs at that position from the original unperturbed sequence. Herein, sequences with model predictions within 0.05 of true values were utilized for in silico saturated mutagenesis analysis.
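A sketch of that perturbation loop; `model` here is a placeholder for any trained predictor mapping a 4 × L one-hot matrix to a scalar, not CROTON's or Selene's actual interface.

```python
import numpy as np

def saturated_mutagenesis(x_ref: np.ndarray, model) -> np.ndarray:
    """Importance scores for every (base, position); x_ref is one-hot, 4 x L.
    model: callable returning a scalar prediction for a 4 x L matrix (placeholder)."""
    n_bases, length = x_ref.shape
    ref_pred = model(x_ref)
    scores = np.zeros_like(x_ref)
    for pos in range(length):
        for base in range(n_bases):
            if x_ref[base, pos] == 1.0:
                continue                    # skip the reference base itself
            x_mut = x_ref.copy()
            x_mut[:, pos] = 0.0
            x_mut[base, pos] = 1.0          # substitute a single nucleotide
            scores[base, pos] = model(x_mut) - ref_pred
    return scores
```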
Variant effect analysis for frameshift gRNA design
The human genome-wide variants dbSNP build 151 VCF file was downloaded from NCBI (ftp.ncbi.nih.gov/snp/organisms/human_9606_b151_GRCh38p7/VCF/). For all annotated coding exons in Gencode V35, we scanned potential PAM sites ('NGG') in the hg38 genome before aligning them to the CROTON 60 bp window. Then, bedtools (v2.29) was used to intersect the PAM sequences with the variants. For each PAM site with variants in the four representative genes (ACE2, CCR5, CTLA4 and PDCD1), CROTON predicted editing outcome probabilities for sequences with reference and alternative alleles. The differences between reference and alternative alleles were subsequently calculated for each of the individual tasks. In addition, to find the least variant gene-editing targets, the absolute differences between the reference and alternative CROTON predictions were computed across all statistics for all SNVs at a particular potential target location. These targets were then ranked by the mean of their absolute differences to elucidate the gene targets with the least impactful SNVs.
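A sketch of the PAM scan and window alignment described above; the regex and offsets encode our reading of the "PAM at position 33, cut site at position 30" convention, and the function name is ours.

```python
import re

def pam_windows(seq: str, window: int = 60, pam_start_in_window: int = 33):
    """Yield (pam_position, 60 bp window) for every NGG site fully inside seq.
    The window is placed so the PAM's first base lands at position 33 (1-based)."""
    offset = pam_start_in_window - 1             # 0-based index of the PAM in the window
    for m in re.finditer(r"(?=[ACGT]GG)", seq):  # lookahead allows overlapping sites
        start = m.start() - offset
        if start >= 0 and start + window <= len(seq):
            yield m.start(), seq[start:start + window]

demo = "A" * 40 + "TGG" + "C" * 40
for pos, win in pam_windows(demo):
    print(pos, len(win), win[32:35])             # PAM occupies window bases 33-35
```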
Automated model architecture design for CROTON
CROTON was built on data from FORECasT because it produced the largest CRISPR/Cas9 editing outcome dataset relative to those of inDelphi and SPROUT (Allen et al., 2019; Leenay et al., 2019; Shen et al., 2018). FORECasT data was split into training, validation and testing datasets, and all metrics presented are from model performance on the testing dataset, to which CROTON was unexposed during training. CROTON was designed to predict 1 bp insertion and 1 bp deletion probability, as well as deletion, 1 bp frameshift, 2 bp frameshift and overall frameshift frequency. Since these features were interrelated, we chose to utilize a multi-task learning framework.
Multi-task learning can outperform single-task learning by leveraging features derived for multiple prediction tasks (Zhang and Yang, 2018). In addition, manually tuning a CNN would be time-consuming and limited in scope. Thus, we utilized NAS to automatically create a multi-task deep CNN framework for CRISPR/Cas9 editing outcome prediction. The NAS model search space contained dilated and non-dilated one-dimensional convolutional layers with kernel sizes four and eight (dconv4, dconv8, conv4 and conv8) as well as maximum pooling (maxpool), average pooling (avgpool) and identity layers (Methods). To assess the efficacy of automated model engineering, the final NAS architecture was compared to 50 randomly sampled model architectures from the search space. The final NAS-designed CROTON model outperformed all randomly selected model architectures, indicating that NAS is an effective strategy for deep-CNN design. The final CROTON architecture achieved Pearson's correlations (R_P) greater than 60 for all prediction tasks and R_P greater than 70 for deletion frequency, 1 bp insertion probability and 1 bp deletion probability prediction (Fig. 2A).
We also analyzed the average layer selection probabilities for the NAS run. Interestingly, across all model layers, convolutional layers were consistently favored over pooling layers, indicating that precise feature locations were important in our model. In addition, a dilated convolutional layer of size 8 was favored for all layers after Layer 1. Dilated layers allow the receptive field to be enlarged without losing resolution or coverage, further suggesting that spatial relationships between features were important for CRISPR/Cas9 outcome prediction (Fig. 2B; Yu and Koltun, 2016).
CROTON accurately predicts CRISPR/Cas9 editing outcomes across cell lines and outperforms other predictors
The efficacy of CROTON was also assessed by computing whether it made accurate predictions above or below the median value in each task dataset. Using this evaluation strategy, the area under the curve (AUC) was calculated to measure CROTON's performance. On the FORECasT data, CROTON achieved AUCs of greater than 80 for all prediction tasks and greater than 90 for the deletion frequency and 1 bp insertion tasks (Fig. 3A). Since FORECasT data was based on synthetic gRNA-CRISPR target constructs, it was important to test CROTON on an endogenously generated gene-editing dataset (Allen et al., 2019). To this end, CROTON was tested on the held-out, independent SPROUT CRISPR/Cas9 editing outcome dataset. This dataset was derived from primary human T cells, which are widely utilized in therapeutic cell engineering (Leenay et al., 2019; Wang et al., 2020). On the SPROUT dataset, CROTON's performance was conserved, with AUCs similar to those measured with the FORECasT dataset, indicating that large-scale synthetic construct based datasets are effective for endogenous CRISPR/Cas9 predictions (Fig. 3B).
CROTON's predictive accuracy on the SPROUT dataset was then compared to that of existing ML-based CRISPR/Cas9 editing outcome predictors: SPROUT, FORECasT and inDelphi. For inDelphi, metrics for the best performing model on the HEK293 cell line were reported. Overall, CROTON substantially outperformed all models on all but one task. CROTON was only less effective than FORECasT at frameshift frequency prediction but outperformed FORECasT by wide margins on other prediction tasks such as deletion and 1 bp insertion frequency (Tables 1 and 2). Notably, CROTON performed on par with or even outperformed SPROUT, which was trained on the SPROUT dataset.
In silico mutagenesis revealed local sequence determinants for diverse editing outcomes
Since CROTON is an effective CRISPR/Cas9 editing outcome predictor and does not utilize any manual feature engineering, it was important to elucidate how CROTON made predictions from a raw input sequence. Thus, we conducted in silico saturated mutagenesis for CROTON on all prediction tasks for both the FORECasT and SPROUT datasets. These plots display the average importance values computed over multiple sequences in these test datasets (Methods). Across FORECasT and SPROUT data, saturated mutagenesis plots for the same prediction task were very similar. Representative in silico saturated mutagenesis plots based on FORECasT data are shown, in which larger text is indicative of nucleotides with greater importance to CROTON's prediction (Fig. 4). Consistent with prior reports, the base pairs upstream of the PAM sequence were the most important for CROTON's template-free CRISPR/Cas9 editing outcome predictions (Fig. 4; Leenay et al., 2019; Shen et al., 2018). In particular, our analyses support cross-cell line and cross-organism studies that have shown that the nucleotide 4 base pairs upstream of the PAM has the greatest effect on CRISPR/Cas9 DSB repair (Fig. 4A and C; Molla and Yang, 2020). Thus, our study confirms that the positions of nucleotides relative to the PAM site are important to CRISPR/Cas9 editing outcomes. Notably, 1 bp deletion and frameshift frequency had determinants across the entire input sequence (Fig. 4B and D), suggesting that they are more complex prediction tasks.
CROTON reveals the effects of SNVs on CRISPR/Cas9-mediated genome editing
There are approximately 10-15 million common human SNVs, which can impact the efficacy of CRISPR/Cas9 editing (Chen et al., 2020; Eichler et al., 2007). Since CROTON predicts 1 bp insertion probability with the best performance (Tables 1 and 2), we utilized 1 bp insertion probability to analyze the effect of SNVs on CRISPR/Cas9 editing outcomes. CROTON was applied across the coding regions of the gene bodies of 4 clinically relevant gene editing targets: ACE2, CCR5, CTLA4 and PDCD1. ACE2 and CCR5 are receptors for the SARS-CoV-2 virus and the human immunodeficiency virus (HIV), respectively, and have been considered as therapeutic targets for viral infection (Michauld et al., 2020; Vangelista and Vento, 2018). CTLA4 and PDCD1 are immune checkpoint inhibitors that can be targeted for cancer immunotherapy (Shi et al., 2017; Stadtmauer et al., 2020; Wang et al., 2020). Indeed, several ongoing clinical trials utilize CRISPR/Cas9 to delete PDCD1, including one that has been deemed safe and feasible for late-stage non-small cell lung cancer (NSCLC) patients (ClinicalTrials.gov NCT02793856; Lu et al., 2020; Wang et al., 2020).
Notably, CROTON's analysis revealed that there were SNVs that altered the 1 bp insertion probability by ≥30% in all four clinically relevant genes (Table 3). We have also tabulated the top ten least variant gene target locations for each of these genes (Supplementary Tables S1-S4). Since 1 bp insertions result in frameshift mutations that will likely inactivate the target gene, these findings indicate that personalized genomic variants should be properly considered at these PAM sites for genome-editing applications in patients. We further analyzed PDCD1 because it is involved in the greatest number of ongoing interventional CRISPR/Cas9 clinical trials (Wang et al., 2020). The 1 bp insertion probability in PDCD1 varies considerably across the coding regions of the gene body. Notably, the two gRNAs which were used in the NSCLC trial, hereafter referred to as gRNA1 and gRNA2, had a high and low 1 bp insertion probability, respectively (Fig. 5; boxed in orange). These differences indicate that gRNA1 by itself is more likely to create a loss of function mediated by a 1 bp insertion than gRNA2. CROTON's predictions indicate that SNVs may be important factors to consider in CRISPR/Cas9 genome editing, especially in clinical trials with patients that harbor these variants.
Discussion
CRISPR/Cas9 is a transformative gene-editing technology that has been widely applied in basic and translational biological research (Hsu et al., 2014; Wang et al., 2020). Recently, the creation of ML models capable of predicting the repair outcomes of CRISPR/Cas9 editing has highlighted the potential of predictable and precise template-free genome editing paradigms (Molla and Yang, 2020). However, existing ML prediction methods are all dependent on feature and model engineering, which may restrict their performance to current knowledge about CRISPR/Cas9 editing (Allen et al., 2019; Leenay et al., 2019; Shen et al., 2018). Notably, deep CNNs and state-of-the-art NAS have been used to generate computational models based on genomic sequences (Eraslan et al., 2019; Zhang et al., 2021; Zoph and Le, 2017). In this study, we created CROTON, a novel framework that leverages both multi-tasking deep CNNs and NAS to predict CRISPR/Cas9 editing outcomes. CROTON predicts 1 bp insertion and 1 bp deletion probability, as well as deletion, 1 bp frameshift, 2 bp frameshift, and overall frameshift frequency directly from raw DNA target sequences. CROTON is highly automated relative to existing ML prediction methods and it outperforms them on a primary T cell-based genomic editing dataset. These results highlight the potential of CNNs and NAS for the precise prediction of genomic editing outcomes. A CROTON web interface has been made publicly available at the following link: https://github.com/vli31/CROTON.
The efficacy of NAS-designed models implies that NAS has significant potential in genomics and can design models that accurately predict molecular phenotypes from raw sequence alone. Furthermore, in silico saturated mutagenesis of CROTON showed that nucleotides upstream of the PAM were important to CNN prediction, which aligns with previous reports (Molla and Yang, 2020). Although CROTON was built on the synthetic construct-based FORECasT dataset, when tested on the endogenous genomic SPROUT dataset, CROTON's accuracy was largely conserved and it outperformed existing models. CROTON's effectiveness across these two datasets indicates that utilizing synthetic constructs is an effective strategy to generate the large-scale data necessary for ML.
CROTON is highly effective in predicting 1 bp insertion probability, which can result in a frameshift mutation that inactivates a target gene. However, similar to other CRISPR/Cas9 editing outcome predictors, CROTON is less effective in predicting overall frameshift frequency, which may limit its usage for loss-of-function gRNA design. CROTON's accurate 1 bp insertion probability predictions were applied to 4 clinically relevant target genes to assess how SNVs affect genome editing outcomes. In all four genes, ACE2, CCR5, CTLA4 and PDCD1, CROTON found variants that caused a significant difference in 1 bp insertion probability. To our knowledge, this is the first study that considers how naturally occurring variants affect CRISPR/Cas9 gene editing outcomes. We found that genomic loci with SNVs that have large effects on CRISPR/Cas9 editing outcomes should be avoided in widely applicable gRNA design. Further analysis of two gRNAs that were utilized in an NSCLC clinical trial revealed differential 1 bp insertion probability. Future studies may reveal whether this difference has a significant impact on genome editing outcomes in patients and whether there are better gRNA pairs for effective PDCD1 genome editing. A CRISPR editing outcome predictor sensitive to genetic alterations at base-pair resolution like CROTON could be critical for designing effective gene therapies tailored to individual patients. In addition, CROTON may be further developed to predict a more complete spectrum of DNA repair sequences. Notably, template-free CRISPR/Cas9-based correction of genetic diseases has been performed in Hermansky-Pudlak syndrome and Menkes disease with 88% and 94% efficiency, respectively (Shen et al., 2018). If CROTON can predict specific DNA sequences resulting from template-free repair of a CRISPR/Cas9-induced DSB, it may also be utilized to design gRNAs capable of restoring normal gene function.
Furthermore, CROTON may be adapted to elucidate the fundamental cellular and molecular alterations induced by CRISPR/Cas9 editing. CROTON can be used to form a deep learning pipeline with existing algorithms that can predict transcription, splicing and polyadenylation from raw DNA sequences (Bogard et al., 2019; Jaganathan et al., 2019; Zhou et al., 2018). This CROTON-based platform would allow CRISPR/Cas9 to be used for precise manipulation of the transcriptome, thus creating a novel paradigm for functional genomics and biomedicine.
Fig. 1.
Fig. 1. The CROTON ML pipeline is highly automated. Unlike the three existing models for CRISPR/Cas9 editing outcome prediction, CNN and NAS-based CROTON is based on automated feature and model design, which creates an end-to-end ML pipeline from data acquisition to model deployment
Fig. 2.
Fig. 2. NAS designs effective multi-task deep CNN architectures. (A) CROTON outperforms models with sampled architectures from the model search space. CROTON achieves R_P of >60 for all prediction tasks and R_P of 87.96 and 90.79 for deletion frequency and 1 bp insertion probability prediction, respectively. (B) The layer selection probabilities for the best CROTON architecture
Fig. 4.
Fig. 4. The importance assigned by CROTON to every nucleotide of the input sequence. In silico saturated mutagenesis plots for 1 bp insertion probability (A), 1 bp deletion probability (B), deletion frequency (C) and frameshift frequency (D)
Fig. 5.
Fig. 5. SNVs affect CRISPR/Cas9 editing outcomes. The distribution of CROTON's 1 bp insertion probability predictions on all 211 PAM sites across the five PDCD1 coding regions. The orange boxes indicate gRNA1 (left) and gRNA2 (right), which were utilized in an NSCLC clinical trial that used CRISPR/Cas9 to inactivate PDCD1. The black horizontal line indicates the 1 bp insertion probability prediction for the reference sequence, while circles (color-coded by exon) indicate the 1 bp insertion probability predictions for sequences with alternative alleles
Table 1.
Performance comparison of CROTON, inDelphi and FORECasT by Pearson's correlation (R_P)
Table 2.
Performance comparison of CROTON and SPROUT
Table 3.
SNVs with a High Impact on 1 bp Insertion Probability
An Efficient Compiler for Weighted Rewrite Rules
Context-dependent rewrite rules are used in many areas of natural language and speech processing. Work in computational phonology has demonstrated that, given certain conditions, such rewrite rules can be represented as finite-state transducers (FSTs). We describe a new algorithm for compiling rewrite rules into FSTs. We show the algorithm to be simpler and more efficient than existing algorithms. Further, many of our applications demand the ability to compile weighted rules into weighted FSTs, transducers generalized by providing transitions with weights. We have extended the algorithm to allow for this.
Motivation
Rewrite rules are used in many areas of natural language and speech processing, including syntax, morphology, and phonology 1. In interesting applications, the number of rules can be very large. It is then crucial to give a representation of these rules that leads to efficient programs.
Finite-state transducers provide just such a compact representation (Mohri, 1994). They are used in various areas of natural language and speech processing because their increased computational power enables one to build very large machines to model interestingly complex linguistic phenomena. They also allow algebraic operations such as union, composition, and projection which are very useful in practice (Berstel, 1979; Eilenberg, 1974; Eilenberg, 1976). And, as originally shown by Johnson (1972), rewrite rules can be modeled as

1 Parallel rewrite rules also have interesting applications in biology. In addition to their formal language theory interest, systems such as those of Aristid Lindenmayer provide rich mathematical models for biological development (Rozenberg and Salomaa, 1980).
finite-state transducers, under the condition that no rule be allowed to apply any more than a finite number of times to its own output. Kaplan and Kay (1994), or equivalently Karttunen (1995), provide an algorithm for compiling rewrite rules into finite-state transducers, under the condition that they do not rewrite their noncontextual part 2. We here present a new algorithm for compiling such rewrite rules which is both simpler to understand and implement, and computationally more efficient. Clarity is important since, as pointed out by Kaplan and Kay (1994), the representation of rewrite rules by finite-state transducers involves many subtleties. Time and space efficiency of the compilation are also crucial. Using naive algorithms can be very time consuming and lead to very large machines (Liberman, 1994).
In some applications such as those related to speech processing, one needs to use weighted rewrite rules, namely rewrite rules to which weights are associated. These weights are then used at the final stage of applications to output the most probable analysis. Weighted rewrite rules can be compiled into weighted finite-state transducers, namely transducers generalized by providing transitions with a weighted output, under the same context condition. These transducers are very useful in speech processing (Pereira et al., 1994). We briefly describe how we have augmented our algorithm to handle the compilation of weighted rules into weighted finite-state transducers.
In order to set the stage for our own contribution, we start by reviewing salient aspects of the Kaplan and Kay algorithm.
2 The general question of the decidability of the halting problem even for one-rule semi-Thue systems is still open. Robert McNaughton (1994) has recently made a positive conjecture about the class of rules without self-overlap.

Prologue ∘ Id(Obligatory(φ, <i, >)) ∘ ⋯ (1)
The KK Algorithm
The rewrite rules we consider here have the following general form:

φ → ψ / λ _ ρ (2)

Such rules can be interpreted in the following way: φ is to be replaced by ψ whenever it is preceded by λ and followed by ρ. Thus, λ and ρ represent the left and right contexts of application of the rules. In general, φ, ψ, λ and ρ are all regular expressions over the alphabet of the rules. Several types of rules can be considered depending on their being obligatory or optional, and on their direction of application, from left to right, right to left or simultaneous application. Consider an obligatory rewrite rule of the form φ → ψ / λ _ ρ, which we will assume applies left to right across the input string. Compilation of this rule in the algorithm of Kaplan and Kay (1994) (KK for short) involves composing together six transducers, see Figure 1.
We use the notations of KK. In particular, Σ denotes the alphabet, < denotes the set of context labeled brackets {<a, <i, <c}, > the set {>a, >i, >c}, and 0 an additional character representing deleted material. Subscript symbols of an expression are symbols which are allowed to freely appear anywhere in the strings represented by that expression. Given a regular expression r, Id(r) is the identity transducer obtained from an automaton A representing r by adding output labels to A identical to its input labels.
The first transducer, Prologue, freely introduces labeled brackets from the set {<a, <i, <c, >a, >i, >c} which are used by the left and right context transducers. The last transducer, Prologue⁻¹, erases all such brackets.
In such a short space, we can of course not hope to do justice to the KK algorithm, and the reader who is not familiar with it is urged to consult their paper. However, one point that we do need to stress is the following: while the construction of Prologue, Prologue⁻¹ and Replace is fairly direct, construction of the other transducers is more complex, with each being derived via the application of several levels of regular operations from the original expressions in the rules. This clearly appears from the explicit expressions we have indicated for the transducers. The construction of the three other transducers involves many operations including: two intersections of automata, two distinct subtractions, and nine complementations. Each subtraction involves an intersection and a complementation algorithm 3. So, in all, four intersections and eleven complementations need to be performed.
Intersection and complementation are classical automata algorithms (Aho et al., 1974; Aho et al., 1986). The complexity of intersection is quadratic. But the classical complementation algorithm requires the input automaton to be deterministic. Thus, each of these 11 operations requires first the determinization of the input. Such operations can be very costly in the case of the automata involved in the KK algorithm 4.
In the following section we briefly describe a new algorithm for compiling rewrite rules. For reasons of space, we concentrate here on the compilation of left-to-right obligatory rewrite rules. However, our methods extend straightforwardly to other modes of application (optional, right-to-left, simultaneous, batch), or kinds of rules (two-level rules) discussed by Kaplan and Kay (1994).
3 A subtraction can of course also be performed directly by combining the two steps of intersection and complementation, but the corresponding algorithm has exactly the same cost as the total cost of the two operations performed consecutively.
4 One could hope to find a more efficient way of determining the complement of an automaton that would not require determinization. However, this problem is PSPACE-complete. Indeed, the regular expression non-universality problem is a subproblem of complementation known to be PSPACE-complete (Garey and Johnson, 1979, page 174; Stockmeyer and Meyer, 1973). This problem, also known as the emptiness of complement problem, has been extensively studied (Aho et al., 1974, page 410-419).
Overview
In contrast to the KK algorithm, which introduces brackets everywhere only to restrict their occurrence subsequently, our algorithm introduces context symbols just when and where they are needed. Furthermore, the number of intermediate transducers necessary in the construction of the rules is smaller than in the KK algorithm, and each of the transducers can be constructed more directly and efficiently from the primitive expressions of the rule, φ, ψ, λ, ρ.
A transducer corresponding to the left-to-right obligatory rule φ → ψ / λ _ ρ can be obtained by the composition of five transducers:

r ∘ f ∘ replace ∘ l1 ∘ l2 (3)

1. The transducer r introduces in a string a marker > before every instance of ρ. For reasons that will become clear we will notate this as Σ*ρ → Σ* >ρ.
2. The transducer f introduces markers <1 and <2 before each instance of φ that is followed by >. In other words, this transducer marks just those φ that occur before ρ. 3. The replacement transducer replace replaces φ with ψ in the context <1 φ >, simultaneously deleting > in all positions (Figure 2). Since >, <1, and <2 need to be ignored when determining an occurrence of φ, there are loops over the transitions >:ε, <1:ε, <2:ε at all states of φ, or equivalently of the states of the cross product transducer φ × ψ. 4. The transducer l1 admits only those strings in which occurrences of <1 are preceded by λ and deletes <1 at such occurrences: Σ*λ<1 → Σ*λ. 5. The transducer l2 admits only those strings in which occurrences of <2 are not preceded by λ and deletes <2 at such occurrences: Σ*λ̄<2 → Σ*λ̄.
Clearly, the composition of these transducers leads to the desired result. The construction of the transducer replace is straightforward. In the following, we show that the construction of the other four transducers is also very simple, requiring only the determinization of three automata plus additional work linear (in time and space) in the size of the determinized automata.
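This style of rule compiler survives in current finite-state toolkits. As a quick illustration, here is a minimal sketch using the OpenFst-based pynini library, which is our assumption and not part of this paper: it compiles rules in the same spirit, and we assume its 2.1-era API for cdrewrite, cross, and union.

```python
# Sketch only: pynini (pynini 2.1 API assumed) is a later rule compiler in the
# tradition of this algorithm; it is not the implementation described here.
import pynini

# sigma*: closure over a toy alphabet.
sigma_star = pynini.union(*"abc").closure().optimize()

# Compile the obligatory rule a -> b / __ c.
rule = pynini.cdrewrite(
    pynini.cross("a", "b"),  # phi -> psi
    "",                      # lambda: empty left context
    "c",                     # rho: right context
    sigma_star,
)

# Apply the rule to one input string via composition.
lattice = pynini.compose("aca", rule)
print(pynini.shortestpath(lattice).project("output").rmepsilon().string())  # -> "bca"
```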
Markers
Markers of TYPE 1. Let us start by considering the problem of constructing what we shall call a TYPE 1 transducer, which inserts a marker after all prefixes of a string that match a particular regular expression. Given a regular expression β defined on the alphabet Σ, one can construct, using classical algorithms (Aho et al., 1986), a deterministic automaton α representing Σ*β. As with the KK algorithm, one can obtain from α a transducer X = Id(α) simply by assigning to each transition the same output label as the input label. We can easily transform X into a new transducer r such that it inserts an arbitrary marker # after each occurrence of a pattern described by β. To do so, we make final the non-final states of X, and for any final state q of X we create a new state q′, a copy of q. Thus, q′ has the same transitions as q, and q′ is a final state. We then make q non-final, remove the transitions leaving q, and add a transition from q to q′ with input label the empty word ε and output #.

Proposition 1. Let α be a deterministic automaton representing Σ*β; then the transducer r obtained as described above post-marks the occurrences of β in a string of Σ* with #.
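Before turning to the proof, a minimal string-level simulation of this TYPE 1 construction may help. The sketch below is ours, not the paper's; it assumes the complete DFA for Σ*β is given as a transition table, and the example DFA, for β = ab over Σ = {a, b}, is hypothetical.

```python
def mark_type1(s, delta, start, finals, marker="#"):
    """Insert `marker` after every prefix of s accepted by the DFA for sigma*beta.
    `delta` must be the transition table of a complete deterministic automaton."""
    out, q = [], start
    for a in s:
        q = delta[(q, a)]      # completeness guarantees this lookup never fails
        out.append(a)
        if q in finals:        # the prefix read so far ends with a pattern of beta
            out.append(marker)
    return "".join(out)

# Complete DFA for sigma* "ab" over {a, b}: state 2 (just read "ab") is final.
delta = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 0}
print(mark_type1("aabab", delta, start=0, finals={2}))  # -> "aab#ab#"
```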
Proof. The proof is based on the observation that a deterministic automaton representing Σ*β is necessarily complete.⁵ Notice that non-deterministic automata representing Σ*β are not necessarily complete. Let q be a state of α and let u ∈ Σ* be a string reaching q.⁶ Let v be a string described by the regular expression β. Then, for any a ∈ Σ, uav is in Σ*β. Hence, uav is accepted by the automaton α, and, since α is deterministic, there exists a transition labeled with a leaving q. Thus, one can read any string u ∈ Σ* using the automaton α. Since, by definition of α, the state reached when reading a prefix u′ of u is final iff u′ ∈ Σ*β, by construction the transducer r inserts the symbol # after the prefix u′ iff u′ ends with a pattern of β. This ends the proof of the proposition. □

Markers of TYPE 2. In some cases, one wishes to check that any occurrence of # in a string s is preceded (or followed) by an occurrence of a pattern of β. We shall say that the corresponding transducers are of TYPE 2. They play the role of a filter. Here again, they can be defined from a deterministic automaton representing Σ*β. Figure 5 illustrates the modifications to make from the automaton of Figure 3. The symbol # should only appear at final states and must be erased. The loop #:ε added at the final states of Id(α) is enough for that purpose.
All states of the transducer are then made final, since any string conforming to this restriction is acceptable: cf. the transducer l₁ for λ above.

⁵ An automaton A is complete iff at any state q and for any element a of the alphabet Σ there exists at least one transition leaving q labeled with a. In the case of deterministic automata, this transition is unique.
⁶ We assume all states of α to be accessible. This is true if α is obtained by determinization.
Markers of TYPE 3
In other cases, one wishes to check the reverse constraint, that is, that occurrences of # in the string s are not preceded (or followed) by any occurrence of a pattern of β. The transformation then simply consists of adding a #:ε loop at each non-final state of Id(α) and of making all states final.
Thus, a state such as that of Figure 6 is transformed into that of Figure 5. We shall say that the corresponding transducer is of TYPE 3: cf. the transducer l₂ for λ above.
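The TYPE 2 and TYPE 3 filters admit the same string-level reading. Here is a sketch of ours, under the same assumptions as the TYPE 1 example (a complete DFA for Σ*β given as a transition table); `require=True` gives the TYPE 2 behavior and `require=False` the TYPE 3 behavior.

```python
def filter_marks(s, delta, start, finals, marker="#", require=True):
    """TYPE 2 (require=True): accept s iff every `marker` is immediately preceded
    by a pattern of beta; TYPE 3 (require=False): iff no `marker` is. The markers
    are erased on acceptance; None signals rejection."""
    out, q = [], start
    for a in s:
        if a == marker:
            if (q in finals) != require:   # wrong context for this marker
                return None
        else:
            q = delta[(q, a)]
            out.append(a)
    return "".join(out)

# Reusing the complete DFA for sigma* "ab" from the TYPE 1 sketch:
delta = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 0}
print(filter_marks("aab#ab#", delta, 0, {2}, require=True))  # -> "aabab"
print(filter_marks("a#b", delta, 0, {2}, require=True))      # -> None (rejected)
```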
The construction of these transducers (TYPE 1-3) can be generalized in various ways. In particular:

• One can add several alternative markers {#₁, …, #ₖ} after each occurrence of a pattern of β in a string. The result is then an automaton with transitions labeled with #₁, …, #ₖ after each pattern of β: cf. the transducer f for φ above.
• Instead of inserting a symbol, one can delete a symbol that would necessarily be present after each occurrence of a pattern of β.
For any regular expression α, define Marker(α, type, deletions, insertions) as the transducer of type type constructed as previously described from a deterministic automaton representing α, where insertions and deletions are, respectively, the set of insertions and the set of deletions the transducer makes.
Proposition 2. For any regular expression α, Marker(α, type, deletions, insertions) can be constructed from a deterministic automaton representing α in linear time and space with respect to the size of this automaton.
Proof. We proved in the previous proposition that the modifications do indeed lead to the desired transducer for TYPE 1. The proof for the other cases is similar. That the construction is linear in space is clear, since at most one additional transition and state is created per final or non-final state.⁷ The overall time complexity of the construction is also linear, since the construction of Id(α) is linear in the size of α. We have just shown that Marker(α, type, deletions, insertions) can be constructed in a very efficient way. Figure 7 gives the expressions of the four transducers r, f, l₁, and l₂ using Marker.

⁷ For TYPE 2 and TYPE 3, no state is added, only a transition per final or non-final state.
Thus, these transducers can be constructed very efficiently from deterministic automata representing⁸ Σ* reverse(ρ), (Σ ∪ {>})* reverse(φ_> >), and Σ*λ. The construction of r and f requires two reverse operations; this is because these two transducers insert material before ρ or φ.
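At string level, the reverse trick can be simulated directly: to place a marker before each occurrence of ρ, run the TYPE 1 marking on the reversed string with a DFA for Σ* reverse(ρ), then reverse the result back. The sketch below is ours, not the paper's; it reuses `mark_type1` from the TYPE 1 sketch above and a hypothetical ρ = ab over Σ = {a, b, x, y}.

```python
def mark_before(s, delta, start, finals, marker=">"):
    """Insert `marker` before every occurrence of rho in s, given a complete DFA
    for sigma* reverse(rho): mark the reversed string, then reverse back."""
    return mark_type1(s[::-1], delta, start, finals, marker)[::-1]

# Complete DFA for sigma* "ba" (reverse of rho = "ab") over {a, b, x, y}:
# state 1 = just read "b", state 2 = just read "ba" (final), default back to 0.
delta = {(q, c): 0 for q in (0, 1, 2) for c in "abxy"}
delta.update({(0, "b"): 1, (1, "b"): 1, (2, "b"): 1, (1, "a"): 2})
print(mark_before("xabyab", delta, 0, {2}))  # -> "x>aby>ab"
```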
Extension to Weighted Rules
In many applications, in particular in areas related to speech, one wishes not only to give all possible analyses of some input, but also to give some measure of how likely each of the analyses is. One can then generalize replacements by considering extended regular expressions, namely, using the terminology of formal language theory, rational power series (Berstel and Reutenauer, 1988; Salomaa and Soittola, 1978).
The rational power series we consider here are functions mapping Σ* to ℝ₊ ∪ {∞} which can be described by regular expressions over the alphabet (ℝ₊ ∪ {∞}) × Σ. S = (4a)(2b)*(3b) is an example of a rational power series. It defines a function in the following way: it associates a non-null number only with the strings recognized by the regular expression ab*b. This number is obtained by adding the coefficients involved in the recognition of the string. The value associated with abbb, for instance, is (S, abbb) = 4 + 2 + 2 + 3 = 11.
In general, such extended regular expressions can be redundant: some strings can be matched in different ways with distinct coefficients. The value associated with those strings is then the minimum of all possible results. S′ = (2a)(3b)(4b) + (5a)(3b*) matches abb with the two different weights 2 + 3 + 4 = 9 and 5 + 3 + 3 = 11. The minimum of the two is the value associated with abb: (S′, abb) = 9. The non-negative numbers in the definition of these power series are often interpreted as negative logarithms of probabilities. This explains our choice of operations: addition of the weights along the string recognition, and min, since we are only interested in the result that has the highest probability.⁹

⁸ As in the KK algorithm, we denote by φ_> the set of the strings described by φ, possibly containing occurrences of > at any position. In the same way, a subscript such as >:> on a transducer τ indicates that loops >:> are added at all states of τ. We denote by reverse(α) the regular expression describing exactly the reverse strings of α if α is a regular expression, or the reverse transducer of α if α is a transducer.
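The min-of-sums arithmetic is easy to state concretely. A toy sketch of ours (not the paper's code) reproducing the S′ example: each alternative match contributes the sum of its per-symbol weights, and min resolves the redundancy.

```python
import math

def series_value(paths):
    """Tropical-semiring value of a string: min over matching paths of the sum
    of per-symbol weights along each path; +inf if nothing matches."""
    return min((sum(p) for p in paths), default=math.inf)

# S' = (2a)(3b)(4b) + (5a)(3b*): the string "abb" is matched two ways.
print(series_value([[2, 3, 4], [5, 3, 3]]))  # -> 9, i.e. (S', abb) = 9
```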
Rewrite rules can be generalized by letting ψ be a rational power series. The result of the application of a generalized rule to a string is then a set of weighted strings, which can be represented by a weighted automaton. Consider for instance the following rule, which states that an abstract nasal, denoted N, is rewritten as m in the context of a following labial:
N → m / __ [+labial]    (8)
Now suppose that this is only probabilistically true, and that while ninety percent of the time N does indeed become m in this environment, about ten percent of the time in real speech it becomes n. Converting from probabilities to weights, one would say that N becomes m with weight α = −log(0.9), and n with weight β = −log(0.1), in the stated environment. One could represent this by the following rule:
N → αm + βn / __ [+labial]    (9)
We define weighted finite-state transducers as transducers in which, in addition to input and output labels, each transition is labeled with a weight.
The result of the application of a weighted transducer to a string, or more generally to an automaton, is a weighted automaton. The corresponding operation is similar to the unweighted case.
However, the weights of the transducer and those of the string or automaton need to be combined too, here added, during composition (Pereira et al., 1994).
⁹ Using the terminology of the theory of languages, the functions we consider here are power series defined on the tropical semiring (ℝ₊ ∪ {∞}, min, +, ∞, 0) (Kuich and Salomaa, 1986).

We have generalized the composition operation to the weighted case by introducing this combination of weights. The algorithm we described in the previous sections can then also be used to compile weighted rewrite rules.
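To make the weight bookkeeping concrete before looking at the transducer for rule (9), here is a toy sketch of ours: the probability-to-weight conversion via −log, and selection of the min-weight (most probable) analysis. The output strings are hypothetical illustrations, not from the paper.

```python
import math

# Weights for the two expansions of N before a labial, as in rule (9).
alpha = -math.log(0.9)   # N -> m, about 0.105
beta  = -math.log(0.1)   # N -> n, about 2.303

# Hypothetical weighted analyses of an input such as "kaNp" under the rule.
analyses = {"kamp": alpha, "kanp": beta}
best = min(analyses, key=analyses.get)
print(best, round(analyses[best], 3))   # -> kamp 0.105: the most probable output
```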
As an example, the obligatory rule (9) can be represented by the weighted transducer of Figure 8.¹⁰ The following theorem extends to the weighted case the assertion proved by Kaplan and Kay (1994).
Theorem 1. A weighted rewrite rule of the type defined above that does not rewrite its non-contextual part can be represented by a weighted finite-state transducer.
Proof. The construction we described in the previous section also provides a constructive proof of this theorem in the unweighted case. In case ψ is a power series, one simply needs to use in that construction a weighted finite-state transducer representing ψ. By definition of the composition of weighted transducers, or multiplication of power series, the weights are then used in a way consistent with the definition of the weighted context-dependent rules. □
Experiments
In order to compare the performance of the algorithm presented here with KK, we timed both algorithms on the compilation of individual rules taken from the following set (k ∈ [0, 10]):

a → b / cᵏ __        a → b / __ cᵏ

¹⁰ We here use the symbol ~ to denote all letters different from b, m, n, p, and N.
In other words, we tested twenty-two rules in which the left context or the right context is varied in length from zero to ten occurrences of c. For our experiments, we used the alphabet of a realistic application, the text analyzer for the Bell Laboratories German text-to-speech system, consisting of 194 labels. All tests were run on a Silicon Graphics IRIS Indigo 4000, 100 MHz IP20 processor, 128 MB RAM, running IRIX 5.2. Figure 9 shows the relative performance of the two algorithms for the left context: apparently the performance of both algorithms is roughly linear in the length of the left context, but KK has a worse constant, due to the larger number of operations involved. Figure 10 shows the equivalent data for the right context. At first glance the data look similar to those for the left context, until one notices that in Figure 10 we have plotted the time on a log scale: the KK algorithm is hyperexponential.
What is the reason for this performance degradation in the right context? The culprits turn out to be the two intersectands in the expression of Rightcontext(ρ, <, >) in Figure 1. Consider for example the right-hand intersectand, which is the complement of Σ₀* > ρ_{>0} Σ₀* − > Σ₀*. As previously indicated, the complementation algorithm requires determinization, and the determinization of automata representing expressions of the form Σ*α, where α is a regular expression, is often very expensive, especially when the expression α is already complex, as in this case.

Figure 11 plots the behavior of determinization on this expression for each of the rules in the set a → b / __ cᵏ (k ∈ [0, 10]). On the horizontal axis is the number of arcs of the non-deterministic input machine, and on the vertical axis the log of the number of arcs of the deterministic machine, i.e., the machine resulting from the determinization algorithm without any minimization. The perfect linearity indicates an exponential time and space behavior, and this in turn explains the observed difference in performance. In contrast, the construction of the right-context machine in our algorithm involves only the single determinization of the automaton representing Σ*ρ, and is thus much less expensive.

The comparison just discussed involves a rather artificial rule set, but the differences in performance that we have highlighted show up in real applications. Consider two sets of pronunciation rules from the Bell Laboratories German text-to-speech system; the size of the alphabet for this rule set is 194, as noted above. The first rule set, consisting of pronunciation rules for the orthographic vowel <ö>, contains twelve rules, and the second rule set, which deals with the orthographic vowel <a>, contains twenty-five rules. In the actual application of the rule compiler to these rules, one compiles the individual rules in each rule set one by one, composes them together in the order written, compacts them after each composition, and derives a single transducer for each set. When done off-line, these operations of composition and compaction dominate the time corresponding to the construction of the transducer for each individual rule. The difference between the two algorithms still appears clearly for these two sets of rules. Table 1 shows, for each algorithm, the times in seconds for the overall construction and the number of states and arcs of the output transducers.
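The exponential blowup seen in Figure 11 is the familiar subset-construction worst case for Σ*α-type expressions. The following self-contained illustration is our own sketch, using a textbook family of languages rather than the rule automata themselves: determinizing an NFA for "the k-th symbol from the end is a" yields 2^k reachable subsets.

```python
def subset_count(k):
    """Count the subsets reached when determinizing the NFA for sigma* a sigma^(k-1)
    over {a, b}: state 0 loops, 0 -a-> 1, i -> i+1 on any symbol, k is final."""
    def step(S, c):
        T = set()
        for q in S:
            if q == 0:
                T.add(0)
                if c == "a":
                    T.add(1)
            elif q < k:
                T.add(q + 1)
        return frozenset(T)

    start = frozenset({0})
    seen, stack = {start}, [start]
    while stack:
        S = stack.pop()
        for c in "ab":
            T = step(S, c)
            if T not in seen:
                seen.add(T)
                stack.append(T)
    return len(seen)

for k in range(1, 7):
    print(k, subset_count(k))   # 2, 4, 8, 16, 32, 64: growth is 2**k
```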
Conclusion
We have briefly described a new algorithm for compiling context-dependent rewrite rules into finite-state transducers. Several additional methods can be used to make this algorithm even more efficient. The automata determinizations needed for this algorithm are of a specific type: they represent expressions of the type Σ*φ, where φ is a regular expression. Given a deterministic automaton representing φ, such determinizations can be performed in a more efficient way using failure functions (Mohri, 1995). Moreover, the corresponding determinization is independent of Σ, which can be very large in some applications; it depends only on the alphabet of the automaton representing φ.
One can devise an on-the-fly implementation of the composition algorithm leading to the final transducer representing a rule. Only the necessary part of the intermediate transducers is then expanded for a given input (Pereira et al., 1994).
The resulting transducer representing a rule is often subsequentiable or p-subsequentiable. It can then be determinized and minimized (Mohri, 1994). This both makes the use of the transducer time-efficient and reduces its size.
We also indicated an extension of the theory of rule-compilation to the case of weighted rules, which compile into weighted finite-state transducers. Many algorithms used in the finite-state theory and in their applications to natural language processing can be extended in the same way.
To date the main serious application of this compiler has been to developing text-analyzers for text-to-speech systems at Bell Laboratories (Sproat, 1996): partial to more-or-less complete analyzers have been built for Spanish, Italian, French, Romanian, German, Russian, Mandarin and Japanese. However, we hope to also be able to use the compiler in serious applications in speech | 1996-06-19T18:35:56.000Z | 1996-06-19T00:00:00.000 | {
"year": 1996,
"sha1": "30cd6bab1201af143cd7c7e1520c3d334642c9ce",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=981894&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "3e80e8b110951607ff4e7d1cf50c9fe7bcb2c381",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
233356475 | pes2o/s2orc | v3-fos-license | The Use of Indirect Immune-fluorescence Antibody Testing (IFAT) IgM And IgG In the Diagnosis of Melioidosis
Introduction: Melioidosis is an important public health disease caused by Burkholderia pseudomallei. Early laboratory diagnosis is crucial for appropriate treatment due to its high mortality rate. Objective: This study was conducted to assess the potential role of the in-house IFAT IgM and IgG as serodiagnostic tools in melioidosis and to determine the cut-off levels. Method: 40 culture-confirmed melioidosis patients were recruited. Controls consisted of a group of 40 patients without active infection and another group of 40 patients with positive blood cultures for organisms other than Burkholderia pseudomallei. Results and Discussion: Using the receiver operating characteristic (ROC) curve, the best cut-off levels to diagnose melioidosis were determined as 1:20 for IgM and 1:80 for IgG. At these cut-off levels, the sensitivity and specificity are 72.5% and 80% respectively for IgM, and 65% and 87.5% respectively for IgG, which also has high background seropositivity. Conclusion: IFAT IgM at the cut-off level of 1:20 is recommended for diagnosis.
Introduction
It has been ten decades since the melioidosis outbreak in Pahang with its consequent 8 fatalities, 1 and melioidosis remains a potentially fatal endemic infectious disease in Southeast Asia and Northern Australia. 2 Direct exposure to contaminated soil and surface water is a well-known transmission mode of this deadly disease caused by Burkholderia pseudomallei (B. pseudomallei). 3,4 B. pseudomallei is a facultative intracellular Gram-negative rod that is able to grow on routinely used microbial media such as blood agar, MacConkey agar, and nutrient agar upon incubation at 35 to 37 ºC. Thus, the conventional culture method still remains the gold standard for definitive diagnosis of melioidosis despite its poor sensitivity (60.2%). 5 However, the culture result is only available after 3 to 5 days, hence the resultant delay in administering appropriate treatment to infected patients. 6 In the meantime, its diverse clinical manifestations pose further challenges to clinical diagnosis, rendering further difficulty in instituting treatment. 7 In some cases, the bacterium is not always isolated, 2 and this may cause further dilemma for clinicians in deciding on the continuation of prolonged maintenance therapy in patients who show good responses to the initial, empirical therapy.
Early detection of the causative agent is life-saving, especially in septicaemic patients. Therefore, serological tests are often employed for a rapid diagnosis of melioidosis. These antigen tests are performed directly on clinical specimens such as serum, urine, and sputum, with results available within a day. 8 The Indirect Hemagglutination Assay (IHA) test has been widely used in endemic regions such as Northern Australia and Northeast Thailand. 9,10 This assay is developed and established by "in-house" protocols and generally has poor sensitivity and specificity due to the weak immunogenicity of the antigens used in its preparation. 11 IHA testing is not encouraged as a diagnostic tool in endemic areas due to high seropositivity in healthy subjects who are likely repetitively exposed to B. pseudomallei. 12 Meanwhile, ELISA testing is yet to be recognized as a reliable serodiagnostic tool, as the ideal antigen(s) to be used in this method have yet to be identified. 13 On the contrary, Indirect Fluorescent Antibody Testing (IFAT) has been used in Malaysia for many years since it was first described by Vadivelu et al. in 1995. 14 In comparison to IHA, this assay is able to give specific antibody titers of individual immunoglobulin M (IgM) or immunoglobulin G (IgG), or both. Currently, IFAT IgM testing is performed on samples from hospitals all over the country at the Institute for Medical Research (IMR). It is used concomitantly with culture for a better diagnostic yield in melioidosis. However, this test has not yet been well validated in any prospective clinical trial. Hence, the objective of the current study is to evaluate the potential role and efficacy of the in-house IgM and IgG IFAT methods in the diagnosis of melioidosis and to determine the diagnostic cut-off levels among Malaysian patients.
Bacterial Strains
The strain of B. pseudomallei used in IFAT was obtained from the blood culture of a patient at Hospital Tengku Ampuan Afzan (HTAA), Kuantan, Pahang. This strain was identified using Francis medium, 15 conventional biochemical assimilation tests, and the API 20NE system (bioMérieux, France). A bacterial suspension was prepared from pure colonies in Trypticase Soy Broth (TSB), which was heat-killed before being used as the antigen in IFAT.
Melioidosis patients and controls
This study was conducted using sera collected from November 2014 to November 2015. A total of 120 patients were recruited. 40 of them were culture-confirmed melioidosis cases (28 from HTAA and the remaining 12 from Hospital Sultanah Nur Zahirah (HSNZ), Kuala Terengganu, Terengganu). 80 patients were recruited as control subjects, consisting of 40 consecutively selected patients with positive blood cultures for bacteria other than B. pseudomallei and another 40 healthy subjects (without any clinically evident infection) who came for routine blood investigations during their hypertension or diabetes clinic visits. Sample collection for melioidosis patients was done on day 1 (±2 days) of the culture-positive results.
Indirect Immunofluorescent Antibody Testing (IFAT)
The IFAT was carried out as described by Ashdown 16 with modifications. Briefly, the bacterial antigen was washed and resuspended in phosphate-buffered saline (PBS), pH 8.5. A working antigen was then prepared using PBS (pH 7.3) and coated onto 12-well Teflon-coated slides before being air-dried and fixed with cold acetone. Patients' sera were serially diluted two-fold in PBS (pH 7.3) from 1:10 to 1:160; each dilution was then overlaid onto the antigen wells and incubated at 37 ºC for 30 minutes in a moist chamber. Fluorescein isothiocyanate (FITC)-tagged anti-human globulin IgM- and IgG-specific conjugates (Kirkegaard & Perry Laboratories (KPL), United States) were each separately added after washing the slides three times with PBS (pH 7.3), and the slides were subsequently incubated for a further 30 minutes in a moist chamber at 37 ºC. The slides were then washed with PBS (pH 7.3) and mounted using buffered glycerol. Positive and negative sera were included in each batch of tests as controls. Finally, the stained slides were examined under a fluorescent microscope at 40X magnification.
A positive result was determined by the appearance of apple-green fluorescence of the bacilli of B. pseudomallei. If high positive cell counts were noted at dilution 1:160, testing at the further higher dilutions of 1:320, 1:640, and 1:1280 was performed. Meanwhile, if a negative result was noted at dilution 1:10, it was recorded as <10.
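The dilution series used above is a simple geometric progression; a trivial sketch of ours showing the screening and extension titers:

```python
# Two-fold serial dilutions: 1:10 through 1:160 for screening, extended to
# 1:1280 when the 1:160 well is still positive.
screening = [10 * 2**i for i in range(5)]    # [10, 20, 40, 80, 160]
extension = [320 * 2**i for i in range(3)]   # [320, 640, 1280]
print(screening, extension)
```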
Statistical Analysis
Demographic data of the patients with melioidosis and the subjects in the control groups were compared using one-way ANOVA for age and the Chi-square test for gender and diabetic status. In order to perform valid statistical analyses on the IgM and IgG levels, the antibody titers in each immunoglobulin class were log-transformed, with means expressed as geometric means (GM) and standard deviations as geometric standard deviations (GSD). 17 For this purpose, titers of <10 were presumed equal to 5 to avoid any missing values during the statistical analysis.
The optimum cut-off values for IFAT-IgM and IFAT-IgG were determined using the Receiver Operating Characteristic (ROC) curve. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. The seropositivity rates for specific IgM and IgG antibodies between the melioidosis and non-melioidosis (control) groups were then compared by the Chi-square method. All data obtained in this study were analyzed using SPSS Version 21 for Windows. P values of <0.05 were considered statistically significant.
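A minimal sketch of the computations described above, written by us in Python rather than the authors' SPSS workflow; the titer values shown are hypothetical. It covers the geometric mean with `<10` coded as 5, and sensitivity, specificity, PPV, and NPV for a candidate cut-off.

```python
import math

def geometric_mean(titers):
    """Geometric mean of titers; values below 10 are coded as 5, as above."""
    logs = [math.log(t if t >= 10 else 5) for t in titers]
    return math.exp(sum(logs) / len(logs))

def performance(cases, controls, cutoff):
    """Sensitivity, specificity, PPV, and NPV of `titer >= cutoff` as a test."""
    tp = sum(t >= cutoff for t in cases)
    fn = len(cases) - tp
    fp = sum(t >= cutoff for t in controls)
    tn = len(controls) - fp
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

cases    = [20, 40, 80, 5, 160, 20, 40]   # hypothetical melioidosis titers
controls = [5, 5, 10, 20, 5, 10, 5]       # hypothetical control titers
print(round(geometric_mean(cases), 1))
for c in (10, 20, 40, 80):
    print(c, [round(x, 2) for x in performance(cases, controls, c)])
```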
Results
All 40 patients with melioidosis in this study are culture-positive for blood (40). Nine of them are also positive on pus/tissue (5), sputum (3), and knee joint aspirate (1) cultures. The majority of the cases are newly diagnosed (38) and only 2 are re-infections. As for the control group, 18 have positive cultures for Gram-positive bacteria while 22 are culture-positive for Gram-negative bacteria. There is no difference in the mean age and ethnic group distribution between the melioidosis patients (M) and the control groups (Table 1). However, we observe that there are significantly more diabetic patients among the melioidosis cases. The sensitivity and specificity of IFAT IgM at the 1:20 titer are 72.5% and 80.0% respectively (Table 3). Meanwhile, the sensitivity and specificity of IFAT IgG at the 1:80 titer are 65.0% and 87.5% respectively, with positive and negative predictive values of 72.2% and 83.3% respectively (Table 3). Table 4 shows that, despite being infected, 27.5% and 35.0% of patients with melioidosis have negative (below the cut-off value) serological responses for IgM and IgG respectively at diagnosis (p < 0.001). Using the cut-off level of 1:20 for IgM, 72.5% of culture-confirmed patients are seropositive, as compared to 10% in the Cp group and 30% in the Ca group. On the other hand, for IgG, 65% of culture-confirmed patients are seropositive at the 1:80 cut-off level, as compared to 17.5% in the Cp group and 7.5% in the Ca group. These differences are statistically significant.
Discussion
The cut-off value for IFAT-IgM in the diagnosis of melioidosis is determined from this study as 1:20, based on the ROC curve. This level is lower than recommended by earlier studies. A lower cut-off value will increase the rate of detection, which matters since the incidence of melioidosis in Malaysia, particularly in Pahang state, is relatively high (4.3 per 100,000) and the disease carries high morbidity and mortality rates (44%). 15 This cut-off value maximizes the sensitivity of the test while not encroaching upon its specificity, and thus will help prevent underdiagnosis of melioidosis. Based on our ROC curves, the cut-off value for IFAT-IgG is determined at a higher level, 1:80, as compared to IFAT-IgM. This is due to the higher background of positive IgG in patients without melioidosis (the control group). Our findings confirm another previous local study which showed that IFAT-IgG titers are of most value for prognostic rather than diagnostic purposes. 16 A study done elsewhere also concluded that IFAT-IgM is the most useful marker for the diagnosis of an active infection. 17 Although IFAT-IgM is more appropriate than IFAT-IgG for the diagnosis of melioidosis, 18 the IFAT-IgG value is still indispensable when IFAT-IgM is negative.
Furthermore, the area under the ROC curve of about 0.810 for both IFAT-IgM and IFAT-IgG, being more than 0.80, suggests that the IFAT method performs well in discriminating between melioidosis patients and control subjects. 19 The cut-off values for IFAT used in previous studies varied, ranging from 16 18 and 40 11,17 to 80. 19
All of these values were determined via methods other than ROC analysis, which has been extensively utilized in the evaluation of diagnostic tests as one of the best available methods. 20 Therefore, more data are needed to validate the cut-off values via case-control studies.
Based on the selected cut-off values for IFAT-IgM and IFAT-IgG, the majority of control subjects (>70%) have no detectable serological evidence (either IgM or IgG) of melioidosis infection. However, 27% to 35% of melioidosis patients have poor or undetectable serological responses, which is probably related to their poorly controlled diabetes mellitus 21 or to overwhelming sepsis. 22 Thus, IFAT, and perhaps other similar serological tests, will only be useful as complementary tests. Meanwhile, seropositivity is also observed among control subjects; this is probably due to previous single or repeated exposure to a source of infection 3 with B. pseudomallei, but without any clinical symptoms. Moreover, the presence of seropositivity among residents of an endemic area is relatively common and is encountered in any serological assay. 7,22 The IgM and IgG seropositivity detected among control subjects in this study is unlikely to be due to cross-reactivity of IFAT-IgM and IFAT-IgG with other bacterial infections, since similar seropositivity is also detected among control subjects without apparent infection.
The two cut-off values determined in this study are more applicable in endemic areas than in non-endemic or low-endemicity areas, where specificity should take priority when selecting the optimal cut-off value. Ultimately, the interpretation of IFAT results in melioidosis should be made in the context of sound clinical judgment, after detailed history taking and physical examination of a suspected case, considering the presence of risk factors, especially diabetes mellitus and exposure to contaminated soil or water. A serological test in general is to be considered only as a guide rather than a stand-alone diagnostic test, especially one such as IFAT with modest sensitivity and specificity. Nevertheless, the IFAT IgM level is shown to be better than IgG in the diagnosis of melioidosis in the present study.
Conclusion
The in-house IFAT IgM and IgG assays are useful for the early serodiagnosis of suspected melioidosis patients at cut-off values of 1:20 and 1:80 respectively. IgM is shown to be a better indicator than IgG and is thus recommended. | 2021-04-22T19:08:46.605Z | 2021-02-06T00:00:00.000 | {
"year": 2021,
"sha1": "8f608be3643e7186bc6fa4d308986319a3d08100",
"oa_license": "CCBY",
"oa_url": "http://ijhhsfimaweb.info/index.php/IJHHS/article/download/280/279",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8f608be3643e7186bc6fa4d308986319a3d08100",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |