Effects of drama method on speaking anxieties of pre-service teachers and their opinions about the method
The aim of this study is to determine the effects of the drama method on speaking anxieties of pre-service teachers and their opinions about the method. The study used a mixed-methods approach combining an experimental (quantitative) design with a basic qualitative design. The study was carried out with 77 first-year students from day-time and evening education programs at Kazım Karabekir Faculty of Education, Atatürk University. The Speaking Anxiety Scale (SAS), developed by Sevim, was used to collect the data. Paired and independent samples t-tests were used for the analysis of the research data. The results of the study revealed that the drama method was more effective than the activities carried out according to the present curriculum in helping students control their speaking anxieties.
INTRODUCTION
What makes humans different from other living creatures is their ability to think. As thinking beings, humans want to share their needs, wishes, thoughts, dreams, sadness, and happiness with the members of the society or the group they belong to. Speaking is the language skill which allows humans to express what they know, think, and feel in the shortest way. While speaking ability is an inborn talent, it is also possible to learn and develop it in time (Clinton, 1992). This possibility is both psychological and physical. Because of this feature, speaking is also defined as a psychophysical process led by the movements of the muscles (Taser, 2001).
Speaking is the verbal expression of plans, wishes, feelings, and thoughts. In other words, it is the verbal transfer of a subject after it has been planned in the mind (Temizkan, 2009). Speaking is also a reflection of personality formation, mental development, and social skills (Sever, 2000). Studies show that people spend 50 to 80% of their daily lives communicating, with approximately 9% of their communication time spent writing, 30% speaking, and 45% listening (Nalıncı, 2000).
Speaking is an important skill for people to build and develop strong social relations (Liddicoat, 2009). If a person has a weak speaking skill, his or her other language skills are badly affected as well. Speaking, one of the four communication skills, is the basis for language abilities. It is usually a sign of people's educational and cultural levels and social status. It is speaking ability, rather than ideas alone, that should be used effectively to build strong social relations (Özbay, 2003).
Speaking plays an important role in people's development of communication and social skills. However, it may be affected positively or negatively by certain speaker-oriented and environment-oriented anxiety factors (Sevim, 2012). Anxiety can be defined as a state of worry or uneasiness whose source is not conscious but whose effects are felt consciously and which causes psychological and physiological tension (Kartopu, 2012).
Excessive anxiety which starts before speaking and continues during the speech not only ruins all of an individual's mental plans about the speech but also makes it difficult for them to express what they want to say. The important point here is that individuals must be able to control the factors which cause their anxieties and to evaluate these factors positively, as it is not anxiety itself but excessive anxiety that ruins the individual's speaking plan and mental design.
While some researchers (Alpert and Haber, 1960; Scovel, 1978; Yaman, 2010) claim that anxiety has a positive effect on an individual's daily activities and has constructive functions which prepare and warn the individual against the negative effects of the outer world, other researchers (Atabek, 2000) mention the destructive functions of anxiety, which affect individuals' lives negatively and make them feel as if there is something dangerous in their work, although there is not. Examining the results of studies on the relationship between anxiety and success, Scovel (1978) points out the difference between facilitating and debilitating anxiety. Facilitating anxiety encourages individuals to stand against the difficulties in a learning environment, while debilitating anxiety leads individuals to adopt avoidance behavior toward new information (Alpert and Haber, 1960). Facilitating and debilitating anxiety factors have the same effect on speaking ability: while excessive anxiety hinders individuals' speaking success, an adequate level of anxiety facilitates their speech.
Educational environments are places in which individuals are systematically presented with activities developed to control anxiety and in which they can make use of its facilitating side. Dramatization is one of the most effective techniques to be employed in educational environments to address the anxiety problem which individuals very often face in daily life. The term "dramatization" is derived from the verb "dran", meaning "to do" or "to act". It was transferred to Turkish from French as "dram". It is usually referred to as "dramatization" in the literature on education (Önder, 1999).
In the drama method, some students are performers and some are audience members. Students in the audience experience the situations in the drama, while performer students both act out and experience the events (Bilen, 2000). Audience members watch their friends and evaluate the situation, while performers try to reach the target acquisition in front of a community. This increases students' levels of motivation. Drama, meaning "to push", "to do", "to pull", "to practice", and "to make", includes any kind of activity directed towards action. This allows students to express themselves in the best way and gives them the chance to analyze the comments of their friends (Adıgüzel, 2007).
Using drama in educational environments improves students' comprehension and communication skills and is also very useful in helping students gain the basic skills stated in contemporary curricula. Students taking part in drama activities have the opportunity to acquire skills like self-confidence, self-knowledge, creative and critical thinking, and problem solving while having fun. Drama prepares occasions for students to acquire these skills. By allowing them to use their comprehension and communication skills freely (Crumpler and Jasinski, 2002), drama helps control the anxiety factors which can cause problems, especially in speaking (Nixon, 1987).
Another important feature of drama is that it is an effective method that can be used not only in primary and secondary school classes but also in pre-service teacher training. It is important for students that the teacher, who is a role model in an educational environment, uses speaking skills effectively (Otoshi and Heffernen, 2008), because what is taught in a class environment is usually embodied by the teacher. An important factor for successful education on speaking is that the teacher should express his or her thoughts confidently and fluently, with no feeling of excessive anxiety (Temiz, 2013). For this reason, teachers should join drama activities during pre-service training before they start working, as this is considered helpful for solving possible anxiety problems in the future.
The effects of drama on learning outcomes were studied in various fields by many researchers. When the related literature was reviewed in terms of drama, it was seen that the effects of the drama method on developing students' listening skills (Köklü, 2003), emotional intelligence (Özdemir, 2003), writing skills (Karakus, 2000; Kara, 2011), creative thinking skills (Kara, 2000), imaginative language skills (Cebi, 1996), and comprehension skills (Kırmızı, 2008) were examined. It was also seen that there were studies carried out to examine the effects of drama on speaking abilities of pre-service teachers (Aykac and Cetinkaya, 2013; Oztürk, 1997; Tümtürk, 2000). However, when studies based on drama were taken into consideration in general, it was seen that there was no research conducted to examine the effects of drama on speaking anxieties of pre-service teachers. The aim of this study was to investigate the effects of the drama method on speaking anxieties of pre-service teachers. In line with this purpose, the following research questions were addressed in the study:
1. When compared within and between groups, is there a significant difference between the pretest speaking anxiety mean scores of the control group, to which the current curriculum was applied, and those of the experimental group, to which creative drama was applied?
2. When compared within and between groups, is there a significant difference between the posttest speaking anxiety mean scores of the control group, to which the current curriculum was applied, and those of the experimental group, to which creative drama was applied?
3. What are the experimental group pre-service teachers' views about the drama application?
Design of the study
This study, examining the effects of the drama method on speaking anxiety, was modeled on the quasi-experimental design used in quantitative studies. While matching the two groups that took the course of Oral Communication, the participants' average academic achievement scores in the previous academic term and their average pretest speaking anxiety scores were taken into account. The results of the independent samples t-test showed that the groups were close to each other in terms of both achievement levels and speaking anxiety levels. Thus, on a random basis, one of the groups was determined as the control group and the other as the experimental group. Such quasi-experimental designs are called matched designs (Büyüköztürk et al., 2010: 206).
Study group
The study was conducted with 77 students who took the course of Oral Communication in the Department of Turkish Language Education at Kazım Karabekir Faculty of Education, Atatürk University, in the spring term of the 2012-2013 academic year. The experimental group consisted of 44 freshman students in daytime education, while the control group was made up of 43 freshman students in evening education.
Data collection
The data were collected via the "Speaking Anxiety Scale (SAS)" developed by Sevim (2012) and via the Drama Activities Interview Form (DAIF) prepared by the researcher.
SAS included 20 items structured according to a five-point Likert-type scale ("1" Never, "2" Rarely, "3" Sometimes, "4" Usually, "5" Always). The scale was checked in terms of its content validity and construct validity. Exploratory factor analysis, item-total correlation coefficients, and item discrimination were used for construct validity. Following these analyses, it was found that the scale items were grouped under three factors. The difference between the scores of the bottom 27% and top 27% groups was analyzed with a t-test. The results revealed that the internal consistency of the items was high. The Cronbach alpha reliability coefficient was used to check the reliability of the scale and was calculated as 0.912. Accordingly, it could be stated that the scale was highly reliable. The scores that could be taken from the scale ranged between 20 and 100. SAS was applied to the experimental and control groups both before and after the application. The study lasted for 12 weeks.

DAIF, developed by the researcher, was used to determine the pre-service teachers' views about drama activities. In the development process of DAIF, the related literature was reviewed, and an item pool consisting of 8 questions was created. Two faculty members who were experts in the field of Turkish Language Education and one faculty member who was an expert in assessment and evaluation were asked for their views. As a result, four questions were excluded from the item pool. The remaining questions were directed to the pre-service teachers during the semi-structured interviews. All 44 students of the experimental group were interviewed, and each interview lasted approximately 6-8 min.
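As an illustration of the two psychometric computations mentioned above, the sketch below computes Cronbach's alpha and forms the bottom/top 27% groups used for item discrimination. This is a minimal sketch in Python; the score matrix is randomly generated for demonstration and does not reproduce the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def extreme_groups(total_scores: np.ndarray, fraction: float = 0.27):
    """Indices of the bottom and top `fraction` of respondents by total score."""
    order = np.argsort(total_scores)
    n = max(1, int(round(fraction * len(total_scores))))
    return order[:n], order[-n:]

# Hypothetical data: respondents x 20 five-point Likert items.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(87, 20))
alpha = cronbach_alpha(scores)                    # the study reports 0.912
low_idx, high_idx = extreme_groups(scores.sum(axis=1))
```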
The course of Oral Communication was conducted according to the current curriculum in the control group to determine the effects of the drama method on the speaking anxiety of the students. The process followed in the study was as follows:
1. SAS, developed by Sevim (2012) to measure the effects of the drama method on the students, was applied as a pretest to the freshman students taking daytime classes and to those taking evening classes in the Department of Turkish Language Teaching. The data collected from the pretest were compared with the independent samples t-test to determine if there was a significant difference between the groups. The results revealed no significant difference between the groups in terms of average anxiety scores. The students in daytime education were determined as the experimental group, and those in evening education were defined as the control group, in accordance with the quasi-experimental design.
2. The students were provided with basic information about drama to meet their needs for theoretical knowledge during the first two weeks (four hours in total). During these presentations, the students focused on things they should pay attention to during the drama activities and were provided with answers to their questions regarding the activities. After the students were informed about the basics of drama, the weeks during which the groups were expected to perform their plays were determined randomly.
3. The students in the experimental group were asked to create groups with which they would work together for 12 weeks. The groups were limited to 10 students. There was no intervention by the researcher while the experimental group students were forming the groups. The students reached an agreement among themselves and created 10 groups with which they would work together during the study. The researcher monitored the grouping process.
4. The drama groups made use of the creative writing technique while preparing their plays. After they wrote their scenarios, a copy was presented to the researcher at the end of the drama activity. While the drama groups stuck to the scenarios they wrote, they also made use of improvisations depending on the flow of the play.
5. The researcher did not intervene in the drama groups' choice of subjects for their plays. The basic reason was not to restrict the students' creative and free thinking.
6. The order in which the drama groups would perform their plays was determined by drawing lots. After the draw, each group continued its work until it was its turn to perform. A performance was given every week, and after each performance, all drama groups came together and criticized the play of that week. In this way, all drama groups took part in the drama activity. All drama groups provided the researcher with a drama report describing the experiences of the group members and a CD on which the plays were recorded. It took 10 weeks (20 class hours) for all drama groups to perform the plays they had prepared.
7. The instructional activities in the control group were conducted according to the learning contents of the course of Oral Communication in the current curriculum for 12 weeks (24 class hours).
8. At the end of 12 weeks, SAS was applied as a posttest to the experimental and control groups, and the study was ended.
Data analysis
Speaking anxiety levels of the students in the experimental and control groups were determined according to the evaluation intervals for arithmetic means developed by Bascı and Gündogdu (2011). In this evaluation interval, each coefficient was calculated as 0.80, and the score interval was found to be 16 (Table 1).
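The interval arithmetic behind Table 1 can be made explicit: a 20-item, five-point scale yields total scores from 20 to 100, and dividing that 80-point range into five bands gives the reported interval of 16, with (5 - 1)/5 = 0.80 per item. A small sketch, assuming equal-width bands:

```python
n_items, n_points = 20, 5                 # SAS: 20 five-point Likert items
item_width = (n_points - 1) / n_points    # 0.80, the reported coefficient
lo, hi = n_items * 1, n_items * n_points  # total scores range from 20 to 100
band_width = (hi - lo) / n_points         # 16, the reported score interval
bands = [(lo + i * band_width, lo + (i + 1) * band_width) for i in range(n_points)]
print(bands)  # [(20.0, 36.0), (36.0, 52.0), (52.0, 68.0), (68.0, 84.0), (84.0, 100.0)]
```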
In the data analysis, the pretest and posttest scores were first examined for normality to determine which tests to use. As the number of participants in each group was lower than 50, the Shapiro-Wilk normality test was used. The results of the Shapiro-Wilk test showed that the research data were normally distributed. For the analysis of the pretest and posttest data gathered via the Speaking Anxiety Scale, the paired samples t-test was used for comparisons within groups, while the independent samples t-test was used for comparisons between groups.
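A minimal sketch of this analysis pipeline using SciPy; the sample arrays and function name are hypothetical:

```python
from scipy import stats

def analyze(pretest, posttest, other_group_posttest, alpha=0.05):
    # Shapiro-Wilk normality check, as used for the small samples here
    for sample in (pretest, posttest, other_group_posttest):
        if stats.shapiro(sample).pvalue <= alpha:
            raise ValueError("data not normal; switch to non-parametric tests")
    within = stats.ttest_rel(pretest, posttest)                 # paired samples t-test
    between = stats.ttest_ind(posttest, other_group_posttest)   # independent samples t-test
    return within, between
```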
For the analysis of the data collected via DAIF, the descriptive analysis method, one of the qualitative data analysis techniques, was used. The responses of the pre-service teachers to each question in the form were examined within the context of the related question, and codes were created. These interrelated codes were then gathered under upper themes and expressed in tables with frequencies and percentages.
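A frequency-and-percentage table of this kind is a simple counting exercise; a sketch with hypothetical codes:

```python
from collections import Counter

# Hypothetical codes assigned to interview responses for one question.
coded_responses = [
    "free atmosphere", "interesting", "free atmosphere",
    "high attendance", "interesting", "free atmosphere",
]
freq = Counter(coded_responses)
total = sum(freq.values())
for code, f in freq.most_common():
    print(f"{code}: f = {f} ({100 * f / total:.1f}%)")
```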
Findings related to the comparison of the pretest speaking anxiety mean scores of the experimental and control groups
The pretest speaking anxiety scores of the experimental group, to which the drama method was applied, and those of the control group, to which the current curriculum was applied, were analyzed with the independent samples t-test. As a result, no significant difference was found between the groups (t = 0.427; p = 0.67 > 0.05).
When the results presented in Table 2 were examined, it was seen that the pretest speaking anxiety mean score of the experimental group was 70.95 and that of the control group was 70.16. This shows that the groups had similar features in terms of speaking anxiety at the beginning of the study. When the mean scores of the groups were taken into account, it was seen that the groups had a high level of speaking anxiety in the initial phase of the study.
Findings related to the comparison of the pretest and posttest speaking anxiety mean scores of the students in the control group
The teaching process was conducted in accordance with the current curriculum based on the learning contents of the course of Oral Communication, and a statistical analysis was conducted to determine the effects of this teaching process on the speaking anxiety of the students in the control group, as can be seen in Table 3.
When the results presented in Table 3 were taken into consideration, it was seen that there was no significant difference between the pretest and posttest speaking anxiety mean scores of the control group (t = 1.758; p = 0.08 > 0.05). Based on the pretest and posttest mean scores, it could be stated that the speaking anxiety level of the students in the control group remained high, although the application period decreased the students' anxiety by about 3 points, and that the current curriculum was not significantly effective in solving the speaking anxiety problem.
Findings related to the comparison of the pretest and posttest speaking anxiety mean scores of the students in the experimental group
The results of the paired samples t-test conducted to determine the effects of the drama method on the speaking anxiety of the students in the experimental group can be seen in Table 4.
When the data presented in Table 4 were examined, it was seen that there was a significant difference in favor of the posttest between the pretest and posttest speaking anxiety mean scores of the experimental group (t = 9.666; p = 0.00 < 0.05). Based on the pretest and posttest mean scores, it could be stated that the application period had considerable influence on the students in the experimental group and that the speaking anxiety mean score, which was high prior to the application, was considerably lower following the application of the drama method.
Findings related to the comparison of the posttest speaking anxiety mean scores of the experimental and control groups
The posttest speaking anxiety mean score of the control group, in which the course subjects of Oral Communication were taught in accordance with the current curriculum, and that of the experimental group, in which the subjects were taught with the drama method, are presented in Table 5.
When Table 5 was examined, it was seen that there was a significant difference in favor of the experimental group between the posttest speaking anxiety mean scores of the control and experimental groups (t = -3.629; p = 0.00 < 0.05). There was a 5-point difference between the posttest speaking anxiety mean scores of the experimental and control groups. This shows that the drama method was much more effective in normalizing the students' speaking anxiety.
The experimental group pre-service teachers' views about the drama activities
Table 6 presents the findings regarding the pre-service teachers' responses to the first question in the interview form: "What are the differences between the activities carried out according to the drama method and the previous class activities?" When Table 6 was examined, it was seen that the pre-service teachers focused on features of drama such as its being interesting and its creating a free and sincere atmosphere with high attendance. These features of drama enabled the prospective teachers to participate in the process actively and gave them opportunities to use effective communication skills. The natural feature of drama activities, which makes it necessary to use oral communication skills effectively, was an important factor in normalizing the speaking anxiety of the prospective teachers. The opinions of Participant 23 regarding the interesting and free atmosphere created by the drama activities were as follows: "The attention of the whole class was directed at the play when the drama activities were done in class. We were really having a lot of fun while watching the plays our friends prepared. All drama groups were able to write plays about any subject they wanted. There were also no restrictions when the plays were performed. I put into words and experienced most of the things that I had thought of but couldn't say in my own play" (Participant 23).
Findings related to the answers of the prospective teachers to the second item in the interview form "How has this teaching period affected your communication with your friends?" are shown in Table 7.
When Table 7 is examined, it is seen that the shyness which is common in the speaking anxiety of the prospective teachers decreased, and it is understood that the prospective teachers believe drama has important effects on expressing opinions freely. The opinions of the prospective teachers related to the second question can be seen as reflections of the drama activities on the communication the individuals had after the activities were carried out according to drama principles. The opinions of Participant 17 related to the second question in the interview form were: "There were times I was very surprised during the drama activities. I witnessed that some of my friends displayed performances which I wouldn't expect of them. I had never thought that especially M…… would be such a sociable person who expresses himself very easily, because M…… is normally a very shy person who talks little. That is true for all my friends like him. During the plays, some of my friends were like different people" (Participant 17).

Findings related to the answers of the prospective teachers to the third item in the interview form, "Do you think drama activities have improved you with regard to your future job? If yes, how?", are shown in Table 8.
Table 6. Findings obtained from the first interview question (f).
2. Drama activities are conducted in a free atmosphere (25)
3. Drama activities are conducted in a more sincere atmosphere (21)
4. The attendance was higher during the drama activities (20)
5. The students are aware that lessons are being taught (14)
6. The process is in the control of the students during drama activities (13)
7. Drama activities are for creativity (9)
8. What drama activities teach is more permanent (7)
9. Drama activities are not boring (5)
Total: 147

Table 7. Findings obtained from the second interview question (f).
2. The expression of opinions freely (26)
3. Getting together very often for drama rehearsals (15)
4. Enjoyable process of scenario writing (13)
5. Looking for solutions together to the problems confronted during the rehearsals (12)
6. Respecting the opinion of each group member (10)
7. Tolerating the criticism of the other students in the class (9)
8. Development of the empathy skill (7)
9. Increase in the sincerity of the relations (4)
10. Increase in the shared experiences (3)
Total: 127

Table 8. Findings obtained from the third interview question (f).
1. The management of the process effectively (23)
2. The expression of the opinions freely (20)
3. The development of different views (18)
4. The acquisition of planned studying skills (17)
5. The development of empathy skills (10)
6. Creating solutions after problems are detected (6)
7. Awareness of the value of each opinion (4)
8. Effective use of body language (3)
9. Development of assessment skills (1)
Total: 102

When Table 8 is examined, it is seen that the prospective teachers believe the drama method was effective in developing occupational skills such as process management, which is an important skill in teaching, expressing ideas effectively, developing different views of things, and acquiring the habit of planned studying. The opinions of Participant 31 related to the third question in the interview form were: "I believe it was a great idea to create drama groups. Working cooperatively while writing the scenario and dealing with the technical issues related to the play relieved us. The whole period, from the writing of the scenario to its performance, was planned by the group members. Sometimes there were serious discussions. However, nobody was silenced or isolated during these discussions. I tried to understand my friends even when I was really angry and depressed. Then, I applied things I had never applied before to my drama work" (Participant 31).
Table 9 presents the findings related to the answers of the prospective teachers to the fourth item in the interview form: "How would you evaluate yourself after participating in the drama activities?" When Table 9 was examined, it was seen that the pre-service teachers defined themselves after the drama activities as free, self-confident, comfortable, creative, imaginative, aware of their feelings, and able to solve problems. Regarding the fourth question in the interview form, the opinions of Participant 5 were: "I had a feeling of fear at first when they told us that we would have some drama activities in class, because I had never done that before. I noticed that my friends liked my ideas for the play when we got together to write the scenario. We were open to any opinion put forward, even ones that could be described as absurd. I saw how important it is to trust yourself after we performed our play" (Participant 5). It was not necessary to present the views of all the participants here, since they are summarized in tables with their frequency values. However, the analysis of the interview data revealed that drama positively affected the speaking anxiety of the pre-service teachers. Following the drama activities, the pre-service teachers felt free, comfortable, creative, and aware of their feelings, and they were able to control and normalize their speaking anxieties; these results support the quantitative findings obtained via the experimental process in the study.
DISCUSSION, CONCLUSION, AND SUGGESTIONS
This quasi-experimental study with a control group, examining the effects of the drama method on the speaking anxiety of students, revealed the following conclusions. In this study, no significant difference was found between the pretest speaking anxiety mean scores of the control and experimental groups. When the pretest mean scores were examined, it was seen that the students in the experimental group scored 70.95 out of 100 on the Speaking Anxiety Scale, that the students in the control group scored 70.16, and that both groups started the process with high levels of speaking anxiety.
In addition, no significant difference was found between the pretest and posttest speaking anxiety mean scores of the students in the control group, although the posttest mean score of the control group was about 1 point lower than the pretest mean score. In other words, the instructional activities conducted according to the current curriculum decreased the speaking anxiety of the students in the control group slightly, but they did not lead to a statistically significant difference. This result might have occurred because the students were not provided with a free, comfortable, and interactive atmosphere in which to express themselves. When the pretest and posttest speaking anxiety mean scores of the students in the experimental group were compared, a significant difference was found in favor of the posttest. This result showed that the drama method was effective in normalizing the speaking anxiety levels of the students in the experimental group, who had a high level of speaking anxiety in the pretest. This normalization may have resulted from the drama atmosphere, in which the students were able to express themselves freely, talked about their creative opinions without any restriction, and experienced less anxiety.
There was a significant difference in favor of the experimental group between the posttest speaking anxiety mean scores of the control and experimental groups. When the posttest mean scores were taken into account, there was a 5-point difference between the groups. This difference could be attributed to the drama method, which was more effective in managing speaking anxiety than the activities carried out according to the current curriculum.
The responses of the pre-service teachers taking part in the drama activities to the questions in the interview form were analyzed, and it was seen that the qualitative findings support the findings obtained via the statistical analysis.It could be stated that the findings obtained via the interview forms explained the factors which had a role in the normalization of the speaking anxiety of the preservice teachers in the experimental group.
In training individuals within the scope of the constructivist learning approach, drama is one of the teaching strategies offered to provide students with an effective, permanent, and productive teaching process in educational environments. Individuals who perceive and evaluate the stimuli around them using receptive language skills should be able to use expressive language skills to start an effective communication process. Speaking, which is one of the expressive language skills and is used most after listening in daily life, is an important language skill which directly affects individuals' communication. The results of the present study revealed that the drama method, which created an atmosphere where individuals could express their opinions, feelings, and wishes freely and where they could put themselves in others' shoes and evaluate things from their perspectives using empathy skills, had positive effects on speaking anxiety (Okvuran, 1993). Based especially on the findings obtained from the interview forms, the drama method, which creates a free atmosphere for pre-service teachers, involves interesting activities, and helps build close relations, plays a functional role in managing the speaking anxiety of pre-service teachers.
Individuals who have severe speaking anxiety may have problems in their communication, which prevents them from developing their social skills. In studies carried out by Akın (1993) and Kent (1994), which examined the effects of the drama method on social development, it was found that there was a significant increase in the level of the students' social development. In another study, Gönen and Dalkılıc (1998) pointed out that drama is a social process and that it includes such elements as effective communication and working with a group, which are necessary for the development of social skills. In the present study, it was also seen that the students in the experimental group experienced less anxiety in oral communication and, for that reason, were able to express themselves more comfortably in social situations, build closer relationships, and trust themselves in social relations. The students were able to display their skills easily, share their thoughts with their friends without feeling too anxious, use their creativity for the success of their group, tolerate the criticism directed at them, empathize with people, and express themselves without difficulty in front of a crowd. All these factors functioned as regulatory experiences that normalized the students' high levels of speaking anxiety.
It was observed that the students in the experimental group were anxious when performing the first scene of their play; however, they felt more comfortable as they got used to the play after the first scene. The drama group of each week was criticized by all the drama groups after its play, and the performers made statements confirming that the anxiety they had in the first scene was lower in the other parts of the play. Moreover, the students who joined the drama activities had the opportunity to benefit from the viewpoints and experiences of the students in their own group or in other groups by cooperating with them until the staging of the play. In a study conducted by Aykac and Adıgüzel (2011), it was observed that the students in the experimental group, to which creative drama was applied, got on well with their friends, shared more things with each other, and knew each other better. Based on similar findings, it could be stated that drama is a useful technique for providing an effective communication atmosphere in the learning process.
Another effect of the drama method on the normalization of the speaking anxiety of the students in the experimental group was its positive reflection on fluent and flexible thinking skills. Creativity involves the number of an individual's thoughts and categories regarding a topic in a certain period of time; the number of thoughts is the aspect of fluent thinking, while the number of categories refers to the aspect of flexible thinking (Biber, 2006). In this study, the students in the experimental group improvised both before and during the play. These improvisations during the rehearsals and the play established the ground for fluent and flexible thinking. The students tried to create different alternatives by using their individual talents to contribute to the performance of the group before and after the play. They activated their imagination and dealt with things from a critical point of view. These individual performances allowed many original ideas to come up (Coskun, 2005; Karakelle, 2009; Nixon, 1987). Both individual and group performances improved fluent and flexible thinking skills and, for this reason, created a more effective communication process.
In this study, it was found that the drama method, which was used to normalize speaking anxiety, was also a great way to bring social skills to pre-service teachers in teacher training (Erbay and Yıldırım, 2010; Kara and Cam, 2007). It was seen that pre-service teachers could benefit from the drama method effectively to develop and apply communication skills and to turn them into an acquisition during their pre-service education.
Based on the conclusions of this study, which examined the effects of the drama method on the speaking anxiety of students, the following suggestions could be put forward for future research:
1. In this study, it was seen that the drama method could help solve the anxiety problem of speaking, which receives less focus than other language skills in the field of Turkish Language Education. Different studies may examine the effects of the drama method on the anxiety felt in other basic language skills.
2. Future studies could examine how the drama method affects students' creative and critical thinking skills and their creative writing skills by focusing on the period before the drama groups stage their plays.
Table 1. Evaluation intervals of arithmetic averages of speaking anxiety scores.
Table 2. Results of the comparison of the pretest speaking anxiety scores of the experimental and control groups.
Table 3. Results of the comparison of the pretest and posttest speaking anxiety mean scores of the control group.
Table 4. Results of the comparison of the pretest and posttest speaking anxiety mean scores of the experimental group.
Table 5. Results of the comparison of the posttest speaking anxiety mean scores of the experimental and control groups.
Table 9. Findings obtained via the fourth interview question.
"year": 2014,
"sha1": "7886da303f4f1a3d9502615bf15d2ab55aafd457",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/ERR/article-full-text-pdf/A231ED747275.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7886da303f4f1a3d9502615bf15d2ab55aafd457",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
New-onset lesions on MRI-DWI and cerebral blood flow changes on 3D-pCASL after carotid artery stenting
This study aimed to investigate the relationship between new-onset hyperintense lesions on diffusion-weighted images (DWI) and changes of cerebral blood flow (CBF) before and after carotid artery stenting (CAS) in patients with symptomatic unilateral carotid artery stenosis. Twenty-four patients with symptomatic unilateral carotid stenosis (50-99%) were enrolled. Routine head magnetic resonance imaging and three-dimensional pseudo-continuous arterial spin labeling were performed 7 days before the surgery and on four consecutive days after CAS. While the incidence of new DWI lesions was high (17/24, 70.8%), with 176 lesions observed among the 17 cases, only one subject showed symptoms. The majority of the lesions were located in the cortex/subcortex of the ipsilateral frontal and parietal lobes (60.8%), and 92.6% of the lesions were smaller than 3 mm. The CBFs in this area were significantly higher than that of the temporal lobe on the first 3 days post stenting (p < 0.05). No periprocedural CBF differences were observed between the two groups; however, the micro-embolism group presented decreased relative CBF in the frontal and parietal lobes prior to stenting compared with the non-embolism group. The systolic blood pressure in the micro-embolism group at discharge was significantly lower than that at admission. The high incidence rate of micro-embolism in patients receiving CAS may not be the result of direct changes of hemodynamics in the brain but rather of the loss of CBF regulation due to long-term hypoperfusion prior to stenting.
Patients with moderate to severe carotid artery stenosis (≥ 50%, measured according to the North American Symptomatic Carotid Endarterectomy Trial (NASCET) criteria) 10 were confirmed by digital subtraction angiography (DSA) and carotid duplex ultrasound. The inclusion criteria were as follows: (1) patients with symptomatic carotid artery stenosis; (2) DSA-confirmed unilateral carotid artery narrowing > 50%, with contralateral and posterior circulation arteries narrowing < 50% and cerebral arteries narrowing < 50%; (3) carotid stenosis caused by atherosclerosis. Subjects with one of the following conditions were excluded: (1) unilateral carotid artery occlusion or bilateral carotid artery narrowing > 50%; (2) history of stroke or new-onset stroke in the past 2 weeks; (3) symptomatic coronary artery disease; (4) resistant hypertension and diabetes, or poor blood pressure and glucose control; (5) receipt of CEA due to personal preference or other underlying health conditions making CAS unfavorable; (6) poor imaging results, failure to complete the study, or failure to sign the consent.
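For reference, the NASCET degree of stenosis compares the minimal residual lumen with the diameter of the normal distal internal carotid artery; a small sketch (the example diameters are hypothetical):

```python
def nascet_percent_stenosis(residual_lumen_mm: float, distal_normal_lumen_mm: float) -> float:
    """NASCET degree of stenosis: narrowing relative to the normal distal ICA."""
    return (1 - residual_lumen_mm / distal_normal_lumen_mm) * 100

print(round(nascet_percent_stenosis(1.5, 5.0)))  # 70 -> within the 50-99% inclusion range
```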
The 3D-pCASL CBF images were processed using the Function Tool in Advantage Workstation 4.5 software (GE Healthcare, Milwaukee, WI) and transferred for CBF quantification in the frontal, parietal, and temporal lobes. The 3D FSPGR images were first normalized to the Montreal Neurological Institute templates using the SPM8 software collection (Statistical Parametric Mapping, University College London, available at www.fil.ion.ucl.ac.uk/spm/software/spm8) in Matlab (R2013b; MathWorks, Natick, MA). The MR images were then coregistered to the normalized 3D FSPGR images, followed by a 6 mm Gaussian smoothing filter. The three-dimensional CBF measures in the frontal, parietal, and temporal lobes were then extracted with DPABI software (Data Processing & Analysis of Brain Imaging) 11-13.

Treatments and periprocedural events analysis. A daily dose of 100 mg aspirin and 75 mg clopidogrel was given to all subjects for at least a week before the operation. Thromboelastography was routinely performed before stenting, and the operation was performed only in patients whose platelet inhibition rates (induced via the adenosine diphosphate receptor pathway and the arachidonic acid pathway) were up to standard. All procedures were performed under local anesthesia with an EV3 embolic protection device (Spider FX), except for one subject who received general anesthesia due to intolerance of the operation. Patients were monitored with electrocardiography post stenting for 24 h or more, depending on their circulatory stability. Blood pressure was monitored, and systolic pressure was controlled within 100-130 mmHg to reduce the incidence of hyperperfusion syndrome and the impact of blood pressure fluctuation on CBF. Isoprenaline and dopamine were given if the heart rate was < 50 beats per minute and the systolic blood pressure (SBP) was < 90 mmHg or the mean arterial pressure (MAP) was < 50 mmHg.
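A rough sketch of the regional CBF extraction described above (spatial normalization itself is omitted; the filenames, ROI mask, and use of nibabel/SciPy in place of SPM8/DPABI are assumptions for illustration):

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

FWHM_MM = 6.0
FWHM_TO_SIGMA = 2 * np.sqrt(2 * np.log(2))  # FWHM = 2*sqrt(2*ln 2) * sigma

cbf_img = nib.load("cbf_mni.nii.gz")            # CBF map already normalized to MNI space
cbf = cbf_img.get_fdata()
voxel_mm = cbf_img.header.get_zooms()[:3]
sigma_vox = [FWHM_MM / FWHM_TO_SIGMA / v for v in voxel_mm]
cbf_smooth = gaussian_filter(cbf, sigma=sigma_vox)  # 6 mm Gaussian smoothing

mask = nib.load("frontal_lobe_mask.nii.gz").get_fdata() > 0.5  # hypothetical ROI mask
mean_frontal_cbf = cbf_smooth[mask].mean()      # regional CBF (mL/100 g/min)
```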
Heart rate, blood pressure, dynamic changes of CBF, hyperperfusion condition and syndrome, and new-onset hyperintensity on DWI were monitored and recorded. Neurological functions were assessed in clinic or through phone follow-up using the modified Rankin Scale (mRS) 30 days post stenting. Heart rates and blood pressures were taken four times a day for daily averages. Hyperperfusion condition was defined as a > 100% CBF increase post surgery, and hyperperfusion syndrome was defined as clinical manifestations of migraine, ophthalmalgia, facial neuralgia, epilepsy, and focal neurological dysfunction caused by cerebral edema or hemorrhage, excluding any ischemic lesions 14,15. Continuous hypotension was defined as either SBP < 90 mmHg, MAP < 50 mmHg, or a requirement of more than 6 h of administration of vasoactive drugs 16. MR images were read and analyzed by two experienced physicians, and a senior physician participated in the interpretation of the images to help reach consensus in case of disagreement. New-onset DWI hyperintensity was defined as the postoperative occurrence of hyperintensity on the DWI sequence in an area where there was no hyperintensity before the operation. Patients were then divided into micro-embolism and non-embolism groups based on the existence of new DWI hyperintensity, and CBF and relative CBF (rCBF, calculated as ipsilateral CBF/contralateral CBF) were compared between groups.
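The two derived quantities defined above reduce to simple ratios; a minimal sketch (the example CBF values are hypothetical):

```python
def relative_cbf(ipsilateral: float, contralateral: float) -> float:
    """rCBF as defined in the study: ipsilateral CBF / contralateral CBF."""
    return ipsilateral / contralateral

def hyperperfusion(baseline_cbf: float, post_cbf: float) -> bool:
    """Hyperperfusion condition: > 100% CBF increase after stenting."""
    return (post_cbf - baseline_cbf) / baseline_cbf > 1.0

print(round(relative_cbf(43.9, 51.0), 3))  # 0.861, cf. the micro-embolism group's rCBF
print(hyperperfusion(40.0, 85.0))          # True: more than a 100% increase
```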
Statistical analysis. Statistical analyses were performed using SPSS 20.0 software. Measurement data were represented as mean ± SD. The independent samples t-test and the paired samples t-test were used for inter- and intra-group comparisons, respectively, when the data were normally distributed. The rank sum test was used for non-Gaussian data. Categorical data were represented as percentages, and comparisons were performed with the chi-square test. Bonferroni correction was applied when multiple statistical analyses were performed. P < 0.05 was considered statistically significant.

Informed consent. We declare that all study participants provided informed consent.
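The statistical branching described above (normality check, parametric versus rank-sum test, Bonferroni-adjusted threshold) can be sketched as follows; the Mann-Whitney U test is used here as the independent-samples rank sum test, and all names are illustrative:

```python
from scipy import stats

ALPHA = 0.05

def group_compare(a, b, n_comparisons: int = 1):
    """t-test if both samples look normal, rank-sum test otherwise."""
    normal = all(stats.shapiro(s).pvalue > ALPHA for s in (a, b))
    if normal:
        stat, p = stats.ttest_ind(a, b)
    else:
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    significant = p < ALPHA / n_comparisons  # Bonferroni-adjusted threshold
    return stat, p, significant
```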
Results
Patient characteristics and perioperative events. All patients successfully underwent the surgery without new-onset neurological dysfunction. Of the two patients (8.3%) with hyperperfusion, one (4.2%) developed hyperperfusion syndrome. Six patients (25%) experienced hypotension and bradycardia. Of the 17 patients (70.8%) with new DWI lesions, one presented nausea, vomiting, vertigo, and nystagmus, which resolved in 2 days, while the remaining patients exhibited no clinical symptoms (Table 1). There were no significant differences in demographic characteristics between the micro-embolism and non-embolism groups. Clinical characteristics including hypertension, hyperlipidemia, diabetes, smoking habits, BMI, history of cardiovascular diseases and strokes, lesion side, degree of stenosis, and mRS upon admission and discharge did not differ substantially between the two groups (Table 1).
Analysis of new-onset DWI hyperintensity in CAS patients.
A total of 176 new DWI lesions were observed in the 17 patients with hyperintense DWI after stenting. Most lesions (85.8%) appeared on the first day post CAS, with 96.6% being ipsilateral and 92.6% smaller than 3 mm. The lesions were located mainly in the cerebral cortex and subcortex, with 60.8% in the frontal and parietal lobes and 5.7% and 13.6% in the temporal and occipital lobes, respectively. We also observed new DWI lesions in the centrum semiovale (5.7%), lateral ventricular white matter (6.8%), basal ganglia (1.1%), insula (0.6%), ipsilateral cerebellum (2.3%), and cortical watershed (33.0%) (Table 2).

Changes in heart rates and blood pressure post stenting. The patients in the two groups showed comparable heart rates at baseline and post stenting, despite reduced heart rates being observed on the first 2 days after the operations with a gradual return to normal rates afterward. There were no differences in systolic and diastolic blood pressures between the groups upon admission and discharge. However, the micro-embolism group showed significantly lower systolic blood pressure at discharge than on admission (136.5 ± 18.0 vs 124.9 ± 14.9, P < 0.05), while the non-embolism group presented lower diastolic blood pressure at discharge than on admission (81.1 ± 10.4 vs 69.6 ± 9.8, P < 0.05).
Dynamic changes in CBF post stenting. Cerebral blood flows in the frontoparietal and temporal lobes were measured to further investigate the relation between CBF dynamics and the formation of embolism after stenting. All patients exhibited comparable pre-surgery CBFs in these two locations and experienced significantly higher CBFs in both the ipsilateral and contralateral frontoparietal lobes on days 1-3 post stenting compared with those of the temporal lobe. The differences disappeared on day 4. We also observed increased ipsilateral CBFs from baseline in these two locations on four consecutive days after the surgery, with most patients' CBF peaking on day 3 before decreasing (Table 3, Supplementary file). When the patients were grouped according to the presence of new DWI lesions, we did not see CBF differences in the frontoparietal and temporal lobes between the two groups, although the patients exhibited significantly increased ipsilateral CBFs post intervention (Tables 4 and 5, Supplementary file).
The relative CBFs (rCBF) in the frontoparietal and temporal lobes were further compared between groups, and we observed a significantly reduced rCBF of the frontoparietal lobes in the micro-embolism group (0.861 in the micro-embolism group vs 0.912 in the non-embolism group, P < 0.05), whereas the rCBF of the temporal lobe showed no difference between the two groups (0.975 in the micro-embolism group vs 0.888 in the non-embolism group, P > 0.05).

Discussion

The differences in perioperative stroke are the result of excess mild non-disabling infarctions 17-20. Currently recognized possible mechanisms of ischemic stroke involve the hemodynamic mechanism, artery-to-artery embolism, and a combination of the two. To investigate the pathogenetic mechanism of perioperative micro-embolism, we analyzed the size and distribution of the DWI lesions and the changes of hemodynamics in patients receiving CAS.
In this study, 17 (70.8%) patients had new DWI lesions, and most of them were free of symptoms, except one patient who experienced vertigo and nystagmus. The new lesions on DWI images are generally considered emboli resulting from plaque material ruptured or dislodged during the stenting procedure 5,6. Other studies have also indicated that they can result from continuous dislodging of emboli after the surgery, although the cause and mechanism are still not clear 21. While the majority of perioperative micro-embolism occurred as ipsilateral hemispheric events, a few were located in areas unrelated to the carotid artery therapies 10,22; contralateral micro-emboli after intervention may be caused by catheter destruction of aortic arch plaques 22. Our finding of contralateral embolism observable 1 day after surgery is in keeping with the mechanism of catheter-induced thrombosis. Although 85.8% of the emboli were observed on day 1 post stenting, the discovery of new lesions on days 2-4 indicated a continuous spontaneous release of unstable emboli.
The cerebral cortical arteries, also known as pial collaterals, are small arterial networks joining the terminal branches of the anterior, middle, and posterior cerebral arteries along the surface of the brain. The vessel branches penetrate through the cortex, the white matter, and the axon fibers, with the deepest ones forming the perforating medullary branches supplying blood to the centrum semiovale. The medullary branches are located at the distal end of the internal carotid artery with a relatively low perfusion pressure. As the medullary branches rarely anastomose with the deep perforating branches, hypoperfusion is more prone to occur in the centrum semiovale in the presence of severe stenosis of the internal carotid artery. It has been shown that large particles may be restricted from entering these blood vessels, whereas particles less than 150 μm show an equal opportunity of entry 23,24. The lower incidence of micro-emboli in this area might be the result of relatively low blood flow compared with the cortex 25. Our study showed that the majority (60.8%) of micro-emboli occurred in the cortex/subcortex of the frontal and parietal lobes, and only 5.7% and 6.8% were observed in the centrum semiovale and paraventricular white matter, respectively. In addition, the fact that these micro-emboli did not mainly occur in the cortical watershed territory suggests that hypoperfusion might not be the causal mechanism of micro-embolism. This view is further strengthened by our observation of reduced cortical CBF and fewer incidences of embolism in the temporal lobe post stenting compared with the frontal and parietal lobes. The smaller increase of CBF in the temporal lobe can be partially explained by the disproportionate increment of blood flow after stenting, as a portion of the temporal lobe, particularly the basal temporal region, is supplied by the posterior cerebral artery.
We further grouped the patients into non-embolism and micro-embolism groups based on the presence of new DWI lesions after the intervention. Interestingly, we observed a significantly reduced rCBF in the frontal and parietal lobes prior to the surgery in the micro-embolism group, whereas the baseline and post-stenting CBFs of the two groups were comparable. The reduced relative CBF points to impaired regulation of preoperative CBF, indicating a possible relationship between micro-embolism and impaired CBF regulation caused by long-term hypoperfusion in patients receiving CAS. This phenomenon was not observed in the temporal lobe, probably because it is fed by both the anterior and posterior circulations, whereas the frontal and parietal lobes are fed by the anterior circulation only.
When the catheter passes an unstable plaque, thousands of emboli are released and circulate along the edge of the blood vessel wall 25,26. While most of the emboli penetrate through the vessel wall 27, a few arrive at the corresponding arteries on the surface of the brain. The blood vessels may be capable of regulating vasodilation to maintain the blood supply in the local brain tissue until the emboli are dissolved or expelled. However, ischemia is inevitable when the arteries are in a critical condition and have lost the ability to regulate blood supply due to long-term hypoperfusion 27. Because of the higher CBF in the frontal and parietal lobes, the micro-emboli mainly flow to and lodge in these areas. Further research is needed to support our speculation.
Although most micro-emboli do not cause apparent neurological dysfunction, some studies have shown otherwise 7,8. Zhou et al. reported a correlation between the volume of subclinical embolic infarcts and long-term cognitive changes after carotid revascularization 28. Animal studies have also demonstrated neural and cognitive damage in association with the size of microemboli 29,30. Thus, it is important to further compare the changes of long-term cognitive function between patients receiving CAS and CEA.
In this study, we observed increased CBF in the contralateral hemisphere after stenting of the ipsilateral side. Because carotid artery stenosis may lead to ipsilateral cerebral hemisphere ischemia, the contralateral circulation may supply this area through the circle of Willis. Similarly, after stenting, ipsilateral CBF can flow to the contralateral cerebral hemisphere through the circle of Willis, which increases the contralateral CBF.
We observed no difference in systolic or diastolic blood pressure between the two groups at admission and discharge. In the micro-embolism group, however, the systolic blood pressure at discharge was significantly lower than that at admission. We speculate that patients with impaired CBF regulation are more likely to develop micro-embolism if their systolic blood pressure drops too much after the operation. This result needs to be confirmed by further research.
We recognize the limitations of this study as follows: (1) the relatively small sample size; (2) we did not analyze the risk factors for micro-embolism; (3) we did not measure the patients' cognition; (4) some patients did not show up for the follow-up MRI examination 30 days after the surgery; (5) we performed a relatively short-term follow-up, as most subjects came from other provinces.
Conclusion
The CAS-induced micro-embolism might not be the result of direct changes of hemodynamics in the brain but rather of the loss of CBF regulation due to long-term hypoperfusion prior to stenting. It might also be related to an excessive decrease of systolic blood pressure post stenting.
"year": 2021,
"sha1": "6f7843b9edf31ecf309b9fc938162f876e942dbf",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-87339-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6f7843b9edf31ecf309b9fc938162f876e942dbf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Environmental estrogens induce transcriptionally active estrogen receptor dimers in yeast: activity potentiated by the coactivator RIP140.
We used three yeast genetic systems to investigate the estrogen-like activity of octylphenol (OP), bisphenol A (BPA), o,p'-DDT, and o,p'-DDE in inducing human estrogen receptor (hER) dimerization and transcriptional activation. We demonstrated that OP, BPA, and o,p'-DDT can induce ligand-dependent hER dimerization using a yeast two-hybrid assay. All three xenoestrogens, plus estradiol, enhanced estrogen response element (ERE)-dependent transcriptional activation of hER. In the presence of receptor interacting protein 140 (RIP140), ERE-dependent activity was dramatically amplified, by 100-fold, for estradiol, OP, BPA, and o,p'-DDT. A yeast whole-cell [3H]estradiol binding assay was developed to determine the site of interaction on the hER. We determined nonspecific binding by parallel incubations run in the presence of 5 µM unlabeled estradiol in PCY2 yeast. At the concentrations tested, unlabeled estradiol, OP, and BPA displaced [3H]estradiol in this binding assay, whereas the concentrations of o,p'-DDT and o,p'-DDE tested were insufficient to inhibit binding. Incubating yeast in the presence of increasing concentrations of estradiol and OP (1 µM) or BPA (1 µM) neither blocked nor altered the effect of estradiol on hER activity. We observed no agonistic activity of o,p'-DDE in any of the yeast models used. These results suggest that OP, BPA, and o,p'-DDT exert their estrogen-like activity through the ER in a manner similar to that of estradiol, and the coactivator RIP140 markedly potentiates this activity.
Environmental estrogens are man-made chemicals that possess estrogen-like biologic activity. Some of these compounds, also called xenoestrogens, are associated with reproductive failure, abnormal feminization or masculinization, and altered immune function in wildlife (1). Many widely distributed xenoestrogens are industrial by-products. Octylphenol (OP), a nonionic surfactant, increases uterine weight in rats (2) and stimulates human breast cancer cell proliferation, trout vitellogenin gene expression, and other estrogen receptor-dependent gene transcription (3). o,p'-DDT and o,p'-DDE are components of commercial DDT that exhibit estrogen-like activity. o,p'-DDT can increase the weight, glycogen, water, and RNA content of mammalian and avian reproductive tracts (4). Furthermore, o,p'-DDT and o,p'-DDE have been reported to induce blastocyst implantation and maintain pregnancy in rats (5). Bisphenol A (BPA) is used in the manufacture of polycarbonate and has been shown to bind to the estrogen receptor (ER), stimulate MCF-7 cell proliferation, and increase the expression of progesterone receptor (6). The majority of xenoestrogens are structurally distinct from estrogen, making it difficult to determine whether a compound is estrogenic on the basis of its chemical structure alone.
ER, a member of the ligand-dependent nuclear receptor superfamily of transcription factors (7), mediates the effects of estrogen. Estrogen enters the cell and binds to the ER, resulting in receptor dimerization. The dimer then binds to specific DNA sequences termed estrogen response elements (EREs) in the promoter region of ER-responsive genes. Once bound to the ERE, the ER influences gene expression through interactions with basal transcription factors. In conjunction with the basal transcription machinery, the ER associates with a group of novel nuclear proteins, co-modulators, to enhance (8-11) or suppress (12) transcriptional activity. One such co-modulator, receptor interacting protein 140 (RIP140), has been demonstrated to interact directly with the ER hormone-binding domain in the presence of estrogen and to amplify ER-dependent transcriptional activity (13-15). Recently, RIP140 has been shown to interact with other nuclear receptors and regulate gene expression (14,16,17). The complexing of receptor-interactive proteins with the ER serves as an additional level of selectivity and responsiveness in the regulation of estrogen-dependent gene expression.
The mechanism through which environmental estrogens exert their estrogen-like activity is unknown. We used three yeast genetic systems to determine whether several environmental estrogens affect transcriptional activation of human ER (hER). It is also unclear whether xenoestrogen-induced ER transactivation could be influenced by the coactivator RIP140. Our data suggest that the estrogen-like activity of several environmental estrogens is mediated via the estrogen receptor, and that this activity is enhanced by RIP140. In addition, our binding data suggest that OP and BPA interact with the receptor through the ligand-binding site.
Materials and Methods
Materials. 4-tert-Octylphenol (OP) and bisphenol A (BPA) were purchased from Aldrich Chemical Co., Inc. (Milwaukee, WI). The DDT analogs 2-(4-chlorobenzene)-2-

Yeast strains. We used two yeast strains to measure ligand-dependent hER dimerization or induction of hER-dependent transcription. The yeast strain PCY2 (MATα Δgal4 Δgal80 URA3::GAL1-lacZ lys2-801amber his3-Δ200 trp1-Δ63 leu2 ade2-101ochre) was used to measure ligand-dependent hER dimerization (18). The PCY2 yeast strain carries a genomic copy of the GAL4 binding site upstream of a lacZ reporter. Induction of hER-dependent transcriptional activity was measured in the yeast strain RS188N (MATa ade2-1 his3-11,15 leu2-3,112 trp1-1 ura3-1) (19). Transformation of yeast with plasmid DNA was done using the lithium acetate method (20). Transformed yeast colonies were selected by culture on synthetic medium lacking uracil, tryptophan, and/or leucine.

cDNA and constructs. Construction of GAL4-hER fusion vectors involved cloning the full-length hER cDNA into the pPC62 (GAL4-DB) and pPC86 (GAL4-TA) fusion vectors, as previously described (20). hER cDNA, placed in the SalI site of pCMV-5, was digested with SalI and subcloned into pBluescript II SK+ at the SalI site such that its transcription was dependent on T7 polymerase (T7-hER). T7-hER was amplified with T3 primer and an oligonucleotide (5'-GGG GAT CGT CGA CTC GGT CTG CA-3'). This construct was designed to keep the correct reading frame intact in the GAL4-hER vector. The polymerase chain reaction (PCR) product was directly cloned into GAL4-DB and GAL4-TA at the SalI site such that, when expressed, the N-terminal end of the hER would be fused to the C-terminal end of DB and TA. GAL4-DB-hER and GAL4-TA-hER fusion cDNAs were sequenced to confirm the correct reading frame before transforming yeast. The two expression vectors were cotransformed into the PCY2 yeast strain and used to assess ER dimerization. The RS188N yeast strain was cotransformed with an ERE-dependent reporter (YRpE2), hER cDNA (YEpE12), and an empty pPC62. YRpE2 contains two EREs from the Xenopus vitellogenin-A gene, upstream of a lacZ gene. YEpE12 expresses hER fused to ubiquitin and is under the control of a yeast copper metallothionein promoter. The copper metallothionein promoter allows for regulated expression of hER in yeast by the addition of exogenous copper (19). The attachment of ubiquitin to the amino terminus of proteins such as hER improves both the quality and quantity of proteins expressed in yeast (19). Following expression in yeast, ubiquitin is cleaved from the fused product, releasing the desired protein. The cotransformation of an empty pPC62 vector in RS188N is used as a baseline for comparing the effect of RIP140 on ER-dependent transcription. Yeast carrying an ERE2-lacZ reporter, hER, and pPC62 were used to assess ER-dependent transcription.
To construct the yeast RIP140 expression vector, we first amplified the coding region of RIP140 cDNA in the pEF-BOS vector (15) with two oligonucleotides: an upstream primer (5'-GCG TCG ACG CTT CTA TTG AAC ATG ACT CAT-3') and a downstream primer (5'-GGA CTA GTC CAA AAC TGG ATG GCA GGT-3'). The upstream oligonucleotide flanked the start codon (underlined in the upstream primer), and the downstream primer was designed around the stop codon of RIP140 cDNA. The upstream primer included a SalI restriction site and the downstream primer contained a SpeI restriction site for further cloning. The RIP140 PCR product was then cloned into pBluescript II SK+ at SalI and SpeI. The RIP140 coding region, released from pBluescript II SK+ by SalI and SpeI, was cloned into GAL4-DB (pPC62) at the SalI and SpeI sites. The GAL4-DB-RIP140 fusion cDNA was sequenced to confirm the correct reading frame before transforming yeast. pPC62 containing RIP140 cDNA was cotransformed, along with YEpE12 and YRpE2, into the RS188N yeast strain.
Dimerization assay using the yeast two-hybrid system. Ligand-dependent dimerization of the hER was determined by using the two domains of the yeast transcription factor GAL4, the DNA-binding (DB) domain and the transcription-activating (TA) domain (18,20). The functionality of GAL4 requires that the DB and the TA domains be juxtaposed. To exploit the use of these domains, we cloned hER into separate expression vectors (pPC62 and pPC86) such that hER was expressed as a fusion protein with DB and with TA. The two fusion plasmids were cotransformed into the PCY2 yeast strain.
The strategy behind these genetic manipulations was to design a reporter system that could be used to quantitate hER dimerization. If ligand-dependent hER dimerization occurred, the DB and TA fusion proteins would be brought into close proximity, reconstituting GAL4 activity. The reconstituted GAL4 increases β-galactosidase synthesis via the expression of a lacZ reporter gene that is driven by the GAL1 promoter. Thus, the production of β-galactosidase confirms the ability of estrogenic ligands to induce hER dimerization.
Transactivation assay. We assessed the ability of environmental estrogens to promote transcriptional activity of hER in a genetically manipulated RS188N yeast strain. RS188N yeast was cotransformed with YEpE12, YRpE2, and an empty pPC62 vector (see "cDNA and Constructs"), yielding an ERE-dependent yeast strain. We used β-galactosidase activity to quantitate hER-dependent gene transcription. We also used the yeast strain RS188N to determine the influence of RIP140 on xenoestrogen-induced ER transcription activity by cotransforming the yeast with YEpE12, YRpE2, and RIP140 cDNA in pPC62.
Ligand treatment. Stock cultures of the two-hybrid yeast or the ERE-dependent yeast with or without RIP140 were grown in 20 ml synthetic supplemented medium lacking the appropriate amino acid selection markers; 1% (v/v) ethanol served as the carbon source.
Cultures were grown at 30°C until reaching an OD600 (optical density at 600 nm) reading between 1.5 and 2.5 absorbance units (AU)/ml culture, after which the stock cultures were stored at 4°C for up to 1 month.
For ligand treatment, we added a volume of the stock yeast culture to 1.0 ml of supplemented synthetic dextrose (SD) medium such that the initial OD600 was between 0.1 and 0.15 AU/ml culture. PCY2 yeast were incubated in SD medium lacking leucine and tryptophan; RS188N yeast were incubated in SD medium lacking uracil, tryptophan, and leucine. Estradiol and test compound stock solutions were diluted in DMSO such that 1 µl of the test compound added to 1.0 ml of culture gave the desired final ligand concentration. Control cultures received an equal volume of DMSO. Expression of hER was induced in the ERE-dependent yeast by adding 100 µM CuSO4. The cultures were then grown at 30°C in a shaking incubator (230 rpm) for 16-18 hr, after which the samples were removed and β-galactosidase was quantitated.
β-Galactosidase assays. The β-galactosidase assay was used to measure ligand-dependent dimerization in the two-hybrid yeast system and hER-dependent transcription activity in the ERE-dependent yeast system. Overnight yeast cultures were added to Z buffer (60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM KCl, 1 mM MgSO4, 50 mM β-mercaptoethanol, pH 7.0) containing sodium dodecyl sulfate (SDS; 0.027%), with a total volume of 1.1 ml. This was accomplished by adding 0.1 ml of PCY2 yeast to 0.7 ml Z buffer and 0.3 ml 0.1% SDS in Z buffer, or 0.05 ml of RS188N yeast to 0.75 ml Z buffer and 0.3 ml 0.1% SDS in Z buffer. The reaction mixture was preincubated at 30°C for 10 min. The reaction, started by adding 0.2 ml of o-nitrophenyl-β-D-galactopyranoside (ONPG; 4.0 mg/ml in Z buffer containing β-mercaptoethanol and lacking SDS), was incubated at 30°C for up to 1 hr and terminated by adding 0.5 ml of 1.0 M Na2CO3. β-Galactosidase activity was determined by the degree of ONPG hydrolysis at 420 nm. Total β-galactosidase units were calculated according to the equation given in (20). The concentration of each ligand that gave 50% of its maximal β-galactosidase activity (EC50) was determined by fitting the data to a four-parameter logistic function and analyzed with SigmaPlot software (SPSS Inc., Chicago, IL).
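For concreteness, the sketch below reproduces this arithmetic in Python: it computes activity under the standard Miller-unit convention (the exact unit formula used here is the one given in reference 20, which we do not reproduce) and fits an illustrative dose-response curve with a four-parameter logistic to recover an EC50. All data values are assumptions for illustration, and the fit is done with SciPy rather than SigmaPlot.

import numpy as np
from scipy.optimize import curve_fit

def miller_units(od420, minutes, ml_culture, od600):
    # Standard Miller convention: 1000 * OD420 / (t[min] * V[ml] * OD600).
    return 1000.0 * od420 / (minutes * ml_culture * od600)

def four_pl(x, bottom, top, ec50, hill):
    # Four-parameter logistic dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Illustrative dose-response data: ligand molarity vs. beta-gal activity.
conc = np.array([1e-11, 1e-10, 1e-9, 1e-8, 1e-7])
activity = np.array([2.0, 15.0, 70.0, 120.0, 130.0])
params, _ = curve_fit(four_pl, conc, activity,
                      p0=[activity.min(), activity.max(), 1e-9, 1.0])
print("EC50 ~ %.2e M" % params[2])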
Receptor binding assays. We determined the affinities of the various test compounds and estradiol for hER in a whole yeast cell binding assay using [3H]17β-estradiol (84.1 Ci/mmol). The yeast cultures used for the binding assays were prepared from stock two-hybrid cultures maintained at 4°C by growing fresh cultures in the appropriate volume of SD medium lacking tryptophan and leucine. The cultures were grown to an OD600 between 0.2 and 0.8 AU/ml culture. This density was found to give optimal specific binding.
Saturation analysis of [3H]estradiol binding was performed in 96-well plates by incubating 0.08 ml of the yeast culture with 0.02 ml of various concentrations of [3H]estradiol (0.1-10 nM) for 3 hr at room temperature. Total assay volume was 0.2 ml made up with yeast culture medium. We determined nonspecific binding at each concentration of ligand by parallel incubations run in the presence of 5 µM unlabeled estradiol. At the indicated time, we harvested the cells by filtration onto glass filter membranes using a 12-well cell harvester and measured the levels of bound radioactivity. The specific binding was analyzed by the method of Scatchard to obtain linear regression on a plot of bound/free versus bound (21). The apparent dissociation constant (Kd) was calculated from the slope of the regression line, and the number of binding sites per 1 million cells was calculated from the x-axis intercept.
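The Scatchard calculation described above reduces to a simple linear regression; a minimal Python sketch, with illustrative binding data rather than the study's measurements:

import numpy as np

free = np.array([0.1, 0.3, 1.0, 3.0, 10.0])    # free [3H]estradiol, nM
bound = np.array([0.30, 0.82, 2.1, 4.4, 7.2])  # bound, fmol per 1e6 cells

ratio = bound / free                        # y axis: bound/free
slope, intercept = np.polyfit(bound, ratio, 1)
kd = -1.0 / slope                           # Scatchard slope = -1/Kd
bmax = -intercept / slope                   # x-axis intercept = Bmax
print("Kd ~ %.2f nM, Bmax ~ %.2f fmol per 1e6 cells" % (kd, bmax))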
We performed competition binding assays using the same protocol as above, except that cells were incubated in the presence of unlabeled competitor compounds (13,20).
Activation of an ERE-lacZ reporter in yeast by environmental estrogens. After determining that xenoestrogens can promote ER-ER dimerization, we sought to investigate whether such a protein-protein interaction is accompanied by ERE-dependent transcriptional activity in the RS188N yeast. This yeast expresses hER under the control of a yeast copper metallothionein promoter and an ERE-dependent lacZ reporter. The copper metallothionein promoter allows for regulated expression of hER in yeast by the addition of exogenous copper (19). Estradiol initiated lacZ production at 100 pM, with maximal induction reached at 10 nM. The EC50 value of estradiol was 1.02 ± 0.31 nM. OP, BPA, and o,p'-DDT also stimulated hER-dependent gene transcription (Table 1), whereas o,p'-DDE failed to promote ER-dependent gene transcription (Figure 2). Although OP was the most potent of the xenoestrogens tested, it was 686 times less potent than estradiol at inducing the ERE-dependent reporter gene. As compared to estradiol stimulation of β-galactosidase, BPA and o,p'-DDT were 2,108 and 23,333 times less potent, respectively.

Inhibition of [3H]estradiol binding to hER by environmental estrogens in whole yeast cells. The experiments in the two-hybrid yeast and ERE-dependent yeast systems demonstrated that xenoestrogens could induce an active hER dimer. However, the exact site of binding to the receptor remained unknown. To investigate whether the environmental estrogens interacted with the ligand-binding site of the hER or at another site, a whole cell competition binding assay for yeast was developed using [3H]estradiol and the hER two-hybrid yeast. After determining the optimal assay conditions, we measured the specific binding of [3H]estradiol to hER and performed saturation analysis. Saturation analysis using [3H]estradiol gave a Kd of 3.00 ± 0.24 nM, with a Bmax (maximum specific binding) of 9.39 ± 3.5 fmol/10^6 cells (Figure 3), showing that estradiol can traverse the yeast cell wall and bind to the receptor under physiologic conditions. To determine whether these compounds could antagonize or add to the effect of estradiol, RS188N yeast were grown in medium lacking exogenous copper, thus limiting the production of hER and maximizing the ability to detect antagonistic or additive effects on the estradiol dose-response curve. The concentration-dependent response of estradiol in RS188N yeast (in the absence of copper) was similar to the response seen when copper was present (compare Figure 5 to Figure 2). We observed a copper-dependent increase in ERE-lacZ activity in RS188N yeast (data not shown). The EC50 values for estradiol were 0.71 ± 0.07 nM in the absence of copper and 1.02 ± 0.31 nM in the presence of copper. In the absence of exogenous copper, estradiol-induced levels of β-galactosidase were 6-8 RRU, as compared to 100-150 RRU in the presence of copper. In the absence of added copper, the EC50 values for OP (Figure 5A) and BPA (Figure 5B) were similar to those measured in the presence of copper. When OP (1 µM) or BPA (1 µM) was tested in combination with estradiol, no changes in the maximal β-galactosidase activity or the EC50 values were observed (Figure 5).

RIP140 amplifies xenoestrogen-induced hER activity. Because xenoestrogens show weak estrogenic activity, we sought to determine if nuclear receptor coactivators would influence ER-mediated transcriptional activity of hER. We chose to determine the effect of RIP140 on xenoestrogen-induced ERE-dependent gene transcription. RIP140 has been shown to interact with the ER in the presence of estradiol, increasing ERE-dependent gene transcription (13,15).
To determine if RIP140 can also increase xenoestrogen-stimulated hER-dependent gene transcription, we expressed RIP140 in RS188N yeast. Although the basic transcription factors are conserved between yeast and mammalian cells, yeast appear to lack steroid receptor co-modulator proteins (14). Therefore, yeast are ideal for studying the effect co-modulators might have on nuclear receptor-mediated transcription. As mentioned above, the ERE-dependent yeast expressing RIP140 were grown in medium without added copper.
When RIP140 was coexpressed with hER, the maximum β-galactosidase activity stimulated by estradiol was increased approximately 100-fold over yeast without RIP140 (compare Figure 5 to Figure 6). To investigate if RIP140 could affect xenoestrogen-induced hER transactivation, we repeated the experiment using OP, BPA, and o,p'-DDT. As with estradiol, RIP140 increased the maximum β-galactosidase activity of OP, BPA, and o,p'-DDT by 100-fold over yeast without RIP140. The same concentrations of estradiol, OP, BPA, and o,p'-DDT are required to initiate and maximally stimulate the reporter gene in the presence or absence of RIP140 (compare Figure 2 to Figure 6). RIP140 did not appear to alter the EC50 values for estradiol or OP and thus did not alter the potency of the ligands. However, in the presence of RIP140, the dose-response curve of BPA shifted to the right (compare Figure 2 to Figure 6), resulting in an approximate 6-fold difference between EC50 values (Table 1). The EC50 for o,p'-DDT decreased by approximately 3-fold in the presence of RIP140 (Table 1). In the absence of ligand, RIP140 did not induce β-galactosidase activity. Also, no induction of β-galactosidase activity was observed in the presence of o,p'-DDE.
Discussion
The objective of this study was to characterize the mechanism by which several man-made nonsteroidal compounds emulate estrogen in vivo. Using three yeast genetic systems, we demonstrated that the xenoestrogens OP, BPA, and o,p'-DDT exert their estrogen-like activity through the ER.
In the yeast two-hybrid system, all three compounds were able to induce β-galactosidase activity. This required dimerization of the two hER fusion proteins to reconstitute the GAL4 DNA binding protein. These results support the hypothesis that ER dimer formation is ligand dependent (20,23). The results with the RS188N yeast showed that the ER dimers formed in the presence of these compounds were able to induce transcription of an ERE-dependent gene. In one previous study (29), investigators measured uterine vascular permeability of ovariectomized mice and found that neither OP nor BPA altered the stimulatory effect of estradiol on vascular permeability. Because the concentrations of OP, BPA, and o,p'-DDT required to promote hER dimerization and transcription were more than 3,000 times higher than estradiol, we classified these compounds as weak ER agonists. Another explanation for their weak estrogen-like activity could be their inability to cross the yeast cell wall. To address this issue we performed a whole yeast cell receptor binding assay, which showed that OP and BPA do bind to the hormone binding site of the ER. A comparison of ΔG values for these compounds from the whole yeast cell binding assay with ΔG values from in vitro binding assays showed the values for estradiol, OP, and BPA to be of a similar magnitude (3,6,30,31). Thus, our results support the idea that OP and BPA are indeed weak ER agonists, and the yeast cell wall does not inhibit these compounds from entering the yeast. Although sufficient concentrations of o,p'-DDT enter our yeast to induce ER dimerization and ERE-dependent gene transcription, our whole yeast cell binding assay may not be sensitive enough to detect competition between o,p'-DDT and [3H]estradiol for ER binding. This is based on previous cell-free binding studies which demonstrate that o,p'-DDT can displace [3H]estradiol from the ER (26,32-35). Furthermore, it has been demonstrated that o,p'-DDT can transcriptionally activate ER expressed in yeast (25,26,28). As with our o,p'-DDT binding results, there is at least one in vitro study, using spotted seatrout ER, that also failed to show any competition of o,p'-DDT with [3H]estradiol (36).
The concentrations of the xenoestrogens required to elicit a response in the yeast genetic systems appear to be difficult to achieve in vivo. However, several reports suggest circumstances under which the local concentrations of xenoestrogens could be increased. Many xenoestrogens are hydrophobic and can accumulate in fatty tissue, reaching physiologically relevant concentrations (37). Therefore, adipose tissue might serve as a xenoestrogen reservoir from which xenoestrogens could be released in concentrations sufficient to stimulate the ER (37). In addition, data suggest that environmental estrogens may not be bound to serum proteins as effectively as estradiol, thus increasing their bioavailability and relative concentrations (28,31,38,39). Xenoestrogens could have a much greater effect on ER-dependent gene transcription than originally thought in the presence of coactivators such as RIP140. In our yeast system in which hER and RIP140 are coexpressed, the coactivator dramatically increases ER-dependent gene transcription. The presence of RIP140 increases 17β-estradiol, OP, BPA, and o,p'-DDT induction of ER-dependent gene transcription by 100-fold. Also, there is the possibility that RIP140 may slightly alter the potency of some xenoestrogens such as BPA and o,p'-DDT. As in yeast lacking RIP140, o,p'-DDE did not induce ER-dependent gene transcription in the presence of RIP140. Previously reported data from our laboratory clearly demonstrated that RIP140 directly interacts with the ligand-binding domain of hER in the presence of estradiol (13). Additionally, the F domain of the hER was shown to play an important regulatory role in the association of RIP140 with ER and in ER homodimerization (13). Using the yeast two-hybrid assay, Nishikawa et al. (40) reported that the xenoestrogen BPA can induce an interaction between the ligand-binding domain of rat ER and the receptor-interacting domain of RIP140. They observed BPA-induced activity with a concentration range and ratio similar to the maximum estradiol response, as reported in this study. The ratio of RIP140 to ER has been reported to be critical for the coactivator to influence ER transactivation (14,15,17). Therefore, the effect of xenoestrogens on ER-dependent gene transcription could be much greater in cells expressing coactivators than originally anticipated.
In conclusion, the data presented in this report indicate that the environmental estrogens OP, BPA, and o,p'-DDT possess ER agonist activity. These compounds were able to induce the formation of a transcriptionally active hER dimer, whose activity was further enhanced in the presence of the ER coactivator RIP140. OP and BPA appear to activate the ER through interaction at the estradiol binding site. As a result, exposure to OP, BPA, or o,p'-DDT at sufficient concentrations or in the presence of an ER coactivator could have deleterious effects on normal cell function due to the untimely activation of estrogen-regulated genes. | 2014-10-01T00:00:00.000Z | 1999-12-23T00:00:00.000 | {
"year": 2000,
"sha1": "801587c4d4e2f3614edc8373048b4a379359c94d",
"oa_license": "pd",
"oa_url": "https://doi.org/10.1289/ehp.0010897",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "801587c4d4e2f3614edc8373048b4a379359c94d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
88524115 | pes2o/s2orc | v3-fos-license | USTCSpeech System for VOiCES from a Distance Challenge 2019
This document describes the speaker verification systems developed in the Speech lab at the University of Science and Technology of China (USTC) for the VOiCES from a Distance Challenge 2019. We develop the system for the Fixed Condition on two public corpora, VoxCeleb and SITW. The frameworks of our systems are based on the mainstream i-vector/PLDA and x-vector/PLDA algorithms.
Introduction
The USTCSpeech system for the VOiCES from a Distance Challenge 2019 is based on x-vector/PLDA and i-vector/PLDA frameworks. We use some open source code as well as our own platform to provide complementarity. Two kinds of acoustic features, PLP and MFCC, are adopted in all the sub-systems. Three i-vector sub-systems are implemented using Kaldi and our own platform. Nineteen x-vector sub-systems are implemented using the Kaldi and Tensorflow platforms. The PLDA algorithm is adopted as the backend classifier for all the i-vectors and x-vectors. The submitted system is a score-level fusion of these sub-systems.
The remainder of this document is organized as follows. Section 2 presents the data preparation. Section 3 describes the various sub-systems. Section 4 lists results on the development set.
Individual Datasets
The VOiCES from a Distance Challenge 2019 can be evaluated under a fixed or open training condition. In our work, we only train our models under the fixed condition. The datasets used for training were VoxCeleb1 and VoxCeleb2. SITW is used for score normalization.
Data Augmentation
The following strategies are used for data augmentation. Reverb: the speech utterances are artificially reverberated via convolution with simulated RIRs [2]; we did not add any additive noise here. Music: a single music file (without vocals) is randomly selected from the MUSAN corpus, trimmed or repeated as necessary to match the duration, and added to the original signal at 5-15 dB SNR [3].
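A minimal Python sketch of these two augmentation operations, with synthetic arrays standing in for real waveforms, RIRs, and MUSAN music (names and constants are illustrative assumptions):

import numpy as np

def add_reverb(speech, rir):
    # Artificial reverberation: convolve with a room impulse response,
    # then rescale to roughly the original peak level.
    wet = np.convolve(speech, rir)[: len(speech)]
    return wet / (np.max(np.abs(wet)) + 1e-8) * np.max(np.abs(speech))

def add_music(speech, music, snr_db):
    # Trim or repeat the music to match duration, then mix at the target SNR.
    reps = int(np.ceil(len(speech) / len(music)))
    music = np.tile(music, reps)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_music = np.mean(music ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_music * 10 ** (snr_db / 10.0)))
    return speech + scale * music

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)                       # stand-in utterance
rir = np.exp(-np.linspace(0, 8, 800)) * rng.standard_normal(800)
music = rng.standard_normal(48000)                        # stand-in MUSAN clip
reverbed = add_reverb(speech, rir)
noisy = add_music(speech, music, rng.uniform(5.0, 15.0))  # 5-15 dB SNR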
System Descriptions
We will introduce the sub-systems in this section.
GMM-UBM i-vector
We explored three i-vector [4] systems with different experimental setups.
I-vector 1&2
These two sub-systems are implemented using the Kaldi toolkit [5].
For the acoustic features, a 25 ms window with a 10 ms shift is applied to compute the 24-dimensional MFCCs and their first and second derivatives. The cepstral filter banks span the range of 20 to 7600 Hz. Short-time cepstral mean subtraction is applied over a 3-second sliding window, and energy-based VAD is then used to remove the non-speech frames. The features are used to train a 2048-component Gaussian Mixture Model-Universal Background Model (GMM-UBM) with full covariance matrices. After the UBM is trained, a 600-dimensional total variability matrix is trained using the above-mentioned training set.
We also used PLP features for another sub-system with the same setups. The only difference is that we use 39-dimensional PLPs to replace the MFCCs.
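A minimal Python sketch of the shared front-end steps (3-second sliding-window cepstral mean subtraction and a crude energy-based VAD); the window size and threshold are illustrative assumptions, and UBM/total-variability training is left to the toolkits:

import numpy as np

def sliding_cmn(feats, window=300):
    # Short-time cepstral mean subtraction over a ~3 s (300-frame) window.
    half = window // 2
    out = np.empty_like(feats)
    for t in range(len(feats)):
        lo, hi = max(0, t - half), min(len(feats), t + half)
        out[t] = feats[t] - feats[lo:hi].mean(axis=0)
    return out

def energy_mask(feats, energy_col=0):
    # Crude energy-based VAD: keep frames whose c0/energy clears a threshold.
    e = feats[:, energy_col]
    return e > e.mean() - 0.5 * e.std()

frames = np.random.randn(1000, 72)  # stand-in for 24 MFCCs + deltas + delta-deltas
feats = sliding_cmn(frames)[energy_mask(frames)]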
I-vector 3
This sub-system follows setups similar to those of i-vector 1, but here we extracted 13-dimensional PLPs with their first and second derivatives using the HTK toolkit. The i-vector extractor is implemented with our own platform.
Kaldi X-vector 4-6
Three x-vector systems are trained using the Kaldi toolkit. The main differences among them are the acoustic features and network parameters; the training recipe and data are all the same. For x-vector 4, we extract 30-dimensional MFCCs (including c0) from 25 ms frames every 10 ms using a 30-channel Mel-scale filter bank spanning the frequency range 20 Hz to 7600 Hz. A short-time cepstral mean subtraction is applied over a 3-second sliding window, and an energy-based VAD is used to drop the non-speech frames. The parameters of the network are listed in Table 1. The first five hidden layers are constructed with the time-delay architecture and operate at the frame level. Then a statistics pooling layer is employed to compute the mean and standard deviation over all frames for an input segment. The resulting segment-level representation is then fed into two fully connected layers to classify the speakers in the training set. After training, the speaker embedding is extracted from the 512-dimensional affine component of the first fully connected layer [6]. We also trained another two sub-systems with a similar network architecture: one used the MFCCs mentioned above, and the other used 20-dimensional PLPs. We find that there is some complementarity between Kaldi x-vectors with different setups.
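For orientation, a minimal Keras sketch of this kind of x-vector network (TDNN frame-level layers via dilated Conv1D, statistics pooling, and two segment-level layers); the layer sizes follow the common Kaldi recipe and are assumptions, not a reproduction of Table 1:

import tensorflow as tf

def stats_pool(x):
    # Statistics pooling: mean and standard deviation over all frames.
    mean = tf.reduce_mean(x, axis=1)
    std = tf.math.reduce_std(x, axis=1)
    return tf.concat([mean, std], axis=-1)

def xvector_model(feat_dim=30, n_speakers=7000):
    inp = tf.keras.Input(shape=(None, feat_dim))   # (frames, features)
    x = inp
    # Five TDNN-style frame-level layers as dilated 1-D convolutions.
    for filters, kernel, dilation in [(512, 5, 1), (512, 3, 2),
                                      (512, 3, 3), (512, 1, 1), (1500, 1, 1)]:
        x = tf.keras.layers.Conv1D(filters, kernel, dilation_rate=dilation,
                                   activation="relu")(x)
    x = tf.keras.layers.Lambda(stats_pool)(x)
    # The x-vector is the affine output of the first segment-level layer.
    emb = tf.keras.layers.Dense(512, name="xvector")(x)
    x = tf.keras.layers.Activation("relu")(emb)
    x = tf.keras.layers.Dense(512, activation="relu")(x)
    out = tf.keras.layers.Dense(n_speakers, activation="softmax")(x)
    return tf.keras.Model(inp, out)

model = xvector_model()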
Tensorflow X-vector
The systems introduced in this section are based on Tensorflow implementations of the x-vector speaker embedding. Some systems are trained using our own Tensorflow code and the others are implemented based on the open source code described in [7]. Only the training process is implemented in the Tensorflow toolkit; the overall steps are the same as the original recipe with some modifications, which are explained below. The features are extracted with the same setups as in 3.1.2. We also extracted the embedding-b vectors of several sub-systems in this section for system fusion [8].
Tensorflow x-vector using basic architecture = TFXvec 7-9
The TFXvec 7 is trained using the open source Tensorflow code [7]. Its network architecture differs slightly from Kaldi x-vector 4 in that 1536 output nodes are used for the fifth frame-level layer.
Meanwhile, we trained a sub-system (TFXvec 8) using the same setups but with the activation function replaced with PReLU and the context of the third layer replaced with {t-4, t, t+4}. The other sub-system, TFXvec 9, is trained with 256 and 2048 output nodes for the first four layers and the fifth layer, respectively, and the context of the fourth layer in TFXvec 9 is {t-4, t, t+4}.
Tensorflow x-vector with Gated CNN layer = TFXvecGcnn 10
The architecture of TFXvecGcnn 10 is similar to TFXvec 7 except that the first four layers are replaced with Gated CNN [9] layers.
Tensorflow x-vector with CNN, LRelu and Attention1 = TFXvecCLReluAtt1 11&12
The system TFXvecCLReluAtt1 11 adopts the attention mechanism. Here we used the same type of attention as that described in [7]. The size of the last hidden layer before pooling is doubled and equally split into two parts: the first part is used for calculating attention weights, while the attentive statistics pooling is calculated using the second part.
At the same time, TFXvecCLReluAtt1 11 uses CNN and LReLU layers instead of TDNN and ReLU.
For TFXvecCLReluAtt1 12, we trained a sub-system using the same type of attention with 256 output nodes for the first two frame-level layers. The kernel sizes of the CNN layers are 7, 5, 3, 1, and 1, respectively.
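A minimal TensorFlow sketch of the split attentive statistics pooling described above; the per-frame scoring here is a plain channel sum standing in for the learned scoring function of [7]:

import tensorflow as tf

def split_attentive_stats_pool(h):
    # h: (batch, frames, 2*d). The first half of the channels scores each
    # frame; the second half is pooled with attention-weighted mean and std.
    att_in, values = tf.split(h, 2, axis=-1)
    scores = tf.reduce_sum(att_in, axis=-1, keepdims=True)  # (batch, frames, 1)
    w = tf.nn.softmax(scores, axis=1)                       # weights over frames
    mean = tf.reduce_sum(w * values, axis=1)
    var = tf.reduce_sum(w * tf.square(values), axis=1) - tf.square(mean)
    std = tf.sqrt(tf.maximum(var, 1e-8))
    return tf.concat([mean, std], axis=-1)

pooled = split_attentive_stats_pool(tf.random.normal([8, 200, 1024]))  # (8, 1024)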
Tensorflow x-vector with Attention2 = TFXvecAtt2 13
In TFXvecAtt2 13, we used the same type of single-head attention mechanism described in [10] for TFXvec 7.
Tensorflow x-vector with Attention3 = TFXvecAtt3 14
Unlike TFXvecAtt2, this system directly averages the output of the last frame-level layer to calculate attention weights through a softmax function, instead of training additional weighting parameters for calculating the attention weights.
Tensorflow x-vector with CNN and He initialization = TFXvecCHeInit 15
The differences between this system and TFXvec 7 are that CNN is used instead of TDNN and that the weights are initialized with He initialization [11].
Tensorflow x-vector with auto encoder = TFXvecAE 16-20
In the TFXvecAE systems, an additional task is added in which the network is forced to reconstruct the high-order statistics of the input features [12]. The high-order statistics pooling is computed by concatenating the mean, standard deviation, skewness, and kurtosis of the input features, for a total of 30 × 4 = 120 dimensions. The network is trained through multi-task learning, where the original cross-entropy (CE) loss and mean squared error (MSE) loss are given weights of 0.7 and 0.3, respectively.
We trained the other sub-systems, TFXvecAE 17-20, with different task weights, different shared layers, and input without data augmentation.
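A minimal NumPy sketch of the 120-dimensional high-order statistics target and the weighted multi-task loss; shapes and values are illustrative:

import numpy as np

def high_order_stats(feats):
    # feats: (frames, 30) -> 30*4 = 120-dim [mean, std, skewness, kurtosis].
    mu = feats.mean(axis=0)
    sd = feats.std(axis=0) + 1e-8
    z = (feats - mu) / sd
    return np.concatenate([mu, sd, (z ** 3).mean(axis=0), (z ** 4).mean(axis=0)])

def multitask_loss(ce_loss, mse_loss, w_ce=0.7, w_mse=0.3):
    # Weighted sum of speaker cross-entropy and reconstruction MSE.
    return w_ce * ce_loss + w_mse * mse_loss

target = high_order_stats(np.random.randn(300, 30))  # reconstruction target
loss = multitask_loss(ce_loss=1.2, mse_loss=0.4)     # illustrative values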
Tensorflow x-vector with Gated CNN and Attention = TFXvecGcnnAtt 21&22
These two sub-systems are trained using our Tensorflow platform. In TFXvecGcnnAtt 21, the first four layers are constructed with 256-dimensional dilated Gated CNNs and the last frame-level layer is modeled by a 1500-dimensional CNN. The dilation rates are {1, 2, 4, 1, 1} and the kernel sizes are {5, 3, 3, 1, 1}, respectively. The single-head attention statistics pooling [10] is used in the pooling layer. The rest of the structure is the same as that of TFXvec 7.
We also implemented a gate-attention statistics pooling in the second TFXvecGcnnAtt sub-system where the output of the last frame-level layer is modulated by an output gate and the output gate is further used to calculate the attention weights for the attentive statistics pooling layer [13].
PLDA backend
Our PLDA backend is implemented in the Kaldi toolkit [6]. The extracted speaker embeddings (i-vectors or x-vectors) are centered and projected by LDA. The LDA dimension was tuned on the development set. The PLDA model is trained using the longest 200k recordings from the training set, and length normalization is applied before PLDA scoring.
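A minimal scikit-learn sketch of this backend preprocessing (centering, LDA projection, length normalization); the LDA dimension and data are illustrative, and PLDA training/scoring itself is left to Kaldi:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def prepare_embeddings(train_x, train_y, x, lda_dim=150):
    # Center, project with LDA trained on labeled embeddings, length-normalize.
    mean = train_x.mean(axis=0)
    lda = LinearDiscriminantAnalysis(n_components=lda_dim)
    lda.fit(train_x - mean, train_y)
    z = lda.transform(x - mean)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

train_x = np.random.randn(2000, 512)       # stand-in x-vectors
train_y = np.random.randint(0, 200, 2000)  # 200 stand-in speakers
eval_x = np.random.randn(10, 512)
emb = prepare_embeddings(train_x, train_y, eval_x)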
Unsupervised Clustering Score Normalization
We proposed an unsupervised clustering score normalization algorithm for a contrastive system. The motivation of the proposed algorithm is to use the scores from the most competitive impostors to estimate the normalization parameters. First, the K-means clustering algorithm is applied to a large pool of normalization scores. The scores belonging to the clusters with small mean values are discarded and not used further. Then the expectation maximization (EM) algorithm is applied and a Gaussian mixture model (GMM) is used to fit the distribution of the remaining scores. The parameters of the Gaussian component with the largest mean value are used for normalization.
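A minimal Python sketch of this normalization; the number of clusters, the number of clusters kept, and the GMM size are illustrative assumptions rather than the tuned values:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def cluster_score_norm(trial_score, cohort_scores, k=4, keep=2, n_gauss=2):
    s = np.asarray(cohort_scores).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(s)
    order = np.argsort(km.cluster_centers_.ravel())[::-1]  # high-mean clusters first
    mask = np.isin(km.labels_, order[:keep])               # drop low-mean clusters
    gmm = GaussianMixture(n_components=n_gauss, random_state=0).fit(s[mask])
    top = np.argmax(gmm.means_.ravel())                    # most competitive impostors
    mu = gmm.means_.ravel()[top]
    sigma = np.sqrt(gmm.covariances_.ravel()[top])
    return (trial_score - mu) / sigma

cohort = np.random.randn(5000) * 2.0   # stand-in pooled normalization scores
z = cluster_score_norm(3.1, cohort)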
System Fusion
We finally submit three systems. For system 1, our primary system, the scores of most of the sub-systems are fused with equal weights. For system 2, the scores of each sub-system are normalized using the above-mentioned unsupervised clustering score normalization; the scoring weight of each sub-system is tuned on the development set, with the min DCF used as the main criterion. For system 3, the procedure is almost the same as for system 2 except that the score normalization is not applied.
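A minimal Python sketch of score-level fusion and the (normalized) min DCF criterion used to tune the weights; scores, weights, and the target prior are illustrative:

import numpy as np

def fuse(score_mat, weights):
    # score_mat: (n_systems, n_trials) -> weighted score-level fusion.
    return np.average(score_mat, axis=0, weights=weights)

def min_dcf(tgt, non, p_tgt=0.01):
    # Minimum normalized detection cost over all thresholds (C_miss = C_fa = 1).
    thr = np.sort(np.concatenate([tgt, non]))
    p_miss = np.array([(tgt < t).mean() for t in thr])
    p_fa = np.array([(non >= t).mean() for t in thr])
    dcf = p_tgt * p_miss + (1 - p_tgt) * p_fa
    return dcf.min() / min(p_tgt, 1 - p_tgt)

rng = np.random.default_rng(0)
scores = rng.standard_normal((3, 1000))  # three stand-in sub-systems
fused = fuse(scores, np.full(3, 1 / 3))  # equal-weight primary fusion
tgt = rng.normal(2.0, 1.0, 500)          # illustrative target-trial scores
non = rng.normal(0.0, 1.0, 5000)         # illustrative nontarget-trial scores
print("min DCF ~ %.3f" % min_dcf(tgt, non))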
Experiment Results
Results of the final fusion systems for the development set are summarized in Table 2. | 2019-03-29T10:05:13.000Z | 2019-03-29T00:00:00.000 | {
"year": 2019,
"sha1": "5a7cd75c7b322ed9016e91b6a00816f6adc30dee",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5a7cd75c7b322ed9016e91b6a00816f6adc30dee",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
14367648 | pes2o/s2orc | v3-fos-license | Barriers and facilitators of hepatitis C screening among people who inject drugs: a multi-city, mixed-methods study
Background People who inject drugs (PWID) are at high risk of contracting and transmitting hepatitis C virus (HCV). While accurate screening tests and effective treatment are increasingly available, prior research indicates that many PWID are unaware of their HCV status. Methods We examined characteristics associated with HCV screening among 553 PWID utilizing a free, multi-site syringe exchange program (SEP) in 7 cities throughout Wisconsin. All participants completed an 88-item, computerized survey assessing past experiences with HCV testing, HCV transmission risk behaviors, and drug use patterns. A subset of 362 clients responded to a series of open-ended questions eliciting their perceptions of barriers and facilitators to screening for HCV. Transcripts of these responses were analyzed qualitatively using thematic analysis. Results Most respondents (88%) reported receiving a HCV test in the past, and most of these (74%) were tested during the preceding 12 months. Despite the availability of free HCV screening at the SEP, fewer than 20% of respondents had ever received a test at a syringe exchange site. Clients were more likely to have received HCV screening in the past year if they had a primary care provider, had higher educational attainment, lived in a large metropolitan area, or had a prior history of opioid overdose. Themes identified through qualitative analysis suggested important roles for access to medical care and prevention services and for nonjudgmental providers. Conclusions Our results suggest that drug-injecting individuals who reside in non-urban settings, who have poor access to primary care, or who have less education may encounter significant barriers to routine HCV screening. Expanded access to primary health care and prevention services, especially in non-urban areas, could address an unmet need for individuals at high risk for HCV.
Background
Infection with hepatitis C virus (HCV) is the most common cause of end-stage liver disease and the most frequent reason for liver transplantation in the United States [1]. Between 3 and 4 million Americans are chronically infected, many of whom will develop cirrhosis and liver cancer in the coming decades. Because of non-sterile injecting practices, HCV is highly concentrated among people who inject drugs (PWID) [2,3]. The HCV prevalence in a study of young PWID in four large US cities was 35%, ranging from 14% in Chicago to 51% in New York City [4]. Among some cohorts of older PWID, HCV prevalence reportedly exceeds 90% [5,6]. Despite this high prevalence, prior research has shown that many PWID, particularly those younger than 30, are unaware of their status [7,8].
Health care costs associated with HCV infection are substantial and forecasted to rise dramatically over the next decade as "baby boomers," the birth cohort with the highest HCV prevalence, age into the 7th and 8th decades of life [9]. HCV-infected persons have been estimated to incur twice the annual health care expenses and to require hospitalization at three times the rate of HCV-uninfected individuals, after controlling for age and sex [10].
In May 2011, the U.S. Food and Drug Administration (FDA) approved the first two HCV protease inhibitors for the treatment of chronic HCV infection in combination with standard interferon-based therapy [11]. The availability of direct-acting antiviral drugs represents a new era in therapeutics in which most patients with chronic HCV can be cured using agents taken for a shorter duration and with a more favorable side effect profile than prior regimens. The prospect that these advances will translate to population-level declines in HCV disease is currently limited by the fact that 50% to 75% of all HCV-infected individuals in the U.S. are unaware of their serostatus [1]. National initiatives to increase case finding have been proposed, including recommendations for routine screening in health care settings [12]. Many PWID and other high-risk individuals lack insurance, however, and may be systematically underserved by clinic-based approaches [2]. Therefore, community-based approaches are also needed to ensure PWID receive HCV screening.
As PWID are a difficult-to-reach population, little is known about the characteristics of those who are and are not screened for HCV. Understanding facilitators and barriers to HCV screening that are encountered by PWID may help guide the construction of interventions aimed at reducing the burden of unrecognized HCV infection. The objectives of this study were to (1) identify individual characteristics associated with HCV screening among PWID who utilized a free needle-exchange program and (2) identify perceived barriers and facilitators of HCV screening among a convenience sample of PWID in the Midwestern United States.
Study participants
We surveyed PWID utilizing a free, multi-site syringe exchange program (SEP) operating in Southern Wisconsin between June and August 2012. The Lifepoint Needle Exchange operates through office-based locations in the cities of Madison and Milwaukee, and via mobile van units that serve the Milwaukee suburbs, rural communities surrounding Madison, and the cities of Kenosha, Waukesha, Janesville and Beloit. Consecutive individuals who spoke and read English, were 18 years or older, and reported a history of injecting drugs were invited to participate. Participants provided verbal informed consent and were paid $10 in cash as compensation for completing the survey. The study protocol was approved by the Institutional Review Board at the University of Wisconsin School of Medicine and Public Health.
Survey administration
We developed an 88-item questionnaire designed to elicit previous experiences with HCV testing. Survey items assessed demographic characteristics, drug use behaviors (e.g., frequency of injection, sharing needles or equipment, and overdose history), and access to medical care (e.g., emergency room utilization, having a primary care provider). Participants were queried about the frequency of previous HCV testing, the results of past HCV tests, and the locations they had received testing. Multiple-choice and short-answer question items were self-administered by the client, who read the survey and recorded responses using a tablet computer. This allowed respondents to provide information dealing with sensitive subjects such as illicit drug use in a private manner, decreasing the likelihood of socially desirable responding.
A second phase of the assessment was a brief interview consisting of several open-ended questions that evaluated participants' previous experiences with HCV testing. Development of the brief interview items was guided by the Health Belief Model [13][14][15] and focused on barriers, facilitators and previous experiences with seeking and receiving testing for HCV. The two question items relevant to the current analysis were (1) "What makes it harder for you to get tested for hepatitis C?" and (2) "What makes it easier for you to get tested for hepatitis C?" Responses were hand-transcribed by the interviewer in real time on the tablet computer. Interviewers were instructed to record participants' responses verbatim. The text of each response was linked to an anonymous identification number assigned to the participant's survey responses and saved for subsequent thematic analysis, as described below.
Quantitative data analysis
Descriptive statistics were used to characterize the study sample with respect to the main variable of interest, which was self-report of receiving HCV screening during the previous 12 months. After excluding respondents who reported already knowing they were HCV-positive, we categorized the study sample into two groups: those who reported having received an HCV test in the past year and those who had not. The latter group includes both those who had never been tested and those whose last HCV test was more than one year before the study, because the testing behavior of both subgroups is inconsistent with HCV testing recommendations.
We compared demographic and behavioral characteristics of the two subsets of respondents using t-tests for continuous variables and chi-squared tests for categorical variables. We used simple logistic regression to generate odds ratios and 95% confidence intervals representing bivariate associations between past-year HCV testing and individual characteristics we hypothesized would be important determinants of testing. An alpha level of 0.05 was assumed to indicate statistical significance. To identify factors independently associated with past-year HCV testing, we used multiple logistic regression models to estimate adjusted odds ratios. Variables with significant bivariate associations and those considered a priori to be likely predictors of HCV testing were included in an initial multivariate model. A final model was determined by sequentially eliminating covariates with non-significant P-values. Statistical analyses were conducted using Stata Version 11 (StataCorp, College Station, TX).
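For illustration, a minimal Python sketch of this kind of adjusted odds-ratio analysis using statsmodels; the variable names and synthetic data are hypothetical stand-ins for the survey fields, not the study data:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 520
df = pd.DataFrame({
    "has_pcp": rng.integers(0, 2, n),
    "educ_beyond_hs": rng.integers(0, 2, n),
    "milwaukee": rng.integers(0, 2, n),
    "ever_overdosed": rng.integers(0, 2, n),
})
# Synthetic outcome with plausible effect sizes, for illustration only.
lin = (0.7 * df.has_pcp + 0.6 * df.educ_beyond_hs
       + 0.8 * df.milwaukee + 0.6 * df.ever_overdosed - 0.3)
df["tested_past_year"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.logit("tested_past_year ~ has_pcp + educ_beyond_hs + "
                  "milwaukee + ever_overdosed", data=df).fit(disp=0)
table = pd.concat([np.exp(model.params).rename("OR"),
                   np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
                  axis=1)
print(table)  # adjusted ORs with 95% confidence intervals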
Qualitative data analysis
Two investigators (JB and MB) conducted the qualitative analysis using an inductive thematic approach [16,17]. First, the investigators independently read all interview transcripts for main themes and subcategories. They then met to develop consensus on a coding scheme used for further analysis. Both investigators independently coded all transcripts line-by-line using the coding scheme, and discrepancies were resolved by discussion to reach consensus. Interrater reliability was 81%. To explore whether barriers and facilitators are perceived differently by respondents tested for HCV in the past year compared to those who were not, we compared the frequency of specific codes among the two subsets of respondents using chi-squared tests.
Quantitative results
Over the 8-week study period, 862 consecutive syringe exchange clients were invited to participate in the study and 553 eligible PWID (64%) agreed to complete the survey. For the present analysis, we excluded 33 respondents who reported knowing they were HCV-infected and received their diagnosis more than 1 year ago because they would have no reason to be tested in the past 12 months, yielding a final study sample of 520. Most respondents resided in the City of Milwaukee (34.9%) or the Milwaukee suburbs (19.2%). A smaller proportion was recruited from the Madison-based office (19.5%), which serves the City of Madison and surrounding, predominantly rural communities.
Characteristics of the study participants are shown in Table 1, stratified by whether they reported testing in the past year. The median age was 28; most participants were male (69%) and white (83%). The neighborhood of residence was described as "suburban" by 42.7%, "urban" by 40%, and "rural" by 15.3% of respondents. Overall, 88% of PWID indicated they had ever received a HCV test, and 73.8% had done so in the past year. Respondents who reported HCV testing in the past year were asked to specify the location where they most recently received a HCV test. Of 329 PWID tested in the past year, 64 (19.5%) received their test at the SEP. Nearly one third (32.5%) received testing at a primary care medical clinic, and 34 (10.3%) received testing in a correctional facility. The remaining respondents reported they received testing at other health care and public health venues. Table 2 shows the results of univariate and multivariate logistic regression models measuring the association of past-year HCV testing and selected participant characteristics. Those who reported recent testing were more likely to live in urban or suburban areas, to have health insurance, and to have received some education beyond high school. There were no differences in past-year testing according to age, gender, or race. In the final, adjusted model, having a primary care provider (PCP) was independently associated with past-year HCV testing (adjusted OR 2.0, 95% CI 1.3-3.0), as was higher educational attainment (adjusted OR 1.9, 95% CI 1.4-2.5), residence in Milwaukee (adjusted OR 2.3, 95% CI 1.5-3.5), and lifetime occurrence of opioid overdose (adjusted OR 1.8, 95% CI 1.1-2.8). Moreover, among those who had a PCP, those attending a medical appointment with a PCP during the six months before the study had nearly three times greater odds of having been tested for HCV (univariate OR 2.9, 95% CI 1.3-6.4).
Qualitative results
Of the 553 individuals who agreed to complete the survey, 362 (65% of survey respondents) also responded to the brief interview questions. Of the 31 respondents who completed the brief interview and reported a previous positive test for HCV, 13 had been aware of their positive antibody status for more than 1 year and were excluded from the past-year testing analysis. Barriers and facilitators to past-year testing derived from thematic analysis of these responses are shown in Table 3 and Table 4, respectively. There were few differences in the type and frequency of barriers reported by PWID who were tested in the past year compared to those who were not. The frequency of codes representing facilitators of HCV testing was also similar among respondents in the two groups. Commonly reported barriers and facilitators, emphasized with illustrative quotations, are described below.
Based on responses to the interview questions, we observed that many PWID described an internal motivation regarding their own and/or another person's health that influenced their decision to get tested for HCV. One person who had been tested in the past year stated: "Knowing [my HCV status] is something that I need to do to stay healthy. Knowing that I'll feel better about myself if the results are good makes it easier to get tested."
Similarly, lack of awareness of one's HCV status was described as a source of anxiety for some respondents. One who had not been tested in the past year bluntly stated, "Not knowing sucks. It doesn't feel good when you don't know if you have it or not." Many participants described a sense of altruism regarding potential health consequences their drug use may have for significant others and community-at-large, and cited this as motivation to seek HCV testing. One participant who had not recently been tested stated, "Knowing that there's an epidemic and that it can be passed on [makes it easier to get tested]." Another participant who had been tested said, "if I knew I was positive, then I would take caution to not infect my family." While such comments may not reflect accurate knowledge of how HCV is transmitted, they demonstrate a role that concern for others may play in the decision to be tested for HCV.
Respondents commonly reported that fear was an important psychological barrier to HCV testing. Simply being "not ready" was a common response and numerous PWID indicated they were "scared of the result." One recently tested participant remarked, "I worry about Hep C more than HIV. I'm afraid of what the result might be." One individual not tested in the past year admitted, "I'm in denial. I don't want to hear that I have it".
While some were fearful of their result, others perceived their risk of contracting HCV as low despite injecting drugs and, therefore, considered HCV testing unnecessary. Low risk assessments were based on 1) never sharing needles, 2) lack of symptoms, and 3) prior negative test result.
Health care factors played an important role in the decision of many PWID to undergo HCV testing. Several respondents pointed to the accessibility of nonjudgmental care providers (i.e., mobile testing, SEPs, and PCPs) as an important facilitator of HCV screening. Those who found HCV testing "easy" described having regular contact and a positive rapport with their primary care provider. One tested individual explicitly stated this as a facilitator to testing: "I'm extremely honest with and have a very good relationship with my doctor." Some appreciated having screening offered to them as part of routine health maintenance rather than having to take the initiative to ask for testing. One tested individual answered that "when it's [HCV testing] offered to me on a regular basis [it makes it easier to get tested]." A nonjudgmental and confidential atmosphere was reported to be a facilitator of HCV testing in both traditional medical clinics and community-based settings. Participants identified community-based organizations such as the SEP, mobile testing, and public health departments as organizations that facilitated HCV screening. One individual tested in the last 3 months stated, "I can come here [SEP] and the staff does it for free and it's confidential." Similarly, "having a safe environment where people aren't going to 'notice you', such as here [SEP], where you know that other people are here for the same reason" provided participants with comfort and eased their concerns about testing.
Most participants who discussed their experience at a SEP felt that the program provided a safe environment, which fostered communication and improved the feasibility of testing. Few participants reported negative experiences in health care settings as barriers to receiving HCV testing. Stigma associated with both injection drug use and HCV infection was a barrier among these participants. Those who identified stigma as a barrier used words such as "shame," "embarrassment," and "taboo" to describe their experiences. Negative feelings such as embarrassment or a feeling that one is being judged were perceived obstacles to seeking HCV testing. One participant who had never been tested stated: "People know that most of the time you get tested for Hep C because you're an IV user. People judge you no matter what your results are. That's the worst feeling ever."
Participants identified other tangible perceived barriers and facilitators to testing. Independent of past-year testing, lack of transportation, time constraints, lack of knowledge surrounding testing, and cost of the test were identified barriers. Conversely, access to transportation, awareness of testing locations, and availability of free testing were facilitating factors for both groups.
Discussion
In this cross-sectional survey of PWID in Wisconsin, we found that most respondents had been previously tested for HCV. Those who were tested for HCV in the past year were more likely to have a PCP, to have completed some education beyond high school, and to reside in the city of Milwaukee. Qualitative analysis of interview responses
reinforced the important roles of HCV test availability and health care access in general in facilitating regular screening for PWID. The findings from our study may provide insight into individual- and structural-level barriers and facilitators to routine testing for high-risk individuals, and inform future efforts to promote HCV testing among PWID. Compared with respondents from other Wisconsin cities, residents of Milwaukee had more than twice the odds of receiving HCV testing in the year prior to the study. Numerous factors may account for this disparity: Milwaukee is the largest and most densely populated city in Wisconsin and has a higher burden of communicable diseases such as HIV and sexually transmitted infections than most other areas of the state. Appropriately, prevention services such as the Lifepoint Needle Exchange are more numerous and accessible to Milwaukee residents, and PWID in this area may therefore have greater knowledge of available resources. Additionally, individuals living in cities with a higher burden of drug use and HCV may be more likely to encounter peers who have utilized prevention services in the past, to have medical providers who have greater familiarity with the needs of drug-using patients, and to have easier access to primary care or urgent care centers where testing can be performed. There may be unmet needs for community-based services and a paucity of health care providers with experience caring for PWID in less densely populated areas.
We found quantitative evidence that access to health care is an important determinant of regular HCV screening for PWID. Respondents who reported having a primary care provider had twice the odds of receiving a test for HCV in the past year as those without a PCP. Though there was no difference in past-year testing, both groups commonly identified access to health care professionals as a facilitator of testing. Previous research has noted that continuity of care with a provider has fostered regular screening and, in some cases, adherence to treatment [18]. Our results suggest that PWID are more apt to receive HCV screening when it is offered as a part of routine care, rather than when it is only available "on demand," thereby requiring individuals to take the initiative for screening themselves. This is consistent with a recent qualitative study indicating that provider-initiated HCV screening is substantially more successful than self-initiated screening among drug users in New York and San Francisco [19]. The previous study, involving focus groups of drug users recruited in both clinical and nonclinical settings, found that while provider-initiated HCV screening was more successful, there was a perceived lack of settings for self-initiated HCV testing yet an eagerness to have access to voluntary testing. This differed from testing for HIV, which individuals perceived as much more easily accessible and were more likely to seek on their own initiative. While we cannot determine from our data whether HCV testing in Wisconsin is more commonly provider-initiated or patient-initiated, our findings highlight a potentially important role that PCPs have in screening for HCV. In particular, PCPs could initiate the discussion by talking about the benefits of testing, providing information regarding voluntary testing locations, and being explicit about the lack of judgment on the part of the practitioner.
Study participants reported a range of beliefs related to HCV testing, many of which are consistent with previous work on HIV and HCV testing [20][21][22][23]. Fear of a positive test result played a role in the decision of many respondents who were resistant to testing. Low perceived risk also deterred past-year testing in some cases. Medical providers and SEP staff can be instrumental in supporting testing among participants in both of these groups. Staff can allay fears about a positive result by citing new HCV treatments as well as the support groups available to those found to be HCV positive. Motivational interviewing techniques providing feedback regarding drug-injecting behaviors and actual HCV risk may be useful in helping those with low perceived risk get tested [24].

[Table notes: All values are n (%) unless otherwise noted. P values are from chi-squared tests of independence between each selected facilitator and receipt of an HCV test in the preceding 12 months; some respondents reported more than one facilitator.]
Stigma was a theme in respondents' discussion of barriers to HCV testing, which is consistent with previous research [23]. Participants in our study did not frequently express concerns about stigma from medical professionals or needle-exchange staff, as has been reported previously [25][26][27]. In fact, nearly one-third of those tested in the past year in our study had been tested in primary care clinics. Based on our data, it does not appear that healthcare settings are a major impediment to testing. Rather, in-depth analysis of "stigma" statements revealed that these respondents perceived a more generalized, societal stigma of HCV as a "junkie disease" [25]. This perception highlights an opportunity for health care providers and community-based organizations to help foster safe and accepting environments for testing. This may include assurances of confidentiality, education campaigns regarding other risk factors for HCV, and improved provider-patient communication.
There are several limitations to our study. Despite having a large sample size and a reasonably high response rate of 64%, the respondents to our survey were a convenience sample, which may not be fully representative of PWID in the communities we targeted. Our study was performed in a single Midwestern state with a mix of rural, suburban, and urban participants. The findings, therefore, may not be generalizable to drug-using communities in other regions. As all participants were clients at a SEP, our study sample may exclude a subset of PWID who do not use prevention services and may have a higher risk of HCV. While we attempted to minimize bias due to socially-desirable responding by having participants privately self-administer most sections of the survey, the responses to the in-person interview questions may have been influenced by participants' knowledge of the study's main goal, which was to collect information useful for promoting HCV testing among PWID who have not been tested previously.
In theory, early detection of HCV can facilitate referrals to treatment and may reduce the future burden of morbidity from liver disease and even decrease HCV transmission [28]. In the past, treatment for HCV has not been widely available or affordable to PWID, many of whom lack health insurance and generally have poor access to health services. Linking PWID who test positive for HCV to care and evaluating for treatment may, therefore, be difficult. However, currently evolving health insurance reforms could eventually make HCV treatment available to a growing number of patients. In this setting, strategies to improve detection of asymptomatic HCV infection as part of routine primary care could yield substantial public health benefit. Moreover, some evidence suggests that detection of asymptomatic infection and subsequent education may lead to safer injection practices and reduce frequency of injecting among high-risk PWID, thereby promoting HCV prevention even among those who do not access treatment [29,30].
Conclusions
Our study suggests that access to medical and preventive health services that are responsive to the needs and vulnerabilities of people who inject drugs is an important determinant of HCV testing among PWID. Increasing the proportion of PWID who receive screening for HCV in the future will require expanding access to programs that provide voluntary counseling and testing, and promoting recognition among medical providers that HCV screening is an important part of routine preventive care. Given that a plurality of PWID previously tested for HCV in our study had been tested in clinical settings, increasing access to primary care is an important strategy for detecting previously undiagnosed cases of HCV. For PWID who are not routinely engaged in medical care, SEPs may also be an underutilized resource for HCV screening.
Abbreviations PWID: People who inject drugs; HCV: Hepatitis C virus; PCP: Primary care provider; SEP: Syringe exchange program. | 2016-03-14T22:51:50.573Z | 2014-01-14T00:00:00.000 | {
"year": 2014,
"sha1": "b134a0715ec8cf141bd1a6588c3041948f9b1e78",
"oa_license": "CCBY",
"oa_url": "https://harmreductionjournal.biomedcentral.com/track/pdf/10.1186/1477-7517-11-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d65935e2bb5f3064b34dc69888672748ed6e55ab",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49902113 | pes2o/s2orc | v3-fos-license | Aspirin Disrupts the Crosstalk of Angiogenic and Inflammatory Cytokines between 4T1 Breast Cancer Cells and Macrophages
The tumor microenvironment is rich in multiple cell types that influence tumor development. Macrophages infiltrate tumors, where they are the most abundant immune cell population and secrete a number of cytokines. Aspirin acts as a chemopreventive agent against cancer development. This study investigated whether aspirin regulates crosstalk between breast cancer cells and macrophages. To study these interactions in a tumor microenvironment, conditioned medium was employed, with 4T1 breast cancer cells cultured in RAW 264.7 cell-conditioned medium (RAW-CM), and a coculture model of both cell types was also used. When 4T1 cells were cultured in the RAW-CM, there were increases in cell viability and in secretion of the cytokines VEGF, PAI-1, TNF-α, and IL-6. Treatment with aspirin inhibited 4T1 cell growth and migration and MCP-1, PAI-1, and IL-6 production. In the coculture of both cells, aspirin inhibited secretion of MCP-1, IL-6, and TGF-β. Furthermore, aspirin significantly decreased expression of the M2 macrophage marker CD206, but increased that of the M1 marker CD11c. In summary, aspirin treatment inhibited the crosstalk of 4T1 and RAW 264.7 cells through regulation of angiogenic and inflammatory mediator production and influenced the M1/M2 macrophage subtype. This highlights that aspirin suppresses the tumor-favorable microenvironment and could be a promising agent against triple-negative breast cancer.
Introduction
Breast cancer is the most frequently occurring cancer in women worldwide, especially in developed countries, and its incidence is increasing globally. In 2015, the World Health Organization performed a statistical analysis revealing that approximately 570,000 women die from breast cancer annually, indicating that breast cancer accounts for up to 15% of all cancer deaths in women [1]. Breast cancer has a heterogeneous pathology comprised of multiple components, including tumor cells and neighboring stromal cells, such as adipocytes, fibroblasts, macrophages, and other immune cells, that play fundamental roles in normal mammary development as well as breast carcinogenesis [2,3]. Moreover, tumor microenvironment changes, such as changes in the extracellular matrix, soluble factors, and signaling molecules, stimulate carcinogenesis and resistance to the immune response [2]. These diverse microenvironments play critical roles in tumor progression and metastasis.
The complicated interactions between tumors and the immune system have attracted the attention of scientists over the past decade. Briefly, the dynamic interactions between innate and adaptive immunity play an important role in tumor progression and inhibition [4]. Mononuclear phagocytes are innate immune cells that protect individuals from harmful pathogens and repair injured tissues. However, in the tumor microenvironment, malignancies recruit circulating monocytes by producing tumor-derived chemotactic factors such as macrophage chemoattractant protein-1 (MCP-1), vascular endothelial growth factor (VEGF), and macrophage colony-stimulating factor (MCSF) and then induce monocytes to differentiate into tumor-associated macrophages (TAMs) [5]. In the tumor microenvironment, multiple mediators are secreted that contribute to cell proliferation, migration, angiogenesis, and remodeling of endothelial cells [4], providing favorable conditions for tumor growth and metastasis and suppressing adaptive immunity [6].
Macrophages that produce mediators are crucial initiators of chronic inflammation in the tumor microenvironment. Macrophage heterogeneity includes categorization into M1 and M2 macrophages based on two distinct phenotypes that are a result of macrophage polarization and the development of different characteristics [7]. M1 macrophages produce inflammatory cytokines that evoke the adaptive immune response. Conversely, M2 macrophages promote angiogenesis and wound healing and suppress the adaptive immune responses [7]. Interestingly, TAMs resemble M2 macrophages and have protumor properties in tumor microenvironments. Several studies on murine tumor models have shown that TAMs promote tumors [8] and produce cytokines and chemokines that sustain and amplify the inflammatory state [9]. Therefore, agents with the potential to adjust this microenvironment have been proposed as effective future cancer therapies [3,8].
Aspirin, acetylsalicylic acid, is a nonsteroidal anti-inflammatory drug commonly used to reduce inflammation and prevent heart attack and stroke [10,11]. However, over the past two decades, studies have shown that regular use of aspirin may have an additional promising role against cancers [12]. This chemoprevention by aspirin has been reported for inflammation-associated cancers such as colorectal, breast, lung, prostate, stomach, and ovarian cancers [10]. Moreover, accumulating epidemiological evidence has revealed that aspirin has effects when used against breast cancer [13,14]. Although aspirin is a promising chemopreventive agent, gastrointestinal side effects and optimal doses are important factors to consider for clinical applications. Therefore, alternatives using aspirin, such as lower doses or combinations with treatments, have been continually proposed.
Currently, little is known about the role of aspirin in immune regulation of tumors, especially in terms of the tumor microenvironment. The main goal of this study was to better understand breast cancer chemoprevention by aspirin, which may regulate immune responses in both malignant cells and macrophages in the tumor microenvironment, as well as interfere with crosstalk between these cells. These insights might provide potential strategies for ameliorating triple-negative breast cancer, such as that modeled by 4T1 cells, a highly aggressive type of breast cancer with resistance to treatments [15].

Materials and Methods

… [16]. Supernatants were collected, and cell debris was removed by centrifugation prior to use in experiments.
2.3. Cell Viability Assay. The 4T1 cells were seeded into 96-well plates at a density of 2 × 10^3 cells/well (Becton Dickinson, Franklin Lakes, NJ, USA) and were concurrently treated with 0.5, 1, or 2 mM of aspirin in media containing 20, 50, or 75% unstimulated or LPS-stimulated RAW-CM and 1% FBS/DMEM for 24, 48, and 72 h. After treatment, the cells were incubated in a 0.5 mg/mL 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Sigma) solution for 3 h. Supernatants were aspirated, DMSO was added to solubilize the formazan crystals, and absorbance was measured at 540 nm using a spectrophotometric microplate reader (BioTek, Winooski, VT, USA). The control was considered to be 100%, and the cell viability of each sample is presented as a percentage of the control based on the formula viability (%) = (A_sample − A_blank)/(A_control − A_blank) × 100, where A_sample, A_blank, and A_control refer to the absorbance of the sample, blank, and control at 540 nm, respectively.
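The viability calculation reduces to simple blank-corrected arithmetic; a minimal Python sketch is given below with illustrative absorbance values, not data from this study.

```python
import numpy as np

def percent_viability(a_sample, a_blank, a_control):
    """Percent viability from 540 nm absorbances, relative to the untreated control."""
    return (np.asarray(a_sample, float) - a_blank) / (a_control - a_blank) * 100.0

# Illustrative readings for three treated wells (arbitrary absorbance units)
print(percent_viability([0.62, 0.48, 0.35], a_blank=0.05, a_control=0.80))
```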
2.4. Cell Migration Assay. Migration of 4T1 breast cancer cells was measured using wound-healing assays. To determine the optimal concentration of RAW-CM for 4T1 cell migration, 4T1 cells were cultured in media containing 20, 50, or 75% RAW-CM and 3% FBS/DMEM for 24 h. Cells were seeded in 24-well plates and incubated until 80% confluence was reached. This monolayer of cells was gently scratched using a 20 μL pipette tip, and the media was replaced with 0.5, 1, or 2 mM aspirin in fresh medium, 50% unstimulated RAW-CM, or 50% LPS-stimulated RAW-CM for 24 h. Cells were viewed and imaged through a microscope equipped with a camera (WS500, Whited, Taoyuan, Taiwan) at 100x magnification. The healed distance in each image was then measured using the microscale feature of the imaging software (Whited).
2.5. Cytokine Production as Measured by ELISA. The 4T1 cells, 2 × 10^4 cells/well, were seeded in a 48-well plate overnight and then treated with 2 mM aspirin in complete medium or 50% RAW-CM for 72 h. Culture supernatants were collected, and levels of cytokines, including MCP-1 (BioLegend, San Diego, CA, USA), VEGF (Peprotech, Rocky Hill, NJ, USA), PAI-1, TNF-α, IL-6, and TGF-β (R&D, Minneapolis, MN, USA), were measured by ELISA according to the manufacturer's instructions. Briefly, plates were coated overnight with capture antibodies and then washed and blocked. After washing, the culture supernatants were added to the plates and the plates were incubated for 2 h. After washing, the plates were incubated first with detection antibodies, next with horseradish peroxidase-conjugated streptavidin, and finally with substrate solution. Absorbance was measured using a microplate reader (Molecular Devices, Sunnyvale, CA, USA). Cytokine levels were calculated based on cytokine standard curves.
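A minimal sketch of the standard-curve back-calculation step is shown below. The concentrations and absorbances are invented for illustration, and simple piecewise-linear interpolation stands in for the curve-fitting routine (often a four-parameter logistic) that plate-reader software would typically apply.

```python
import numpy as np

# Illustrative standard curve: known cytokine concentrations (pg/mL) vs mean absorbance
std_conc = np.array([0.0, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([0.05, 0.12, 0.21, 0.38, 0.70, 1.25, 2.10])

# Back-calculate sample concentrations from their absorbances (std_abs must be increasing)
sample_abs = np.array([0.30, 0.95])
sample_conc = np.interp(sample_abs, std_abs, std_conc)
print(sample_conc)  # estimated concentrations in pg/mL
```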
2.6. Cocultures of 4T1 Cells and RAW 264.7 Cells. To define the role of the mammary microenvironment in tumorigenesis, the experimental models consisted of 4T1 murine breast cancer cells cultured alone in RAW-CM or cocultured with RAW 264.7 cells. To mimic a physiological environment where macrophages infiltrate into the areas surrounding breast cancer cells, RAW 264.7 and 4T1 cells were cocultured in the same well of 6-well plates at densities of 1 × 10^5 and 4 × 10^5 cells/well, respectively. The cells were then maintained in 1% FBS/DMEM and treated with 2 mM aspirin for 72 h. Culture supernatants were harvested and stored at −20°C until cytokine levels were measured by ELISA.
2.7. RAW 264.7 Cell Characterization. Macrophages were incubated in the presence or absence of aspirin for 72 h and were cultured in control medium, cultured with LPS present for the last 24 h of the incubation, or cocultured with 4T1 cells for 72 h. To assess surface marker expression, RAW 264.7 and 4T1 cells were collected after 72 h of coculturing and stained by incubating with fluorescein isothiocyanate (FITC)-conjugated anti-mouse CD11c and Alexa Fluor 647-conjugated anti-mouse CD206 monoclonal antibodies (Sony Biotechnology Inc.) at 4°C in the dark for 30 min. After washing, viable cells were stained with Hoechst 33342 (ChemoMetec, Allerød, Denmark) and subjected to FlexiCyte fluorescence-activated cell sorting analysis. The frequency of cells expressing each surface marker was determined using a NucleoCounter NC-3000 (ChemoMetec) and analyzed using NucleoView NC-3000 software (ChemoMetec). Expression was quantified using the median fluorescence intensity for the marker of interest.
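As a sketch of the median fluorescence intensity (MFI) quantification step, the snippet below compares two randomly generated, roughly log-normal intensity distributions; it is purely illustrative and is not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake per-cell intensities for one marker channel (e.g., CD206) in two conditions
vehicle = rng.lognormal(mean=6.0, sigma=0.8, size=5000)
aspirin = rng.lognormal(mean=5.5, sigma=0.8, size=5000)

mfi_vehicle = np.median(vehicle)  # MFI = median of the per-cell intensities
mfi_aspirin = np.median(aspirin)
print(f"MFI change vs vehicle: {100 * (mfi_aspirin - mfi_vehicle) / mfi_vehicle:+.1f}%")
```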
2.8. Statistical Analysis. Results are presented as mean ± SEM and are a compilation of at least three independent experiments. Statistically significant differences among groups were identified by one-way ANOVA with least significant difference post hoc tests using IBM Statistical Product and Service Solutions (SPSS version 19). A p value of less than 0.05 was considered statistically significant.
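A minimal Python sketch of this analysis pattern (one-way ANOVA followed by least significant difference tests, implemented here as pooled-variance pairwise t-tests) is shown below with invented replicate values; the authors used SPSS, so this is only an illustrative re-implementation.

```python
import numpy as np
from itertools import combinations
from scipy import stats

groups = {  # illustrative replicate measurements per treatment group
    "control": [100.0, 98.0, 103.0],
    "asa_1mM": [85.0, 80.0, 88.0],
    "asa_2mM": [62.0, 66.0, 60.0],
}
f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

# Fisher's LSD: pairwise comparisons using the pooled within-group variance
data = [np.asarray(v) for v in groups.values()]
df_within = sum(len(d) for d in data) - len(data)
mse = sum(((d - d.mean()) ** 2).sum() for d in data) / df_within
for (name_i, d_i), (name_j, d_j) in combinations(zip(groups, data), 2):
    t = (d_i.mean() - d_j.mean()) / np.sqrt(mse * (1 / len(d_i) + 1 / len(d_j)))
    p_ij = 2 * stats.t.sf(abs(t), df_within)
    print(f"{name_i} vs {name_j}: p = {p_ij:.4f}")
```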
Results

3.1. RAW 264.7 Cell-Conditioned Media Affects 4T1 Breast Cancer Cell Viability and Migration. To mimic the physiological tumor environment of macrophage infiltration into tumor tissues and to study the effect of macrophage mediators on 4T1 cell viability, breast cancer 4T1 cells were cultured in RAW 264.7 cell-conditioned media (RAW-CM), as shown in Figure 1. The 4T1 cells were cultured in different concentrations of RAW-CM in the presence or absence of lipopolysaccharide (LPS) stimulation, and cell viability was assessed using MTT assays. The culture condition lacking LPS stimulation mimicked macrophage infiltration into the breast cancer microenvironment, while the culture condition with LPS stimulation mimicked infiltrating macrophages that are active due to inflammatory responses.
A progressive increase in the number of 4T1 cells occurred with an increase in the concentration of unstimulated RAW-CM. This increase in cell number, compared to the control (0% RAW-CM), occurred in a dose-dependent manner with the incubation time (Figure 1(a)), suggesting that macrophage-secreted factors promoted breast cancer cell growth. The opposite result was observed when 4T1 cells were cultured in the LPS-stimulated RAW-CM, where 4T1 cell viability significantly decreased during incubations of 24 to 72 h (p < 0.05, Figure 1(b)). This suggests that mediators secreted by activated macrophages caused toxicity and thereby decreased cancer cell numbers.
Wound-healing assays were used to analyze cell migration, which is an indicator of cancer metastasis. Cells were grown to a confluent monolayer and scraped, and the distance healed by the cell layer was then measured.

[Figure 1 caption (fragment): … were used to culture 4T1 cells. Cells were cultured for 24, 48, and 72 h, and cell viability was measured using MTT assays. Data are from at least three independent experiments and presented as mean ± standard error of the mean (SEM). Statistical analysis was performed using one-way ANOVA and least significant difference (LSD) post hoc tests. #p < 0.01 and †p < 0.001 versus control (0% RAW-CM).]

The 4T1 cells cultured in 3% FBS/DMEM, that is, the control, exhibited apparent healing, while the cells cultured in serum-free media, that is, the negative control, did not. The distance of 4T1 cell migration over 24 h was measured for each treatment condition, including cells incubated in 20, 50, and 75% RAW-CM. RAW-CM collected from cells that were not stimulated with LPS, representing a spontaneous condition, was found to have no effect on cell migration (Figure 2(a)). Meanwhile, RAW-CM collected from LPS-stimulated cells inhibited healing after scraping in a dose-dependent manner (Figure 2(b)). The migration distance was measured by microscope under a microscale, and the results are shown in Figure 2(c). The 50 and 75% LPS-stimulated RAW-CM conditions significantly inhibited cell migration (p < 0.01), which is consistent with the effect this conditioned media had on 4T1 cell viability.

[Figure 2 caption (fragment): Data are shown as mean ± SEM and are from three independent experiments. Statistical analysis was performed using one-way ANOVA and LSD post hoc tests. *p < 0.05, #p < 0.01, and †p < 0.001 versus vehicle control.]
3.2. Aspirin Inhibited 4T1 Breast Cancer Cell Growth and Migration in RAW 264.7 Cell-Conditioned Media. Subsequently, we investigated whether aspirin treatment influences 4T1 breast cancer cell growth when cultured under different macrophage-related conditions. The 4T1 cells were cultured in RAW-CM to mimic a microenvironment with macrophage infiltration into areas surrounding breast cancer cells, and then cell viability and migration were assessed. The 4T1 cells treated with 1 and 2 mM of aspirin had decreased cell viability when incubated in both unstimulated and LPS-stimulated RAW-CM for 24 h, while 4T1 cell numbers were not affected by aspirin in the complete medium (Figure 3(a)). Cell number displayed more apparent decreases of 23% (p < 0.001) and 40% (p < 0.001) in unstimulated RAW-CM compared to cells in control medium, when cells were treated for 72 h with 1 or 2 mM aspirin, respectively (Figure 3(b)). However, only the high dose of 2 mM aspirin inhibited cell viability in the LPS-stimulated RAW-CM.
To investigate the effects of aspirin on 4T1 cell migration in RAW-CM, wound-healing assays were utilized. The 4T1 cells were cultured in fresh medium (Figure 4(a)), unstimulated RAW-CM (Figure 4(b)), or LPS-stimulated RAW-CM (Figure 4(c)) to mimic the macrophage-infiltrated microenvironment. Aspirin had no effect on cell migration in the fresh medium (Figure 4(a)). In the unstimulated RAW-CM, 0.5 to 2 mM aspirin significantly delayed scratch healing in a dose-dependent manner (p < 0.05) compared to the vehicle group (Figures 4(b) and 4(d)), while healing was not affected by aspirin in the LPS-stimulated RAW-CM (Figures 4(c) and 4(d)).
Therefore, the unstimulated RAW-CM, which mimicked the tumor microenvironment, promoted growth of 4T1 cells and was suitable for use in subsequent experiments. Meanwhile, LPS stimulation triggered RAW 264.7 cells to exert an acute inflammatory response that inhibited growth and migration of 4T1 cells. On the basis of these results, aspirin acted as an effective chemopreventive agent in the mimicked tumor microenvironment but did not exert an anticancer effect under the acute inflammatory condition.
3.3. Aspirin Inhibited 4T1 Cell Production of Angiogenic and Inflammatory Cytokines. Cytokines related to breast cancer carcinogenesis in the cultured supernatants were measured by ELISA. Cytokine levels are listed in Supplementary 1, and data are presented relative to the vehicle control in Figure 5. First, 4T1 cells were cultured in fresh medium (control) or RAW-CM and the supernatants were analyzed (Figure 5(a)). The RAW-CM-only condition allowed the background levels of mediators in the original conditioned medium to be measured. VEGF, plasminogen activator inhibitor-1 (PAI-1), tumor necrosis factor-α (TNF-α), and interleukin-6 (IL-6) secretion were significantly higher when the 4T1 cells were cultured in 50% RAW-CM, suggesting that macrophage-related mediators in the conditioned media promoted carcinogenic and inflammatory cytokine production by the breast cancer cells (p < 0.05).
To investigate the effects of aspirin treatment on secretion of these cytokines, cytokine levels relative to tumor characteristics were analyzed (Figures 5(b) and 5(c)).

[Figure 5 caption: (a) Effect of RAW-CM on cytokine production of 4T1 cells. Cells were cultured in 50% RAW-CM for 72 h, and then cytokines in the supernatants were measured by ELISA. (b) Aspirin was used to treat 4T1 cells, which were cultured in control medium (1% FBS/DMEM) for 72 h, and cytokine levels in the supernatants were measured. (c) Aspirin was used to treat 4T1 cells, which were cultured in 50% RAW-CM for 72 h, and then cytokine levels in the supernatants were measured. Data are shown as mean ± SEM. Statistical analysis was performed using independent sample t-tests, where statistically significant differences are indicated as *p < 0.05, #p < 0.01, and †p < 0.001 versus control.]

As shown in Figure 5(b), when the 4T1 cells were cultured in fresh medium as a control condition, aspirin treatment significantly decreased MCP-1 (p = 0.001), PAI-1 (p = 0.019), and IL-6 (p < 0.001) levels and slightly decreased the VEGF level (p = 0.063). As shown in Figure 5(c), when the 4T1 cells were cultured in 50% RAW-CM, aspirin treatment only decreased MCP-1 and PAI-1 production (p < 0.001 and p = 0.004, resp.).

3.4. Aspirin Influenced Macrophage Subpopulations Based on Surface Marker Expression. Cluster of differentiation (CD)11c is a marker of M1 macrophages, while CD206 is a marker of M2 macrophages. RAW 264.7 cells were cultured in control medium, stimulated with LPS, or cocultured with 4T1 cells and then characterized. Histograms and fluorescence intensity plots are presented in Figures 6(a) and 6(b), while quantitative data are presented in Figure 6(c). CD11c expression increased by 181% in RAW 264.7 cells following LPS stimulation (p < 0.001), but CD206 marker expression was not affected. When RAW 264.7 cells were cocultured with 4T1 breast cancer cells, CD206 expression significantly increased by 281% (p = 0.002). After treatment with aspirin, CD11c significantly increased by 32% (p = 0.012) and CD206 decreased by 41% (p = 0.046) compared to the vehicle control in cocultured RAW 264.7 cells, suggesting that aspirin altered the macrophage profile in the presence of neoplastic cells, but not under LPS stimulation (Figure 6(c)).

[Figure 6 caption (fragment): Data are shown as mean ± SEM. Statistical analysis was performed using independent sample t-tests. Statistically significant differences are indicated as *p < 0.05, #p < 0.01, and †p < 0.001 for treatment versus co-control vehicle.]
3.5. Aspirin Inhibits Crosstalk and Production of Mediators in Cocultures of 4T1 and RAW 264.7 Cells

The effects of aspirin treatment in the coculture model were apparent compared to the RAW-CM model, suggesting that cocultures containing both types of cells can effectively crosstalk. These data indicate that aspirin disrupted secretion of mediators associated with carcinogenesis in both RAW-CM and cocultures. A schematic of factors with a possible active role in aspirin treatment is proposed in Figure 8.
Discussion
Breast cancer is the most prevalent malignant tumor currently found in women. The breast tumor microenvironment includes neoplastic, neighboring stromal, and recruited immune cells, such as macrophages and lymphocytes, where crosstalk among these cells is involved in tumor progression and metastasis [2]. Interestingly, macrophages, the most abundant immune cell type present in solid tumors, infiltrate and secrete many cytokines while neoplastic cells form. This creates chronic inflammation that provides conditions in this microenvironment conducive to tumor development and angiogenesis [17,18].
The breast cancer cell line 4T1 is triple-negative, a form of breast cancer associated with a poor prognosis because the cells lack effective therapeutic targets, behave aggressively, and are accompanied by overexpression of inflammation-related mediators [15]. This has motivated scientists to identify effective agents against this type of cancer. In the present study, aspirin was determined to be a potential chemopreventive agent with antiangiogenic and anti-inflammatory properties in a tumor microenvironment created using RAW-CM and cocultures of RAW 264.7 macrophages and 4T1 breast cancer cells. The results of the present study suggest that aspirin interfered with crosstalk between these two cell types and, thus, inhibited cancer cell growth and migration.
Normally, macrophages have a critical role in host defense that involves connecting innate and adaptive immune responses, as well as tissue repair. Macrophages secrete multiple cytokines that participate in inflammatory responses, tissue damage, pathogen clearance, tissue homeostasis, and disease development [19,20]. LPS, that is, bacterial endotoxin, is a common agent that activates macrophages involved in the innate immune response and causes immune cell infiltration and inflammation [21,22]. A number of studies have shown that endotoxin may be anticarcinogenic, possibly due to its ability to recruit and activate immune cells and to induce proinflammatory mediator production [22]. Tumorigenesis accompanies macrophage infiltration. Therefore, RAW-CM may mimic the microenvironment associated with chronic disease, including the presence of multiple inflammatory mediators [17]. In the RAW-CM model, LPS stimulation triggered RAW 264.7 cells to undergo an acute inflammatory response and, thus, inhibit 4T1 cell growth and migration, which is consistent with other evidence: LPS activates TLR4 signaling in tumor cells, leading to tumor evasion from immune surveillance and tumor growth delay [23]. Meanwhile, unstimulated RAW-CM, which may mimic the tumor microenvironment, promoted 4T1 cell growth. This suggests that aspirin is a promising chemopreventive agent that is not only anti-inflammatory but also anticarcinogenic. These anticancer properties have also been exhibited in human breast cancer MDA-MB-231 cells [24].

[Figure 8 caption: In the 4T1 breast cancer cell environment, RAW 264.7 macrophage infiltration increased VEGF, PAI-1, TNF-α, IL-6, and TGF-β levels and M2 macrophage expression, benefiting tumor progression. Aspirin treatment decreased production of the angiogenic and inflammation-associated cytokines VEGF, PAI-1, MCP-1, IL-6, IL-10, and TGF-β. In addition, aspirin treatment increased M1 expression and decreased M2 expression in macrophages, interfering with communication in this microenvironment and blunting tumor progression.]
In a previously published study, mice were inoculated with 4T1 cells and implanted with sponge discs for 1 or 24 days to create acute and chronic inflammatory environments [25]. Tumor progression and circulating levels of VEGF and TNF-α were greater in the presence of chronic inflammation than acute inflammation. In addition, VEGF and TNF-α molecules are critical for the proliferation, angiogenesis, macrophage recruitment, and metastasis associated with tumor progression [25]. Populations of macrophages, dendritic cells, and lymphocytes were significantly larger in mice with chronic inflammation [25], suggesting that chronic cell infiltration is important for tumor progression. In an obesity-related breast cancer study, 4T1 cell proliferation significantly increased when cells were cultured in adipocyte-conditioned medium without any stimulation, indicating that spontaneous adipocyte infiltration contributed to 4T1 cell growth [16].
Our previous study demonstrated that aspirin treatment significantly inhibits the proliferation and migration of 4T1 cells, as well as causes an associated decrease in MCP-1 and VEGF production [26]. In this present study, PAI-1 and IL-6 production by 4T1 cells was also inhibited by aspirin treatment. In the RAW-CM model, VEGF, PAI-1, TNF-α, and IL-6 production by 4T1 cells significantly increased, indicating there are carcinogenic mediators in the RAW-CM. After aspirin treatment, production of MCP-1 and PAI-1 decreased, suggesting that aspirin interfered with interactions between macrophages and breast cancer cells and, thus, inhibited tumorigenic signals. Moreover, in an obesity-related breast cancer study involving 4T1 cells cultured in 3T3-L1 adipocyte-conditioned medium and cocultured with adipocytes, aspirin decreased the production of MCP-1 and PAI-1 [26]. This is consistent with the data from this present study, supporting that these two cytokines have important roles in immune cell recruitment and tumor progression.
MCP-1, that is, CCL-2, is a chemokine that recruits and activates monocytes during inflammation. In tumor progression, MCP-1 plays an important role through facilitation of macrophage infiltration, which is involved in tumor progression and immunosurveillance [27,28]. In addition, a previous study reported that blocking MCP-1 signaling notably inhibited 4T1 cell migration [29]. PAI-1 is produced by multiple cell types and is involved in several pathological conditions, including aging, obesity, and inflammation, and high levels have been demonstrated to accompany tumor progression [30]. Recently, TGF-β-treated endothelial cells were reported to induce PAI-1 secretion and promote metastasis of triple-negative breast cancer cells [31], illustrating the potential of PAI-1 as a target of breast cancer therapies. In addition, IL-6 and TNF-α are orchestrating cytokines with multiple physiological functions in various pathogenic inflammatory diseases, where they are involved in tumor progression, angiogenesis, and migration [32]. Recently, it was revealed that proinflammatory cytokines in serum, such as IL-6, IL-8, and TNF-α, are associated with clinical stage and lymph node metastasis in breast cancer patients [32]. The levels of these cytokines are associated with the course of breast tumorigenesis, and, thus, these cytokines have potential as prognostic cancer biomarkers.
In the present study, aspirin suppressed MCP-1, PAI-1, and IL-6 production by 4T1 cells cultured in fresh medium and RAW-CM, which may contribute to the inhibition of proliferation and migration of breast cancer cells. In the coculture model, treatment with aspirin significantly inhibited MCP-1, IL-6, and TGF-β and slightly inhibited VEGF, PAI-1, TNF-α, and IL-10 production. Production of these inflammatory and angiogenic mediators by 4T1 cells in the fresh medium, RAW-CM, and coculture models was blocked by aspirin. On the basis of these results, the suppressive properties of aspirin interfere with communication-associated factors in the breast tumor microenvironment. In addition, aspirin may also act through other pathways to exert its chemopreventive properties, involving inflammation, cyclooxygenase-2 (COX-2), platelets, hormones, or PI3 kinase [33]. One of the most studied aspirin anticancer mechanisms is the partial downregulation of COX-2 expression in many types of breast cancer cells, including MCF-7, MDA-MB-231, and SK-BR-3, contributing to inhibition of cancer cell proliferation [34].
Macrophages can be divided into two distinct phenotypes, M1 and M2. M1 macrophages are promoted by T-helper cell type 1 (Th1) cytokines and produce proinflammatory cytokines that evoke an adaptive immune response. Meanwhile, Th2 cytokines polarize monocytes into M2 macrophages that promote angiogenesis, clean injured tissues, and suppress adaptive immune responses [7]. Imbalances in M1 and M2 macrophage populations may lead to pathological changes [35]. It has been demonstrated that mice that received the chemical carcinogen 7,12-dimethylbenz(a)anthracene have higher F4/80+ macrophage recruitment in perigonadal adipose tissue compared to mice that did not receive any carcinogen, particularly of the CD11c+ M1 type [36]. In the present study, there was a significant increase in M2 cells when RAW 264.7 cells were cocultured with 4T1 cells, suggesting that this suppressive microenvironment promoted the growth of breast cancer cells. In the tumor microenvironment, malignancies recruit circulating monocytes that then differentiate into TAMs. TAMs resemble M2 macrophages and exert protumor functions through immunosuppressive actions [5]. Therefore, modifications such as suppression of TAM recruitment, switching of the TAM phenotype, and targeting of associated mediator production have been proposed as cancer therapeutic strategies [37].
Interestingly, aspirin treatment increased M1 marker expression, but decreased M2 marker expression in the cocultures of the present study, suggesting that aspirin shifts the macrophage profile in the neoplastic microenvironment away from a suppressive immune response, thus contributing to breast cancer cell suppression. Recently, it was demonstrated that macrophage phenotypes are regulated by aspirin in a model of RAW 264.7 cells cultured in pancreatic cancer cell line Panc02-conditioned medium. Aspirin significantly decreased protein and RNA levels of the M2 marker CD206 and prevented pancreatic carcinogenesis [38]. Burnett and colleagues reported that aspirin upregulates IL-10 gene expression in THP-1 cells, but not in cocultures of MCF-7 and THP-1 cells [39]. In a clinical trial on breast cancer patients, TGF-β expression was lower during the early stages of disease, but higher and associated with CCL2 levels during late stages. Moreover, TGF-β stimulated CCL2 expression and then induced monocytes/macrophages to secrete Th2-attracting chemokines into a breast cancer MDA-MB-231 cell tumor microenvironment [40]. In the present study, aspirin inhibited TGF-β expression in the coculture model, resulting in decreases in MCP-1 production and Th2 accumulation that dampened downstream communication in the microenvironment.
Clinical trials have revealed that aspirin is an effective chemopreventive agent. Observational studies have shown that regular aspirin use reduces the incidences of several cancers, as well as distant metastases of these cancers [41]. Meta-analyses and systematic reviews have also proposed that aspirin's chemopreventive properties can be used to fight breast cancer [13,14]. Among cardiovascular subjects in five large randomized trials, aspirin use decreased the risk of cancer mortality and metastases [33]. Recently, a large pooled analysis that included 13 prospective studies with 857,831 subjects revealed that long-term (>5 years) regular use of aspirin 2 to 7 times/week prevented breast cancer [42]. Based on previous findings, regular use of aspirin (75 to 350 mg/day) reduces the incidence of and mortality from breast cancer in epidemiologic studies [13,14,33,42]. Researchers need to pursue a comprehensive understanding of aspirin treatment-associated issues, such as gastrointestinal side effects, optimal doses, duration, and combinations with other compounds, to facilitate the use of aspirin as a cancer therapy.
Conclusions
Based on accumulating evidence, macrophages play a crucial role in the tumor microenvironment, which includes intricate crosstalk involving a series of inflammatory chemokines and cytokines and angiogenic mediators secreted from neoplastic cells and infiltrating macrophages. The findings of this study indicate that aspirin has chemopreventive properties that function through both 4T1 breast cancer cells and macrophages. Aspirin interfered with the connection between these cells by decreasing communication through proinflammatory and angiogenic mediators and modulating M1/M2 macrophage subtypes, suggesting that aspirin is a promising agent to prevent tumor progression.
Data Availability
The data used to support the findings of this study are all provided in the manuscript and supplementary file.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper. | 2018-08-06T13:17:57.045Z | 2018-06-24T00:00:00.000 | {
"year": 2018,
"sha1": "9a5ea4da8e0a4d7e30c01b1b21f6e38558db808e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mi/2018/6380643.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c93cfdce746927274c715a9c892a917ec36060c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
2794356 | pes2o/s2orc | v3-fos-license | Poisson Cluster Process Based Analysis of HetNets with Correlated User and Base Station Locations
This paper develops a comprehensive new approach to the modeling and analysis of HetNets that accurately incorporates correlation in the locations of users and base stations, which exists due to the deployment of small cell base stations (SBSs) at the places of high user density (termed user hotspots in this paper). Modeling the locations of the geographical centers of user hotspots as a homogeneous Poisson Point Process (PPP), we assume that the users and SBSs are clustered around each user hotspot center independently with two different distributions. The macrocell base station (BS) locations are modeled by an independent PPP. This model naturally captures correlation that exists between the locations of users and their serving SBSs. Using this model, we study the performance of a typical user in terms of coverage probability and throughput for two association policies: i) power-based association, where a typical user is served by the open-access BS that provides maximum averaged received power, and ii) distance-based association, where a typical user is served by its nearest open-access SBS if it is located closer than a certain distance threshold; and macro tier otherwise. After deriving all the results in terms of general distributions describing the locations of users and SBSs around the geographical center of user hotspots, we specialize the setup to the Thomas cluster process. A key intermediate step in this analysis is the derivation of distance distributions from a typical user to the open-access and closed-access interfering SBSs. Consistent with the intuition, our analysis demonstrates that as the number of SBSs reusing the same resource block increases (higher frequency reuse), coverage probability decreases whereas throughput increases. Thus the same resource block can be aggressively reused by more SBSs as long as the coverage probability remains acceptable.
I. INTRODUCTION
Current cellular networks are undergoing a significant transformation from coverage-driven deployment of macrocells to a more user-centric capacity-driven deployment of several types of low-power BSs, collectively called small cells, usually at the locations of high user density (termed user hotspots in this paper) [2], [3]. The resulting network architecture consisting of one or more small cell tiers overlaid on the macrocellular tier is referred to as a HetNet [4]- [7].
The increasing irregularity in the BS locations has led to an increased interest in the use of random spatial models along with tools from stochastic geometry and point process theory for their accurate modeling and tractable analysis; see [8]- [11] for detailed surveys on this topic.
The most popular approach in this line of work is to model the locations of different classes of BSs as independent PPPs and perform analysis for a typical user assumed to be located independently of the BS locations. This model was first proposed in [12], [13] for downlink analysis of HetNets, and has been extended to many scenarios of interest in the literature, see [8]- [10] and the references therein. Despite the success of this approach for the modeling and analysis of coverage-centric deployments and conventional single tier macrocellular network [14], this is not quite accurate for modeling user-centric deployments, where the SBSs may be deployed at the user hotspots [2]. In such cases, it is important to accurately capture the non-uniformity of user distributions as well as their correlation with the SBS locations. Developing a comprehensive framework to facilitate the analysis of such setups is the main focus of this paper.
A. Related Work
The stochastic geometry-based modeling and analysis of HetNets has taken two main directions in the literature. The first and more popular one is to focus on the coverage-centric analysis of HetNets, in which the user locations are assumed to be independent of the BS locations [8]- [11], [15]. As noted above already, the locations of the different classes of BSs are modeled as independent PPPs. This approach has been extensively used for the analysis of key performance metrics such as coverage/outage probability [12], [16]- [22], rate coverage [23]- [26], average rate [27], and network throughput [28]. In addition, this approach enables the analytic treatment of numerous different aspects of both conventional single-tier networks as well as HetNets. For example, self-powered HetNets where SBSs are powered by a self-contained energy harvesting module were modeled and analyzed in [29]. The downlink coverage and error probability analyses of multiple input multiple output (MIMO) HetNets, where BSs are equipped with multiple antennas were performed in [25], [30]- [33]. The joint-transmission cooperation in HetNets was analyzed in [19]- [21], [34]. Since this line of work is fairly well-known by now, we refer the interested readers to [8]- [11] for more extensive surveys as well as a more pedagogical treatment of this general research direction.
The second direction focuses on developing tractable models to study user-centric capacity-driven small cell deployments, where SBSs are deployed in areas of high user density. Due to the technical challenges involved in incorporating this correlation in the BS and user locations, the contributions in this direction are much sparser. One key exception is the generative model proposed in [35], where the BS point process is conditionally thinned in order to push the reference user closer to its serving BS, thus introducing correlation in the locations of BSs and users. While this model captures the clustering nature of users in hotspot areas, it is restricted to single-tier networks and generalization to HetNets is not straightforward. Building on our recent contributions in developing analytical tools for the Poisson cluster process (and Binomial point process) [36]-[39], we recently addressed this shortcoming and generalized the analysis of non-uniform user distributions to HetNets by modeling the locations of users as a Poisson cluster process, where the correlation between user and BS locations is captured by placing BSs at the cluster centers [40], [41]^1. Although the models proposed in [35], [40]-[43] accurately characterize the correlation between the user and BS locations, the assumption of modeling small cell locations with a PPP is not quite accurate in the case of user-centric deployments. This is because some user hotspots are by nature large, which necessitates the deployment of multiple SBSs to cover that area, thus introducing clustering in their locations. Similarly, subscriber-owned SBSs, such as femtocells, are deployed on the scale of one per household/business, which naturally increases their density within residential and commercial complexes. Due to this clustering, the Poisson cluster process becomes the preferred choice for modeling SBS locations in user hotspots [44], [45]. While the effect of BS clustering has been studied in [46]-[53], none of these works provide an exact analytic characterization of interference and key performance metrics for these networks. More importantly, the correlation between user and BS locations, which is key in user-centric capacity-driven deployments, has not been truly captured in these works.
For instance, the locations of users are assumed to be independent of the BS point process in [47]. On the other extreme, the users are assumed to be located at a fixed distance from their serving BS in [51]-[53]. In this paper, we address this shortcoming by developing a new model for user-centric capacity-driven deployments that is realistic as well as tractable. The ability of the proposed analytical model to incorporate correlations in the locations of users and BSs bridges the gap between the spatial models used by the industry (especially for user hotspots), e.g., 3GPP [2], in their simulators, and the ones used by the stochastic geometry community (see above for the detailed discussion) for the performance analysis of HetNets. As discussed next, the main novelty is the use of the Poisson cluster process for modeling both the users and the SBSs.

^1 The analytical tools developed in [36]-[38] are also being adopted to analyze the performance of clustered networks in the emerging paradigms in cellular communication, such as uplink non-orthogonal multiple access [42].
B. Contributions and Outcomes
Tractable model for user-centric capacity-driven deployment of HetNets: We develop a realistic analytic framework to study the performance of user-centric small cell deployments. In particular, we consider a two-tier HetNet, comprising of a tier of small cells overlaid on a macro tier, where macro BSs are distributed as an independent PPP. To capture the dependency between the locations of SBSs and users, we model the geographical centers of user hotspots as an independent PPP around which the SBSs and users form clusters with two independent general distributions. In this setup, the candidate serving BS in each open-access tier is the one that is nearest to the typical user. From the set of candidate serving BSs, the serving BS is chosen based on two association policies: i) power-based association policy, where the serving BS is the one that provides the maximum average received power to the typical user, and ii) distance-based association policy, where the typical user is served by its nearest open-access SBS if it lies closer than a certain distance threshold; and macro BS otherwise.
Coverage probability and throughput analysis: We derive exact expressions for the coverage probability of a typical user and the throughput of the whole network under the two association policies described above. A key intermediate step in the analysis is the derivation of distance distributions from the typical user to its serving BS, open-access interfering SBSs, and closed-access interfering SBSs for the two association policies. Building on the tools developed for PCPs in [36], we prove that the distances from open-access interfering SBSs conditioned on the location of the serving BS and the typical user are independently and identically distributed (i.i.d.). Using this i.i.d. property, the Laplace transform of the interference distribution is obtained, which then enables the derivation of the coverage probability and throughput results.
System design insights: Our analysis leads to several useful design insights for user-centric small cell deployments. First, it reveals that more aggressive frequency reuse within a given cluster (more SBSs reusing the same resource blocks in a given cluster) has a conflicting effect on the coverage and throughput: throughput increases and coverage decreases. This observation shows that more SBSs can reuse the same resource blocks as long as the coverage probability is acceptable. Thus, the strictly orthogonal resource allocation strategy that allocates each resource block to at most one SBS in a given cluster (e.g., see [47]) may not be efficient in terms of throughput for this setup. Second, our analysis reveals that there exists an optimal distance threshold that maximizes the coverage probability of a typical user under the distance-based association policy.
II. SYSTEM MODEL
We consider a two-tier heterogeneous cellular network consisting of macrocell and small cell BSs, where SBSs and users are clustered around geographical centers of user hotspots, as shown in Fig. 1(c). This model is inspired by the fact that several SBSs may be required to be deployed in the user hotspots (hereafter referred to as cluster) in order to handle mobile data traffic generated in that user hotspot [2]. The analysis is performed for a typical user, which is chosen as follows. We first choose a cluster uniformly at random from the network, which we term a representative cluster. We then choose a point (location of the typical user) uniformly at random from this cluster. For this setup, we assume that the typical user is allowed to connect to any macro BS in the whole network and any SBS located within the representative cluster.
Other SBSs (the ones located outside the representative cluster) simply act as interferers for the typical user. This setup is inspired by situations in which SBSs are enterprise-owned BSs intended to serve only the authorized users (who have permission to connect to that network).
Therefore, we will refer to the macro BSs and the SBSs within the representative cluster as open access BSs (with respect to the typical user) and the rest of the SBSs as closed access BSs. Note that while our model is, in principle, extendible to completely open access K-tier heterogeneous cellular networks, we limit our discussion to this two-tier setup for the simplicity of both notation and exposition.
A. Spatial Setup and Key Assumptions
We model the locations of macro BSs as an independent homogeneous PPP {z_m} ≡ Φ_m with density λ_m. In order to capture the correlation between the locations of SBSs and users in hotspots, we model the locations of SBSs and users as two Poisson cluster processes with the same parent point process, where the latter models the geographical centers of user hotspots. It should be noted that, in reality, the user distribution is a superposition of homogeneous and non-homogeneous distributions. For instance, pedestrians and users in transit are more likely to be uniformly distributed in the network, and hence a homogeneous PPP is perhaps a better choice for the analysis of such users. On the other hand, users in hotspots exhibit clustering behavior, for which a Poisson cluster process is a more appropriate model than a homogeneous PPP [41].
The framework provided in this paper can be extended to the case of mixed user distribution consisting of both homogeneous and non-homogeneous user distributions without much effort.
Besides, the analysis of homogeneous user distributions in such setups is well known [8]-[10], [12], which is why we chose to focus on the more challenging case of correlated non-homogeneous user distributions.
A Poisson cluster process can be formally defined as a union of offspring points that are independent of each other and identically distributed around parent points [54], [55]. Modeling the parent point process (i.e., the cluster centers) as a homogeneous PPP {x} ≡ Ψ_p with density λ_p: 1) the set of users within a cluster centered at x ∈ Ψ_p is denoted by {y_u} ≡ N_u^x (with y_u ∈ R^2), where each set contains a sequence of i.i.d. elements conditional on x (denoting locations relative to the cluster center), and the PDF of each element is f_{Y_u}(y_u); and 2) the set of SBSs within a cluster centered at x ∈ Ψ_p is denoted by {y_s} ≡ N_s^x (with y_s ∈ R^2), where each set contains a sequence of i.i.d. elements conditional on x, and the PDF of each element is f_{Y_s}(y_s).
The locations of SBSs N_s^x and users N_u^x conditioned on x ∈ Ψ_p are independent. For this setup, after characterizing all theoretical results in terms of general distributions f_{Y_s}(y_s) and f_{Y_u}(y_u), we specialize the results to the Thomas cluster process [56], in which the points are distributed around the cluster centers according to an independent Gaussian distribution:

f_{Y_j}(y_j) = (1/(2πσ_j^2)) exp(−‖y_j‖^2/(2σ_j^2)),  j ∈ {s, u},   (1)

where σ_s^2 and σ_u^2 denote the scattering variances of the SBS and user locations, respectively. From the set of SBSs located in the cluster centered at x ∈ Ψ_p, we assume that the subset B_s^x ⊂ N_s^x reuses the same resource block. This subset will henceforth be referred to as the set of simultaneously active SBSs, where the number of simultaneously active SBSs |B_s^x| has a Poisson distribution with mean n̄_as. Denote by x_0 ∈ Ψ_p the location of the center of the representative cluster. In order to simplify the order statistics arguments that will be used in the selection of candidate serving BSs in the cluster located at x ∈ Ψ_p, we assume that the total number of SBSs (i.e., |N_s^{x_0}|) in the representative cluster is fixed and equal to n_{s0}, where B_s^{x_0} ⊂ N_s^{x_0} represents the set of simultaneously active SBSs in the representative cluster. Note that |B_s^{x_0}| is a truncated Poisson random variable with maximum value n_{s0}, and the serving SBS will be chosen from B_s^{x_0}. A pictorial representation of our setup, along with the system models used in the prior art, is presented in Fig. 1.
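For intuition, a minimal Python sketch of sampling such a Thomas-type configuration (a parent PPP of hotspot centers with Gaussian-scattered offspring points) on a finite window is given below; all parameter values are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def thomas_process(lam_p, mean_offspring, sigma, side):
    """Sample a Thomas cluster process on a [0, side]^2 window.
    Parents ~ PPP(lam_p); each parent gets Poisson(mean_offspring) points
    with i.i.d. N(0, sigma^2 I) offsets around it."""
    n_parents = rng.poisson(lam_p * side ** 2)
    parents = rng.uniform(0.0, side, size=(n_parents, 2))
    points = []
    for x in parents:
        n_off = rng.poisson(mean_offspring)
        points.append(x + sigma * rng.standard_normal((n_off, 2)))
    offspring = np.vstack(points) if points else np.empty((0, 2))
    return parents, offspring

# e.g., hotspot centers at 1e-5 per m^2, ~4 active SBSs per hotspot, 30 m spread
centers, sbs = thomas_process(1e-5, 4, 30.0, side=2000.0)
```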
B. Propagation Model
We assume that all links to the typical user suffer from standard power-law path loss with exponent α > 2 and Rayleigh fading. Thus the received power at the typical user (located at the origin) from the j-th tier BS (where j ∈ {s, m}) located at z_j is

P(z_j) = P_j h_j ‖z_j‖^{−α},

where ‖·‖^{−α} models the power-law path loss, h_j is an exponential random variable with unit mean independent of all other random variables, and P_j is the transmit power, which is assumed to be constant for the BSs in tier j ∈ {s, m}. Note that the indices 'm' and 's' refer to the macro tier and the small cell tier, respectively. Denote by {z_s = x_0 + y_s : y_s ∈ N_s^{x_0}} ≡ Φ_s the locations of open-access SBSs. The candidate serving BS location from Φ_j is

z_j^* = arg min_{z_j ∈ Φ_j} ‖z_j‖,

where z_j^* is the location of the nearest open-access BS of the j-th tier (i.e., Φ_j) to the typical user. The serving BS is chosen from amongst the set of candidate serving BSs based on i) power-based and ii) distance-based association policies. More details on these two association policies will be provided in the next Section.

[Table: Summary of notation. N_s^x (N_u^x): set of SBSs (users) in the cluster centered at x ∈ Ψ_p. B_s^x ⊂ N_s^x; n̄_as: set of simultaneously active SBSs in the cluster centered at x ∈ Ψ_p, with mean n̄_as. σ_s^2 (σ_u^2): scattering variance of the SBS (user) locations around each cluster center. P_j; h_j; α; β: transmit power; channel power gain under Rayleigh fading; path-loss exponent; target SIR. Association probability under power-based (distance-based) association policy, where j ∈ {s, m}. P_cj^P (P_cj^D): coverage probability of a typical user served by tier j ∈ {s, m} under power-based (distance-based) association policy. P_cT^P (P_cT^D): total coverage probability under power-based (distance-based) association policy. Throughput under power-based (distance-based) association policy.]
1) SIR at a typical user served by macrocell: Assuming that the typical user is served by the macro BS located at z^*, the total interference seen at the typical user originates from three sources: (i) interference caused by open-access macro BSs (except the serving BS), defined as

I_m = Σ_{z_m ∈ Φ_m \ z^*} P_m h_m ‖z_m‖^{−α};

(ii) intra-cluster interference caused by simultaneously active open-access SBSs inside the representative cluster, defined as

I_sm^{intra} = Σ_{y_s ∈ B_s^{x_0}} P_s h_s ‖x_0 + y_s‖^{−α};

and (iii) inter-cluster interference caused by simultaneously active closed-access SBSs outside the representative cluster, defined as

I_sm^{inter} = Σ_{x ∈ Ψ_p \ x_0} Σ_{y_s ∈ B_s^x} P_s h_s ‖x + y_s‖^{−α}.

The SIR at the typical user conditioned on the serving BS being a macrocell is

SIR(z^*) = P_m h_m ‖z^*‖^{−α} / (I_m + I_sm^{intra} + I_sm^{inter}).

2) SIR at a typical user served by small cell: Assuming that the typical user is served by the SBS located at z^* = x_0 + y_0, the total interference seen at the typical user can be partitioned into three sources: (i) interference from open-access macro BSs, defined as

I_m = Σ_{z_m ∈ Φ_m} P_m h_m ‖z_m‖^{−α};

(ii) intra-cluster interference from the simultaneously active open-access SBSs inside the representative cluster (excluding the serving SBS), defined as

I_s^{intra} = Σ_{y_s ∈ B_s^{x_0} \ y_0} P_s h_s ‖x_0 + y_s‖^{−α};

and (iii) inter-cluster interference from closed-access SBSs, defined as

I_s^{inter} = Σ_{x ∈ Ψ_p \ x_0} Σ_{y_s ∈ B_s^x} P_s h_s ‖x + y_s‖^{−α}.

Therefore, the SIR at the typical user served by the small cell is

SIR(z^*) = P_s h_s ‖x_0 + y_0‖^{−α} / (I_m + I_s^{intra} + I_s^{inter}).
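A minimal Monte Carlo sketch of evaluating such an SIR for one network realization is given below; the transmit powers, the path-loss exponent, and the convention that the caller passes interferer arrays with the serving BS already excluded are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_at_origin(serving_pos, serving_power, interferer_pts, interferer_powers, alpha=4.0):
    """SIR at the typical user (origin) with Rayleigh fading (h ~ Exp(1)) on every link.
    interferer_pts: list of (n_i, 2) arrays of interferer locations (serving BS excluded);
    interferer_powers: matching per-tier transmit powers."""
    signal = serving_power * rng.exponential() * np.linalg.norm(serving_pos) ** (-alpha)
    interference = 0.0
    for pts, p in zip(interferer_pts, interferer_powers):
        if len(pts):
            d = np.linalg.norm(pts, axis=1)
            interference += (p * rng.exponential(size=len(pts)) * d ** (-alpha)).sum()
    return signal / interference

# e.g., macro tier at 40 W and small cell tier at 1 W (illustrative values):
# sir = sir_at_origin(z_star, 40.0, [macro_interferers, sbs_interferers], [40.0, 1.0])
```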
III. SERVING AND INTERFERING DISTANCES
This is the first main technical section of the paper, where we derive the association probability of a typical user to macro BSs and SBSs. We then characterize the distributions of distances from serving and interfering macro BSs and SBSs to a typical user. These distance distributions will be used to characterize the coverage probability of a typical user, and throughput of the whole network in the next section. We now begin by providing relevant distance distributions.
A. Relevant distance distributions
Let us denote the distances from a typical user to its nearest open-access small cell and macro BSs by R s and R m , respectively. In order to calculate the association probability and the serving distance distribution, it is important to first characterize the density functions of R m and R s .
The PDF and CDF of the distance from a typical user to its nearest macro BS, i.e., R_m, can be easily obtained by using the null probability of a homogeneous PPP as [57]: F_Rm(r_m) = 1 − exp(−πλ_m r_m²) and f_Rm(r_m) = 2πλ_m r_m exp(−πλ_m r_m²). However, characterizing the density function of the distance from a typical user to its nearest open-access SBS, i.e., the nearest SBS to the typical user from the representative cluster, is more challenging. To derive the density function of R_s, it is useful to define the sequence of distances from the typical user to the SBSs located within the representative cluster as D_s^{x_0} = {u : u = ‖x_0 + y_s‖, ∀ y_s ∈ N_s^{x_0}}. Note that the elements in D_s^{x_0} are correlated due to the common factor x_0. This correlation can, however, be handled by conditioning on the location of the representative cluster center x_0, because the SBS locations are i.i.d. around the cluster center by assumption.
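The null-probability argument for R_m is easy to check numerically. The sketch below (assumed density and window size) compares the empirical mean of the nearest-macro distance with the analytic mean E[R_m] = 1/(2√λ_m) implied by f_Rm(r) = 2πλ_m r exp(−πλ_m r²).

```python
import numpy as np

rng = np.random.default_rng(2)
lam_m = 1e-6                   # macro BS density per m^2 (assumed: 1 per km^2)
side, trials = 20_000.0, 2000  # window side (m) and number of Monte Carlo runs

d = []
for _ in range(trials):
    n = rng.poisson(lam_m * side**2)        # ~400 macro BSs per realization
    pts = rng.uniform(-side / 2, side / 2, size=(n, 2))
    d.append(np.linalg.norm(pts, axis=1).min())  # typical user at the origin

# Null probability of the PPP: F_Rm(r) = 1 - exp(-pi lam_m r^2), so that
# E[R_m] = 1 / (2 sqrt(lam_m)) = 500 m for the assumed density.
print("empirical mean:", np.mean(d), "analytic mean:", 1 / (2 * np.sqrt(lam_m)))
```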
The conditional PDF of any (arbitrary) element in the set D x 0 s is characterized next.
Lemma 1. The distances in the set D_s^{x_0}, conditioned on the distance of the typical user to the cluster center, i.e., V_0 = ‖x_0‖, are i.i.d., where the CDF of each element for a given V_0 = ν_0 is given by (7), the conditional PDF of U is given by (8), and the PDF of V_0 is given by (9).

Proof: See Appendix A.
The density functions of distances presented in Lemma 1 are specialized to the case of Thomas cluster process in the next Corollary.
Corollary 1. For the special case of the Thomas cluster process, the distances in the set D_s^{x_0} are conditionally i.i.d. with CDF

F_U(u|ν_0) = ∫_0^u (t/σ_s²) exp(−(t² + ν_0²)/(2σ_s²)) I_0(tν_0/σ_s²) dt,

and the PDF of each element is

f_U(u|ν_0) = (u/σ_s²) exp(−(u² + ν_0²)/(2σ_s²)) I_0(uν_0/σ_s²),

where I_0(·) is the modified Bessel function of the first kind with order zero. The PDF of V_0 is

f_{V_0}(ν_0) = (ν_0/σ_u²) exp(−ν_0²/(2σ_u²)).

Proof: For the special case of the Thomas cluster process, the PDF of Y_s with realization y_s = (y_{s1}, y_{s2}) can be expressed as f_{Y_s}(y_s) = (1/(2πσ_s²)) exp(−(y_{s1}² + y_{s2}²)/(2σ_s²)). Substituting f_{Y_s}(·) into (8) and letting z_1 = u cos θ, we obtain the Rician distribution above. Similarly, the PDF of V_0 is obtained by substituting (1) into (9), which gives the Rayleigh distribution.
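Corollary 1 can be sanity-checked against scipy's Rician distribution, whose shape parameter b corresponds to ν_0/σ_s and whose scale corresponds to σ_s; the values of σ_s and ν_0 below are assumed.

```python
import numpy as np
from scipy.stats import rice, kstest

rng = np.random.default_rng(3)
sigma_s, nu0 = 30.0, 50.0    # assumed scattering std. dev. and nu_0 = ||x_0||

# Distances ||x_0 + y_s|| with y_s ~ N(0, sigma_s^2 I_2) and x_0 = (nu0, 0).
y = sigma_s * rng.standard_normal((100_000, 2))
u = np.linalg.norm(np.array([nu0, 0.0]) + y, axis=1)

# Corollary 1: U | V_0 = nu0 is Rician; in scipy's parametrization the shape
# is b = nu0 / sigma_s and the scale is sigma_s.
print(kstest(u, rice(b=nu0 / sigma_s, scale=sigma_s).cdf))
```

A large Kolmogorov-Smirnov p-value confirms the Rician form of the conditional distance distribution.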
The conditional i.i.d. property of distances in the set D x 0 s enables us to characterize the distance from the typical user to its nearest open-access SBS located within the representative cluster.
This result is presented in the next Lemma.
Lemma 2. Conditioned on the distance of the typical user to its cluster center, i.e., V_0, the CDF of the distance from the typical user to its nearest open-access SBS, i.e., R_s, for a given V_0 = ν_0 is

F_Rs(r_s|ν_0) = 1 − (1 − F_U(r_s|ν_0))^{n_{s0}},

and the conditional PDF of R_s for a given ν_0 is

f_Rs(r_s|ν_0) = n_{s0} (1 − F_U(r_s|ν_0))^{n_{s0}−1} f_U(r_s|ν_0),

where F_U(·|ν_0) and f_U(·|ν_0) are given by (7) and (8), respectively.
Proof: Conditioned on the distance of the typical user to its cluster center, the elements in D_s^{x_0} are i.i.d. by Lemma 1. Thus the result simply follows from the PDF of the minimum element of an i.i.d. sequence of random variables [58, eqn. (3)].
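A minimal numerical check of Lemma 2, under assumed values of σ_s, ν_0, and n_{s0}: the empirical CDF of the minimum of the n_{s0} conditionally i.i.d. distances should match 1 − (1 − F_U(r|ν_0))^{n_{s0}}, with F_U estimated empirically from the same samples.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma_s, nu0, n_s0 = 30.0, 50.0, 10   # assumed values; n_s0 SBSs per cluster

# n_s0 conditionally i.i.d. distances per realization, and their minimum R_s.
y = sigma_s * rng.standard_normal((50_000, n_s0, 2))
u = np.linalg.norm(np.array([nu0, 0.0]) + y, axis=2)
r_s = u.min(axis=1)

r = 25.0
emp = (r_s <= r).mean()               # empirical F_Rs(r | nu0)
f_u = (u <= r).mean()                 # empirical F_U(r | nu0)
print("empirical:", emp, " 1-(1-F_U)^n_s0:", 1 - (1 - f_u) ** n_s0)
```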
These distance distributions are key to the derivation of the metrics of interest.
B. Association policies
As discussed above, the candidate serving BS in each open-access tier (i.e., all macro BSs and SBSs located within the representative cluster) is the one nearest to the user. Recall that the distances from a typical user to its nearest open-access small cell and macro BSs were denoted by R s and R m , respectively. In order to select the serving BS from amongst the candidate serving BSs, we consider the following two association policies.
1) Power-based association policy:
The serving BS is chosen from amongst the candidate serving BSs according to the maximum received power averaged over small-scale fading. The association events to the macro BS and the SBS can be formally defined as follows.
• A typical user is associated to the macrocell if m = arg max_{j∈{s,m}} P_j R_j^{−α}. The association event to the macrocell is denoted by S_m^P, with indicator 𝟙_{S_m^P} = 𝟙(m = arg max_{j∈{s,m}} P_j R_j^{−α}).
• A typical user is associated to the small cell if s = arg max_{j∈{s,m}} P_j R_j^{−α}. The association event to the small cell is denoted by S_s^P, with indicator 𝟙_{S_s^P} = 𝟙(s = arg max_{j∈{s,m}} P_j R_j^{−α}).
Now, the density functions of the distances, f_Rs(·|ν_0) and f_Rm(·), obtained in the previous subsection are used to characterize the association probabilities to the macro and small cells in the next Lemma.
Lemma 3 (Association probability under power-based policy). The association probability of a typical user located at distance ν_0 from its cluster center to the macrocell is

A_m^P(ν_0) = ∫_0^∞ (1 − F_Rs(ξ_sm r_m | ν_0)) f_Rm(r_m) dr_m,

with ξ_sm = (P_s/P_m)^{1/α}, and the association probability to the small cell is A_s^P(ν_0) = 1 − A_m^P(ν_0), where f_Rm(·) and F_Rs(·|ν_0) are given by (6) and (13), respectively.
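Lemma 3 can likewise be estimated by Monte Carlo: the sketch below samples R_m from the PPP contact distribution, samples R_s as the minimum of n_{s0} conditionally i.i.d. Rician distances, and counts how often the macrocell wins the power-based comparison (equivalently, R_s > ξ_sm R_m). All parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
# Assumed values: P_m = 10^3 P_s, alpha = 4, plus cluster geometry in meters.
alpha, Pm, Ps = 4.0, 1e3, 1.0
lam_m = 1e-6                          # macro density per m^2
sigma_s, nu0, n_s0 = 30.0, 50.0, 10
xi_sm = (Ps / Pm) ** (1 / alpha)
trials = 100_000

# R_m via the PPP contact distribution: R_m^2 ~ Exp(mean 1/(pi lam_m)).
r_m = np.sqrt(rng.exponential(1 / (np.pi * lam_m), trials))
# R_s as the nearest of n_s0 conditionally i.i.d. Rician distances (Lemma 2).
y = sigma_s * rng.standard_normal((trials, n_s0, 2))
r_s = np.linalg.norm(np.array([nu0, 0.0]) + y, axis=2).min(axis=1)

# Power-based association: the macrocell wins iff P_m R_m^-alpha > P_s R_s^-alpha,
# i.e., iff R_s > xi_sm * R_m.
print("A_m^P(nu0) ~", np.mean(r_s > xi_sm * r_m))
```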
The serving distance is simply the distance from the typical user to its nearest BS from associated tier. Denote by X P j the serving distance to tier j ∈ {m, s}. The density function of X P j is characterized in the next Lemma.
Lemma 4 (Serving distance distribution under power-based policy). For a typical user located at distance ν_0 from its cluster center, the PDF of the serving distance X_m^P conditioned on the association to the macrocell, i.e., on the event S_m^P, is

f_{X_m^P}(x_m|ν_0) = (1/A_m^P(ν_0)) (1 − F_Rs(ξ_sm x_m|ν_0)) f_Rm(x_m),

and the PDF of X_s^P conditioned on the association to the small cell, i.e., on the event S_s^P, is

f_{X_s^P}(x_s|ν_0) = (1/A_s^P(ν_0)) (1 − F_Rm(ξ_ms x_s)) f_Rs(x_s|ν_0),

where ξ_sm = (P_s/P_m)^{1/α} and ξ_ms = (P_m/P_s)^{1/α}. The density functions f_Rm(·), f_Rs(·|ν_0), and F_Rs(·|ν_0) are given by (6), (14), and (13), respectively.
Proof: See Appendix C.

We define the set F_s^P (F_m^P) to represent the sequence of distances from the open-access interfering SBSs (which by assumption belong to the representative cluster centered at x_0) to the typical user, conditioned on the serving BS belonging to the small cell (macrocell), such that the elements W_s^P ∈ F_s^P and W_m^P ∈ F_m^P are greater than x_s and ξ_sm x_m, respectively. In the next Lemma, we establish the conditional i.i.d. property of the elements of F_s^P and F_m^P and characterize their distributions.

Lemma 5 (Distances from open-access interfering SBSs under power-based policy). Under the power-based association policy, the elements of F_m^P (F_s^P) conditioned on V_0 and X_m^P (X_s^P) are i.i.d., where the PDF of each element W_m^P ∈ F_m^P for given V_0 = ν_0 and X_m^P = x_m is

f_{W_m^P}(w_m|ν_0, x_m) = f_U(w_m|ν_0) / (1 − F_U(ξ_sm x_m|ν_0)), for w_m > ξ_sm x_m,

and the conditional PDF of each element W_s^P ∈ F_s^P is

f_{W_s^P}(w_s|ν_0, x_s) = f_U(w_s|ν_0) / (1 − F_U(x_s|ν_0)), for w_s > x_s,

where F_U(·|ν_0) and f_U(·|ν_0) are given by (7) and (8), respectively.
Proof: See Appendix D.
2) Distance-based association policy: In addition to the received-signal-strength-based association discussed above, it is often desirable to define simple canonical association policies, which provide useful first-order insights into otherwise complicated spatial networks. We define one such association policy next, which we term the distance-based association policy. Under this policy, the association events to the SBS and the macro BS are defined as follows: a typical user is associated to the small cell if R_s < D, where D is a fixed distance threshold, and to the macrocell otherwise; the corresponding events are denoted by S_s^D and S_m^D.

Lemma 6 (Association probability under distance-based policy). The association probability of a typical user located at distance ν_0 from its cluster center to the small cell is A_s^D(ν_0) = F_Rs(D|ν_0), and the association probability to the macrocell is A_m^D(ν_0) = 1 − F_Rs(D|ν_0), where F_Rs(·|ν_0) is given by (13).
Proof: According to the definition of S D s , the conditional association probability to the small cell for a given V 0 = ν 0 can be written as: A D s (ν 0 ) = E Rs [1(R s < D)|ν 0 ] = P(R s < D|ν 0 ) = F Rs (D|ν 0 ), using which the association probability to macrocell is A D m (ν 0 ) = 1 − F Rs (D|ν 0 ). Using the macro and small cell association probabilities, the density function of serving distance is derived in the next Lemma.
Lemma 7 (Serving distance PDF under distance-based policy). Under the distance-based association policy, the PDF of the serving distance X_s^D when the typical user located at distance ν_0 from its own cluster center is served by a small cell is

f_{X_s^D}(x_s|ν_0) = f_Rs(x_s|ν_0) / A_s^D(ν_0), for 0 < x_s < D,

and the PDF of the serving distance X_m^D when the typical user is served by a macrocell is f_{X_m^D}(x_m) = f_Rm(x_m), where f_Rm(·), f_Rs(·|ν_0), and A_s^D(ν_0) are given by (6), (14), and (21), respectively.
Proof: For the typical user located at distance ν 0 from its own cluster center, f X D s (x s |ν 0 ) is the PDF of distance from the typical user to its nearest open-access SBS conditioned on the association to small cell tier S D s , which is equal to f Rs (x s |S D s , ν 0 ). However, the association to the macrocell is independent of the distance from the typical user to its nearest macro BS. Thus the PDF of serving distance when the typical user is served by macrocell is simply the PDF of distance to its nearest macro BS.
Lemma 8 (Distances from open-access interfering SBSs under distance-based policy). Under the distance-based association policy, the elements in the sequence of distances from the open-access interfering SBSs to the typical user served by a macrocell, i.e., F_m^D, are conditionally i.i.d., with the PDF of each element W_m^D ∈ F_m^D for a given V_0 = ν_0 being

f_{W_m^D}(w_m|ν_0) = f_U(w_m|ν_0) / (1 − F_U(D|ν_0)), for w_m > D,

and the elements in the sequence of distances from the open-access interfering SBSs to the typical user served by a small cell, i.e., F_s^D, are conditionally i.i.d., where the PDF of each element W_s^D ∈ F_s^D for given V_0 = ν_0 and X_s^D = x_s is

f_{W_s^D}(w_s|ν_0, x_s) = f_U(w_s|ν_0) / (1 − F_U(x_s|ν_0)), for w_s > x_s,

where F_U(·|ν_0) and f_U(·|ν_0) are given by (7) and (8), respectively.
Proof: The proof follows on the same lines as that of Lemma 5, and is hence skipped.
The locations of closed-access interfering SBSs are independent of access policy. Thus the distribution of distances from the typical user to the closed-access SBSs (also called inter-cluster interfering SBSs) is the same for both distance-based and power-based association policies. This distribution is presented in the next Lemma.
Lemma 9 (Distribution of distances from closed-access interfering SBSs). Denote by D_s^x = {t_s : t_s = ‖x + y_s‖, ∀ y_s ∈ N_s^x} the sequence of distances from the typical user to the inter-cluster interfering SBSs within the cluster centered at x ∈ Ψ_p. For a given ν = ‖x‖, the elements of D_s^x are i.i.d., with the conditional PDF f_Ts(·|ν) of each element given by (27).

Proof: The elements of the sequence N_s^x, i.e., the locations of the SBSs relative to the cluster centered at x ∈ Ψ_p, are i.i.d. by assumption. Hence, for a given ν = ‖x‖, the elements of the sequence D_s^x are i.i.d. The derivation of f_Ts(·|ν) follows along the same lines as that of f_U(·|ν_0) given by (8), and is hence skipped.
Remark 1 (Thomas cluster process). For the special case of the Thomas cluster process, the elements in the sequence of distances from the closed-access interfering SBSs to the typical user are i.i.d., where the PDF of each element is

f_Ts(t_s|ν) = (t_s/σ_s²) exp(−(t_s² + ν²)/(2σ_s²)) I_0(t_sν/σ_s²),

which is Rician. The proof is exactly the same as that of Corollary 1. It should be noted that all the results can be specialized to the Thomas cluster process by substituting F_U(·|ν_0), f_U(·|ν_0), f_{V_0}(·), and f_Ts(·|ν) with the expressions given by (10), (11), (12), and (28), respectively.
IV. COVERAGE PROBABILITY AND THROUGHPUT ANALYSIS
This is the second main technical section of this paper, where we use the distance distributions and association probabilities derived in the previous section to characterize network performance in terms of coverage probability of a typical user and throughput of the whole network.
A. Coverage probability
The coverage probability is formally defined as the probability that the SIR experienced by a typical user exceeds the threshold required for successful demodulation and decoding, i.e., P_c = P(SIR > β).
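The definition translates directly into an empirical estimator: given Monte Carlo SIR samples (however they are produced), the coverage probability is simply the fraction exceeding the threshold β. A minimal sketch:

```python
import numpy as np

def coverage_probability(sir_samples, beta_db=0.0):
    # Empirical P(SIR > beta): the fraction of SIR samples exceeding the
    # demodulation threshold beta (given here in dB).
    beta = 10.0 ** (beta_db / 10.0)
    return np.mean(np.asarray(sir_samples) > beta)

print(coverage_probability([0.4, 2.0, 9.1, 0.7], beta_db=0.0))  # -> 0.5
```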
We specialize this definition to the power-based and the distance-based association policies in this subsection. We begin our discussion with the power-based association.
Lemma 10 (Laplace transform of intra-cluster interference under power-based policy). Conditioned on V_0 = ν_0 and the serving distance, the Laplace transforms of the intra-cluster interference distributions at a typical user served by the macrocell and by the small cell are given by (29) and (30), respectively, where f_{W_m^P}(·|ν_0, x_m) and f_{W_s^P}(·|ν_0, x_s) are given by Lemma 5.
Proof: See Appendix E.
Recall that the total number of SBSs in the representative cluster was assumed to be known a priori and equal to n_{s0}. This assumption was made to simplify the order statistics argument used in the derivation of the PDF of the serving distance, but it constrains the maximum number of interfering SBSs in the representative cluster, which complicates the numerical evaluation of the exact coverage probability (to be presented in Theorem 1) due to the summation involved in the expressions given by Lemma 10. However, these expressions can be simplified under the assumption n̄_as ≪ n_{s0}, and the simplified expressions are presented in the next Corollary.
Corollary 2. For n̄_as ≪ n_{s0}, the Laplace transforms of the interference given by (29) and (30) reduce to simpler forms in which the truncation of the Poisson distribution can be ignored. For numerical evaluation, we will use the simpler expressions presented in Corollary 2 instead of Lemma 10. In the numerical results section, we will see that the simplified expressions given by Corollary 2 can be treated as a proxy of the exact expressions for a wide range of cases.
Using the PDF of distance derived in Lemma 9, we can derive the Laplace transform of the interference distribution caused by the closed-access interfering SBSs at the typical user (inter-cluster interference), which is stated in the next Lemma.
Lemma 11. The Laplace transform of the interference distribution from the closed-access interfering SBSs at the typical user is characterized in terms of f_Ts(·|ν) given by (27), where the index j ∈ {m, s} denotes the tier of the serving BS.
Lemma 12 (Laplace transform of interference from macro BSs under power-based policy). The Laplace transform of the interference distribution from the open-access macro BSs (except the serving BS, when the user is served by the macrocell) at the typical user is characterized in closed form, where the index j ∈ {m, s} denotes the tier of the serving BS.
Proof: The proof follows from that of [17, Theorem 1] with a minor modification. For completeness, the proof is presented in Appendix G of the two-column version.
These Laplace transforms of interference distributions are used to evaluate the coverage probability in the next Theorem.
Theorem 1 (Coverage probability under power-based association policy). The coverage probability of a typical user served by the j-th tier, P_cj^P, is given by (35), where j ∈ {m, s}. Using P_cj^P, the total coverage probability is P_cT^P = P_cm^P + P_cs^P, where f_{V_0}(·), A_j^P(·), f_{X_j^P}(·|ν_0), f_{W_j^P}(·), L_{I_s^inter}(·), and L_{I_mj}^P(·) are given by Lemmas 1, 3, 4, 5, 11, and 12, respectively.
Proof: The coverage probability of a typical user served by the j-th tier is P_cj^P = E[𝟙(SIR_j > β) 𝟙(S_j^P)], where step (a) in the derivation follows from the Rayleigh fading assumption, i.e., h_j ∼ exp(1), and from the fact that the events 𝟙{SIR_j > β} and S_j^P conditioned on V_0 are independent. Recall that the association probabilities are given by Lemma 3. The final expression of P_cj^P given by (35) is obtained by using the definition of the Laplace transform along with the independence of the open-access (intra-cluster) SBS, closed-access (inter-cluster) SBS, and macro BS interference powers, followed by de-conditioning first over X_j^P given V_0 = ν_0 and then over V_0. Now, using P_cj^P, the total coverage probability is obtained by applying the law of total probability.
Lemma 13 (Laplace transform of intra-cluster interference under distance-based policy). Conditioned on V_0 = ν_0, the Laplace transform of the intra-cluster interference distribution at a typical user served by the macrocell under the distance-based association policy is characterized in terms of f_{W_m^D}(·|ν_0) given by (25), and for n̄_as ≪ n_{s0} it simplifies in the same way as in Corollary 2.

Proof: The proof follows along the same lines as that of Lemma 10, with the nearest open-access SBS located at a distance greater than D from the typical user served by the macrocell.
As noted above, the interference caused by closed-access SBSs is independent of association policy, and hence its Laplace transform is the same for the two association policies. Now we are left with the derivation of the Laplace transform of interference caused by macro BSs, which is presented in the next Lemma.
Lemma 14 (Laplace transform of interference from macro BSs under distance-based policy). The Laplace transform of the interference distribution from the macro BSs at a typical user served by the small cell is characterized in closed form, and the Laplace transform of the interference from the macro BSs (except the serving BS) at a typical user served by the macrocell under the distance-based association policy is the same as under the power-based association policy. Thus we have L_{I_mm}^D(s) = L_{I_mm}^P(s).
Proof: The proof follows along the same lines as that of Lemma 12. The main difference is that the distance-based association policy imposes no constraint on the locations of the interfering macro BSs relative to the typical user served by the small cell; the final expression is then obtained by using [59, (3.241)].
Using these Lemmas, we now derive the coverage probability of a typical user under distance based association policy. The proof follows on the same lines as that of Theorem 1.
Theorem 2 (Coverage probability under distance-based association policy). The coverage probability of a typical user served by the small cell is P_cs^D, the coverage probability of a typical user served by the macrocell is P_cm^D, and the total coverage probability is P_cT^D = P_cs^D + P_cm^D, where f_{V_0}(·), A_j^D(·), f_{X_j^D}(·), L^D_{I^intra}(·), L_{I^inter}(·), and L_{I_mm}^D(·) are given by Lemmas 1, 6, 7, 13, 11, and 14, respectively.
Remark 2 (Optimal SBS distance threshold D). Increasing SBS distance threshold has a conflicting effect on the association to macrocell and small cell: association probability to macrocell decreases whereas association probability to small cell increases. In the Numerical Results Section, we concretely demonstrate that there exists an optimal SBS distance threshold D that maximizes the total coverage probability.
Using these coverage probability results, we now characterize throughput in the next subsection.
B. Throughput
In order to study the tradeoff between aggressive frequency reuse and the resulting interference, we use the following notion of network throughput [28]: T = λ P_c log_2(1 + β), where λ is the number of simultaneously active transmitters per unit area and P_c is the corresponding coverage probability. This metric roughly characterizes the average number of bits successfully transmitted per unit area. This definition is specialized to our setup in the next Proposition.
Proposition 1. Using the result of Theorem 1, the throughput under the power-based association policy is T^P = (λ_m P_cm^P + λ_p n̄_as P_cs^P) log_2(1 + β), and using the result of Theorem 2, the throughput under the distance-based association policy is T^D = (λ_m P_cm^D + λ_p n̄_as P_cs^D) log_2(1 + β).

Remark 3 (Number of simultaneously active SBSs within a cluster). Increasing the number of simultaneously active SBSs boosts the spectral efficiency through more aggressive frequency reuse, while it also leads to higher interference power. While it is straightforward to conclude from the analytical results that the coverage probability always decreases with the number of simultaneously active SBSs, we will demonstrate in the next section that the throughput increases with the number of simultaneously active SBSs in the regime of interest. This in turn implies that the usual assumption of strictly orthogonal channelization per cluster, i.e., only one simultaneously active SBS per cluster (e.g., see [47]), should be revisited.
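Proposition 1 is a one-line computation once the coverage probabilities are available. The sketch below evaluates T^P for illustrative (assumed) coverage values and per-km² densities.

```python
import numpy as np

def throughput(lam_m, lam_p, n_bar_as, p_cm, p_cs, beta):
    # Proposition 1: density of simultaneously active links, weighted by the
    # per-tier coverage, times the spectral efficiency log2(1 + beta).
    return (lam_m * p_cm + lam_p * n_bar_as * p_cs) * np.log2(1.0 + beta)

# Illustrative (assumed) coverage values; densities in km^-2, beta = 0 dB.
print(throughput(lam_m=1.0, lam_p=10.0, n_bar_as=3.0,
                 p_cm=0.7, p_cs=0.6, beta=1.0), "per km^2")
```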
V. NUMERICAL RESULTS

A. Validation of results
In this section, we validate the accuracy of the analysis by comparing the analytical results with Monte Carlo simulations. For this comparison, the macro BS locations are distributed as an independent PPP with density λ_m = 1 km⁻², and the geographical centers of the user hotspots (i.e., the cluster centers) are distributed as an independent PPP with density λ_p = 10 km⁻², around which the users and SBSs are assumed to be normally distributed with variances σ_u² and σ_s², respectively. For this setup, we set the path-loss exponent α to 4, the SIR threshold to 0 dB, and the power ratio to P_m = 10³ P_s, and we study the coverage probability for the two association policies. As discussed in Section IV, the summation involved in the exact expression of the Laplace transform of the intra-cluster interference complicates the numerical evaluation of Theorems 1 and 2. Thus, for the numerical evaluation of Theorems 1 and 2, we use the simpler expressions of the Laplace transform of the intra-cluster interference derived under the assumption n̄_as ≪ n_{s0} in Corollary 2 and Lemma 13. As evident from Figs. 2 and 3, the simpler expressions can be treated as proxies of the exact ones for a wide range of parameters. Considering n_{s0} = 10, the analytical plots exhibit a perfect match with the simulations even for relatively large values of n̄_as. Comparing Figs. 2 and 3, we also note that the coverage probability under the power-based association policy is higher than that under the distance-based association policy.

B. Impact of the number of simultaneously active SBSs

Recall from Remark 3 that increasing the number of simultaneously active SBSs per cluster entails a tradeoff between aggressive frequency reuse and the resulting interference. To study this trade-off, we plot the throughput as a function of n̄_as in Figs. 4 and 5. Interestingly, in the considered range, the throughput increases with the average number of simultaneously active SBSs per cluster. This means that more and more SBSs can be simultaneously activated as long as the coverage probability remains acceptable. From this observation, it can also be deduced that the strictly orthogonal strategy (only one SBS per cluster reuses the same resource block) is not spectrally efficient.
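For readers who want to reproduce the validation qualitatively, the following deliberately simplified Monte Carlo sketch estimates the total coverage probability under the power-based policy with the Section V parameters (λ_m = 1 km⁻², λ_p = 10 km⁻², α = 4, β = 0 dB, P_m = 10³ P_s). It departs from the exact model in minor ways (e.g., the candidate serving SBS is taken as the nearest simultaneously active SBS, and edge effects of the finite window are ignored), and σ_s, σ_u, n_{s0}, and the window size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)
# Section V parameters (units: km, km^-2): lam_m = 1, lam_p = 10, alpha = 4,
# beta = 0 dB, P_m = 1e3 P_s. The remaining values are assumed.
lam_m, lam_p, alpha, beta = 1.0, 10.0, 4.0, 1.0
Pm, Ps = 1e3, 1.0
sigma = 0.03                  # sigma_s = sigma_u, in km (assumed)
n_s0, n_bar_as, side = 10, 3.0, 10.0

def one_sir():
    x0 = sigma * rng.standard_normal(2)                 # own cluster center
    macro = rng.uniform(-side / 2, side / 2, (rng.poisson(lam_m * side**2), 2))
    own = x0 + sigma * rng.standard_normal((n_s0, 2))   # own-cluster SBSs
    k = max(min(rng.poisson(n_bar_as), n_s0), 1)        # active in own cluster
    act = own[rng.choice(n_s0, size=k, replace=False)]
    cen = rng.uniform(-side / 2, side / 2, (rng.poisson(lam_p * side**2), 2))
    ext = np.vstack([c + sigma * rng.standard_normal((rng.poisson(n_bar_as), 2))
                     for c in cen])                     # other clusters' SBSs

    h = lambda n: rng.exponential(1.0, n)
    dm, da = np.linalg.norm(macro, axis=1), np.linalg.norm(act, axis=1)
    pm = Pm * h(len(dm)) * dm ** -alpha
    pa = Ps * h(len(da)) * da ** -alpha
    pe = Ps * h(len(ext)) * np.linalg.norm(ext, axis=1) ** -alpha

    # Power-based association over fading-averaged powers (i.e., distances).
    if Pm * dm.min() ** -alpha > Ps * da.min() ** -alpha:
        sig = pm[dm.argmin()]
        interf = pm.sum() - sig + pa.sum() + pe.sum()
    else:
        sig = pa[da.argmin()]
        interf = pm.sum() + pa.sum() - sig + pe.sum()
    return sig / interf

print("P_cT^P ~", np.mean([one_sir() > beta for _ in range(300)]))
```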
C. Impact of SBS standard deviation
Under the power-based association policy, the coverage probability as a function of the scattering standard deviation of the SBSs, σ_s, is plotted in Fig. 6. The plot shows that σ_s has a conflicting effect on P_cm^P and P_cs^P: P_cm^P increases and P_cs^P decreases. The intuition behind this observation is that increasing σ_s increases the association to the macrocell while decreasing the association to the small cell. In Fig. 7, we plot the average association probability as a function of σ_s to exhibit this trend. From Fig. 9, a similar observation can be made for the distance-based association policy, where P_cm^D increases and P_cs^D decreases as the scattering standard deviation of the SBSs increases.
D. Optimal distance threshold
As evident from Figs. 8 and 9, there exists an optimal SBS distance threshold that maximizes the total coverage probability. The existence of the optimal value can be intuitively justified by the conflicting effect of the distance threshold on the association to the macrocell and the small cell, as discussed in Remark 2. From Fig. 8, we can observe that the optimal distance threshold decreases as the average number of simultaneously active SBSs increases. This is because, although both P_cs^D and P_cm^D decrease with increasing n̄_as, the former decreases at a slightly higher rate; thus it is desirable to associate fewer users to the SBSs. Interestingly, we notice that the optimal distance threshold does not change with σ_s in the setup considered in Fig. 9.
VI. CONCLUDING REMARKS
We developed a comprehensive framework for the performance analysis of HetNets with user-centric, capacity-driven small cell deployments. Unlike the prior art on the spatial modeling of HetNets, where users and SBSs are usually modeled by independent homogeneous PPPs, we introduced a tractable approach to incorporate correlation in the locations of the users and SBSs, which bridges the gap between the simulation models used by industry (especially for user hotspots), such as those of 3GPP [2], and the ones used thus far by the stochastic geometry community. In particular, we assumed that the geographical centers of user hotspots are distributed according to a homogeneous PPP, around which users and SBSs are located according to two general distributions. This approach not only models the aforementioned correlation, but also captures the non-homogeneous nature of user distributions [2]. For this setup, we derived the coverage probability of a typical user and the throughput of the whole network under distance-based and power-based association policies. Our setup is general and applicable to any distribution of the relative locations of the users and SBSs with respect to the cluster center. A key intermediate step is the derivation of a new set of distance distributions, which enabled the accurate analysis of user-centric small cell deployments. For numerical evaluation, we considered the special case of the Thomas cluster process, which led to several design insights. The most important one is that, in the considered setup, the throughput increases with the number of SBSs per cluster reusing the same resource block. Therefore, the usual assumption of strictly orthogonal channelization per cluster should be revisited for efficient design, planning, and dimensioning of the system.

This work has many extensions. From a modeling perspective, it is important to extend the analysis to more general channel models, such as κ-µ shadowed fading channels [60] and correlated shadowing [61]. From an analysis perspective, it is useful to study the connection between simplified protocol and physical-layer interference models to gain more system design insights from the analytical expressions. From an application perspective, the results can be extended to the analysis of cache-enabled networks to study metrics like the total hit probability and the caching throughput; see [48], [62]. From a point process perspective, it is of interest to jointly model the spatial separation between the locations of macro BSs and SBSs as well as the clustering nature of SBSs, which brings forth a new set of challenges in the context of cluster point processes [63].
Finally, this framework can be extended to the analysis of other key performance metrics such as ergodic spectral efficiency [64] and bit error rate.
APPENDIX

A. Proof of Lemma 1
Let us denote the location of an SBS chosen uniformly at random in the representative cluster by z_0 = x_0 + y_s ∈ R², where z_0 = (z_1, z_2) and x_0 = (ν_0, 0). The conditional CDF of the distance U with realization u = √(z_1² + z_2²) ∈ R⁺ follows by integrating f_{Y_s} over the disc of radius u centered at the origin, and the PDF of U is obtained by using Leibniz's rule for differentiation [65]. Now recall that the typical user is located at the origin and that the users are distributed around the cluster center with PDF f_{Y_u}(y_u). Thus, the relative location of the cluster center with respect to the typical user, i.e., x_0, has the same distribution as Y_u. The PDF of V_0 = ‖Y_u‖ can be derived by the same argument as in the derivation of f_U(·).
B. Proof of Lemma 3
The conditional association probability to the macro tier for a given value of ν_0 is obtained by averaging the event R_s > ξ_sm R_m over R_m, using which the association probability to the small cell tier is A_s^P(ν_0) = 1 − A_m^P(ν_0).
C. Proof of Lemma 4
For a typical user located at distance ν_0 from its cluster center, the event X_m^P > x_m is equivalent to R_m > x_m together with the typical user connecting to the macro BS, i.e., the event S_m^P. Thus the conditional CCDF of X_m^P can be derived accordingly, and hence the PDF of X_m^P is

f_{X_m^P}(x_m|ν_0) = d/dx_m (1 − P(X_m^P > x_m|ν_0)) = (1/A_m^P(ν_0)) (1 − F_Rs(ξ_sm x_m|ν_0)) f_Rm(x_m).
The derivation of f X P s (·|ν 0 ) follows on the same lines as that of f X P m (·|ν 0 ), and is hence skipped.
D. Proof of Lemma 5
Denote by {U_j} the sequence of distances from the typical user to the SBSs of the representative cluster. Conditioned on V_0 = ν_0 and X_m^P = x_m, each interfering distance is the restriction of U_j to (ξ_sm x_m, ∞), so its conditional CDF is F_{U_j}(w_{m,j}|ν_0), shifted and normalized by 1 − F_{U_j}(ξ_sm x_m|ν_0). Using this result, the PDF of W_{m,j}^P can be obtained by taking the derivative of F_{W_{m,j}^P}(w_{m,j}|ν_0, x_m) with respect to w_{m,j}. In the final result, the index j is dropped for notational simplicity. The derivation of f_{W_s^P}(·|ν_0, x_s) follows along the same lines as that of W_m^P, and is hence skipped.
E. Proof of Lemma 10
The Laplace transform of the intra-cluster interference distribution at a typical user served by the macrocell, conditioned on V_0 and X_m^P, is derived as follows: step (a) follows from the definition of the Laplace transform, and step (b) follows from taking the expectation over h_s ∼ exp(1). The final result follows from the change of variable x_0 + y_s → w_m and the conversion from Cartesian to polar coordinates, followed by the fact that the elements of {W_m} are conditionally i.i.d. with PDF f_{W_m^P}(w_m|ν_0, x_m) given by Lemma 5, and finally by the expectation over the number of simultaneously active SBSs within the representative cluster, which is Poisson distributed conditioned on the total being less than n_{s0}. The derivation of L_{I_ss^intra}(·|ν_0, x_s) follows along the same lines as that of L_{I_sm^intra}(·|ν_0, x_m), with the serving SBS removed from the set of possible interfering SBSs and the number of simultaneously active SBSs within the representative cluster Poisson distributed conditioned on the total being greater than one and less than n_{s0}.
G. Proof of Lemma 12
Recall that there are no interfering macro BSs within distance ξ_mj x_j (where j ∈ {m, s} denotes the tier of the serving BS) from the typical user located at the origin. Thus, the point process of interfering macro BSs can be defined as Φ_m' = Φ_m \ b(o, ξ_mj x_j), where b(o, ξ_mj x_j) denotes the ball of radius ξ_mj x_j centered at the origin, i.e., at the location of the typical user. The Laplace transform of the interference from the macro BSs at the typical user then follows, where step (a) follows from the expectation over h_m ∼ exp(1) and step (b) follows from the substitution ‖z_m‖ → u, the conversion from Cartesian to polar coordinates, and the PGFL of the PPP.
"year": 2016,
"sha1": "3d0ae9110b3e20dd9d849a266ae7e062dfd9c583",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://doi.org/10.1109/twc.2018.2794983",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "3d0ae9110b3e20dd9d849a266ae7e062dfd9c583",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Global Dynamics of Diffusive Hindmarsh-Rose Equations with Memristors
Global dynamics of the diffusive Hindmarsh-Rose equations with memristors, a newly proposed model for neuron dynamics, are investigated in this paper. We prove the existence and regularity of a global attractor for the solution semiflow through uniform analytic estimates, which establish the higher-order dissipative property and the asymptotic compactness of the solution semiflow via the Kolmogorov-Riesz theorem. The quantitative bounds of the regions containing this global attractor, respectively in the state space and in the regular space, are explicitly expressed in terms of the model parameters.
Introduction
Starting from the well-known Hodgkin-Huxley equations [19] (1952), which provided a highly nonlinear four-dimensional model for general neuron dynamics, and the two-dimensional FitzHugh-Nagumo equations [14] (1961-1962), a simplified model which explains periodic firing with a refractory period but is not able to generate chaotic neuron bursting, scientists have proposed various types of mathematical neuron models based on the biological characteristics of neuron functions and biophysical laws. Two key issues in any modeling of neuron dynamics and neuronal networks are the firing-bursting patterns of single neurons and the collective behaviors of neural networks, especially synchronization and chaotic dynamics. All these issues are closely linked to significant applications in many areas such as brain diseases, image and signal processing, encryption of communications, and, most prominently, artificial neural networks and artificial intelligence.
The Hindmarsh-Rose equations [18] (1984) originally form a three-dimensional ODE model for neuron firing-bursting phenomena and have been studied through bifurcation analysis and numerical simulations by many researchers, cf. [6,10,12,18,21,22] and the references therein. This model exhibits rich and interesting spatial-temporal bursting patterns [5,21,33,37]. In particular, its three-dimensional complex bifurcations lead to numerically observed and sophisticated chaotic bursting behaviors.
Very recently the author's group studied global dynamics generated by the spatially diffusive Hindmarsh-Rose equations [23,24,25], random dynamics of the stochastic Hindmarsh-Rose equations [26], and synchronization of complex Hindmarsh-Rose neural networks and FitzHugh-Nagumo neural networks [27,28].
In this work, we propose and study the global dynamics of the diffusive Hindmarsh-Rose equations with memristors, a new mathematical model for neuron dynamics in terms of a hybrid system of PDE and ODE featuring an additional component equation for the memristor and its nonlinear coupling to the membrane potential equation of a neuron cell.
The concept of a memristor (meaning a memory resistor) was coined by Leon Chua [8] (1971) for an electrical device with two terminals that encodes the relationship between time-varying electromagnetic flux and electric charge. General memristive systems were initially tackled in [9] (1976) and have attracted broad scientific interest since the seminal paper [36] (2008) published in Nature.
Memristors are recognized and used in advanced neuron models to describe the electromagnetic induction effect caused by ion movement across the neuron cell membrane, which has been observed experimentally through fluctuations of extracellular calcium and potassium ion concentrations [29,39]. Moreover, a memristor synapse in a model carries and transmits dynamically memorized information, serving as a different type of synapse in neuron networks beside the electrical and chemical synapses well known in neuroscience [12,38,42].
In this paper, we consider the following new model of the diffusive Hindmarsh-Rose equations with memristor for a single neuron, posed on a bounded domain Ω of dimension up to three with locally Lipschitz continuous boundary (put in a general mathematical scope). The nonlinear term in the quadratic form

ϕ(ρ) = c + γρ + δρ², c, γ ∈ R, δ > 0, (1.5)

presents the memristive coupling in the membrane potential equation (1.1), where the memristive variable ρ(t, x) stands for the memductance of the memristor and ϕ(ρ) represents the electromagnetic induction flux, with coupling strength k₁ and self-coupling strength k₂, respectively. All the results proved in this paper are also valid for another type of memristor [41], ϕ(ρ) = tanh(ρ), simply by adjusting the estimates in the proofs.
In the system (1.1)-(1.4), the variable u(t, x) refers to the membrane electric potential of a neuron cell; the variable v(t, x) represents the transport rate of sodium and potassium ions through the fast channels and may be called the spiking variable; and the variable w(t, x) represents the transport rate across the neuron membrane through the slow channels of calcium and other ions, correlated to the interspike quiescence, and may be called the bursting variable.
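Since the component equations (1.1)-(1.4) are not reproduced in this extract, the following space-clamped (ODE) sketch uses classical Hindmarsh-Rose right-hand sides together with the memristive coupling k₁ϕ(ρ)u and a linear memristor equation as a stand-in; every equation form and parameter value below is an assumption made for illustration, not the paper's system. The bounded trajectories it produces are consistent with the absorbing-set results proved later.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in right-hand sides: classical Hindmarsh-Rose terms plus the
# memristive coupling k1*phi(rho)*u and a linear memristor equation.
# Every equation form and every value below is an assumption for
# illustration only; the paper's equations (1.1)-(1.4) are not quoted here.
a, b, J = 3.0, 1.0, 3.2            # fast-subsystem coefficients (assumed)
alpha, beta = 1.0, 5.0             # spiking-variable coefficients (assumed)
q, s, uR = 0.006, 4.0, -1.6        # bursting-variable coefficients (assumed)
k1, k2, r = 0.5, 0.9, 1.0          # memristive coupling strengths (assumed)
c, gamma, delta = 0.1, 0.2, 0.3    # memductance phi(rho) = c + gamma*rho + delta*rho^2

def phi(rho):
    return c + gamma * rho + delta * rho ** 2

def rhs(t, y):
    u, v, w, rho = y
    du = a * u**2 - b * u**3 + v - w + J - k1 * phi(rho) * u
    dv = alpha - beta * u**2 - v
    dw = q * (s * (u - uR) - w)
    drho = k2 * u - r * rho
    return [du, dv, dw, drho]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.0, 0.0, 0.0], max_step=0.05)
print("final state:", np.round(sol.y[:, -1], 3))  # trajectories remain bounded
```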
We impose the homogeneous Neumann boundary condition (1.6) for the u-component, and the initial conditions of the components are denoted by (1.7). In the listed and many other references, the methodology of investigation into memristive Hindmarsh-Rose neuron models mainly consists of bifurcation and stability analysis supported by numerical simulations to reproduce neuron bursting-firing patterns. Several commonly used methods in this area are bifurcation diagrams and Lyapunov exponents [1,3,13,31,35,39], generalized Hamiltonian functions and Lyapunov functions [1,38,39,44], center manifold theory [1,30,41,44], dissipativity analysis [41], algebraic invariant manifolds for analytic solutions [1], etc.
Notably, the proposed memristive neuron model of diffusive Hindmarsh-Rose equations reflects the structural features of a neuron cell, which has short-branch dendrites receiving incoming signals and a long-branch axon (naturally viewed as a one-dimensional space) propagating outgoing signals; this justifies the diffusive partial differential equation for the membrane potential in (1.1).
In Section 2 we present the formulation of the system (1.1)-(1.4) and the preliminaries. In Section 3 we conduct uniform estimates to show the absorbing property of the solution semiflow. In Sections 4 and 5 we prove the higher-order dissipativity and the asymptotic compactness of the solution semiflow by means of the Kolmogorov-Riesz theorem. Finally, in Section 6, the main result on the existence and regularity of a global attractor, which characterizes the collection of all permanent regimes of the modeled neuron dynamics, is proved, and the quantitative bounds of the regions containing this global attractor, respectively in the state space and in the regular space, are explicitly expressed in terms of the model parameters.
Formulation and Preliminaries
For the diffusive Hindmarsh-Rose equations with memristor (1.1)-(1.4) proposed in this paper, we define the state space to be E = [L²(Ω)]⁴ = L²(Ω, R⁴), which is a Hilbert space and can roughly be called the energy space. We also define the corresponding mild space H, where H¹(Ω) and H²(Ω) are the usual Sobolev spaces. The norm and inner product of the Hilbert space L²(Ω) or E will be denoted by ‖·‖ and ⟨·, ·⟩, respectively. The norm of the Banach space L^p(Ω) will be denoted by ‖·‖_{L^p} if p ≠ 2. We shall use |·| to denote either a vector norm or a set measure in a Euclidean space.
Definition 2.1. A function g(t, x), (t, x) ∈ [0, T] × Ω, is called a weak solution to the initial value problem (2.1) if the stated conditions are satisfied, with the differential equation holding in the distributional sense.
Lemma 2.2. For any given initial state g_0 ∈ E, there exists a unique weak solution g(t; g_0) = (u(t), v(t), w(t), ρ(t)), t ∈ [0, T), for some T > 0, of the initial value problem (2.1), which satisfies the natural regularity properties. Any weak solution g(t; g_0) becomes a strong solution on [t_0, T) for any t_0 ∈ (0, T). All the weak solutions depend continuously on the initial data in the state space E.
Proof. The existence and uniqueness of a weak solution local in time can be proved by the Galerkin approximation method for the PDE together with the basic existence theorem for ODEs, based on estimates similar to those we present in Section 3 and on the Lions-Magenes type of weak compactness argument [7,32]. The statement about strong solutions follows from the parabolic regularity [32] of the evolutionary equations in (2.1).
The goal of this work is to prove the existence of a unique global attractor for the dynamical system generated by the problem (2.1). The global attractor qualitatively characterizes the longtime and global dynamics in terms of the asymptotically permanent patterns of all the solution trajectories of the system. We refer to [7,32] for the details of the theory of infinite-dimensional dynamical systems, also called semiflows (for time t ≥ 0). Here we list a few concepts for clarity.

Definition 2.3. Let {S(t)}_{t≥0} be a semiflow on a Banach space X. A bounded set B* of X is called an absorbing set for this semiflow if, for any given bounded subset B ⊂ X, there is a finite time T_B ≥ 0 such that S(t)B ⊂ B* for all t > T_B.

Definition 2.4. A semiflow {S(t)}_{t≥0} on a Banach space X is called asymptotically compact if, for any bounded sequence {z_n} in X and any monotone increasing sequence 0 < t_n → ∞, there exist subsequences {z_{n_k}} of {z_n} and {t_{n_k}} of {t_n} such that lim_{k→∞} S(t_{n_k}) z_{n_k} exists in X.
Definition 2.5 (Global Attractor). A set A in a Banach space X is called a global attractor for a semiflow {S(t)}_{t≥0} on X if the following two properties are satisfied: (i) A is a nonempty, compact, and invariant set in the space X; (ii) A attracts every bounded set B ⊂ X, in the sense that dist_X(S(t)B, A) → 0 as t → ∞.

Proposition 2.6. [7,32] Let {S(t)}_{t≥0} be a semiflow on a Banach space X. If there exists a bounded absorbing set B* ⊂ X for the semiflow and the semiflow is asymptotically compact in X, then there exists a unique global attractor A for this semiflow in X.

The Young's inequality in a general form will be used: for any nonnegative numbers x and y, if 1/p + 1/q = 1 with p, q > 1, then xy ≤ ε x^p + C(ε) y^q, where the constant ε > 0 can be arbitrarily small and C(ε) > 0 depends on ε, p, and q. (2.7)
Uniform Estimates and Absorbing Dynamics
The new feature in this four-dimensional memristive Hindmarsh-Rose neuron model (1.1)-(1.4) is the product coupling of the nonlinear memductance term k₁ϕ(ρ)u in the membrane potential equation (1.1). In this section, we first prove the global existence in time of all the weak solutions of the initial value problem (2.1). Through careful and sophisticated maneuvering of uniform inequality estimates, it will be shown that there exists an absorbing set in the state space E for the solution semiflow. This dissipativity result is valid without any conditions on the 14 biological parameters in the model equations as naturally described.
Proof. Taking the L² inner product of (1.1) with C₁u(t) for a constant C₁ > 0, we get (3.1). Taking the L² inner products of (1.2) with v(t) and of (1.3) with w(t), and using Young's inequality (2.7), we obtain (3.2) and (3.3). Taking the L² inner product of (1.4) with ρ(t), we get (3.4). We then estimate the terms on the right-hand side of (3.1) by using Young's inequality (2.7) in an appropriate way, as in (3.5), and, by completing the square, as in (3.6). Substituting the term estimates (3.5) and (3.6) into (3.1)-(3.4), we obtain the inequality (3.7), where C₂ > 0 is a constant determined by the model parameters. Thus (3.7) yields a uniform grouping estimate for all the solutions of the memristive Hindmarsh-Rose system (2.1) on t ∈ I_max = [0, T_max), the maximal time interval of solution existence. Applying the Gronwall inequality to the differential inequality (3.11), we obtain (3.12). The estimate (3.12) shows that no weak solution can blow up at any finite time, because the solutions are uniformly bounded for all t ∈ [0, ∞). Therefore the weak solution of the initial value problem (2.1), formulated from the diffusive Hindmarsh-Rose equations with memristor (1.1)-(1.4), exists globally in time for any initial data in the state space E, and the time interval of maximal existence is always [0, ∞). The proof is completed.
The global existence and uniqueness of the weak solutions, together with their continuous dependence on the initial data, enable us to define the solution semiflow [32] of the diffusive Hindmarsh-Rose equations with memristor (1.1)-(1.4) on the state space E by S(t): g_0 ↦ g(t; g_0), t ≥ 0, where g(t; g_0) is the weak solution with g(0) = g_0. We call this semiflow {S(t)}_{t≥0} the memristive Hindmarsh-Rose semiflow associated with the system (2.1).
The next result exhibits the globally dissipative dynamics of this solution semiflow in the state space E.
Proof. From the globally uniform estimate (3.12) in the proof of Theorem 3.1, we see that the bound (3.15) holds for all weak solutions of (2.1) with any initial state g_0 ∈ E. Moreover, for any given bounded set B = {g ∈ E : ‖g‖² ≤ R} in E, where R is a finite positive number, there exists a finite time T_0(B) such that all the solutions g(t; g_0) of (2.1) satisfy ‖u(t)‖² + ‖v(t)‖² + ‖w(t)‖² + ‖ρ(t)‖² < K for t > T_0(B) and any initial state g_0 ∈ B. Thus, by Definition 2.3, the bounded ball B_E is an absorbing set for the memristive Hindmarsh-Rose semiflow {S(t)}_{t≥0} in the phase space E, and the semiflow is a dissipative dynamical system.
Higher-Order Dissipativity of Memristive Hindmarsh-Rose Semiflow
In this section, we explore higher-order dissipativity of the memristive Hindmarsh-Rose semiflow for the u-component in space L 4 (Ω). It will pave the way to prove the asymptotic compactness of this semiflow in the next section, which is the key condition for the existence of a global attractor in an infinite-dimensional state space.
Theorem 4.1. There exists a constant Q > 0, independent of the initial data, such that the u-component of the memristive Hindmarsh-Rose semiflow {S(t)}_{t≥0} has the uniform dissipative property that, for any given bounded set B ⊂ E, there is a finite time T_B^u > 1 for which the ultimate bound (4.1) holds.

Proof. Take the L² inner product of (1.1) with u³(t, ·) and use Young's inequality (2.7) appropriately to split the product terms in the resulting integral. For t > 0 we obtain (4.2), where C_{a,b} and C_b are positive constants depending on a, b and on b, respectively. By Theorem 3.2 and (3.16), for any given bounded set B ⊂ E, there is a finite time τ_B > 0 after which the solutions are uniformly bounded. Since u⁶ + 1 ≥ u⁴, it follows from (4.2) and the above inequality that (4.3) holds. Applying the Gronwall inequality to (4.3) yields (4.4). It remains to bound the L⁴ norm of the initial state u(t_0). By Lemma 2.2, for any weak solution of the memristive Hindmarsh-Rose evolutionary equation (2.1), the u-component has the regularity u(t, ·) ∈ H¹(Ω) ⊂ L⁴(Ω) for t > 0.
One can integrate (3.11) over the time interval (0, t] to get (4.5), where the constant M is given in (3.13). A corresponding time-average bound then follows for t = 1. Hence, for any given bounded set B ⊂ E and any initial state g_0 ∈ B, there exists a time point t_0 ∈ (0, 1) such that (4.6) holds, where C is the embedding coefficient of H¹(Ω) into L⁴(Ω) and ‖B‖ = sup_{g_0∈B} ‖g_0‖. Finally, combining the inequalities (4.4) and (4.6), we conclude that for any given bounded set B ⊂ E and any initial state g_0 ∈ B there exists a finite time T_B^u > max{1, τ_B} such that the targeted inequality (4.1) is valid, with the uniform ultimate bound Q given by (4.7). The proof is completed.

Corollary 4.2. For any given bounded set B ⊂ E, the set of u-components {u(t, ·) : t ≥ T_B^u, g_0 ∈ B} is bounded in L⁴(Ω) and precompact in L²(Ω). (4.8)

Proof. Since the Sobolev embedding L⁴(Ω) ↪ L²(Ω) is compact for the bounded region Ω, it is a direct consequence that the set in (4.8) is precompact in L²(Ω).
Asymptotic Compactness of Memristive Hindmarsh-Rose Semiflow
In this section, we prove the asymptotic compactness (cf. Definition 2.4) of the memristive Hindmarsh-Rose solution semiflow {S(t)}_{t≥0}. This is a challenging issue, as the components v(t, x), w(t, x), ρ(t, x) of the memristive Hindmarsh-Rose equations formulated in (2.1) do not gain any regularizing property in x as time evolves.
The leverage we use to tackle the asymptotic compactness is the Kolmogorov-Riesz compactness theorem below, shown in [17, Theorem 5].

Lemma 5.1 ([17, Theorem 5]). A subset F of L^p(Ω), 1 ≤ p < ∞, is precompact if: 1) F is bounded in L^p(Ω); 2) for every ε > 0, there is some positive number d > 0 such that, for all f ∈ F and y ∈ R^n with |y| < d, it holds that ‖f(· + y) − f‖_{L^p} < ε. Here it is a convention that f(x) = 0 for x ∈ R^n \ Ω.
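Condition 2) of Lemma 5.1 is the L^p-equicontinuity of translations. The toy computation below (on Ω = [0, 1], with f extended by zero as in the convention) shows how the L² translation modulus ‖f(· + y) − f‖ shrinks as |y| → 0, more slowly for a highly oscillatory function; the test functions are of course arbitrary.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]

def shift_norm(f, y):
    # ||f(. + y) - f||_{L^2}, with f extended by zero outside [0, 1].
    inside = (x + y >= 0.0) & (x + y <= 1.0)
    fy = np.where(inside, f(np.clip(x + y, 0.0, 1.0)), 0.0)
    return np.sqrt(np.sum((fy - f(x)) ** 2) * dx)

smooth = lambda t: np.sin(np.pi * t)        # arbitrary test functions
rough = lambda t: np.sin(40.0 * np.pi * t)
for y in (0.1, 0.01, 0.001):
    print(f"|y| = {y}: smooth {shift_norm(smooth, y):.4f}, "
          f"rough {shift_norm(rough, y):.4f}")
```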
Theorem 5.2. The solution semiflow {S(t)}_{t≥0} generated by the diffusive Hindmarsh-Rose equations with memristor (2.1) is asymptotically compact in the state space E.
Proof. As a setup for this proof, for any given bounded set B ⊂ E, let T_B⁰ > 0 be a finite time such that (5.1) holds for any initial state g_0 ∈ B, where the constant K is given in (3.15) of Theorem 3.2.
Step 1. First of all, (4.8) in Corollary 4.2 has shown that the u-component of this memristive Hindmarsh-Rose semiflow is ultimately uniformly bounded in the space L⁴(Ω), which is compactly embedded in L²(Ω). Hence, according to Definition 2.4, the u-component of this semiflow is asymptotically compact in the space L²(Ω).
We now deal with the component function v(t, x), which is coupled with u(t, x) in the nonlinear differential equation (1.2). By the variation-of-constants formula, we have the corresponding integral expression for the v-component. In view of (4.8) in Corollary 4.2, and since the embedding L⁴(Ω) ↪ L³(Ω) is compact, by Lemma 5.1 we can assert that for any ε > 0 there is some d > 0 such that, for any given bounded set B ⊂ E and all g_0 ∈ B, and for y ∈ R³ with |y| < d, there exists a finite time after which (5.3) holds. Using the Hölder inequality we can infer that (5.4) holds for any t > T_B and any g_0 ∈ B, wherein we have used the intermediate steps ∫_{T_B}^t e^{−(t−s)} ds < 1 for t > T_B together with the bound on ‖u(s, · + y) − u(s, ·)‖². On the other hand, taking account of (3.16), (4.3), and (5.1), we can carry out the integration in (5.5). Thus (5.5) with (4.1) yields (5.6), where L > 0 and the constants K in (3.15) and Q in (4.7) are uniform. Combining (5.3), (5.4), and (5.6), we can confirm that for any ε > 0 there is a number d > 0 such that, for any given bounded set B ⊂ E and all g_0 ∈ B, and for y ∈ R³ with |y| < d, the translation bound holds. Therefore, there exists a time beyond which (5.8) holds. Since ε > 0 is arbitrary, according to Lemma 5.1, (5.8) implies the precompactness property (5.9) for the v-component.

Step 2. Next we deal with the solution component w(t, x), which is linearly coupled with the component u(t, x) in (1.3). Similarly, by Lemma 5.1 and (4.8), for any ε > 0 there is some d > 0 such that, for any given bounded set B ⊂ E and all g_0 ∈ B, there is a finite time T_B (≥ T_B⁰) such that for y ∈ R³ with |y| < d the analogue of (5.3) holds, as in (5.10). Then we can show that (5.11) holds for any g_0 ∈ B, where the Hölder inequality is used for the double integral term in (5.11) as in the first inequality of (5.4). Since ε > 0 is arbitrary, Lemma 5.1 and (5.11) confirm the corresponding precompactness property (5.12) for the w-component.

Lastly, we deal with the solution component ρ(t, x), which is linearly coupled with the component u(t, x) in (1.4). Again by Lemma 5.1 and (4.8), for any ε > 0 there is some d > 0 such that, for any given bounded set B ⊂ E and all g_0 ∈ B, there is a finite time T_B (≥ T_B⁰) such that for y ∈ R³ with |y| < d the analogue of (5.3) holds, as in (5.13). Then, similarly to (5.11), for any g_0 ∈ B we have (5.14). Consequently, since ε > 0 is arbitrary, (5.14) confirms the precompactness property (5.15) for the ρ-component.

Finally, putting together (4.8), (5.9), (5.12), and (5.15), we conclude that there exists a finite time such that all the solutions g(t; g_0) = (u(t, ·), v(t, ·), w(t, ·), ρ(t, ·)) starting from any given bounded set B ⊂ E satisfy the asymptotic compactness property of Definition 2.4. We conclude that the memristive Hindmarsh-Rose semiflow {S(t)}_{t≥0} generated by (2.1) is asymptotically compact in the state space E. The proof is completed.
Global Attractor Existence and Regularity
In this section we finally achieve the main result on the existence of a global attractor in the state space E for the new proposed neuron model of the diffusive Hindmarsh-Rose equations with memristors. We shall also demonstrate the regularity property of the global attractor in the regular space Γ = H 2 (Ω) × L ∞ (Ω, R 3 ). Proof. Since Theorem 3.2 shows that there exists a bounded absorbing set B E = {g ∈ E : g 2 ≤ K} and Theorem 5.2 shows that the solution semiflow {S(t) t≥0 } generated by the diffusive Hindmarsh-Rose equations with memristors (2.1) is asymptotically compact in the space E, the two conditions required in Proposition 2.6 are satisfied. Therefore, by Proposition 2.6, there exists a unique global attractor A for this semiflow in the space E and this attractor is given by where the closure is take in the space E.
The following two results provide the regularity information about the global attractor A for this memristive and diffusive Hindmarsh-Rose neuron model.

Lemma 6.3. The (v, w, ρ)-projection of the global attractor A is a bounded set in the space L^∞(Ω, R³).

Proof. By Definition 2.5, the global attractor A is an invariant set, so that S(t)A = A for all t ≥ 0. (6.2) In view of the inequalities (4.5) and (4.6), adapted to the integral over the time interval [1/2, 1], we can assert that for any given g_0 ∈ A there is a time point t_0 ∈ [1/2, 1], which may depend on g_0, such that the u-component u(t) of the solution S(t)g_0 satisfies the bound (6.3), where C_emb is a Sobolev embedding constant for H¹(Ω) ↪ C(Ω̄), under the condition dim Ω = 1. Consequently, due to the compactness of A, the compactness of the time interval [1/2, 1], and the strong continuity of u(t) in the space C(Ω̄) with respect to t, the inequality (6.3) implies that there exists a finite positive constant G such that the uniform bound (6.4) holds. The compactness and invariance (6.2) of the global attractor A then imply that the (v, w, ρ)-components take values in B_{R³}(G), where B_{R³}(G) is the bounded 3D ball of radius G. This means that Proj_{(v,w,ρ)} A is a bounded set in the space L^∞(Ω, R³), and the Lemma is proved.

Theorem 6.4. For dim Ω = 1, the global attractor A is a bounded subset of the regular space Γ = H²(Ω) × L^∞(Ω, R³).

Proof. Consider all the solution trajectories {S(t)g_0 : g_0 ∈ A}, which are complete trajectories in terms of t ∈ (−∞, ∞) and all lie inside the global attractor A. In view of Lemma 6.3, it suffices to prove that the u-projection of the global attractor A is in the space H²(Ω). The proof goes through three steps.
Since the Laplacian operator ∆ with the homogeneous Neumann boundary condition (1.6) is self-adjoint and negative definite modulo constant functions, the Sobolev norm of any function h(x) in H²(Ω) is equivalent to ‖h‖ + η‖∆h‖. Therefore, the inequality (6.19), together with Theorem 3.2 and the invariance S(t)A = A, shows that the u-component of the global attractor A is a bounded set in H²(Ω). A quantitative bound of the equivalent H²-norm is given by

sup_{g∈A} ‖u‖_{H²} ≤ √K + √D + (aR + bR^{3/2} + 2G + J e + k₁(|c| + |γ|G + δG²)√R) |Ω|^{1/2}.
Conclusions. In this paper, the diffusive Hindmarsh-Rose equations with memristors are proposed as a new mathematical model of neuron dynamics: a hybrid system coupling a partial differential equation for the membrane potential of a neuron cell with three ordinary differential equations for the fast and slow ion channels, plus a memristive variable featuring the dynamical memory due to the electromagnetic flux effect. The rationale for such a model is biologically explicit in view of the long axons of neuron cells in the brain and nerve systems.
The global dynamics of the solution semiflow of this memristive Hindmarsh-Rose system is studied without any conditions on the naturally involved biological and mathematical parameters. The main result is the existence of a unique global attractor for this dynamical system, or semiflow, generated by the weak solutions in the basic state space E = L²(Ω, R⁴).
Due to the quadratic nonlinear memductance and its nonlinear coupling with the membrane potential variable in the main partial differential equation, the challenging proofs of dissipativity and asymptotic compactness are carried out through many steps of sophisticated a priori uniform estimates and the Kolmogorov-Riesz compactness approach.
Moreover, the spatial regularity of this global attractor in the space Γ = H²(Ω) × L^∞(Ω, R³) is also proved for a one-dimensional domain. The quantitative bounds of the region containing this global attractor in the state space E and of the region in the regular space Γ are explicitly provided, which can facilitate further research on stable or unstable equilibrium patterns, coexisting chimeras, bifurcation and firing patterns, or chaotic local attractors. All the permanent regimes of the modeled neuron dynamics must be included in the global attractor and located in these two regions.
"year": 2022,
"sha1": "27c6a7d7cfe7e1be379213887757ff6da05a6411",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "27c6a7d7cfe7e1be379213887757ff6da05a6411",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Biology"
]
} |
Ischemic Cholangiopathy 11 Years after Liver Transplantation from Asymptomatic Chronic Hepatic Artery Thrombosis
Hepatic artery thrombosis is a concerning complication of orthotopic liver transplantation, and it most often occurs early in the posttransplant period. On rare occasions, however, it can occur at a time remote from the transplant. We present a case of ischemic cholangiopathy, complicated by stricture and anastomotic bile leak, from chronic hepatic artery thrombosis that occurred 11 years after the transplant. Initial biliary stenting helped resolve the leak, but the patient was found to have stones, sludge, and copious pus at the time of stent exchange. Hepatic arteriography demonstrated complete occlusion of the transplant hepatic artery with periportal collaterals reconstituting the intrahepatic hepatic arterial branches. The patient was subsequently referred for repeat liver transplantation.
INTRODUCTION
Orthotopic liver transplantation has become the standard of care for acute and chronic liver failure. While advances in both surgical and medical modalities continue to improve outcomes, both short- and long-term complications can occur. One of the most concerning complications is hepatic artery thrombosis (HAT), which is the most common complication necessitating repeat transplantation. 1 HAT is of greatest concern in the immediate posttransplant setting because it often presents with ischemia and necrosis. Early and late HAT are differentiated by a cutoff of 21 days. 2 Late HAT is less clinically conspicuous although still serious, most often presenting with biliary pathology with a median time to presentation of 4-6 months after transplant. [3][4][5] Over time, the incidence of HAT decreases, rarely occurring more than 5 years after transplant. However, patients remain at risk for the lifetime of the graft.
CASE REPORT
A 54-year-old woman with a history of well-controlled diabetes mellitus, tobacco abuse, and autoimmune hepatitis status post orthotopic liver transplant (OLT) 11 years prior was found on routine follow-up to have asymptomatic elevation of her liver tests: alkaline phosphatase 313 U/L (normal 25-125 U/L), alanine aminotransferase 92 U/L (normal 7-52 U/L), and total bilirubin 2.7 mg/dL (normal 0-1.0 mg/dL). Her posttransplant course had previously been benign apart from recurrent autoimmune hepatitis 8 years prior, which responded to steroids. No changes had been made to her immunosuppressive regimen. She denied recent illnesses, alcohol, or illicit drugs, and she reported compliance with her medications. Evaluation with magnetic resonance cholangiopancreatography revealed severe donor-duct biliary dilation, anastomotic stricture, and periductal cystic changes with normal recipient common duct and pancreatic duct caliber (Figure 1). Endoscopic retrograde cholangiopancreatography (ERCP) revealed a moderate biliary stricture at the posttransplant anastomosis, with marked dilation of the donor biliary tree and an anastomotic bile leak concerning for ischemic cholangiopathy. No stones were noted, but they were not specifically addressed in view of the leak. An 8.5-Fr biliary stent was placed into the donor duct, with a repeat ERCP planned at 2 months' follow-up. Computed tomography with contrast was negative for biloma or mass, but visualization of the hepatic artery was poor. The patient was monitored overnight and discharged with significant clinical improvement.
Six weeks later she presented with complaints of malaise, nausea, and anorexia, but her evaluation showed normal liver tests and minimally impaired synthetic function (international normalized ratio 1.38). Repeat ERCP for stent exchange showed severe dilation of the donor ducts with stones, sludge, and copious pus (Figure 2). The leak had resolved. This raised concern for the integrity of the hepatic artery and for ischemic cholangiopathy. Hepatic arteriography demonstrated complete occlusion of the transplant hepatic artery with periportal collaterals reconstituting the intrahepatic hepatic arterial branches (Figure 3). The presence of collateral circulation and the patient's asymptomatic clinical presentation suggested that she had developed chronic hepatic artery thrombosis (HAT) at a time remote from transplantation, which led to chronic ischemic cholangiopathy and bile duct disruption with a leak. Interventional radiology did not feel the lesion was amenable to intravascular intervention. After discussion with Transplant Surgery, evaluation for repeat transplantation was deemed the most appropriate course of action.
DISCUSSION
This case represents rare timing of a dangerous complication of OLT with classic clinical, radiographic, and endoscopic findings. HAT can be seen in up to 9% of OLT patients, 2 and while acute HAT is concerning due to its dramatic presentation, late HAT has its own serious consequences. Given the necessary severing of natural capsular collaterals with OLT, the biliary tree is dependent on the hepatic artery, leading to the risk of ischemic cholangiopathy with hepatic artery compromise. Smoking and diabetes, as seen in this case, may increase the risk for HAT, but the underlying cause is often unclear. This ischemia leads to biliary strictures, leaks, sludge, stones, and bile duct casts, all of which can be complicated by recurrent cholangitis. 6 Nonsurgical approaches to management, including percutaneous stricture dilation, stenting, and repeat therapeutic ERCP, have become the standard of care in these patients. However, patients who have failed these therapies or who present with recurrent cholangitis require repeat transplantation. 7,8 This case is instructive with regard to both basic hepatic vascular and biliary anatomy and the need for adequate understanding of clinical hepatology, transplant hepatology, and advanced endoscopy. Furthermore, it is imperative for the general gastroenterologist to recognize this entity because medical advances have led to longer complication-free posttransplant survival times. As a result, more of these patients will return to the general gastroenterology practice for longitudinal care.
DISCLOSURES
Financial disclosure: None to report. Informed consent was obtained for this case report.
"year": 2018,
"sha1": "4b4531734db55dd42d226d6374820efba89b50e2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.14309/crj.2018.75",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b4531734db55dd42d226d6374820efba89b50e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Evaluation of the North West London Diabetes Foot Care Transformation Project: A Mixed-Methods Evaluation
Introduction: Diabetes foot ulceration (DFU) presents an enormous burden to those living with diabetes and to local health systems and economies. There is increasing interest in implementing integrated care models to enhance the quality of care for people living with diabetes and related complications, and in the value of co-production approaches to achieve sustainable change. This paper aims to describe the evaluation methodology for the North West London (NWL) Diabetes Foot Care Transformation project. Description: A mixed methods design including: i) a quasi-experimental quantitative analysis assessing the impact of the implementation of the local secondary care multi-disciplinary diabetes foot team clinics on service utilisation and clinical outcomes (amputations and number of healed patients); ii) a phenomenological, qualitative study to explore patient and staff experience; and iii) a within-trial cost-effectiveness analysis (pre and post 2017) to evaluate the programme's cost-effectiveness. Discussion and Conclusion: Demonstrating the impact of multidisciplinary, integrated care models and the value of co-production approaches is important for health providers and commissioners trying to improve health outcomes. Evaluation is also needed to identify strategies to overcome barriers which might have reduced the impact of the programme, as well as key elements for improvement.
INTRODUCTION
Diabetes foot ulceration (DFU) presents an enormous burden to those living with diabetes and to local health economies with an estimated prevalence of 2.5% in people with diabetes [1]. DFU is associated with 5-year lower limb amputation rates of up to 20% [2] (approximately 7000 amputations annually in England alone) [1] and 5-year survival rates of less than 60% [2], lower than breast or prostate cancer. DFU accounts for 86% of inpatient costs for people with diabetes (£322 million in 2014/15 in England and Wales) [3]. The cost of DFU to the NHS is an estimated £1 billion per year [4].
The implementation of integrated care models to improve the outcomes of people with diabetes is becoming increasingly common in many countries [5][6][7]. Integrated care models strengthen people-centred health systems through the promotion of the comprehensive delivery of quality services across the life course [8]. Integrated care models for diabetes and diabetes complications care have the potential to improve patient outcomes, promote patient safety, increase patient satisfaction and optimise the use of resources [6]. Improvements in DFU care could prevent 80% of amputations [9]. The Multi-disciplinary Diabetes Foot Team (MDFT) is a multidisciplinary, integrated approach that improves DFU outcomes. In the UK, the National Institute for Clinical Excellence (NICE) recommends MDFT review within 24 hours for acute DFU (NICE NG 19) [10][11][12]. Evidence from the UK National Diabetes Footcare Audit [13] found that integration of community foot protection teams with secondary care MDFT services, together with ease of pathway navigation, was associated with improved foot outcomes.
In 2017 the NWL Diabetes Footcare Transformation project was launched as part of a wider NWL diabetes transformation programme. Despite well-established MDFTs across NWL, quantitative analyses showed significant variation in diabetes foot outcomes, and the gap analysis suggested that better integration of care was needed. The project was designed according to the four key components of integrated care for long-term conditions described by Busetto et al. [14]: i) Self-management, through user information and education, pathway navigation, and motivational support; ii) Delivery system design, through specification of integrated pathways, formalisation of shared care, and pathway harmonisation; iii) Decision support for health providers, through guidelines, health professional education, and feedback; and iv) Clinical information systems, through building a specific database and performance monitoring dashboard (Figure 1). Table 1 summarises the project's objectives. The four areas for intervention were 1) Development of a diabetes foot dashboard; 2) Harmonisation of pathways across NWL; 3) User engagement; and 4) Workforce development.
A co-production approach was used, creating a NWL Diabetes Foot Network of service users, foot teams, other health providers and commissioners (Appendix Figure 1) to design and implement an integrated care model for DFU across the 8 NWL Clinical Commissioning Groups, comprising a population of almost 150,000 people with diabetes (Appendix Table 1). At the inaugural Foot Network meeting, priorities and actions for each of these intervention areas were produced. Specific actions were taken forward by a small, multi-stakeholder task and finish group (the NWL Diabetes Foot project group), and further refinement of interventions (e.g. creation of a NWL diabetes footcare service specification, design of visual training resources for health care professionals, development of service user facing digital resources for the NWL KnowDiabetes website [15]) was carried out in the 4-monthly NWL Foot Network meetings. Six diabetes specialist podiatrists were recruited to form a new NWL MDFT working across all acute and community sites in NWL, supporting the implementation of the project through job plans that crossed organisational boundaries, supporting the Foot Project group and Network meetings, and leading health professional training in diabetes footcare across acute and primary care sites.
Effective, integrated diabetes foot care involves multiple stakeholders. Co-production approaches support user-centred solutions which are likely to lead to sustainable change, and they are being used increasingly in healthcare quality improvement [16]. However, co-production is also time-consuming, and robust evaluation of co-production methods is needed [16]. To the authors' knowledge, robust methodologies for the evaluation of such a complex range of interventions developed using co-production for the care of diabetes complications have not been used before. This paper aims to describe the evaluation methodology for the NWL Diabetes Foot Care Transformation project. Central to developing this evaluation methodology was recognising the complexity of the multidisciplinary intervention in clinical, financial, strategic, and political contexts. In particular, given the complexity of its implementation and the challenges associated with the care of foot disease, this methodology was considered within the broader theory of complex intervention evaluation, drawing also from the UK Medical Research Council's guidance [17].
THE EVALUATION FRAMEWORK
The NWL Diabetes Foot Care Transformation project is a complex intervention whose framework is summarised in Figure 2, a logic model showing the shared relationships among the inputs, resources, activities, outputs, and outcomes of the project. To evaluate the project, a mixed-method approach comprising three work streams was used: Work Stream 1 - Impact on service utilisation and clinical outcomes; Work Stream 2 - Patient and staff experience; Work Stream 3 - Cost-effectiveness.
WORK STREAM 1 -IMPACT ON SERVICE UTILISATION AND CLINICAL OUTCOMES
The first work stream includes a quantitative quasi-experimental study aiming to assess the impact of the local MDFT service implementation within the Diabetes Foot Care Transformation project on service utilisation and clinical outcomes, using a combination of data collected by the MDFT service and the Whole Systems Integrated Care (WSIC) dataset. The majority of people with diabetes foot complications will have neuropathy or arterial disease but no acute problem. They will be seen by the community foot protection team, whose role is to prevent acute foot problems and rapidly escalate people when acute problems arise. The MDFT sees patients with complex diabetes foot complications. The majority of these (>95%) will be complex diabetes foot ulcers (e.g. chronic ulcers, moderate and severe infected ulcers including diabetes foot osteomyelitis, and ischaemic ulcers).

Table 1. Project objectives and associated goals.
- Reduce the rate of diabetic foot amputation in NWL: 50% reduction in amputation rates by 2021.
- Improve patient care pathways by increasing referral rates and foot checks and reducing time from referral to presentation: integrated pathways across primary, community and acute care services.
- Reduce unscheduled hospital admissions for diabetic foot and the length of stay: reduction of unscheduled hospital admissions; reduction of the length of stay by 1.5 days.
- Reduce inequalities in access to care and related health outcomes: equitable service provision to ensure areas of greatest need are adequately resourced.
- Improve expertise, awareness, and confidence in managing diabetes foot complications among service users: improve staff expertise via training on identification of foot emergencies; cultural change amongst key stakeholders regarding knowledge and importance of diabetes foot problems and commitment to sustainable quality improvement.
Data sources
Multi-disciplinary Diabetes Foot Team service data

Information about MDFT inpatient and outpatient clinic activity has been recorded in a dedicated database. Collected data include patients' demographic characteristics, hospital site, date, and characteristics of the intervention, including diagnosis, treatment (routine treatment, dressing, ulcer debridement, vascular, neurological, or diabetic foot screen), and outcome (referral to Tier 3 (community foot services) or Tier 4 (hospital services), discharge, or other, e.g. primary care).
The Northwest London Whole Systems Integrated Care dataset
The WSIC is one of the largest datasets in the UK and comprises linked coded data from primary care, secondary care, community, mental health and social care in the NWL area [18]. Most of the 372 GP practices in NWL have subscribed to WSIC, which contains the patient pathways and records of over 2.2 million patients in the area [19]. A NWL Foot Dashboard is in development which will draw data from the WSIC dataset to present key statistics on diabetic foot patients across all pathways of diabetes footcare in NWL to support clinicians and commissioners.
Data analysis
In line with previous work evaluating similar interventions, the following process indicators will be selected for the evaluation of the NWL Diabetes Foot Care Transformation project: referrals to MDFT clinics, emergency admissions for foot disease, time to presentation, and number of healed patients. Data collected from the MDFT database will be averaged monthly and yearly. First access to the MDFT clinic will be considered the baseline for each individual. Descriptive statistics at the baseline year on access to MDFT clinics for foot disease and the clinical characteristics of referred patients will be stratified by age, sex, and year. To assess unadjusted differences between groups identified within the set of all referred participants, univariate statistics including the Chi-square test, t-test, analysis of variance, and Kruskal-Wallis test will be employed, as appropriate. Multivariate mixed-effect generalised Poisson regression models will be employed to model changes in trends over the study period, including referrals to MDFT clinics, emergency admissions for foot disease, and number of healed patients. NWL population size will be included as an offset in the regression analysis. Multivariate mixed-effect linear regression models will be employed to assess differences in time to presentation to MDFT clinics over time. Models will be adjusted for age and sex. Where appropriate, the intraclass correlation (ICC) will be used to assess the proportion of the variation explained by referral to each different MDFT clinic; if the ICC is equal to or greater than 10%, MDFT clinic will be included in the model as a random effect. To model the impact of the programme since its implementation, a before-and-after design will be employed: baseline data collected in 2016 will be compared with data collected in 2019, with models adjusted for age and sex. For a more robust evaluation of the programme, the WSIC database will be used to select external controls, sampled from areas where the intervention has not been implemented. Doubly robust methods, such as inverse probability weighting with regression adjustment, will be employed to compare outcomes between attendees and non-attendees. Covariate selection to generate propensity scores will be based on a combination of what is observed empirically (e.g. covariates explaining differences between groups) and what has been used in previous research. This approach is appropriate to reduce the selection bias that arises when the likelihood of attending the programme is associated with specific socio-demographic characteristics [20].
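To make the planned estimation steps concrete, the sketch below illustrates the two core analyses in Python: a Poisson trend model with the NWL population as an offset, and the inverse-probability weights that form one half of a doubly robust attendee-versus-control comparison. The protocol does not specify analysis code, and all file and column names here (mdft_monthly.csv, referrals, attended, etc.) are hypothetical placeholders used only for illustration, not the actual MDFT or WSIC field names.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

# Trend model: Poisson regression of monthly referral counts with the
# NWL population size as an offset, adjusted for age and sex.
monthly = pd.read_csv("mdft_monthly.csv")  # hypothetical file
trend = smf.glm(
    "referrals ~ year + mean_age + pct_female",
    data=monthly,
    family=sm.families.Poisson(),
    offset=np.log(monthly["population"]),
).fit()
print(trend.summary())

# Propensity scores and inverse-probability weights for the comparison
# of attendees against external WSIC controls.
ind = pd.read_csv("individuals.csv")  # hypothetical person-level file
X = ind[["age", "female"]].to_numpy()
t = ind["attended"].to_numpy()  # 1 = attended an MDFT clinic
ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
ind["ipw"] = np.where(t == 1, 1.0 / ps, 1.0 / (1.0 - ps))
# These weights would then feed a weighted outcome regression, giving the
# "regression adjustment" half of the doubly robust estimator described above.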
WORK STREAM 2 -PATIENT AND STAFF EXPERIENCE
A phenomenological, qualitative study will be conducted to explore the narratives of people with diabetes and staff and their experience of the NWL Diabetes Foot Care Transformation project. The success of this project relies on meeting the different challenges and capacities of both service users and providers. Service users might have different perceptions of what co-production, multidisciplinary care and integrated care mean to them, and these might differ from the perceptions held by clinicians and academics. A well-rounded inquiry into service user, provider, and commissioner experience will therefore be conducted. Semi-structured personal interviews will be conducted with all stakeholder groups involved in the foot networks, including service users, primary care providers, commissioners and specialist foot teams. By doing so, the aim is to develop a deep understanding of the project with a view to providing suggestions for improvement. The authors have employed this approach in previous evaluations of integrated care initiatives [21].
The inquiry into patient, provider and commissioner experience will focus on five areas of interest:
1. Exploring the concept of integrated care and the meanings of multidisciplinary and integrated care within the context of DFU.
2. Current challenges in service provision and how integrated multi-disciplinary care could help to alleviate these challenges.
3. Motivations to join the project, as a service user or a provider.
4. The value of co-production approaches.
5. Perception of the actual changes in care which have been happening during the implementation of the project, and whether these changes provide the right response to participants' needs and expectations.
There will be a particular focus on communication, which is key to the success of multi-disciplinary care work. For service users this concerns communication with providers; for providers it concerns communication with service users and with other providers and commissioners.
Patient and Public Involvement
Participants will be drawn from the NWL Diabetes Foot Network (service users, providers and commissioners) and from people with diabetes attending MDFT clinics. The aim is to interview a sample of 7-10 people with diabetes, 7-10 providers and 7-10 commissioners in personal interviews, alongside one patient focus group, one provider focus group, one commissioner focus group and one mixed patient-provider-commissioner group. In earlier events arranged by the research team, it was found that events attended by both staff and patients provided useful insights and reviews.
The interviews and focus groups will be audiotaped and transcribed verbatim while ensuring the interviewees' anonymity. Each interview will be designed to fit within a period of 60 minutes. The protocol will be similar for the three groups, with adjustments to the dynamic of each interview or focus group. A coding process followed by thematic content analysis will be carried out.
The qualitative inquiry will be complemented by a structured patient and staff survey, which will inquire into more general perceptions with a larger sample of participants. A survey to record patient experience, and a separate survey of provider experiences (NWL patient related experience measures survey; Appendix questionnaire 1) will be disseminated. Survey questions will explore similar issues to those in the qualitative strand, as detailed above. Outputs from the survey will be used to inform further interviews as required.
Ethics approval for this work will be sought. Before analysis, any identifying details will be removed from quotes. The survey will be anonymous, capturing non-identifying personal details.
WORK STREAM 3 -EVALUATION OF THE PROGRAMME COST-EFFECTIVENESS
The economic evaluation will include NHS and personal social services [22]. The analysis will be a within-trial cost-effectiveness analysis (pre and post 2017) of the NWL Diabetes Foot Care Transformation intervention against no intervention. The analysis will use resource data including: (a) development of training for health professionals; (b) support provided by podiatrists and specialist foot teams respectively; (c) creation of the digital foot care dashboard; and (d) health and social service use.
Data will be collected through key informant interviews and review of trial management records and the MDFT and WSIC datasets. Unit costs will be taken from standard unit cost sources (e.g. Personal Social Services Research Unit 2019) [23] and published literature. Costs that do not vary by use (e.g. development of the digital foot care dashboard) will be costed separately and apportioned to participants appropriately. The main outcome of the economic analysis will be an incremental cost per change in the process indicators (e.g. number of referrals to MDFT clinics, number of amputations). Results will also be presented in the form of a cost-consequence analysis (disaggregated costs alongside the important outcomes). Deterministic sensitivity analysis will explore: i) varying the mean cost of the intervention based on health professionals' input; and ii) roll-out costs. Any subgroup analyses, e.g. by medical condition, age group and gender, will be exploratory.
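The headline economic output reduces to a simple incremental ratio. The short Python sketch below shows the arithmetic only; the figures are invented placeholders, not study data, and the real analysis would draw costs and effects from the sources described above.

def incremental_cost_per_unit(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit change in a process indicator
    (e.g. additional MDFT referrals, amputations avoided)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Invented placeholder figures purely to show the arithmetic; programme
# costs would include training, podiatrist support and the dashboard.
print(incremental_cost_per_unit(1_200_000, 950_000, 480, 400))  # 3125.0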
Whilst this methodology is robust for assessing the programme's cost-effectiveness, a possible limitation will be the limited amount of data, reflecting that the programme has been implemented for less than three years. However, this approach will constitute an integral part of the evaluation model, which has to be considered an ongoing process, with updates to the evaluation provided when longer follow-up data become available.
DISSEMINATION
Findings from the different work strands will be discussed among the members of the Diabetes Foot Care Transformation Project and disseminated to patients and stakeholders through different partners, including the NWL Diabetes Clinical Reference Group, the NWL Diabetes Foot Network, the NWL Clinical Quality Leadership Group, the Imperial College Healthcare partners, and the NWL NIHR Applied Research Collaboration network.
DISCUSSION
This paper describes the evaluation methodology for the NWL Diabetes Foot Care Transformation Project, a multidisciplinary and multifactorial programme aiming to improve health outcomes for individuals with diabetic foot complications needing care in NWL. Diabetes foot complications, such as DFU, place a huge burden on those affected, in terms of quality of life and life expectancy, and on health economies. Demonstrating the impact of multidisciplinary, integrated care models and the value of co-production approaches, such as in this transformation project, is important for health providers and commissioners trying to improve health outcomes. Evaluation is also needed to identify strategies to overcome barriers which might have reduced the impact of the programme, as well as key elements for improvement.
Diabetes and diabetes complications constitute a public health emergency not only in high-income nations but also in low- and middle-income countries (LMICs). The World Health Organization recommends implementation of integrated care and multidisciplinary models, and integration of these programmes across healthcare levels, as a prerequisite of Universal Health Coverage. While fragmentation often characterises health systems in LMICs, several LMICs have attempted health system integration by implementing such interventions [24]. Understanding how to implement such complex interventions in limited-resource settings, where care pathways may be very different, is important. Another critical aspect to consider is the system's ability to assess and evaluate the intervention, as data analysis and interpretation might depend on local business intelligence capacity.
Central to developing this evaluation methodology was recognising the complex nature of the intervention [17]. Integrating care within NWL's diverse environment makes attribution of cause and effect difficult. It is important to consider the tension between providing early evaluation results to inform decision makers and the need to undertake rigorous analytical methods. In this case, multiple datasets will be analysed. The Multi-disciplinary Diabetes Foot Team service data have been accurately recorded, as trained staff members continuously update them, but limitations associated with the use of this database have to be mentioned, including the lack of data on healed ulcers (one of the study outcomes), which have to be extracted from the National Diabetes Foot Audit, the lack of other clinical data (e.g. blood glucose), and the absence of patient information for the period before enrolment into the programme. Data extraction from another external database, the WSIC, will be used to provide an external control that might improve causal inference. However, a longer follow-up period and a larger sample might still be needed to demonstrate change for all identified project goals. It should be considered that comparing improvement with other areas in the UK, where innovation is being actively encouraged, makes it difficult to confirm whether the control groups are genuinely intervention-free. Furthermore, selection bias might arise when conducting service evaluation using real-world data, as people who attended the service might differ in socio-economic characteristics from those who did not [25,26]. Therefore, doubly robust methods will be used, considering they have been shown to reduce this bias and avoid model mis-specification [27].
CONCLUSION
The NWL Diabetes Foot Care Transformation Project, a multidisciplinary and multifactorial programme, was launched in 2017 to improve health outcomes for individuals with diabetic foot complications needing care in NWL. Evaluating this project will help identify strategies to overcome barriers which might have reduced the impact of the programme, as well as key elements for improvement. This evaluation is also important considering that diabetes and diabetes-related complications constitute a public health emergency not only in developed nations but also in LMICs, where initiatives to promote health system integration are being conducted. Understanding the true impact of such interventions in high-income settings is important when considering translation and adaptation to different settings where primary care might be quite different [28].
ADDITIONAL FILE
The additional file for this article can be found as follows: • Appendix.
REVIEWERS
Two anonymous reviewers.
"year": 2022,
"sha1": "2861359decf68eee8261a9fb20b06282c49b5c38",
"oa_license": "CCBY",
"oa_url": "http://www.ijic.org/articles/10.5334/ijic.5956/galley/7456/download/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ced478743487695fbab19541abc632657f104f25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Who will judge?
"Do you know why you're here?" Truth be told, I thought that was all I did know; that and the fact that in life I had been a physician. I certainly didn't know where "here" was, and wasn't too sure who "I" was … Can you remember who I was? can you still feel it? 1 ... or when "it" was. I supposed this was what it felt like to be disoriented times 3.
I nodded at my questioner. "I am here to be judged." If this was a dream, it was like no dream I had ever dreamt, for the fear I felt was like no fear I had ever felt. Fear of eternal damnationwhatever that meant.
My questioner looked back at me and sighed. "You are here to judge." In the silence that followed, I became aware of a nearby shadow. He hung his head in a gesture of resignation, helpless now to influence what was apparently to follow. I recognized him as a fellow physician I had known in life and realized that I was being asked to judge his behaviour in life simply because I, too, was a physician… Can you find my pain? can you heal it? Then lay your hands upon me now And cast this darkness from my soul. 1 My being cast in the role of judge struck me as odd and disappointing. I had not known him particularly well, and I vaguely recalled not liking him very much. I felt profound sadness and believe that I was crying. I had always expected - or rather, I had always hoped - that in the end there would be perfect justice. Damnation of the wicked, salvation of the good, a discerning and just but merciful Deity. Now it appeared that justice would be imperfectly meted out by a peer - and a not impartial one at that - much as it had been in life. No. "You have no choice. You must." He smiled. "Be just but merciful." He knew exactly what I was thinking. I stood my ground. "I am not competent to do so. I do not know him well enough." "You are beyond time now." Beyond time. In eternity. The concepts of past, present and future had no meaning; they existed simultaneously. The life of the man was suddenly there for me to examine in its entirety. It existed like a gestalt, like a written page or a painted canvas, capable of being experienced in an instant. But it was more than that. It was as if I could move effortlessly and instantaneously through time and space, experiencing any or all of his life in its minutest details, in all of its contextual richness. I was privy to all of the decisions he had made, all of the workings of his mind and heart and of the minds and hearts of those around him. I saw, heard, tasted, smelled, felt everything, the events of his life, professional and personal, and was able to read his motives at each moment. I felt all the joy and pain he had experienced and provoked in others.
I was confused, since he did not always appear as a doctor. He appeared to have various professions, simultaneously if that were possible, and disoriented though I was, I recalled the old nursery rhyme about the "tinker, tailor, soldier, [and] sailor." It seemed as if he had done nearly everything; but how, in a finite lifetime? It occurred to me that in my travels through space and time, I had witnessed all those occasions on which he had come upon 2 roads diverging; but unlike Frost's traveler, 2 he had - on all occasions except those involving moral choice - traveled both while still remaining one traveler, though unbeknownst to himself. And I realized that the blind man had been right after all, that the man's life had been a garden of forking paths, 3 meandering through many worlds, with choices in all of them. It had, however, been the moral choices that had mattered most because only they had foreclosed entire universes.
I knew that I was not qualified to judge his professional behaviour in those many worlds in which he was not a physician, yet somehow part of me divined that I was not expected to.
At a certain point I realized that I had seen all there was to see. The man stood before me.
I thought about all I had witnessed. The man had been an average internist. He had endeavoured to help his patients when he could, and had tried not to harm them. He had made mistakes, and some had cost patients dearly, but he had never been motivated by greed. He had had priorities outside of medicine -family and self -and was not always sure that he had struck the best balance. He had put himself before others more than he should have. He had taken some shortcuts that he regretted. The man hadn't always been kind, truthful, trustworthy, fair and honest, though he had aspired to be.
"He was a better man and doctor than some and not as good as others." Would that be enough?
Still he waited. What had he said before? "You are beyond time now." That being the case, there was nothing to be gained by procrastination. I supposed he was waiting for something less equivocal. I had spent a lifetime making hard decisions and living with their consequences, and implicitly asking my patients to live with them, too. The questioner nodded. I breathed a sigh of relief. "Where will he be sent?" I asked.
"Where will who be sent?" I looked around and realized that the shadow of the man was nowhere to be seen. I realized then that it was I who had cast the shadow; and, I supposed that like the denizens of Plato's cave I had only now been released and allowed to turn my head.
"Where should you be sent?" T he exhibition of 16 remarkably realistic figurative sculptures by artist Ron Mueck at the National Gallery of Canada is both stunning and contemplative. Often described as super-real, hyper-real or even ultra-realistic, Mueck's sculptures of the human figure contain such lifelike details that both art critics and the public define his work as extraordinary. Indeed, the wrinkles, moles, body hair, even rashes and stretch marks on his fibreglass and silicone figures have been crafted to such perfection that viewers often claim they instinctively expect his figures to begin breathing.
The interest in understanding his working process and how he achieves these lifelike effects technically (for example, the sculpting, moulding, casting and fabrication processes as well as the efforts involved in punching hundreds of tiny individual pores for individual whiskers and eyebrows) lingers in the minds of most viewers. Moreover, these realistic re-creations of the human figure provoke strong emotional responses in the viewer, who can easily transform these figures into fantastical facsimiles of him or herself.
Mueck's interest in figurative modeling techniques began in Australia where he worked as a puppet maker, both making and animating marionettes for children's television. After supervising special effects for films in the late 1980s and working with renowned puppeteer Jim Henson, Mueck set up his own business in London, England, creating models for the European advertising industry. After seeing one of Mueck's hyper-realistic sculptures of Pinocchio at the Hayward Gallery in London in 1996, contemporary art collector Charles Saatchi commissioned Mueck to make a group of work for his collec-tion. A year later, Mueck was included in the celebrated exhibition Sensation: Young British Artists from the Saatchi Collection at the Royal Academy of Arts in London.
Mueck's iconic work from the Sensation exhibition, Dead Dad, is on view in this exhibit at the National Gallery of Canada. Unlike the other sculptures, which Mueck claims were inspired by imagery from art history or magazine photographs, Dead Dad commemorates the moment when the artist heard the news that his father had died. Appearing smaller than lifesized, naked, drained of blood and laid-out on a stark museum platform with the palms of his hands facing upwards, Mueck's rendition of his father's corpse certainly elicits an emotional response. Indeed, many of his works create such a riveting sense of realism that they leave an indelible imprint on the viewer, by eliciting memories of one's own experiences. His work creates an imaginative opportunity to reflect upon the themes and cycles of life, death, suffering, longing, loneliness and desire.
This emotional impact related to viewing Mueck's sculptures is heightened by the fact that the artist never chooses to render his figures life-sized. The effect of this altered scale is profound; it changes how we relate to these figures physically and psychologically. In Spooning Couple, a half-lifesize man and woman (14 cm by 65 cm long, 35 cm wide) lie intimately together, semi-nude, as if on a bed. Due to their placement on a low pedestal, the viewer is invited to look down on this secluded couple from above and peer ever so closely at their facial expressions and imagine what kind of lives they are living. From this bird's eye view the shrunken sculpture represents a deeply touching depiction of loneliness within intimacy. Each individual is subtly and delicately wrapped in their own self, arms cradling their own body rather than the other's.
Other more imposing figures in the exhibition are enormous (twice, 3 times, even 10 times life-size) and tower above us. Cases in point are the pouting hulk, Big Man, and the 16 foot-long newborn baby, which still has its umbilical cord attached. In these oversized works a certain ambiguity once again comes into play as the figure's vulnerability is intertwined and seemingly reinforced by our own apprehension and empathetic involvement. In this regard, Mueck's "lifelike" sculptures embody, in one way or another, the colossal challenges and manifold perils of the human condition.
The 16 distinctive sculptures, created between 1996 and 2006, are part of an internationally touring exhibit organized by the Fondation Cartier pour l'art contemporain (Paris) in collaboration with the National Gallery of Canada, the Brooklyn Museum and the Scottish National Gallery of Modern Art. This exhibition marks a mid-career retrospective for the artist and is the largest collection of Mueck's works ever assembled in one place. As a relative newcomer who has already attained international acclaim after only a few years of exhibiting, Mueck has obviously been embraced by an awe-struck public as well as by an enthusiastic group of collectors and institutions.
Though Mueck is not the first artist to work with realist sculptural techniques in order to focus on humanistic themes, his theatrical works strike me as perhaps fitting within the long and rich artistic tradition of memento mori: artistic creations that remind people of their own mortality. In this regard, Mueck's expressive lifelike figures become allegorical and contemplative contemporary vanitas, modern symbols that both embody and evoke the illusory perfection of reality while simultaneously revealing the painful anxieties about death and the passage of time.
"year": 2007,
"sha1": "4cc29610314a852ebbce82b64e518ac0040d2c1e",
"oa_license": null,
"oa_url": "http://www.cmaj.ca/content/176/9/1312.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "a19c56d4ff1adf7a0d611b8e17b257e3c73223c4",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cell-Mediated Immunity (CMI) for SARS-CoV-2 Infection Among the General Population of North India: A Cross-Sectional Analysis From a Sub-sample of a Large Sero-Epidemiological Study
Background: Cell-mediated immunity (CMI), or specifically T-cell-mediated immunity, is proven to remain largely preserved against the variants of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), including Omicron. The persistence of the cell-mediated immune response in individuals longitudinally followed up for an extended period remains largely unelucidated. To address this, the current study was planned to examine whether the effect of cell-mediated immunity persists after an extended period of convalescence or vaccination. Methods: Whole blood specimens of 150 selected participants were collected and tested for anti-SARS-CoV-2 interferon-gamma (IFN-γ) response. An ex vivo SARS-CoV-2-specific interferon-gamma enzyme-linked immunospot (IFN-γ ELISpot) assay was carried out to determine the levels of virus-specific IFN-γ-producing cells in individual samples. Findings: Of all the samples tested for anti-SARS-CoV-2 T-cell-mediated IFN-γ response, 78.4% were positive. The median (interquartile range) number of spot-forming units (SFU) per million of SARS-CoV-2-specific IFN-γ-producing cells among vaccinated and previously diagnosed participants was 336 (138-474), while among those who were vaccinated but never had a disease diagnosis it was 18 (0-102); the difference between the groups was statistically significant. Since almost all the participants were vaccinated, a similar pattern of significance was observed when the diagnosed and the never-diagnosed participants were compared, irrespective of their vaccination status. Interpretations: Cell-mediated immunity against SARS-CoV-2 persisted, irrespective of the age and sex of the participant, for more than six months after previous exposure. Participants with a history of diagnosed COVID-19 infection had a better T-cell response than those who had never been diagnosed, in spite of both groups being vaccinated.
Introduction
On the 30th of January 2020, the World Health Organization (WHO) announced that COVID-19 was a public health emergency of international concern. Many questions still persist regarding the key epidemiological and serologic characteristics of the novel pathogen, particularly pertaining to its transmissibility (i.e., ability to spread in a population) and its virulence (i.e., case severity) [1]. As of 7 May 2023, over 765 million confirmed cases and over 6.9 million deaths have been reported worldwide owing to COVID-19 infection [2]. However, these case counts certainly underestimate the true cumulative incidence of infection [3] because of the unavailability of diagnostic tests [4], barriers to testing accessibility [5], and asymptomatic infections [6].
Hence, seroprevalence studies are required to obtain refined estimates of the extent of infection, particularly through population-based serological surveys [7]. These surveys can also provide an estimate of the proportion of the population still susceptible to the infection, since antibodies are considered a proxy of immunity. Additionally, as the world moves through an era of vaccines and virus variants, synthesizing sero-epidemiological findings is increasingly vital for tracking the spread of infection, identifying disproportionately affected groups, and measuring progress towards herd immunity [1].
However, the presence of anti-SARS-CoV-2 IgG does not necessarily imply protection against COVID-19 infection. It has been documented that despite the presence of anti-spike IgG, functional neutralizing antibodies against SARS-CoV-2 were observed in only about 70% of individuals [8,9]. Besides, it has also been observed that humoral antibodies wane over time, exhibiting a significant decline in antibody titers months after antigenic exposure [10][11][12][13].
The emergence of highly immune-evasive sub-variants, such as Omicron, has led to the ineffectiveness of antibodies induced by the ancestral WA1 wild-type strain of SARS-CoV-2 in neutralizing Omicron and its sub-variants, including BQ.1, BQ.1.1, XBB, and XBB.1 [9,14,15]. Neutralization by sera from convalescent and vaccinated individuals has been markedly impaired, with titers being lowered by up to 155-fold, even in individuals who received a booster vaccination with a WA1/BA.5 bivalent mRNA vaccine [16]. This highlights the challenges posed by the evolving virus variants and the waning effectiveness of antibodies over time.
However, despite the ineffective antibody neutralization against SARS-CoV-2 variants and the decline in antibody levels, clinical data indicate that hospitalization and severe illness remain relatively uncommon [17]. This suggests the involvement of another arm of the adaptive immune system in providing protection.
Cell-mediated immunity (CMI), specifically T-cell-mediated immunity, has been shown to remain largely preserved against SARS-CoV-2 variants, including Omicron [18,19]. Furthermore, the longevity of the virus-specific T-cell response has been demonstrated by the detection of SARS-CoV-1 N-reactive CD4 and CD8 memory T cells in individuals who had recovered from SARS 17 years earlier, in 2003 [20]. The persistence of the cell-mediated immune response in individuals longitudinally followed up for an extended period, such as more than six months post-vaccination or infection, is still not well understood.
To address this gap in knowledge, the current study was designed to utilize a subset of 150 individuals from an existing cohort of 10,000 participants under the WHO Unity protocol [21,22]. The aim was to investigate whether the effect of cell-mediated immunity persists after an extended period of convalescence or vaccination. Understanding the durability of the cell-mediated immune response is crucial in assessing the long-lasting protection it may provide.
There remains an urgent need to comprehend the immune response that occurs following SARS-CoV-2 infection. This understanding is crucial for determining the significance of the immune response in the course of the disease and especially its potential for providing long-lasting protection. Continued research in this area is vital for informing public health strategies, vaccine development, and understanding the dynamics of immune responses against SARS-CoV-2.
Our primary objective revolved around assessing the duration of acquired immunity and examining the factors that contribute to a suboptimal humoral immune response against SARS-CoV-2. Moreover, we are currently engaged in evaluating the duration and dynamics of cellular immune responses to SARS-CoV-2 variants in individuals who have been vaccinated or have recovered from the infection.
Objective
The objectives of the study are: (a) to assess the cell-mediated immunity (CMI) response among the study participants; and (b) to compare cellular immunity between SARS-CoV-2 antibody-positive and antibody-negative participants, and between symptomatic and asymptomatic participants with past COVID-19 infection.
Study design
The current study was part of a larger study under the WHO Unity protocol. A cohort of 10,000 individuals was assembled for a population-based, age-stratified sero-epidemiological study of COVID-19 virus infection in five selected states in India covering rural, urban, and tribal areas. As the information on the cell-mediated immune response to COVID-19 was limited, and for logistic and operational reasons, it was planned to assess cell-mediated immunity only in a subset of the original cohort from one selected site (All India Institute of Medical Sciences, New Delhi). A subset of 150 individuals was taken from the selected study site, and blood specimens were collected between April and June 2022. Initially, it was planned to collect blood specimens from individuals based on their vaccination status. However, by the time data collection started, it was seen that almost all individuals in the study cohort were vaccinated. The study participants were finally selected according to the flowchart in Figure 1.
Participant characteristics
Whole blood specimens of 150 selected participants were collected and tested for anti-SARS-CoV-2 interferon-gamma (IFN-γ) response. However, only 125 samples were considered for statistical analysis, since 25 samples were excluded due to internal quality control criteria. The participants fell into two categories. The first, labelled "diagnosed", consisted of documented positive cases, tested either by real-time reverse transcription-polymerase chain reaction (RT-PCR) or rapid antigen tests (RAT) (n=51 out of 125 total participants). The second category, labelled "never diagnosed", consisted of participants who had never been diagnosed with SARS-CoV-2 viral infection and whose exposure status was unknown (n=74 out of 125 total participants). The blood specimens were collected after ensuring that six or more months had elapsed since convalescence and/or vaccination.
Peripheral blood mononuclear cell (PBMC) isolation and preparation
Approximately eight milliliters of whole blood was collected from each participant. Peripheral blood mononuclear cells (PBMCs) were isolated from whole blood using the Ficoll-Histopaque (Lymphoprep, SerumWerk, Bernburg, Germany)-based density gradient centrifugation method [23]. Post isolation, the PBMCs were washed twice with sterile 1X phosphate-buffered saline (PBS) and cryopreserved using a freezing medium containing 90% Fetal Bovine Serum (FBS) (Gibco, Thermofisher Scientific, Waltham, USA) + 10% dimethyl sulfoxide (DMSO) (Merck, Sigma, Burlington, USA) and stored in liquid nitrogen until further use.
Interferon-gamma enzyme-linked immunospot (IFN-γ ELISpot) assay
An ex vivo SARS-CoV-2-specific IFN-γ ELISpot (MabTech Human IFN-γ ELISpotPLUS kit (ALP), Nacka Strand, Sweden) was carried out to determine the levels of virus-specific IFN-γ-producing cells in individual samples, as described previously [24]. The pre-coated wells in the ELISpot plate were conditioned using sterile complete RPMI 1640 medium supplemented with 10% FBS and then incubated for 18-20 hrs at 37°C and 5% CO2 with 0.25 million PBMCs/well and appropriate stimulants. To test the T-cell response, the PBMCs were stimulated with a virus-specific pool of immunodominant human leukocyte antigen (HLA) class I and II-restricted T-cell epitopes of the SARS-CoV-2 proteome (PanSARS-CoV-2 PepMix peptide pool, JPT Peptide Technologies, Berlin, Germany) at a concentration of 1 µg/ml of each peptide. As an internal negative control, each PBMC sample was also stimulated with an equimolar concentration of DMSO, whereas for the positive control, PBMCs were stimulated with 5 µg/ml anti-CD3 antibody. The co-stimulants anti-human CD28 and anti-human CD49d, at a final concentration of 1 µg/ml, were added to the test and negative control wells.
Post incubation, the plate was developed as per the manufacturer's guidelines, and the spots were quantified using a CTL Fluorescence S6 Universal reader (CTL, Cleveland, USA).
Human ethics
The PBMCs were isolated from samples collected from healthy convalescent/vaccinated participants after obtaining formal written informed consent. Institutional ethical approval (IEC-959/04.09.2020) for the study protocol was obtained from both participating institutes (All India Institute of Medical Sciences, New Delhi and Translational Health Science and Technology Institute, Faridabad).
Data analysis
The socio-demographic information of the participants was exported to Microsoft Excel software (Microsoft Corporation, Redmond, USA), and data analysis was conducted using the statistical software STATA Version 12 (STATA Corporation, College Station, USA). A qualified data manager and the study investigators collaborated to perform data cleaning using both Microsoft Excel and STATA. Descriptive statistical analysis was carried out, and the results were presented as proportions for categorical variables and as mean (SD) with a 95% confidence interval (CI) for continuous variables. The seroprevalence was reported as a percentage with a 95% CI, categorized according to the study site, round, urban-rural area, age group, sex, presence of symptoms, and vaccination status. The adjusted prevalence with 95% CI was calculated after correcting for test accuracy.
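The paper does not name the method used to correct prevalence for test accuracy; the Rogan-Gladen estimator is the standard correction and is shown below as a minimal sketch for illustration only (the example figures are invented, not study data).

def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Estimate true prevalence from apparent (test-positive) prevalence
    and the sensitivity/specificity of the antibody test."""
    return (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)

# Invented figures purely to show the arithmetic:
print(rogan_gladen(0.25, sensitivity=0.92, specificity=0.98))  # ~0.256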
For the ELISpot data, the number of spot-forming units (SFUs) per million PBMCs was calculated by multiplying the background-subtracted spots per well by four (since 0.25 million PBMCs were seeded per well). Negative values were set to zero. Samples with a low anti-CD3 response (<45 SFUs/million PBMCs) were excluded from the analysis.
To compare the levels of SARS-CoV-2-specific IFN-γ-producing cells (SFUs/million PBMCs) between the study groups, the Mann-Whitney U test was employed. A significance level of p<0.05 was deemed statistically significant. The statistical analysis and graphical representations of the data were produced using GraphPad Prism 8 software (GraphPad, San Diego, USA).
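As a concrete illustration of this post-processing, the sketch below applies the same steps in Python: background subtraction, the fourfold scaling to SFUs per million PBMCs, the anti-CD3 quality filter, and the Mann-Whitney U comparison between groups. The input file and column names are hypothetical; the study itself used STATA and GraphPad Prism.

import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical per-sample export of spot counts from the plate reader.
wells = pd.read_csv("elispot_wells.csv")

# 0.25 million PBMCs per well, so spots x 4 = SFUs per million PBMCs;
# negative background-subtracted counts are set to zero.
wells["sfu_per_million"] = ((wells["test_spots"] - wells["dmso_spots"])
                            .clip(lower=0) * 4)

# Exclude samples with a weak anti-CD3 positive control (<45 SFUs/million).
valid = wells[wells["cd3_spots"] * 4 >= 45]

# Mann-Whitney U test: diagnosed vs never-diagnosed participants.
diagnosed = valid.loc[valid["diagnosed"] == 1, "sfu_per_million"]
never_dx = valid.loc[valid["diagnosed"] == 0, "sfu_per_million"]
u_stat, p_value = mannwhitneyu(diagnosed, never_dx, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4f}")  # p < 0.05 deemed significant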
Results
In this study, a cohort of 150 healthy participants was initially enrolled. Following the application of internal quality control criteria, 25 participants were excluded, resulting in a final sample size of 125 participants for analysis (Table 1). The level of cell-mediated immunity was assessed using the ELISpot assay, which allowed for the quantification of anti-SARS-CoV-2-specific IFN-γ-producing T cells in terms of spot-forming units per million PBMCs.
Nearly all of the participants (99.2%, n=124/125) in the study had received vaccination. The majority of them (57.6%, n=72/124) were vaccinated with Covaxin (BBV152), while a significant portion (37.6%, n=47/124) received Covishield (ChAdOx nCoV-19). Among the vaccinated participants, 4.83% (n=6/124) received a single dose, while the remaining 95.16% (n=118/124) received the recommended double dose. It is worth noting that four participants had received vaccination but could not recall the specific type of vaccine they received. One participant reported receiving the Sputnik V vaccine, while only one participant remained unvaccinated throughout the study. This vaccination distribution reflects the diverse vaccination status within the participant pool, which is essential for examining the relationship between vaccination and immune response.
Overall, these results highlight the importance of both prior infection and vaccination in enhancing the cellular immune response, as indicated by higher IFN-γ levels. Moreover, they demonstrate the similar T-cell responses generated by the Covaxin and Covishield vaccines, irrespective of prior infection. Additionally, the absence of a significant difference in IFN-γ levels between asymptomatic and symptomatic individuals suggests that the severity of symptoms may not be a major determinant of the T-cell immune response.
The frequency of SARS-CoV-2-specific IFN-γ spot-forming T cells was compared among multiple study groups. The T-cell immune response was compared based on (a) diagnosis with SARS-CoV-2 infection in vaccinated individuals, (b) type of vaccination, and (c) asymptomatic or symptomatic SARS-CoV-2 infection in diagnosed individuals. The y-axis represents the IFN-γ SFUs per million PBMCs stimulated overnight with the SARS-CoV-2 peptide pool. A statistically significant difference was observed between the diagnosed and never-diagnosed participants who had been vaccinated. No statistically significant difference was seen in the other two comparative groups (Figure 2).
When comparing participants who had been diagnosed with COVID-19 to those who had never been diagnosed, regardless of their vaccination status, a similar pattern of significance was observed. The levels of IFN-γ-producing cells were significantly higher in diagnosed participants than in undiagnosed participants (median (IQR) SFU/million: 336 (138-474) vs 16 (0-102), respectively) (Figure 3). This finding suggests that prior infection plays a crucial role in enhancing the cellular immune response, regardless of vaccination status.
Discussion
The persistence of T-cell-mediated immunity for an extended duration post-recovery is well documented for SARS-CoV-2 [25,26]. However, only a few studies are available on the persistence of the SARS-CoV-2-specific T-cell immune response and its correlation with demographic characteristics such as age, sex, type of infection, and type of vaccination. We tried to fill this gap through the current study. A total of 150 selected participants were enrolled in the study, of which 125 samples were considered for statistical analysis, since 25 samples were excluded due to internal quality control criteria. The participants were selected from an existing group of 10,000 individuals being followed up under the WHO Unity protocol sero-epidemiological study. In this study, the level and longevity of the SARS-CoV-2-specific T-cell-mediated IFN-γ response were determined using the ELISpot assay. We included individuals for whom more than six months had elapsed since infection and/or vaccination. We assessed the association between the persistence of T-cell immunity and age group, sex, type of vaccination, and symptoms when exposed to SARS-CoV-2.
Sette and Crotty (2021) reported that elderly individuals possess a smaller naïve T-cell population and are also susceptible to immunosenescence, contributing to an ineffective response against SARS-CoV-2 infection [27,28]. However, Peluso et al. (2021) reported that participants older than 50 years of age had a higher percentage of SARS-CoV-2 N- and S-specific IFN-γ+ CD4+ T cells four months post onset of illness, measured by intracellular cytokine staining [29]. Arankalle et al. (2022) found that participants older than 55 years exhibited robust T-cell responses to a whole-virion inactivated vaccine like Covaxin (BBV152), measured by ELISpot [30,31]. We found higher levels of IFN-γ-producing cells among participants in the age group 14-18 years. However, this difference was not statistically significant across the multiple age groups studied (14-18y vs 19-60y vs >60y). One possible explanation for our finding could be the overall small number of participants, particularly in the age group 14-18 years (N=4). Therefore, we might have missed the difference even if it truly existed. However, our finding is in agreement with a study by Yan et al. (2022), in which IFN-γ persisted independent of age [25,26].
Takahashi et al. (2020), in a study among 98 hospital-admitted participants with confirmed COVID-19 diagnoses, reported that females showed more robust T-cell activation than male patients. In addition, the disease outcome was worse in male than in female patients [32]. Scully et al. (2020) reported that the case-fatality rate among males across 38 countries was 1.7 times higher than the average female case-fatality rate [33]. Although there were notable differences in the immune response between males and females during an active SARS-CoV-2 infection, Killic et al. (2021) observed that the levels of IFN-γ production remained similar between the two genders, suggesting that a higher number of T-cells does not necessarily result in increased IFN-γ release [34]. Similarly, in our study, we observed comparable levels of SARS-CoV-2-specific IFN-γ among males and females. However, we did find that males exhibited higher levels of spot-forming units (SFUs) compared to females, indicating a potentially greater T-cell response in males.
Convalescent-phase SARS-CoV-2-specific CD4+ and CD8+ T cells predominantly express IFN-γ [17,27]; the studies reporting this employed activation-induced marker and intracellular cytokine staining assays [29,35]. SARS-CoV-2-specific T cells persist for 17-18 months after the onset of illness [26]. Of the 125 convalescent participants in our study, 51 had documented previous exposure to SARS-CoV-2, 6-12 months prior to the date of sample collection. We used the ELISpot method and detected a robust and significantly higher SARS-CoV-2-specific T-cell IFN-γ response in the diagnosed cohort compared to those without previous documented exposure, irrespective of their vaccination status.
All the diagnosed and undiagnosed participants in the study were vaccinated, except one participant in the never-diagnosed category who was unvaccinated. A marked difference was observed in the levels of IFN-γ producing cells between the diagnosed-vaccinated and the never-diagnosed-vaccinated categories. This observation suggests that the T-cell immunity present 6-12 months post onset of illness is better among individuals who had been previously exposed to SARS-CoV-2 infection, irrespective of vaccination status. A similar finding has also been reported by Reynolds et al. (2021), who found that at 42 weeks, healthcare workers (HCWs) who were vaccinated and had previous exposure to infection developed enhanced T-cell immunity against the SARS-CoV-2 spike protein compared to those who were only vaccinated [36,37]. Another study, by Zuo et al. (2021), using ELISpot and intracellular cytokine staining on 100 convalescent donors 6 months post SARS-CoV-2 infection, found that the T-cell response was higher in donors who had experienced a symptomatic SARS-CoV-2 infection [25].
We compared T-cell immunity among the diagnosed participants on the basis of symptomatic versus asymptomatic infection more than six months earlier. T-cell immunity was comparable, but slightly higher in those who had had a symptomatic infection; our finding was thus consistent with previously reported results [25,27]. A previous study compared T-cell responses between Covishield and Covaxin recipients [30]. We employed the same method, and our Covaxin cohort was larger than our Covishield cohort. We did not find any significant difference in the levels of SARS-CoV-2-specific IFN-γ producing cells between the two groups.
Limitations
The current study had several limitations that should be acknowledged. Firstly, the sample size was relatively small, with only 150 participants selected from an original cohort of 10,000 individuals. Additionally, 25 participants had to be excluded due to internal quality control issues, further reducing the effective sample size. Consequently, the findings should be interpreted with caution, and the generalizability of the results may be limited.

Another limitation of the study was the utilization of the ELISpot method to determine the SARS-CoV-2 memory response by measuring total IFN-γ producing cells. This approach can be considered a drawback, since it did not allow for the specific characterization of CD4+ and CD8+ T-cell responses. An in-depth analysis of these T-cell subsets could provide a more comprehensive understanding of the immune response to SARS-CoV-2.

Despite these limitations, the present study contributes valuable insights into the immune response against SARS-CoV-2. Future research with larger sample sizes and more extensive methodologies, such as flow cytometry, could provide a more detailed characterization of T-cell subsets and improve our understanding of the immune response dynamics.

It is crucial to address these limitations in future studies to further elucidate the intricacies of the immune response to SARS-CoV-2 and obtain a more comprehensive picture of the protective mechanisms involved. Nonetheless, the current study provides initial evidence and paves the way for further investigations in this field.
Conclusions
The present study indicates that, irrespective of the age and sex of the participant, a T-cell immune response against SARS-CoV-2 was observed for more than six months after previous exposure, whether through vaccination or natural infection. Over three-quarters of the participants demonstrated a CMI response characterized by the production of IFN-γ. Individuals who had experienced natural infection displayed a more robust CMI response, irrespective of their vaccination status.

This underscores the importance of a history of diagnosed COVID-19 infection in enhancing the T-cell response compared to individuals who had never been diagnosed, regardless of their vaccination status, and suggests that prior exposure through natural infection confers a certain immunological advantage in terms of CMI.
The persistence of CMI suggests that vaccinated individuals and those who have recovered from previous infections may have a continued defence against the virus and potentially a reduced risk of reinfection.
FIGURE 3: SARS-CoV-2 IFN-γ levels in diagnosed and never-diagnosed participants irrespective of vaccination status. Comparison of the levels of SARS-CoV-2-specific IFN-γ producing cells among participants with pre-exposure to SARS-CoV-2 and those who were never diagnosed with SARS-CoV-2, irrespective of their vaccination status. ****p-value < 0.0001 (statistically highly significant). IFN-γ SFUs/million PBMCs = interferon-gamma spot-forming units per million peripheral blood mononuclear cells. The persistence of such T-cells is well documented in studies by Peluso et al. (2021) and Sherina et al. (2021).
TABLE 1: Distribution of participants by selected variables
*Excluding participants who took the vaccine, but the type is unknown. **Symptoms include vomiting, nausea, rashes, conjunctivitis, muscle ache, joint ache, loss of appetite/smell/taste, nose bleeding, fatigue, seizures, and other neurological symptoms.
"year": 2023,
"sha1": "bd7bbc59507ba7f3c5a01ceb39f8f156bfdba20c",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/197398/20231115-20034-rb1ccb.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c308df4b6b0a3d5e70c569a0cfbe3d2de8416ce0",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
Construction of operator product expansion coefficients via consistency conditions
In this thesis an iterative scheme for the construction of operator product expansion (OPE) coefficients is applied to determine low order coefficients in perturbation theory for a specific toy model. We use the approach to quantum field theory proposed by S. Hollands [arXiv:0802.2198], which is centered around the OPE and a number of axioms on the corresponding OPE coefficients. This framework is reviewed in the first part of the thesis. In the second part we apply an algorithm for the perturbative construction of OPE coefficients to a toy model: Euclidean $\varphi^6$-theory in 3-dimensions. Using a recently found formulation in terms of vertex operators and a diagrammatic notation in terms of trees [arXiv:0906.5313v1], coefficients up to second order are constructed, some general features of coefficients at arbitrary order are presented and an exemplary comparison to the corresponding customary method of computation is given.
Motivation
Various formulations of quantum field theory (QFT) have been proposed and established in the last century. The most popular ones can be split into two conceptually different categories: The path-integral and the operator approach.
The former uses the moments of some measure on the space of classical field configurations in order to construct correlation functions. As this measure is (formally) given in terms of the classical action, this formulation has the advantage of being closely related to classical field theory. In the operator approach, on the other hand, quantum fields are viewed as linear operators represented on some Hilbert space of states. Consequently, no corresponding classical theory, i.e. no Lagrangian formalism, is needed in this case. In this formalism special emphasis is put on the algebraic relations between the quantum fields. In fact, these relations may be viewed as determining the whole theory, as originally proposed by Haag and Kastler [3] in the framework of algebraic quantum field theory (AQFT). Algebraic approaches have also been useful in conformal quantum field theories (cQFT), see e.g. [4,5], and in view of the lack of a preferred Hilbert space representation turned out to be essential in the construction of quantum field theories on curved spacetimes [6,7,8,9,10].
In [1] a new approach to quantum field theory was proposed, where the algebraic relations between the fields (at short distances) are encoded in the Wilson operator product expansion (OPE) [11], which is elevated to the status of a defining element of the theory instead of an identity derived from it (see section 1.2). Furthermore the OPE coefficients have to obey certain constraints, in particular a factorization relation that was observed in the construction of the OPE on curved spacetimes [12]. An axiomatic formulation of this framework is given in [1]. The key observation is that consistency conditions arising from the mentioned factorization property can be used, in combination with field equations, as a constructive tool. In addition, graphical rules for the computation of OPE coefficients within this approach have been obtained [2]. The resulting algorithm for the construction of the OPE is different from standard ones relying on divergence properties of Feynman integrals [13], but one expects the results to be equivalent (see section 3.6). A remarkable feature of the new approach is the fact that it is inherently finite (see chapter 3.2), i.e. no renormalization procedure is needed.
This novel viewpoint has several advantageous features: As mentioned above, it is viable also on curved spacetimes, where it might even be necessary to elevate the existence of an OPE to axiomatic status [14]. In [1] the framework was also generalized to gauge theories, which play a central role in the description of particle physics. Another interesting feature concerns the regularity of the OPE coefficients as opposed to quantum states. In the case of simple examples it is easy to show that OPE coefficients may depend analytically on certain parameters of the theory (like e.g. the mass of a particle) where the quantum states show non-analytic behavior. As a result of such considerations, it was recently conjectured in [15] that even the perturbation series for the OPE coefficients may converge, i.e. it might be possible to perturbatively construct interacting quantum field theories within this approach.
This thesis will be concerned with a low order perturbative construction of this kind for the case of a simple toy model theory. It thus constitutes the first specific application of the very young framework outlined above and gives first impressions of the calculational effort involved in the iterative construction of perturbation theory. The above mentioned cancellation of divergences and also the formation of certain patterns in the mathematical structure of the coefficients can be observed explicitly. This also gives insights into the expected structure of higher order coefficients.
Historical background
The singular behavior of products of quantum fields at coinciding spacetime points has been a major obstacle in the construction and understanding of quantum field theory since its discovery. Thus, the analysis of these short distance divergences was, and still is, of great interest.
The idea of a short distance expansion for products of quantum field operators was first considered by Wilson in 1964 when, according to himself inspired by axiomatic QFT, he translated some work he "had done on Feynman diagrams with some very large momenta (...) into position space" [16]. This work, however, never got published and it took five more years until Wilson revisited his theory of short distance expansions adding new ideas by Kastrup and Mack concerning scale invariance in QFT. Ironically, although in his 1969 paper Wilson introduces the OPE as an alternative framework to Lagrangian models, in the following years it became an important tool in the understanding of these theories.
In 1970 Zimmermann proved that an OPE holds in perturbative quantum field theory, thus giving its first validation also in usual Lagrangian theories [17]. He used this new tool in order to define normal products of interacting quantum fields as a generalization of the normal ordered products in the free theory. Now a sensible notion of composite fields, i.e. for example higher powers of fields, could be defined in terms of normal products [18].
In the following years the OPE was established in the most important branches of quantum field theory: It became a standard tool in the analysis of quantum chromodynamics (QCD), played a crucial role in the development of conformal quantum field theories, has been proven within various axiomatic settings [19,20] and has been shown to hold order by order in perturbation theory on curved spacetime [12]. The OPE has also been used in order to prove a curved spacetime version of the spin-statistics theorem and the PCT theorem [21] and played a crucial role in the formulation of an axiomatic quantum field theory on curved spacetime, where the OPE was elevated to a fundamental status and replaces the requirement for the existence of a unique Poincaré-invariant state [14]. This shift of emphasis onto the OPE as defining property of the theory is in the spirit of the new approach considered in this thesis. Here the OPE is no longer viewed as simply a calculational tool that has been derived from the theory, but as a central feature around which the theory is built.
Organization
This thesis is organized as follows: In the next chapter we introduce the new framework of quantum field theory in terms of consistency conditions as proposed in [1]. After a short motivation of the ideas leading to this approach, an axiomatic setting for quantum field theory is presented, followed by an analysis of perturbation theory in this framework. Finally, a recently discovered convenient formulation of the theory in terms of vertex operators -the fundamental left representation -is studied.
Chapter 3 presents the results of this thesis, namely the low order perturbative construction of OPE coefficients for a specific Lagrangian model theory. Our considerations naturally start at the non-interacting theory (i.e. zeroth perturbation order), where the model and some notation are introduced. We then explain a general method of perturbations via non-linear field equations, obtaining an iterative scheme for the construction of OPE coefficients. The application of this algorithm to a specific model theory gives the main results of this thesis, which are presented in sections 3.4 and 3.5. In the end we briefly compare our framework to ordinary methods for the calculation of OPE coefficients in terms of an example.
The thesis is closed by chapter 4, where our results are reviewed and interpreted. Conclusions as well as an outlook on possible future developments are given.
We refer to the appendix for the introduction to the main mathematical objects appearing in the calculations.
QFT in terms of consistency conditions
In this chapter a new formulation of quantum field theory, first proposed in [1], is described. The central object of this framework is the OPE subject to certain constraints.
Motivation
The operator product expansion states that the product of two operators may be written as

⟨φ_a(x_1) φ_b(x_2)⟩_ω ≈ Σ_c C^c_{ab}(x_1, x_2; y) ⟨φ_c(y)⟩_ω    (2.1.1)

in terms of local quantum fields φ_c and C-number distributions C^c_{ab}. Here a, b, c label the composite fields of the underlying theory, ⟨·⟩_ω is the expectation value in the state ω and "≈" means that this identity holds as an asymptotic relation in the limit x_1, x_2 → y in a suitably strong sense. In the following we will shorten the notation by formally rewriting equation 2.1.1 as

φ_a(x_1) φ_b(x_2) ≈ Σ_c C^c_{ab}(x_1, x_2) φ_c(x_2),    (2.1.2)

where we have implicitly made the choice y = x_2. We furthermore assume the fields to live on a real Euclidean spacetime, which can always be achieved by analytic continuation provided the spectrum condition holds in the quantum field theory. In order to motivate the condition that lies at the heart of the new framework we should study the OPE of a product of three operators.
Now consider a situation as depicted in figure 2.1. Defining the Euclidean distance between two points x_i and x_j as

r_{ij} := |x_i − x_j|,    (2.1.4)

figure 2.1 tells us that r_{23} < r_{13}. In this case we expect it to be possible to perform the OPE successively, i.e. we first expand the product φ_b(x_2)φ_c(x_3) in equation 2.1.2 around x_3 and then expand the product of φ_a(x_1) with the result around x_3 once more:

φ_a(x_1) φ_b(x_2) φ_c(x_3) ≈ Σ_{d,e} C^e_{bc}(x_2, x_3) C^d_{ae}(x_1, x_3) φ_d(x_3).    (2.1.5)

Similarly, as figure 2.1 also implies r_{12} < r_{23}, we expect that we can start by expanding the product φ_a(x_1)φ_b(x_2) around x_2, multiply the result by φ_c(x_3) and finally expand around x_3 again:

φ_a(x_1) φ_b(x_2) φ_c(x_3) ≈ Σ_{d,e} C^e_{ab}(x_1, x_2) C^d_{ec}(x_2, x_3) φ_d(x_3).    (2.1.6)

So for constellations as in figure 2.1, i.e. on the open domain r_{12} < r_{23} < r_{13}, we obtain the consistency relation that both expansions agree,

Σ_e C^e_{ab}(x_1, x_2) C^d_{ec}(x_2, x_3) = Σ_e C^e_{bc}(x_2, x_3) C^d_{ae}(x_1, x_3)    (2.1.7)

when r_{12} < r_{23} < r_{13}. We will adopt the labeling of this constraint as "consistency" or "associativity" condition from [1]. The basic idea of the new framework presented in this chapter is that these conditions on the 2-point OPE coefficients are stringent enough to incorporate the full information about the structure of the quantum field theory or, stated conversely, that finding a solution to these conditions effectively means that one has constructed a quantum field theory.
But if the full information on the quantum field theory is to be encoded in the constraints on the 2-point OPE coefficients, then no further constraints should appear from higher order associativity conditions, i.e. from conditions on products of more than three fields. If, for example, we consider the OPE of four fields φ_a(x_1)φ_b(x_2)φ_c(x_3)φ_d(x_4) and successively expand this product in a similar manner as above, we will obtain new relations for the 2-point OPE coefficients similar to eq. 2.1.7. The question is now whether these constraints are genuinely new or can be deduced from eq. 2.1.7. As it turns out, this problem is analogous to the analysis of the associativity condition in ordinary algebra and as in this case we will show that no further conditions arise (see chapter 2.3). These considerations will also yield a unique expression of the higher order coefficients such as C^e_{abcd}(x_1, x_2, x_3, x_4) in terms of the 2-point OPE coefficients. This result is called the coherence theorem, because it states that the entire set of consistency conditions is coherently encoded in the associativity condition 2.1.7. Now that we have identified the 2-point OPE coefficients as fundamental entities of our approach, it would be of interest to formulate perturbation theory in terms of these coefficients. With this aim in mind, let us assume the following setting: We are given a 1-parameter family of 2-point coefficients with parameter λ. These coefficients shall satisfy the associativity condition 2.1.7 and we want to perturb around the quantum field theory described by the coefficients with λ = 0. In order to avoid messy equations we introduce an index free notation getting rid of the indices a, b, c, ... above. We view the 2-point OPE coefficients collectively as a linear map C(x_1, x_2): V ⊗ V → V, where V is the space of fields, whose basis components are given by C^c_{ab}(x_1, x_2). Then the Taylor expansion in the parameter λ around λ = 0 is

C(x_1, x_2; λ) = Σ_{i≥0} λ^i C_i(x_1, x_2),  where C_i(x_1, x_2) := (1/i!) ∂^i_λ C(x_1, x_2; λ)|_{λ=0}.    (2.1.8)

If we expand the associativity condition, eq. 2.1.7, in this way and assume that the condition holds at zeroth order, then we obtain the following constraint on the first order perturbation of the 2-point OPE coefficients,

C_1(x_2, x_3)(C_0(x_1, x_2) ⊗ id) + C_0(x_2, x_3)(C_1(x_1, x_2) ⊗ id) = C_1(x_1, x_3)(id ⊗ C_0(x_2, x_3)) + C_0(x_1, x_3)(id ⊗ C_1(x_2, x_3)),    (2.1.9)

which is linear in the first order coefficients and holds on the domain r_{12} < r_{23} < r_{13}. It was shown in [1] that this condition is of a cohomological nature and that one can identify the set of all possible first order perturbations satisfying this condition (modulo trivial field redefinitions) with the elements of a certain cohomology ring, which bears close resemblance to Hochschild cohomology [22,23]. This notion can be generalized to higher order perturbations, i.e. at each order the associativity condition is a potential obstacle for the continuation of the perturbation series. This obstruction is then again an element of our cohomology ring. The above definition of perturbation theory is very general in the sense that the physical meaning of the parameter λ is completely open. λ might for instance measure the strength of some coupling in a Lagrangian theory, like e.g. the self interaction in the theory described by the classical Lagrangian L = (∂φ)² + λφ⁴. The perturbation would in this case be around the free theory where the OPE coefficients are known. It is also possible to perturb more general, not necessarily Lagrangian, theories, e.g. more general conformal field theories. One could also take SU(N) Yang-Mills theory as yet another example.
Here one could choose λ = 1/N, where N is the number of colors of the theory, and perturb around the large-N limit of the theory. This general theory of perturbations is described in more detail in chapter 2.4.
Axiomatic framework
The aim of the present section is to give a precise formulation of the ideas informally presented above. In this approach quantum field theory is defined by an axiomatic setup that was first proposed in [1] (in [14] a basically similar framework was introduced for quantum field theory on curved spacetime).
First we define the playground of our quantum field theory to be an infinite dimensional vector space V , whose elements can be thought of as the components of the various composite scalar, spinor and tensor fields. For example, in a theory containing only one scalar field ϕ, the elements of V would be in one-to-one correspondence with the monomials in ϕ and its derivatives. One would naturally assume V to be graded in various ways, as it should be possible to classify the different quantum fields in the theory by characteristic properties, such as spin, dimension, Bose/Fermi character, etc. As we are considering Euclidean quantum field theories, we expect V to carry a representation of the D-dimensional rotation group SO(D), or of its covering group Spin(D) respectively if spinor fields are present. This representation can be decomposed into unitary, finite-dimensional irreducible representations V S characterized by the eigenvalues S = (λ 1 , . . . , λ r ) of the r Casimir operators associated with SO(D). Thus, we introduce a grading by these irreducible representations (irrep's):
V = ⊕_{∆ ∈ ℝ₊} ⊕_S ℂ^{N(∆,S)} ⊗ V_S.    (2.2.1)
An additional grading, which will later be related to the dimension of the quantum fields, is provided by the numbers ∆ ∈ ℝ₊. The numbers N(∆, S) ∈ ℕ, here supposed to be finite, express the multiplicity of the fields with a given dimension ∆ and spin S. Here we should remark that the infinite sums in this decomposition are understood without any closure taken, meaning that the elements of V are in one-to-one correspondence with sequences of the form (|v_1⟩, |v_2⟩, ..., |v_n⟩, 0, 0, ...), where only finitely many non-zero entries appear and |v_i⟩ is a vector in the i-th summand of the decomposition. As further structure on V we demand the existence of an anti-linear, involutive operation ⋆: V → V which should be thought of as hermitian conjugation of the quantum fields. Additionally we would like to have a linear grading map γ: V → V satisfying γ² = id which is to be thought of as a grading with respect to bosonic (eigenvalue +1) and fermionic (eigenvalue −1) vectors. Finally, we demand the existence of D derivations on V, i.e. linear maps ∂_µ: V → V with µ ∈ {1, ..., D} satisfying the Leibniz rule and ∂_µ ∘ γ = γ ∘ ∂_µ. These derivations increase the dimension ∆ of the vectors in V by 1. The linear space defined in this way is nothing more than a list of objects that we think of as labeling the composite fields of the theory, but so far the dynamics and the quantum nature of the theory, i.e. the information that is of most interest, have not been addressed. This information is now encoded in the OPE coefficients associated with the quantum fields. This is a hierarchy C ≡ (C(x_1, x_2), C(x_1, x_2, x_3), ...), where the C(x_1, ..., x_n) are analytic functions on the configuration space

M_n := {(x_1, ..., x_n) ∈ (ℝ^D)^n | x_i ≠ x_j for all 1 ≤ i < j ≤ n}    (2.2.3)

taking values in the linear maps

C(x_1, ..., x_n): V ⊗ ⋯ ⊗ V (n factors) → V.    (2.2.4)
For the one-point coefficient we set C(x_1) = id: V → V. By taking the components of these maps in a basis of V the OPE coefficients from the previous section can be retrieved. So, if {|v_a⟩} denotes a basis of V adapted to the grading, with the corresponding basis {⟨v^a|} of the dual space V*, we define

C^b_{a_1...a_n}(x_1, ..., x_n) := ⟨v^b| C(x_1, ..., x_n) |v_{a_1} ⊗ ⋯ ⊗ v_{a_n}⟩,    (2.2.6)
using the customary bra-ket notation |v_{a_1} ⊗ ⋯ ⊗ v_{a_n}⟩ := |v_{a_1}⟩ ⊗ ⋯ ⊗ |v_{a_n}⟩. In the following we express the basic properties of quantum field theory as axioms on the structure of the OPE coefficients:

Axiom 1 (Hermitian conjugation)
The OPE coefficients intertwine the ⋆-operation on V with complex conjugation (eq. 2.2.9), where the shorthand notation for the n-fold tensor product was used again.
Axiom 4 (Identity element)
There exists a unique element 1 ∈ V of dimension ∆ = 0 satisfying the properties 1⋆ = 1, γ(1) = 1 and

C(x_1, ..., x_n)(v_{a_1} ⊗ ⋯ ⊗ 1 ⊗ ⋯ ⊗ v_{a_n}) = C(x_1, ..., x̂_i, ..., x_n)(v_{a_1} ⊗ ⋯ ⊗ 1̂ ⊗ ⋯ ⊗ v_{a_n}),    (2.2.10)

where 1 is in the i-th tensor position, with i < n, and hats denote omission. If 1 is in the n-th position, the relation

C(x_1, ..., x_n)(v_{a_1} ⊗ ⋯ ⊗ v_{a_{n−1}} ⊗ 1) = t(x_{n−1}, x_n) C(x_1, ..., x_{n−1})(v_{a_1} ⊗ ⋯ ⊗ v_{a_{n−1}})    (2.2.11)

has to hold, where t is a "Taylor expansion map" characterized below.
The reason for the slightly more complicated form of eq. 2.2.11 is that x_n is the point we expand around and thus the corresponding n-th tensor entry stands on a different footing than the other entries. In order to heuristically motivate the form of equation 2.2.11 and to specify the map t more precisely, we consider the following situation: Let φ_a be a quantum (or classical) field. Then we can formally perform a Taylor expansion

φ_a(x_1) = Σ_{i≥0} (1/i!) y^{µ_1} ⋯ y^{µ_i} ∂_{µ_1} ⋯ ∂_{µ_i} φ_a(x_2)    (2.2.12)

with y = x_1 − x_2. As each field ∂_{µ_1} ⋯ ∂_{µ_i} φ_a is just another composite field of the theory, denoted for example by φ_b, we might rewrite equation 2.2.12 as

φ_a(x_1) = Σ_b t^b_a(x_1 − x_2) φ_b(x_2),    (2.2.13)

where the coefficients t^b_a are defined using the above Taylor expansion, at least up to potential trivial changes which take into account the fact that a derivative of the field φ_a might correspond to a linear combination of other fields in the particular labeling we have chosen for the fields. Application of these ideas formally yields the factorization property formulated in axiom 5 below (a small illustration of the map t is sketched first).
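The following is a minimal sketch under assumptions not contained in the source: a hypothetical one-dimensional, truncated field basis v_a = (d/dx)^a φ, for which the Taylor expansion map becomes the explicit matrix t^b_a(y) = y^{b−a}/(b−a)!, and re-expansion about an intermediate point composes consistently.

```python
import numpy as np
from math import factorial

# Minimal sketch of the Taylor expansion map t on a hypothetical truncated
# one-dimensional basis v_a = (d/dx)^a phi, a = 0, ..., N.  Expanding
# phi_a(x_1) around x_2 gives phi_a(x_1) = sum_b t^b_a(x_1 - x_2) phi_b(x_2)
# with t^b_a(y) = y**(b-a)/(b-a)! for b >= a and zero otherwise.

N = 6  # truncation order of the field basis (an assumption of this example)

def taylor_map(y: float) -> np.ndarray:
    """Matrix t(y) with entries t[b, a] = y**(b-a)/(b-a)! for b >= a."""
    t = np.zeros((N + 1, N + 1))
    for a in range(N + 1):
        for b in range(a, N + 1):
            t[b, a] = y ** (b - a) / factorial(b - a)
    return t

# Re-expanding around an intermediate point must agree with expanding directly:
# t(y + z) = t(z) t(y).  This is exact here, since t(y) = exp(y S) for a
# nilpotent shift matrix S, so no truncation error occurs.
y, z = 0.3, -1.2
assert np.allclose(taylor_map(z) @ taylor_map(y), taylor_map(y + z))
print("t(z) t(y) = t(y+z) verified on the truncated basis")
```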
Axiom 5 (Factorization)
For every partition of {1, ..., n} into disjoint, ordered, non-empty subsets I_1, ..., I_r, the identity

C(x_1, ..., x_n) = C(x_{m_1}, ..., x_{m_r}) ∘ (C(X_{I_1}) ⊗ ⋯ ⊗ C(X_{I_r}))    (2.2.19)

holds on the open domain

{(x_1, ..., x_n) ∈ M_n | max_{i,j ∈ I_k} r_{ij} < min_{i ∈ I_k, j ∉ I_k} r_{ij} for all k},    (2.2.20)

where m_k denotes the largest element of I_k and X_{I_k} := (x_i)_{i ∈ I_k}.
At this point some remarks are in order. First one should note that the factorization identity 2.2.19, expressed in a basis of V ⊗ ⋯ ⊗ V, involves an r-fold infinite sum on the right side of the equation. In fact, it is the statement of the factorization property that these infinite sums converge on the indicated domain. Outside the domain the sums are not restricted at all and one would expect them to diverge. The axiom can be generalized to arbitrary partitions of {1, ..., n} making use of the (anti-)symmetry axiom 7 below. If fermionic fields are included, then ± signs will appear. It is also important to remark that the factorization relation can be iterated on suitable domains, i.e. if for example the subset I_j is itself partitioned into subsets, then the coefficient C(X_{I_j}) will itself factorize on a suitable subdomain. Such subsequent partitions may naturally be identified with trees. A version of the factorization property in terms of trees was given in [12] and will also be given below in eq. 2.3.11.
Axiom 6 (Scaling)
Let v_{a_1}, ..., v_{a_n} ∈ V be vectors with dimensions ∆_1, ..., ∆_n (remember the decomposition of V in eq. 2.2.1) respectively and let v^b ∈ V* be an element in the dual space of V with dimension ∆_{n+1}. Then the scaling degree of the ℂ-valued distribution 2.2.6 should be estimated by

sd(C^b_{a_1...a_n}) ≤ ∆_1 + ⋯ + ∆_n − ∆_{n+1}.    (2.2.21)

By scaling degree we mean

sd(C^b_{a_1...a_n}) := inf{δ ∈ ℝ | lim_{ε→0} ε^δ C^b_{a_1...a_n}(εx_1, ..., εx_n) = 0}.    (2.2.22)

Further, if v^b is an element of the one-dimensional subspace of dimension-0 fields spanned by the identity operator 1 ∈ V, and if n = 2 and v_{a_1} = v⋆_{a_2} ≠ 0, then equality in eq. 2.2.21 is required to hold.
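The content of eq. 2.2.22 can be checked numerically for a simple model coefficient. The following sketch is illustrative only: the power p and the profile u(x) = |x|^{−p} are assumptions chosen so that the expected scaling degree is known in advance.

```python
import numpy as np

# Numerical illustration of the scaling degree, eq. 2.2.22, for the
# hypothetical coefficient u(x) = |x|**(-p): one expects sd(u) = p.
p = 1.5
u = lambda x: np.linalg.norm(x) ** (-p)

x0 = np.array([0.7, -0.4, 0.2])       # fixed test point in D = 3
eps = np.logspace(-1, -6, 20)         # sequence of scalings eps -> 0
vals = np.array([u(e * x0) for e in eps])

# u(eps x) ~ eps**(-sd) implies the log-log slope of u(eps x) vs eps is -sd.
slope = np.polyfit(np.log(eps), np.log(vals), 1)[0]
print(f"estimated scaling degree: {-slope:.3f} (expected {p})")
```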
Axiom 7 ((Anti-)symmetry)
Let τ_{i−1,i} be the permutation exchanging the (i−1)-th and the i-th tensor factor in an element of V ⊗ ⋯ ⊗ V. Then we have (for 1 < i < n)

C(x_1, ..., x_{i−1}, x_i, ..., x_n) ∘ τ_{i−1,i} = C(x_1, ..., x_i, x_{i−1}, ..., x_n) · (−1)^{F_{i−1} F_i},    (2.2.23)

where F_i denotes the Bose/Fermi grading (defined through γ) of the vector in the i-th tensor slot; for i = n the analogous relation (eq. 2.2.25) holds after the expansion point is shifted back by means of the Taylor expansion map t. The last factor in eqs. 2.2.23 and 2.2.25 makes the OPE coefficients of bosonic fields symmetric and the OPE coefficients of fermionic fields anti-symmetric. Also notice that again we had to treat the case involving the n-th point and the n-th tensor factor separately. The reasons for this are obviously similar to those leading to eq. 2.2.11.
Axiom 8 (Derivations)
The derivations ∂_µ on V are compatible with partial derivatives of the OPE coefficients with respect to the spacetime arguments x_i ∈ ℝ^D in the sense that

C(x_1, ..., x_n) ∘ (id ⊗ ⋯ ⊗ ∂_µ ⊗ ⋯ ⊗ id) = ∂/∂x_i^µ C(x_1, ..., x_n)    (2.2.26)

for i < n, with ∂_µ acting in the i-th tensor slot. If there is a derivative in the last tensor position, then the identity

C(x_1, ..., x_n) ∘ (id ⊗ ⋯ ⊗ id ⊗ ∂_µ) = ∂/∂x_n^µ C(x_1, ..., x_n) + ∂_µ ∘ C(x_1, ..., x_n)    (2.2.27)

has to hold instead, since the result of the expansion is itself a field at x_n. This axiom, which has not been required in [1], will later allow us to transform field equations, i.e. partial differential equations on V, into similar relations involving the OPE coefficients (see section 3.2).
This finishes our presentation of the axioms for the new framework. As already mentioned in section 2.1, the factorization property, axiom 5, lies at the heart of the theory. The stringent constraints it imposes on the possible consistent hierarchies (C(x_1, x_2), C(x_1, x_2, x_3), ...) carry the main information in the approach. The Euclidean invariance condition, axiom 2, implies translation invariance of the OPE coefficients and links the decomposition 2.2.1 of the field space V into sectors of different spin to the transformation properties of the OPE coefficients under rotations. Likewise, the scaling property links the grading of V with respect to the dimension to the scaling properties of the OPE coefficients. Furthermore, the (anti-)symmetry requirement, axiom 7, is a replacement for local (anti-)commutativity (Einstein causality) in the Euclidean setting. Note also that we have not required a spin-statistics or PCT theorem to hold, as these can in fact be derived from the axiomatic framework just introduced (see [21]).
So in summary, a quantum field theory is defined as a pair (V, C) consisting of a vector space V with the above properties and a hierarchy of OPE coefficients C := (C(x_1, x_2), C(x_1, x_2, x_3), ...) satisfying axioms 1 to 8. Now the issue of equivalence of different theories of this kind arises. One would naturally identify quantum field theories that only differ by a redefinition of their fields, where, informally, a field redefinition means that the definition of the quantum fields of the theory is changed by a transformation of the form φ_a(x) → φ̂_a(x) = z^b_a φ_b(x), where z^b_a is some matrix on field space. The following definition carries over these ideas to our framework.
Definition 2.1
Let (V, C) and (V̂, Ĉ) be two quantum field theories. If there exists an invertible linear map z: V → V̂ such that

Ĉ(x_1, ..., x_n) ∘ z_n = z ∘ C(x_1, ..., x_n)    (2.2.29)

for all n, where z_n = z ⊗ ⋯ ⊗ z, then the two quantum field theories are said to be equivalent and z is called a field redefinition.
Before concluding this chapter one further condition is imposed, namely that the quantum field theory (V, C) exhibits a vacuum state. The appropriate notion of quantum state in our Euclidean setting is a collection of correlation functions, which we will write as ⟨φ_{a_1}(x_1) ⋯ φ_{a_n}(x_n)⟩_Ω with arbitrary n and a_1, ..., a_n. These functions should be analytic on M_n, satisfy the Osterwalder-Schrader (OS) axioms for the vacuum state Ω (see [24,25]) and also the OPE in the sense that

⟨φ_{a_1}(x_1) ⋯ φ_{a_n}(x_n)⟩_Ω ∼ Σ_b C^b_{a_1...a_n}(x_1, ..., x_n) ⟨φ_b(x_n)⟩_Ω.

The symbol "∼" here indicates that the difference between the left and right side of this expression is a distribution on M_n with smaller scaling degree than any given number δ, provided the above sum goes over all of the finitely many fields φ_b whose dimension is smaller than some number ∆ = ∆(δ). Then by the OS-reconstruction theorem the theory can be continued back to Minkowski spacetime and the fields can be represented as linear operators on a Hilbert space H of states. It may also be of interest in some settings (e.g. theories with unbounded potentials) to drop the notion of a unique vacuum state by leaving out those OS-axioms which involve statements about invariance under Euclidean transformations.
Obviously, if we require a quantum state to satisfy the OS-axioms, new constraints on the OPE coefficients are expected to appear. These constraints will not be discussed here, as our focus is on the algebraic conditions satisfied by the OPE coefficients. The additional restrictions imposed by the OS-axioms are genuinely new, so in some contexts one might even want to drop them (e.g. the condition of OS-positivity does not hold in some systems in statistical mechanics and in gauge theories before taking the quotient by the BRST-differential).
The coherence theorem
It has already been mentioned a few times that we think of the factorization property, axiom 5, as the key condition on the OPE coefficients. These restrictions on the OPE coefficients C(x 1 , . . . , x n ) (with n ≥ 2) are expected to be very stringent and encode most of the non-trivial information of the theory. It is the purpose of this section to analyze the interdependence of these conditions for different n, i.e. for OPE coefficients coming from the expansion of a product of n fields. The result will be that all the higher constraints (i.e. larger n) are already encoded in the first non-trivial constraint arising for n = 3. As this implies that all the factorization constraints can be coherently described by a single condition, this result was named the coherence theorem in [1].
It is instructive to consider an analog from ordinary algebra before treating our case in more detail. Let us consider a finite dimensional associative algebra A. Thus, for any elements A, B, C ∈ A the associativity law A(BC) = (AB)C holds, or, rewritten in terms of the linear product map m: A ⊗ A → A,

m ∘ (m ⊗ id) = m ∘ (id ⊗ m).    (2.3.2)

For n = 3 spacetime points, axiom 5 provides factorizations of C(x_1, x_2, x_3) into 2-point coefficients on three different domains, one for each way of grouping the three points. The first two domains are clearly disjoint, but both have a non-empty intersection with the third domain. Thus, according to axiom 5, on these intersections both factorizations should give C(x_1, x_2, x_3) and hence should also be equal to each other. We can write this as

C(x_1, x_3) ∘ (id ⊗ C(x_2, x_3)) = C(x_2, x_3) ∘ (C(x_1, x_2) ⊗ id),    (2.3.7)

where the spacetime arguments lie in the intersection r_{12} < r_{23} < r_{13}, together with a symmetry condition (eq. 2.3.8) and a normalization relation (eq. 2.3.9) for r_{12} < r_{23}. So in the case n = 3 (i.e. three spacetime points) there exists only one independent consistency condition, eq. 2.3.7, which has already been given in component form in 2.1.7. Remember, the aim of this chapter is to analyze higher factorization conditions, i.e. axiom 5 for n > 3. We have seen in the analogous problem in ordinary algebra that all higher associativity conditions can be derived from eq. 2.3.2, which is the analogue of our eq. 2.3.7. In the following it will be shown that, as in the example of associative algebra, we will not encounter any higher factorization conditions and that the coefficients C(x_1, ..., x_n), the analogue of a product of n elements of our algebra A, are completely determined by the coefficients C(x_1, x_2), which can be identified with a product of two elements of A.
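The ordinary-algebra counterpart of the consistency condition can be verified explicitly in components. The sketch below is an illustration, not taken from the source; it encodes the 2×2 matrix algebra by its structure constants m^c_{ab} and checks the x-independent analogue of eq. 2.3.7, namely Σ_e m^e_{ab} m^d_{ec} = Σ_e m^e_{bc} m^d_{ae}.

```python
import numpy as np
from itertools import product

# Finite-dimensional analogue of the consistency condition: the algebra of
# 2x2 real matrices in the basis of elementary matrices E_1, ..., E_4.
dim = 4
basis = []
for i, j in product(range(2), range(2)):
    E = np.zeros((2, 2)); E[i, j] = 1.0
    basis.append(E)

# Structure constants m[c, a, b] defined by E_a E_b = sum_c m[c, a, b] E_c;
# the extraction works because the basis is orthonormal in the
# Hilbert-Schmidt inner product.
m = np.zeros((dim, dim, dim))
for a, b, c in product(range(dim), repeat=3):
    m[c, a, b] = np.sum((basis[a] @ basis[b]) * basis[c])

lhs = np.einsum('eab,dec->dabc', m, m)   # expand (ab) first, then with c
rhs = np.einsum('ebc,dae->dabc', m, m)   # expand (bc) first, then with a
assert np.allclose(lhs, rhs)
print("associativity of the structure constants verified")
```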
Thus, our first task is to write all the factorization equations only in terms of the C(x 1 , x 2 ). In this context the language of rooted trees is natural and useful (see also [12]). By a rooted tree on n elements {1, . . . , n} we will mean a set {S 1 , . . . , S k } of nested subsets S i ⊂ {1, . . . , n}, such that each S i is either contained in another set of the family of subsets, or disjoint from it. The set {1, . . . , n}, called the root, is by definition not in the tree. One can then visually think of the sets S i as the nodes of the tree, which are connected by branches to all those nodes that are subsets of S i , but not proper subsets of any element of the tree other than S i . The leaves of the tree are the nodes that do not contain any other sets of the tree as subsets, so they are of the form S i = {i}. Let us introduce some further notation: If T is a tree on n elements of a set, then we denote by |T| the elements of this set. Furthermore, let T be of the form T = {T 1 , . . . , T r }, where each T i is itself a tree on a proper subset of {1, . . . , n}, so that |T 1 | ∪ · · · ∪ |T r | = {1, . . . , n} is a partition into disjoint subsets. For such trees we recursively define an open, non-empty domain of M n , by
D[T] := {(x_1, ..., x_n) ∈ M_n | (x_j)_{j ∈ |T_i|} ∈ D[T_i] for all i, and max_i max_{j ∈ |T_i|} r_{j m_i} < min_{k ≠ l} r_{m_k m_l}}.    (2.3.10)
Here m_i is the maximal element of the set upon which the tree T_i is built; otherwise the notation is as introduced above axiom 5. In order to explicitly write down these identities, some further notation is useful. Let S ∈ T. Then we write l(1), ..., l(j) ⊂_T S if l(1), ..., l(j) are the branches descending from S. By m_i we denote the largest element in l(i) and assume an ordering of the branches such that m_1 < ... < m_j. Also, as above in eq. 2.2.6, we will work in a basis of the linear maps C(x_1, ..., x_n). Then for each tree T an iteration of axiom 5 as described above will lead to the following factorization identity on the domain D[T]:

C^b_{a_1...a_n}(x_1, ..., x_n) = Σ_{a_S} Π_{S ∈ T ∪ {root}} C^{a_S}_{a_{l(1)}...a_{l(j)}}(x_{m_1}, ..., x_{m_j}),    (2.3.11)

where the sums are over all a_S with S a subset in the tree excluding the root {1, ..., n} and the leaves {1}, ..., {n}. For the latter we set a_{{1}} := a_1, ..., a_{{n}} := a_n and a_{{1,...,n}} := b. The hierarchical order by which the nested infinite sums are carried out is determined by the tree, with the sums corresponding to the nodes closest to the leaves coming first. Now, if we assume T to be a binary tree, i.e. one with exactly two branches descending from every node, then by the above formula we have expressed the n-point OPE coefficient C(x_1, ..., x_n) in terms of products of 2-point coefficients on the open domain D[T] ⊂ M_n. Remembering that by definition C(x_1, ..., x_n) is an analytic function on the open, connected domain M_n and using the fact that an analytic function on a connected domain is uniquely determined by its restriction to an open set, we propose:
Proposition 1
The n-point OPE coefficients C(x_1, ..., x_n) are uniquely determined by the 2-point coefficients C(x_1, x_2). In particular, if two quantum field theories have equivalent 2-point OPE coefficients (see definition 2.1), then they are equivalent.
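The combinatorial bookkeeping of trees as nested subsets is easy to make concrete. The following sketch uses hypothetical helper names (is_tree, branches) that are not from the source; it checks the nesting property and reads off the branches and the distinguished expansion points m_i.

```python
# Sketch of the rooted-tree bookkeeping used above: a tree on {1, ..., n} is a
# family of subsets, each pair either nested or disjoint; leaves are singletons.

def is_tree(family):
    """Check that every pair of subsets is nested or disjoint."""
    return all(a <= b or b <= a or not (a & b) for a in family for b in family)

def branches(S, family):
    """Nodes directly below S: maximal members of the family properly contained in S."""
    below = [T for T in family if T < S]
    return [T for T in below if not any(T < U for U in below)]

n = 5
leaves = [frozenset({i}) for i in range(1, n + 1)]
tree = leaves + [frozenset(s) for s in ({1, 2}, {1, 2, 3}, {4, 5})]
root = frozenset(range(1, n + 1))   # the root is, by definition, not in the tree

assert is_tree(tree)
for S in (root, frozenset({1, 2, 3})):
    bs = branches(S, tree)
    # m_i = max(T_i) is the distinguished expansion point of each branch
    print(sorted(S), "->", sorted(map(sorted, bs)),
          "expansion points:", sorted(max(T) for T in bs))
```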
Thus, our first task is achieved. However, the question of higher factorization conditions is still not solved. It remains to show that the factorization condition 2.3.11 for binary trees does not impose any further restrictions on C(x_1, x_2) apart from eq. 2.3.7. Therefore, let us study the expression

f^b_{T; a_1...a_n}(x_1, ..., x_n) := Σ_{a_S} Π_{S ∈ T ∪ {root}} C^{a_S}_{a_{l(1)}...a_{l(j)}}(x_{m_1}, ..., x_{m_j})    (2.3.12)

on the domain D[T] for any binary tree T. In other words, f_T(x_1, ..., x_n) is just the expression for C(x_1, ..., x_n) in the factorization identity 2.3.11 for the binary tree T. So, as we have just argued, f_T can be analytically continued to an analytic function on M_n, which we will denote by f_T as well and which does not depend on the choice of binary tree T. As we want to analyze the constraints imposed on the 2-point coefficients C(x_1, x_2) by the properties we just stated, we now drop these assumptions and only assume that the sums in eq. 2.3.12 converge and define an analytic function f_T on D[T], which can be analytically continued to M_n for all n and all binary trees T on n elements. For the sake of the argument, we in particular do not assume the f_T to coincide for different binary trees, except in the case n = 3, where the assumption that the f_T coincide for the three possible binary trees on the respective domains is equivalent to the factorization condition for three points, eq. 2.3.7, plus the symmetry and normalization conditions, eqs. 2.3.8 and 2.3.9. These three conditions will be assumed to hold. Now we want to show that these assumptions suffice to deduce that all f_T coincide for all binary trees T, thus implying the absence of further consistency conditions on C(x_1, x_2) beyond those for n = 3. We graphically present the corresponding proof, which is not very difficult and quite similar to the proof of the analogous statement in our example of ordinary algebra. We start with the case n = 3. Here, the assumption that all f_T agree for the three trees is graphically expressed by fig. 2.2. Each tree in this graph symbolizes the corresponding expression f_T and the arrows denote the following relations: (i) the corresponding domains (see eq. 2.3.10) are not disjoint and (ii) the expressions coincide on the intersection. Analyticity of the f_T then implies that the f_T's agree on the whole of M_n. Let us now move on to the n > 3 case and let T be an arbitrary tree on n elements. The idea of the proof is the following: Find a sequence T_0, T_1, ..., T_r of trees such that T_0 = T and T_r = S, where S is the reference tree as drawn in fig. 2.3. Furthermore, we want a relation as above, i.e. non-disjointness and equivalence on the intersection of the domains, to hold between elements T_i and T_{i−1} in our sequence of trees. Making use of the analyticity properties of the f_T as in the n = 3 case, this would yield f_T = f_S on M_n, and hence all f_T would be equal.
The construction of the desired sequence of trees is presented in the following by inductive methods. Our starting point is the binary tree T = T 0 as depicted on the left of fig. 2.4, where shaded regions represent subtrees whose particular form is irrelevant at this stage.
The next tree in the sequence, T_1, is drawn on the right of fig. 2.4. As stated above, one easily checks that the required relation holds between these two trees. The transformation of trees used so far is no longer applicable at some stage, so we then perform a manipulation as depicted in fig. 2.5. Again it is easy to convince oneself that the desired relation holds between these trees. This process can be repeated until we reach the tree T_{r_2} drawn in fig. 2.6. By now it is clear how to proceed with the iteration until we reach the desired tree S of fig. 2.3. As a result, we obtain the following theorem:

Theorem (Coherence theorem)
Let f_T be defined as in eq. 2.3.12 on the domain D[T] for any binary tree T as a convergent power series expansion and assume that f_T has an analytic extension to all of M_n. Furthermore, assume that the associativity condition 2.3.7 and the symmetry and normalization conditions, eqs. 2.3.8 and 2.3.9, hold, i.e. suppose that all f_T coincide for trees with three leaves. Then f_T = f_S for any pair of binary trees T, S.
General theory of perturbations
The concept of perturbations of a quantum field theory is essential in the extraction of explicit measurable predictions from the theory. Thus, we would like to implement this notion in our framework as well. This section will be concerned with the description of perturbation theory in the new framework. According to our definition, a perturbed quantum field theory should correspond to a perturbation series in some parameter λ for the OPE coefficients. As these coefficients are required to satisfy the constraints given in section 2.2, the perturbations of the coefficients will also have to satisfy corresponding constraints. It will be shown in this section that these constraints are of a cohomological nature. We have seen in the previous section that, up to technicalities related to the convergence of various series, the constraints on the OPE coefficients imposed by the factorization condition, axiom 5, can be formulated as a single "associativity condition" on the 2-point coefficients only, see eq. 2.3.7. The perturbed 2-point OPE coefficients will have to satisfy a perturbed version of this constraint, which turns out to be essentially the only constraint. We now want to study this perturbed version of the associativity condition.
The analogy between our framework (in particular the factorization condition) and ordinary algebra has already been emphasized in the previous section. We now carry this analogy a bit further, as the following discussion is closely analogous to the well-known characterization of perturbations, or in this context deformations, of an ordinary finite dimensional algebra. Let us therefore recall the basic theory of deformations of finite dimensional algebras (see [23,26]). Let A be a finite dimensional algebra over ℂ, whose product is denoted as usual by A ⊗ A → A, A ⊗ B ↦ AB for all A, B ∈ A. Then a 1-parameter family of products A ⊗ B ↦ A •_λ B, where λ ∈ ℝ is a smooth deformation parameter, is called a deformation. We define the product A •_0 B to be the original product AB, but for non-zero λ we obtain a new product on A, or alternatively on the ring of formal power series ℂ(λ) ⊗ A if we merely consider perturbations in the sense of formal power series. As argued above, this new product has to satisfy the strong constraints imposed by the associativity condition. Denoting the i-th order perturbation of the product by

m_i(A ⊗ B) := (1/i!) (d^i/dλ^i)(A •_λ B)|_{λ=0},    (2.4.1)

the associativity law yields to first order

A m_1(B ⊗ C) − m_1(AB ⊗ C) + m_1(A ⊗ BC) − m_1(A ⊗ B) C = 0    (2.4.2)

as a map A ⊗ A ⊗ A → A, in an obvious tensor product notation. Similarly, one obtains conditions for higher derivatives m_i of the new product, which for i ≥ 2 are of the form

A m_i(B ⊗ C) − m_i(AB ⊗ C) + m_i(A ⊗ BC) − m_i(A ⊗ B) C = Σ_{j=1}^{i−1} [ m_j(m_{i−j}(A ⊗ B) ⊗ C) − m_j(A ⊗ m_{i−j}(B ⊗ C)) ].    (2.4.3)

In this discussion we want to exclude the trivial case, i.e. a simple λ-dependent redefinition of the generators of A. Such a redefinition may be expressed in terms of a 1-parameter family of invertible maps α_λ: A → A, such that the corresponding trivially deformed product can be written as

A •_λ B := α_λ^{−1}(α_λ(A) α_λ(B)).    (2.4.4)

So α_λ can be viewed as an isomorphism between (A, •_0) and (A, •_λ), which suggests that the latter should not be regarded as a new algebra. To first order, the trivially deformed product is given by m_1(A ⊗ B) = α_1(A)B + A α_1(B) − α_1(AB), with α_1 := (d/dλ) α_λ|_{λ=0}. Similar formulas hold for higher orders. We now want to give a more elegant formulation and interpretation of our conditions for the i-th order deformations of the associative product, eq. 2.4.3, using the language of cohomology theory [26]. For this purpose, we introduce the linear space Ω^n(A) of all linear maps ψ_n: A ⊗ ⋯ ⊗ A → A and the linear operator d: Ω^n → Ω^{n+1} defined by

(dψ_n)(A_1 ⊗ ⋯ ⊗ A_{n+1}) := A_1 ψ_n(A_2 ⊗ ⋯ ⊗ A_{n+1}) + Σ_{i=1}^{n} (−1)^i ψ_n(A_1 ⊗ ⋯ ⊗ A_i A_{i+1} ⊗ ⋯ ⊗ A_{n+1}) + (−1)^{n+1} ψ_n(A_1 ⊗ ⋯ ⊗ A_n) A_{n+1}.    (2.4.6)

Using this definition and the associativity law for the original product on the algebra A one can show that d² = 0, i.e. d is a differential with a corresponding cohomology complex, the so-called Hochschild complex (see e.g. [27]). More precisely, let Z^n(A) be the space of all closed maps ψ_n, i.e. those satisfying dψ_n = 0, and B^n(A) the space of all exact ψ_n, that means those for which ψ_n = dψ_{n−1} for some ψ_{n−1}. Then the n-th Hochschild cohomology HH^n(A) is defined to be the quotient Z^n(A)/B^n(A). In this language, we may now identify the first order associativity condition, eq. 2.4.2, with the statement dm_1 = 0, or equivalently m_1 ∈ Z²(A). In addition, if the new product arises from just a trivial redefinition, as in eq. 2.4.4, then it follows that m_1 = dα_1, which means m_1 ∈ B²(A). So indeed one finds that the non-trivial first order perturbations m_1 of the algebra product correspond to the non-trivial classes [m_1] ∈ HH²(A). Hence, HH²(A) ≠ 0 is necessary for the existence of non-trivial deformations. We now want to continue our analysis at higher orders. For this purpose, let us assume a non-trivial first order deformation to exist and let us study the second order deformations. Thus, we consider eq. 2.4.3 for i = 2 and start with the right side of this equation, which can be viewed as a map ω_2 ∈ Ω³(A). Computation shows that dω_2 = 0, so ω_2 ∈ Z³(A). The left side of equation 2.4.3 for i = 2 turns out to be just dm_2 ∈ B³(A). Thus, if the second order associativity condition is to hold, we must have ω_2 = dm_2 ∈ B³(A), or in other words, the class [ω_2] ∈ HH³(A) must vanish. It follows that there is an obstruction to lifting the perturbation to second order in the case of non-trivial deformations, i.e. for HH³(A) ≠ 0. We can analogously continue to third order, obtaining the corresponding potential obstruction that [ω_3] ∈ HH³(A) vanishes, and so on. In summary, the space of non-trivial perturbations corresponds to elements of HH²(A), while the obstructions lie in HH³(A).
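The Hochschild differential of eq. 2.4.6 can be implemented directly from the structure constants of a finite-dimensional algebra. The sketch below is illustrative (the encoding of cochains as numpy arrays is an assumption of this example); it verifies that the product itself is a closed 2-cochain and that d² = 0.

```python
import numpy as np
from itertools import product

# Hochschild differential for the 2x2 matrix algebra, encoded by its
# structure constants m[c, a, b] (E_a E_b = sum_c m[c, a, b] E_c).
dim = 4
basis = []
for i, j in product(range(2), range(2)):
    E = np.zeros((2, 2)); E[i, j] = 1.0
    basis.append(E)
m = np.zeros((dim, dim, dim))
for a, b, c in product(range(dim), repeat=3):
    m[c, a, b] = np.sum((basis[a] @ basis[b]) * basis[c])

def d(psi):
    """Hochschild differential of an n-cochain psi[c, a_1, ..., a_n]."""
    n = psi.ndim - 1
    out = np.zeros((dim,) * (n + 2))
    for idx in np.ndindex(out.shape):
        c, args = idx[0], idx[1:]
        # term A_1 psi(A_2, ..., A_{n+1})
        val = sum(m[c, args[0], e] * psi[(e,) + args[1:]] for e in range(dim))
        # terms (-1)^i psi(A_1, ..., A_i A_{i+1}, ..., A_{n+1})
        for i in range(n):
            val += (-1) ** (i + 1) * sum(
                m[e, args[i], args[i + 1]] * psi[(c,) + args[:i] + (e,) + args[i + 2:]]
                for e in range(dim))
        # term (-1)^{n+1} psi(A_1, ..., A_n) A_{n+1}
        val += (-1) ** (n + 1) * sum(psi[(e,) + args[:-1]] * m[c, e, args[-1]] for e in range(dim))
        out[idx] = val
    return out

assert np.allclose(d(m), 0.0)                      # the product is a closed 2-cochain
psi1 = np.random.default_rng(0).normal(size=(dim, dim))
assert np.allclose(d(d(psi1)), 0.0)                # d^2 = 0, so a cohomology is defined
print("d m = 0 and d^2 = 0 verified")
```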
Let us now conclude this example and try to carry over the concepts we just used to the case of perturbations of a quantum field theory in our framework. A short reminder: A quantum field theory as defined in section 2.2 is given by the pair (V, C) of a vector space V, as defined in eq. 2.2.1, and a hierarchy of OPE coefficients C with certain properties. In section 2.3 we argued that all higher n-point coefficients are uniquely determined by the 2-point coefficients C(x_1, x_2). Furthermore, we were able to show that, up to technical assumptions concerning the convergence of the series in eq. 2.3.12, the key constraints on the n-point OPE coefficients are encoded in the associativity condition, eq. 2.3.7 for the 2-point coefficient, which we repeat for convenience:

C(x_1, x_3) ∘ (id ⊗ C(x_2, x_3)) = C(x_2, x_3) ∘ (C(x_1, x_2) ⊗ id)  for r_{12} < r_{23} < r_{13}.    (2.4.7)
We want to study the following problem: When is it possible to find a 1-parameter deformation C(x_1, x_2; λ) of the OPE coefficients which again satisfies the associativity condition, at least in the sense of formal power series in the deformation parameter λ? In fact, the symmetry condition 2.3.8, the normalization condition 2.3.9 and the axioms from section 2.2, except for axiom 5, should hold for the perturbation as well. However, as these conditions are linear in C(x_1, x_2), they are much more trivial in nature than eq. 2.4.7. Therefore, for the rest of this section, we will not include these conditions in our discussion, but instead continue with the main point, i.e. the implications of the associativity condition 2.4.7 for the perturbed OPE coefficients. In analogy to the example from ordinary algebra, we will again find a characterization of perturbations in a cohomological framework. We now want to define a linear operator b, which defines the cohomology in question and therefore plays the role of the d in our example. However, because the definition of this operator will involve infinite sums (just as eq. 2.4.7) and as such sums are typically only convergent on certain domains, we have to specify a set of domains that will be stable under the action of b and is suitable for our application. The choice of a set of this kind is by far not unique, and different choices will yield different rings. For simplicity and definiteness, we choose the non-empty, open domains F_n ⊂ (ℝ^D)^n, fixed by a definite ordering of the relative distances r_{ij} (eq. 2.4.8). It is possible to express these domains in terms of the D[T] defined in eq. 2.3.10, but this will not be necessary. Note also that the associativity condition 2.4.7 holds on the domain F_3 = {r_{12} < r_{23} < r_{13}}.
We also need some objects for b to act upon. Therefore we define Ω^n(V) to be the set of all holomorphic functions f_n on the domain F_n that are valued in the linear maps

f_n(x_1, ..., x_n): V ⊗ ⋯ ⊗ V → V,  (x_1, ..., x_n) ∈ F_n.    (2.4.9)
Now we are ready to introduce the boundary operator b: Ω^n(V) → Ω^{n+1}(V) by the formula

(b f_n)(x_1, ..., x_{n+1}) := C(x_1, x_{n+1})(id ⊗ f_n(x_2, ..., x_{n+1})) + Σ_{i=1}^{n} (−1)^i f_n(x_1, ..., x̂_i, ..., x_{n+1}) ∘ (id^{⊗(i−1)} ⊗ C(x_i, x_{i+1}) ⊗ id^{⊗(n−i)}) + (−1)^{n+1} C(x_n, x_{n+1})(f_n(x_1, ..., x_n) ⊗ id),    (2.4.10)
where C(x 1 , x 2 ) is the undeformed OPE coefficient and a hat means omission. Note that this definition involves a composition of C with f n , which, when expressed in a basis of V , implicitly involves an infinite summation over these basis elements. It is therefore necessary to assume here (and also in the following for similar formulas) that these sums converge on the set of points (x 1 , . . . , x n+1 ) ∈ F n+1 . Thus, whenever we write bf n , it is understood that f n ∈ Ω n (V ) is in the domain of b. We now need the following lemma:
Lemma 1
The map b is a differential, i.e. b² f_n = 0 for f_n in the domain of b such that b f_n is also in the domain of b.
The corresponding proof is essentially straightforward computation and was given in [1], so it will not be repeated here. With the help of this lemma, we can define a cohomology ring associated to the differential b as
H^n(V; C) := Z^n(V; C)/B^n(V; C) = ker(b: Ω^n(V) → Ω^{n+1}(V)) / im(b: Ω^{n−1}(V) → Ω^n(V)).    (2.4.11)
Now that we have introduced the necessary concepts from cohomology theory into our framework, we will, as in the case of our example from ordinary algebra, be able to find an elegant and compact formulation of the problem to find a 1-parameter family of perturbations C(x_1, x_2; λ) such that our associativity condition, eq. 2.4.7, continues to hold to all orders in λ. Introducing the grading of the 2-point OPE coefficients with respect to the perturbation order by

C(x_1, x_2; λ) = Σ_{i≥0} λ^i C_i(x_1, x_2),    (2.4.12)

we note that the first order associativity condition

C_1(x_1, x_3)(id ⊗ C_0(x_2, x_3)) + C_0(x_1, x_3)(id ⊗ C_1(x_2, x_3)) = C_1(x_2, x_3)(C_0(x_1, x_2) ⊗ id) + C_0(x_2, x_3)(C_1(x_1, x_2) ⊗ id),    (2.4.13)

which is valid for (x_1, x_2, x_3) ∈ F_3, can equivalently be stated as

b C_1 = 0.    (2.4.14)
Here and in the following b is defined in terms of the unperturbed OPE coefficient C_0. We conclude that C_1 has to be an element of Z²(V; C_0). Let us consider coefficients C(x_1, x_2) and C(x_1, x_2; λ) connected by a λ-dependent field redefinition z(λ): V → V in the sense of definition 2.1.
To first order, this implies

C_1(x_1, x_2) = C_0(x_1, x_2)(z_1 ⊗ id + id ⊗ z_1) − z_1 ∘ C_0(x_1, x_2),    (2.4.15)

which is equivalent to b z_1 = C_1, where again z_1 := (d/dλ) z(λ)|_{λ=0}. Thus, again in analogy to our example from the beginning of this section, the first order deformations of C_0 modulo the trivial ones are given by the classes in H²(V; C_0). In order to generalize this result to arbitrary order in λ, we assume all perturbations up to order i − 1 to exist and state the associativity condition for the i-th order perturbation as the following condition for (x_1, x_2, x_3) ∈ F_3:

C_i(x_1, x_3)(id ⊗ C_0(x_2, x_3)) + C_0(x_1, x_3)(id ⊗ C_i(x_2, x_3)) − C_i(x_2, x_3)(C_0(x_1, x_2) ⊗ id) − C_0(x_2, x_3)(C_i(x_1, x_2) ⊗ id) = ω_i(x_1, x_2, x_3),    (2.4.16)

where ω_i ∈ Ω³(V) is defined by

ω_i(x_1, x_2, x_3) := Σ_{j=1}^{i−1} [ C_j(x_2, x_3)(C_{i−j}(x_1, x_2) ⊗ id) − C_j(x_1, x_3)(id ⊗ C_{i−j}(x_2, x_3)) ].    (2.4.17)

At this stage we again encounter infinite sums when a basis of V is introduced into the above equation. We assume these sums to converge on F_3 as well. Then eq. 2.4.16 can also be put into the elegant form

b C_i = ω_i.    (2.4.18)
A solution to this equation defines the i-th order perturbation. It is obvious that a necessary condition on such a solution is b ω_i = 0, or in other words ω_i ∈ Z³(V; C_0). In [1] it has been shown by the following lemma that this is indeed the case.
Lemma 2
If ω_i is in the domain of b, and if b C_j = ω_j for all j < i, then b ω_i = 0.
Again, we do not repeat the proof of this lemma here, but refer the reader to [1]. Now if a solution to eq. 2.4.18 exists, i.e. if ω_i ∈ B³(V; C_0), then any other solution will differ from this one by a solution to the corresponding homogeneous equation. A trivial solution to the homogeneous equation of the form b z_i again corresponds to an i-th order field redefinition and is not counted as a genuine perturbation. We conclude with a summary of our findings in this section: We have found that the perturbation series can be continued at i-th order if [ω_i] is the trivial class in H³(V; C_0), which is a potential obstruction. In the case where this imposes no obstruction, the space of non-trivial i-th order perturbations is given by H²(V; C_0). In particular, if we knew H²(V; C_0) = 0 and H³(V; C_0) = 0, then perturbations could be defined to arbitrary order in λ.
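In the finite-dimensional analogue this count can be carried out explicitly. The sketch below (an illustration, not from the source) computes dim HH² for the 2×2 matrix algebra by plain linear algebra; the result is 0, reflecting the well-known rigidity of matrix algebras, so in this toy case there are neither non-trivial perturbations nor obstructions.

```python
import numpy as np
from itertools import product

# dim HH^2 = dim ker(d_2) - dim im(d_1) for the 2x2 matrix algebra.
dim = 4
basis = []
for i, j in product(range(2), range(2)):
    E = np.zeros((2, 2)); E[i, j] = 1.0
    basis.append(E)
m = np.zeros((dim, dim, dim))
for a, b, c in product(range(dim), repeat=3):
    m[c, a, b] = np.sum((basis[a] @ basis[b]) * basis[c])

def d(psi):  # Hochschild differential, as in the previous listing
    n = psi.ndim - 1
    out = np.zeros((dim,) * (n + 2))
    for idx in np.ndindex(out.shape):
        c, args = idx[0], idx[1:]
        val = sum(m[c, args[0], e] * psi[(e,) + args[1:]] for e in range(dim))
        for i in range(n):
            val += (-1) ** (i + 1) * sum(
                m[e, args[i], args[i + 1]] * psi[(c,) + args[:i] + (e,) + args[i + 2:]]
                for e in range(dim))
        val += (-1) ** (n + 1) * sum(psi[(e,) + args[:-1]] * m[c, e, args[-1]] for e in range(dim))
        out[idx] = val
    return out

def d_as_matrix(n):
    """Matrix of d on n-cochains, columns indexed by the standard cochain basis."""
    cols = []
    for idx in np.ndindex((dim,) * (n + 1)):
        e = np.zeros((dim,) * (n + 1)); e[idx] = 1.0
        cols.append(d(e).ravel())
    return np.array(cols).T

d1, d2 = d_as_matrix(1), d_as_matrix(2)
dim_HH2 = (d2.shape[1] - np.linalg.matrix_rank(d2)) - np.linalg.matrix_rank(d1)
print("dim HH^2 =", dim_HH2)   # 0: the matrix algebra admits no non-trivial deformations
```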
The fundamental left representation
In the previous sections we have often used examples from ordinary algebra in order to motivate concepts in our framework. In the following we will introduce another such parallel, namely a construction in our framework which has some features in common with a representation of an algebra.
In order to motivate certain aspects of our approach, we sometimes wrote formal relations like

φ_a(x_1) φ_b(x_2) ≈ C^c_{ab}(x_1, x_2) φ_c(x_2),    (2.5.1)

where summation over repeated indices is understood. It is, however, important to note that these relations were only heuristic in the sense that none of our required properties of the OPE coefficients relied on the existence or properties of the hypothetical operators φ_a, which merely served as "dummy variables". Our viewpoint here is again similar to the standard viewpoint taken in algebra, where an abstract algebra A is entirely defined in terms of its product, i.e. a linear map m: A ⊗ A → A. But, as in our case, the algebra elements need not be represented a priori by linear operators on a vector space. Instead one is free to choose any representation, i.e. a linear map π: A → End(H), which preserves the product structure, π[m(A, B)] = π(A)π(B) for all A, B ∈ A. By this line of thought it seems natural to look for a construction similar to a representation in our context. As we will argue in the following, it is indeed possible to find a "canonical" construction of this kind, which will be called the fundamental left representation. This leads us to the following definition: We define "vertex operators" Y(|v⟩, x): V → V, also referred to as "left representatives" in subsequent chapters, by the formula

Y(|v⟩, x) |w⟩ := C(x, 0)(|v⟩ ⊗ |w⟩)    (2.5.2)

for any two vectors |v⟩, |w⟩ ∈ V and x ≠ 0. Choosing a basis {|v_a⟩} of our vector space V, the matrix components of this map are then given by

Y(|v_a⟩, x)^c{}_b = C^c_{ab}(x, 0).    (2.5.3)

This notion will be referred to as the fundamental left (or vertex algebra) representation.
Note that axiom 8 on the OPE coefficients implies
Y(∂_μ |v⟩, x) = ∂_μ Y(|v⟩, x),
where on the right side ∂_μ = ∂_{x^μ} denotes the usual partial derivative with respect to x^μ. Further, by the associativity condition on the OPE coefficients, eq. 2.4.7, one can deduce eq. 2.5.5,
Y(|v_a⟩, x) Y(|v_b⟩, y) = Σ_c C^c_{ab}(x, y) Y(|v_c⟩, y),
which implies that the linear operators Y(|v_a⟩, x) satisfy the operator product expansion. Thus, we may formally view them as forming a "representation" of the heuristic field operators, i.e. formally "π[ϕ_a(x)] = Y(|v_a⟩, x)" is a "representation" of the algebra defined by the OPE coefficients. Note also that eq. 2.5.5 may equivalently be written as
Y(|v⟩, x) Y(|w⟩, y) = Y(Y(|v⟩, x − y)|w⟩, y)
on the same domain as above. This is a standard identity in the theory of vertex operator algebras [4,5,28,29]. The relation between the approach to quantum field theory as outlined in this chapter and the mathematical theory of vertex operator algebras will be pursued further in [2].
The model
Whereas chapter 2 was concerned with the definition and general features of our new approach, we will in this chapter give an explicit construction of a model theory within this framework. Our model is a massless, scalar Lagrangian theory on 3-dimensional Euclidean space, and the construction will be up to low orders in perturbation theory. We will proceed as follows: Our starting point will be the construction of a scalar, massless, non-interacting quantum field theory in arbitrary dimension D ≥ 3, which is presented in section 3.1. In this context, we will make use of the fundamental left representation (see section 2.5) in order to introduce a formulation of the theory in terms of the familiar concept of creation and annihilation operators on a Fock space. This convenient formulation was developed in joint work by Hollands and Olbermann [1]. Then, in section 3.2, an algorithm for the iterative construction of perturbations of an interacting Lagrangian quantum field theory is developed (also first presented in [1]). Following this scheme, low order calculations for a specific 3-dimensional toy model theory are carried out in section 3.4. Finally, in section 3.5 we give some results on OPE coefficients at arbitrary perturbation order, which follow from patterns emerging in the mentioned iterative scheme. These last two sections constitute the main results of this thesis.
The free massless field
The aim of this section is to construct a quantum field theory in the sense outlined in chapter 2 for the "simplest possible example", namely for a free, massless scalar field on D-dimensional Euclidean space, which is classically described by a Lagrangian leading to the field equation ˜□ϕ = 0 (eq. 3.1.2), with □ = δ^{μν} ∂_μ ∂_ν and where σ_D = 2π^{D/2}/Γ(D/2) is the surface area of the D-dimensional unit sphere. Here the prefactor −1/σ_D in the Lagrangian is chosen for later convenience, since it leads to the particularly simple form G_F(x) = r^{2−D} for the Green's function of the operator ˜□, where r = |x|. In our framework, construction of a corresponding quantum field theory means that we are to find the OPE coefficients C(x_1, x_2) satisfying the axioms of section 2.2 for this model. Additionally, also according to section 2.2, we need to define a vector space V as characterized in that section. We choose this vector space, assuming D ≥ 3 for convenience, according to the following definition: Let V be the unital, commutative C-module generated as a module (i.e. under addition, multiplication and scalar multiplication) by formal expressions of the form ∂_{{μ_1} · · · ∂_{μ_N}} ϕ and the unit 1, where μ_i ∈ {1, . . . , D} and curly brackets denote the totally symmetric, trace-free part. The trace-free condition has been imposed because any trace would give rise to an expression containing □ϕ, which should vanish in order to satisfy the field equation on the level of V. The next step is to find a basis of V which is most convenient for our purpose. As an intermediate step, let us first consider a basis of R^D in terms of totally symmetric, trace-free, rank-l tensors.
The number of linearly independent such tensors of rank l is denoted N(l, D), so for example N(l, 3) = 2l + 1 and N(l, 4) = (l + 1)². We denote the corresponding basis elements by (t_lm)_{μ_1...μ_l}, m ∈ {1, . . . , N(l, D)}, and for convenience require these to be orthonormal with respect to the natural hermitian inner product on (R^D)^{⊗l} induced by the Euclidean metric on R^D. Let us first define
ϕ_lm := F(l) (t_lm)_{μ_1...μ_l} ∂_{μ_1} · · · ∂_{μ_l} ϕ   (eq. 3.1.6),
where summation over repeated spacetime indices μ_i is understood and where F(l) is a normalization coefficient chosen in a way to later obtain a simple form for the OPE coefficients. Then a basis of V as a C-vector space is given by 1, together with the elements
|v_a⟩ = ∏_{l,m} (ϕ_lm)^{a_lm}   (eq. 3.1.8),
where a = {a_lm} is a multi-index of non-negative integers a_lm, only finitely many of which are non-zero. The canonical dimension of such an element shall be defined as
|a| = Σ_{l,m} a_lm (l + (D − 2)/2)   (eq. 3.1.9).
One may formally view V as a "Fock-space", where a_lm is the "occupation number" of the "mode" labeled by the quantum numbers l, m. That means, we may decompose V into subspaces of different "particle number" (the sum of all "occupation numbers"), i.e.
V = ⊕_{n=0}^∞ V_n ,
3.1.10
where V_n = ⊗^n V_1 is the "n-particle" subspace. As usual, it is also possible to define creation and annihilation operators, b†_lm and b_lm, on the Fock-space, as linear maps b†_lm : V_n → V_{n+1} and b_lm : V_n → V_{n−1}, whose action on the basis elements in our case is given by
b†_lm |v_a⟩ = √(a_lm + 1) |v_{a+e_lm}⟩ ,   b_lm |v_a⟩ = √(a_lm) |v_{a−e_lm}⟩ ,
where e_lm is the multi-index with unit entry at position l, m and zeros elsewhere. Thus, V_n is generated from the unit element 1, i.e. the "vacuum", by span(b†_{l_1 m_1} · · · b†_{l_n m_n}). These operators satisfy the canonical commutation relations
[b_lm, b†_{l'm'}] = δ_{ll'} δ_{mm'} id ,   [b_lm, b_{l'm'}] = [b†_lm, b†_{l'm'}] = 0 ,
where id is the identity operator on V.
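For readers who prefer executable definitions, the occupation-number action just stated can be realized directly on multi-indices. The following Python sketch (an illustration with ad hoc data structures, not code from [1]) represents a vector as a linear combination of multi-indices and checks the commutation relation on a sample state:

    import math
    from collections import defaultdict

    # A vector in V: linear combination of basis elements |v_a>, each labeled
    # by a multi-index a = {(l, m): a_lm} with finitely many non-zero entries.
    def create(lm, state):
        """b^dagger_lm |v_a> = sqrt(a_lm + 1) |v_{a + e_lm}>, extended linearly."""
        out = defaultdict(float)
        for a, coeff in state.items():
            d = dict(a)
            nlm = d.get(lm, 0)
            d[lm] = nlm + 1
            out[frozenset(d.items())] += coeff * math.sqrt(nlm + 1)
        return dict(out)

    def annihilate(lm, state):
        """b_lm |v_a> = sqrt(a_lm) |v_{a - e_lm}>; kills states with a_lm = 0."""
        out = defaultdict(float)
        for a, coeff in state.items():
            d = dict(a)
            nlm = d.get(lm, 0)
            if nlm == 0:
                continue
            d[lm] = nlm - 1
            if d[lm] == 0:
                del d[lm]
            out[frozenset(d.items())] += coeff * math.sqrt(nlm)
        return dict(out)

    vacuum = {frozenset(): 1.0}                 # the unit element 1
    mode = (2, 1)                               # some mode (l, m)
    two = create(mode, create(mode, vacuum))    # (b^dag)^2 |1> = sqrt(2)|a_lm = 2>
    # check [b_lm, b^dag_lm] = id on this state:
    lhs = annihilate(mode, create(mode, two))
    rhs = create(mode, annihilate(mode, two))
    keys = set(lhs) | set(rhs)
    assert all(abs(lhs.get(k, 0) - rhs.get(k, 0) - two.get(k, 0)) < 1e-12 for k in keys)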
We now want to present the explicit form of the OPE coefficients of the model in terms of the above operators. In order to further simplify the form of the coefficients, we introduce spherical harmonics in D dimensions (see [30] and appendix D) and establish an isomorphism between the totally symmetric, trace-free tensors t_lm and the mentioned spherical harmonics Y_lm by integrating against the tensors x̂^{μ_1} · · · x̂^{μ_l} over the (D − 1)-dimensional unit sphere S^{D−1}. The constant c_l appearing in this isomorphism can be determined by our requirement of orthonormality of the tensors t_lm, with the result (see eq. D.1.15)
3.1.16
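As a small consistency check on the counting used here, the dimension N(l, D) of the space of degree-l spherical harmonics (equivalently, of totally symmetric trace-free rank-l tensors) can be computed from the standard formula, which the sketch below quotes as an assumed fact rather than takes from this thesis:

    from math import comb

    def N(l, D):
        """Number of totally symmetric, trace-free rank-l tensors in D dimensions
        (= number of degree-l spherical harmonics); standard formula, assumed."""
        return (2 * l + D - 2) * comb(l + D - 3, D - 3) // (D - 2)

    assert all(N(l, 3) == 2 * l + 1 for l in range(12))        # as in the text
    assert all(N(l, 4) == (l + 1) ** 2 for l in range(12))     # as in the text
    # symmetric tensors minus their traces gives the same count:
    assert all(N(l, D) == comb(l + D - 1, D - 1) - comb(l + D - 3, D - 1)
               for l in range(12) for D in range(3, 9))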
With this notation in place, we want to proceed to the actual construction of the OPE coefficients C(x_1, x_2). For this purpose, it is sufficient to consider the left-representatives Y(|v_a⟩, x) : V → V, whose matrix elements are exactly the OPE coefficients, see section 2.5. We will start our investigation with the simplest non-trivial left-representative, Y(ϕ, x), corresponding to the basic field, which is defined in eq. 3.1.17, where r = |x|, x̂ = x/r and Y̅_lm denotes the complex conjugate of Y_lm. Notice that this equation has the familiar form of a free field operator with an "emissive" and an "absorptive" part. However, this is less surprising if one remembers that Y(ϕ, x) is in a sense the "representative" of the (formal) field operator ϕ(x) on V. We will now "derive" eq. 3.1.17 from the standard quantum field theory formalism, i.e. the vectors |v_a⟩ ∈ V are now really viewed as quantum fields in the usual sense. In order to determine Y(ϕ, x), let us consider the product of ϕ with an arbitrary element |v_a⟩ ∈ V. Using Wick's theorem, this product can be brought into the form of eq. 3.1.18: here we have simply inserted the explicit form of |v_a⟩ as defined in eq. 3.1.8, applied Wick's theorem and in the last step used the fact that the propagator in our theory is just G_F(x) = r^{2−D} as stated in eq. 3.1.3. Double dots : · : here denote the normal ordered product of the standard free quantum field theory formalism. Due to the analyticity properties of this normal ordered product, we can perform a Taylor expansion of the corresponding term in eq. 3.1.18 in x around 0, which yields
3.1.19
This rather lengthy expression can be simplified by the observation that the term x^{μ_1} · · · x^{μ_l} = x^{⊗l} may be replaced by its totally symmetric, trace-free part x^{{μ_1} · · · x^{μ_l}} due to the field equation 3.1.2. We may thus use an identity which holds as a result of eq. 3.1.15. Furthermore, we will need the relation stated in eq. 3.1.21, with t̄_lm = t_lm, which is derived in appendix D. By substitution of these relations we obtain
3.1.22
A closer inspection of this equation yields the following simplifications: Remembering the definition of ϕ_lm(x) in eq. 3.1.6, we can use this definition in the first line of equation 3.1.22. Secondly, we can just drop the contraction over the tensors t_{l'm'} in the second line, because in our above construction we chose these tensors to be orthonormal. Finally, if we insert the explicit form of the constants c_l and F(l) into the equation and perform simple algebraic manipulations, we obtain the convenient expression
3.1.24
From the above equation we can simply read off the desired OPE coefficients.
so using the ladder operators introduced above we finally obtain eq. 3.1.26, which is just eq. 3.1.17. As stated above, this is only the simplest non-trivial left-representative. The corresponding formula for a general |v_a⟩ ∈ V, as defined in eq. 3.1.8, is
3.1.27
Again double dots denote normal ordering, which in our Fock space formulation simply means that all creation operators are to the left of the annihilation operators. This formula can be derived in two ways: One can either proceed in analogy to the simple Y(ϕ, x) case presented above, or use the factorization condition, axiom 5, in order to iteratively construct Y(|v_a⟩, x) out of Y(ϕ, x). If we use the former approach, we again import information from the standard formulation of quantum field theory. We thus have to check whether the OPE coefficients found this way are compatible with our framework, i.e. we have to check whether the axioms of section 2.2 hold. This can indeed be done, where most effort again goes into the proof of the consistency condition, eq. 2.4.7. Here we will neither give this proof, nor the derivation of eq. 3.1.27 by this method. Instead we take the above mentioned alternative road to this equation. In this derivation we use the form of Y(ϕ, x) in eq. 3.1.26, and proceed in our framework, as defined in chapter 2. So at this stage we impose all our axioms, in particular the factorization axiom and therefore also the associativity condition 2.4.7, on the further OPE coefficients. In other words, our axioms will not have to be checked afterwards, but are required to hold initially. This iterative construction of Y(|v_a⟩, x) out of Y(ϕ, x) will be described in the next section, where it will serve as an instructive example of the procedure presented there. One should mention here that both approaches to the derivation of eq. 3.1.27 work equally well and give the same results. Let us conclude this section with a brief summary of our results. We have constructed the full quantum field theory, i.e. the pair (V, C), for the non-interacting model described by the field equation 3.1.2. The corresponding vector space V is defined in eq. 3.1.8 and the OPE coefficients C(x_1, x_2) can be obtained from the fundamental left representation, given in eqs. 3.1.26 and 3.1.27, as described in definition 2.2. Furthermore, by the coherence theorem, or more specifically by proposition 1, the n-point OPE coefficients C(x_1, . . . , x_n) are uniquely determined by the 2-point coefficients C(x_1, x_2).
Perturbations via non-linear field equations
Having constructed free quantum field theory (for a massless scalar field), we now want to focus on the more interesting case of a theory with interaction. In section 2.4 we have discussed perturbations in our framework as an analog of deformations of an algebra. This setting was very general, in the sense that it also holds for non-Lagrangian models. In the following, we are going to deal with the special case of theories which have a classical counterpart with a Lagrangian L = L_free + L_int, where L_free is given by eq. 3.1.1 and the interaction part L_int is proportional to λ ϕ^{k+1}.
This choice leads to an equation of motion of the form
˜□ϕ = λ ϕ^k .
3.2.3
In other words, we consider massless, scalar ϕ^{k+1}-theory with interaction parameter λ on D-dimensional Euclidean space. In this setting, the following theorem holds:
Theorem 2 (Perturbations via field equations)
An interacting quantum field theory (C, V) obeying the field equation 3.2.3 can be constructed perturbatively up to arbitrary orders in the coupling constant λ from the underlying free theory by an algorithm which relies on the successive application of the associativity condition 2.4.7 and the differential equation 3.2.3.
This algorithm will be outlined in the following.
Recall from the previous section that we defined the vector space V to be spanned by trace-free expressions of the form ∂_{{μ_1} · · · ∂_{μ_N}} ϕ. This was motivated by the fact that all expressions containing a trace would vanish in the free theory due to the field equation 3.1.2. In the present context, i.e. with a non-linear field equation of the form given above, this argument clearly does not hold anymore. Therefore we consider from now on the vector space V̂ which is spanned by the unit element and all expressions of the form ∂_{μ_1} · · · ∂_{μ_N} ϕ. Then the vertex operators of the interacting theory are maps from V̂ to End(V̂). In the remainder of this thesis we will drop the caret over V̂ again to lighten the notation, but it is always understood that from now on also expressions containing traces are allowed.
Let us choose a basis of the vector space V with elements |v_a⟩ as in the previous section, see eq. 3.1.8. Then we may transfer the field equation to the level of vertex operators by the identity 2.5.4, which yields
˜□_x Y(ϕ, x) = λ Y(ϕ^k, x) .
3.2.4
On the level of OPE coefficients², this implies eq. 3.2.5. As described in section 2.4, perturbations in our framework imply a grading of the OPE coefficients C as in eq. 2.4.12. In terms of the basis elements of the OPE coefficients, this grading takes the form
C^c_{ab}(x_1, x_2) = Σ_{i≥0} λ^i (C_i)^c_{ab}(x_1, x_2) ,   (3.2.6)
or equivalently for the left representatives
Y(|v_a⟩, x) = Σ_{i≥0} λ^i Y_i(|v_a⟩, x) .
3.2.7
Substitution of this grading into eq. 3.2.5 and comparison of the outermost left and right expressions in this equation yields an infinite number of relations (eq. 3.2.8)
at any order i > 0 in λ. The perturbation orders of the coefficients in this equation differ by one, as a result of the appearance of the parameter λ on the right of eq. 3.2.5. It is this fact which makes eq. 3.2.8 so powerful. At a first glance, it almost seems as if these relations were already enough to establish an iterative pattern which allows for the construction of the perturbed OPE coefficients up to arbitrary order, starting from the free theory. However, it is easy to see that we quickly run into problems. Let us briefly go through the procedure until these obstacles appear: The zeroth-order coefficients are known from section 3.1. Then we solve the differential equation 3.2.8, obtaining the coefficients (C_1)^b_{ϕa}. Obviously, we would now like to apply this equation again and proceed to second order. For this purpose, however, we would need the coefficients (C_1)^b_{ϕ^k a}, which a priori we know nothing about. So already at first order our iteration seems to break down. At this stage we introduce the second ingredient into the construction, namely we assume that the associativity condition, eq. 2.4.7, holds. Suppose for the moment that all perturbations are known to order i − 1. Then, according to eq. 2.4.16, the associativity condition on the i-th order perturbation can be written as eq. 3.2.9 on the domain F_3 = {r_12 < r_23 < r_13}, see eq. 2.4.8. Let us consider the special case |v_a⟩ = ϕ = |v_b⟩:
Σ_{j=0}^{i} Σ_e (C_j)^e_{ϕϕ}(x_1, x_2) (C_{i−j})^b_{ea}(x_2, x_3) = Σ_{j=0}^{i} Σ_e (C_j)^e_{ϕa}(x_2, x_3) (C_{i−j})^b_{ϕe}(x_1, x_3) .
3.2.10
² It follows from the Euclidean invariance axiom that C^c_{ab}(x_1, x_2) depends on its arguments only through x_1 − x_2.
Next, we are interested in the limit x_1 → x_2. Clearly, the coefficient (C_j)^e_{ϕϕ}(x_1, x_2) on the left side of the above equation will be most dramatically affected by this procedure. By the scaling dimension condition, axiom 6, we have for this coefficient
3.2.11
where we also used the fact that |ϕ| = (D − 2)/2 from eq. 3.1.9. As all coefficients (C_j)^e_{ϕϕ}(x_1, x_2) with negative scaling degree will vanish in the limit, only a few terms, namely those with |e| ≤ |ϕ²|, will contribute to the sum on the left side of eq. 3.2.10.
Let us consider the case j = 0 in the sum on the left side of eq. 3.2.10. With the results of the previous section it can easily be derived that only one term in the sum over e will give a contribution in this case. This can be seen as follows: The condition |e| ≤ |ϕ²| restricts e to be either 1, ϕ_lm with l ≤ (D − 2)/2, or ϕ². However, orthonormality of our basis and the form of the free theory left representative Y_0(ϕ, x) imply that the OPE coefficients (C_0)^{ϕ_lm}_{ϕϕ} = ⟨ϕ_lm|Y_0(ϕ, x)|ϕ⟩ vanish for any value of l, since a single ladder operator does not suffice to transform the vector |ϕ⟩ into |ϕ_lm⟩. Now let us discuss the choice e = 1, i.e. we consider the term (C_0)^1_{ϕϕ}(x_1, x_2)(C_i)^b_{1a}(x_2, x_3). Here the second coefficient vanishes for i > 0, because the requirement for an identity element, axiom 4, necessarily requires (C_i)^b_{1a} = δ_{i0} δ^b_a (eq. 3.2.12). Thus, for j = 0 and i > 0 the only contribution to the sum over e on the left side of eq. 3.2.10 comes from e = ϕ². Applying the results of the previous section one finds (C_0)^{ϕ²}_{ϕϕ}(x_1, x_2) = 1. Isolating the corresponding term (C_i)^b_{ϕ²a}(x_2, x_3) and moving all other terms to the right side of the equality sign in eq. 3.2.10, we arrive at the equation
3.2.13
Now suppose we already know (C_i)^b_{ϕa}(x_1, x_2) for all |v_a⟩, |v_b⟩ ∈ V in addition to all the lower order coefficients. Then all expressions appearing on the right side of the above equation are known. That means, if we have all coefficients up to (C_i)^b_{ϕa}(x_1, x_2), then we can uniquely determine (C_i)^b_{ϕ²a}(x_1, x_2) by this equation, which is just the kind of identity we were looking for. Before we go on with our iterative procedure, let us first view the above equation from a different perspective. Remember, e.g. from eq. 2.1.7, that the first sum just gives the 3-point OPE coefficient. Thus, the relation may equivalently be written as
3.2.14
for r_23 < r_13. This suggests the following interpretation: Naively, one might try to obtain the desired coefficient (C_i)^d_{ϕ²c}(x_2, x_3) by just letting two points of the above three point coefficient approach each other³. Similarly, one might try to obtain ϕ²(x_2) as the coincidence limit of the product ϕ(x_1)ϕ(x_2). It is a well known feature of quantum field theory that this naive limit does not exist, as one is dealing with distributional objects (operator valued distributions in the standard formulation, distributional OPE coefficients in our framework). So in order to make sense of expressions like ϕ²(x_2) or (C_i)^d_{ϕ²c}(x_2, x_3), one has to subtract counterterms before performing the limit in order to obtain well defined objects. That is precisely the meaning of the sum on the right side of equation 3.2.14. So in a sense, this equation may be viewed as our analogue of renormalization, where the counterterms are represented by the finite sum that is subtracted. Note, however, that this identity is an intrinsic feature of our theory resulting from the associativity condition. Therefore we do not have to apply any kind of external renormalization, as the framework is inherently finite at any order.
³ Here and in the following the limit is understood as lim_{|x_1| ↗ |x_2|}, i.e. |x_1| approaches |x_2| from below.
Let us conclude this interpretational interlude and come back to our iteration procedure. Remember that we have just found a way to express the coefficients of the form (C_i)^b_{ϕ²a}(x_1, x_2) in terms of the coefficients (C_i)^b_{ϕa}(x_1, x_2) and lower order coefficients. This procedure can be repeated. That means we start at eq. 3.2.9 again, but this time we choose a = ϕ² and b = ϕ. Following the steps that led to eq. 3.2.13, we again take the limit x_1 → x_2. We will encounter one summand of the form (C_0)^{ϕ³}_{ϕ²ϕ}(x_1, x_2)(C_i)^d_{ϕ³c}(x_2, x_3). The first factor here is again just 1, so we isolate
3.2.15
Again, all coefficients on the right side are known. Clearly, this scheme can be continued iteratively. Assume we know all coefficients up to those of the type (C_i)^d_{ϕ^{k−1}c}(x_2, x_3); then we can uniquely construct the coefficients (C_i)^d_{ϕ^k c}(x_2, x_3) via
3.2.16
This procedure solves the problem we encountered in the discussion below eq. 3.2.8 in our approach towards a construction of the perturbed OPE coefficients just using the field equation.
There we got stuck already at first perturbation order, as we could not relate the coefficients (C_1)^b_{ϕa} and (C_1)^b_{ϕ^k a}. This relation can now be achieved by k − 1 iterations of the above equation. Let us briefly sum up the algorithm we just found: The starting point of the iteration is the free quantum field theory (V, C_0) as described in section 3.1. Next we use the non-linear field equation, more precisely we solve the differential equation 3.2.8, in order to construct the first order coefficients of the form (C_1)^b_{ϕa}. Then we repeatedly apply eq. 3.2.16, which follows from the associativity condition, obtaining the coefficients (C_1)^b_{ϕ^k a}, which allow for the construction of the second order coefficients (C_2)^b_{ϕa} via the differential equation 3.2.8 again. Exploiting the associativity condition and the field equation once more yields the third order coefficients, and so on. The procedure is summarized schematically in the following diagram:
3.2.17
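The alternation between the two steps in this diagram can also be phrased as a small driver loop. The following Python sketch is purely schematic: the two computational steps enter as callables with hypothetical names (solve_laplace for eq. 3.2.8, associativity_step for eq. 3.2.16), and no claim is made about their internal representation.

    def construct_perturbations(order_max, k, C0, solve_laplace, associativity_step):
        """Schematic driver for the iteration of this section (names hypothetical).

        C0                 -- free-theory data, C0[p] ~ (C_0)^b_{phi^p a}
        solve_laplace      -- inverts the differential equation 3.2.8: from
                              (C_{i-1})^b_{phi^k a} produce (C_i)^b_{phi a}
        associativity_step -- one application of eq. 3.2.16: from (C_i)^b_{phi^p a}
                              and all lower-order data produce (C_i)^b_{phi^{p+1} a}
        """
        C = {0: dict(C0)}                        # C[i][p] ~ (C_i)^b_{phi^p a}
        for i in range(1, order_max + 1):
            C[i] = {1: solve_laplace(C[i - 1][k])}    # field equation step
            for p in range(1, k):                     # climb phi -> ... -> phi^k
                C[i][p + 1] = associativity_step(C, i, p)
        return C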
From the standard formulation of quantum field theory we have learned the lesson that the construction of higher order perturbations should be expected to run into serious calculational difficulties already at rather low orders. These difficulties appear in the calculation of Feynman integrals containing loops, which has become an independent branch of theoretical physics over the last decades. Therefore, we should also expect serious calculational effort to go into the explicit construction of perturbed OPE coefficients in our framework. Where do we encounter these difficulties? Reviewing our algorithm, we find that it essentially consists of two calculational steps: Solving the differential equation 3.2.8 and performing the sums and the limit in eq. 3.2.16. The former does not cause any trouble, as all 2-point coefficients can be expressed as a linear combination of terms of the form c · Y_lm(x̂) r^a (log r)^b for some constant c ∈ C and parameters a, b, l, m ∈ N. As we will see below, it is not very difficult to invert the Laplace operator on such an expression (see appendix E). It turns out that most calculational work has to be put into eq. 3.2.16, especially into the first sum over e. This is an infinite sum over all basis elements of our vector space V, i.e. a sum, or rather a multiple sum, over the multi-index e. Before performing the limit in eq. 3.2.16, one has to put these sums into a form that makes it possible to control the cancellation of infinities with the counterterms. So we have in a sense traded the problematic higher loop Feynman integrals for multiple infinite sums. It is not clear a priori which way is more convenient for explicit calculations, and it is one aim of this thesis to give some first impressions of the effort that goes into the explicit construction of perturbations of a quantum field theory in the iterative scheme outlined above.
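To make the claim about inverting the Laplace operator concrete: on a term c · Y_lm(x̂) r^s (log r)^j one has, by an elementary computation in polar coordinates, □[r^s (log r)^j Y_lm] = A r^{s−2}(log r)^j + jB r^{s−2}(log r)^{j−1} + j(j−1) r^{s−2}(log r)^{j−2} with A = s(s + D − 2) − l(l + D − 2) and B = 2s + D − 2. The triangular structure in the log power makes a right inverse easy to compute. The following Python sketch mirrors the idea of appendix E without reproducing it; the resonant case A = 0, where an extra power of log r appears and a harmonic-polynomial ambiguity is left free, is handled explicitly.

    from fractions import Fraction

    D = 3   # dimension; any D >= 3 works in this sketch

    def laplace(term):
        """Laplacian of c * r^s (log r)^j Y_l(xhat), as a list of such terms."""
        c, s, j, l = term
        A, B = s * (s + D - 2) - l * (l + D - 2), 2 * s + D - 2
        out = [(c * A, s - 2, j, l), (c * j * B, s - 2, j - 1, l),
               (c * j * (j - 1), s - 2, j - 2, l)]
        return [t for t in out if t[0] != 0 and t[2] >= 0]

    def inv_laplace(term):
        """A right inverse: terms g with laplace(g) summing to the given term.
        For A = 0 (resonance) the log power increases by one; the coefficient
        of the log-free harmonic piece is the free ambiguity and is set to 0."""
        c, a, b, l = term
        s = a + 2
        A, B = s * (s + D - 2) - l * (l + D - 2), 2 * s + D - 2
        if A != 0:
            coef = [Fraction(0)] * (b + 1)
            for j in range(b, -1, -1):       # solve the triangular system downwards
                rhs = Fraction(c) if j == b else Fraction(0)
                if j + 1 <= b:
                    rhs -= (j + 1) * B * coef[j + 1]
                if j + 2 <= b:
                    rhs -= (j + 2) * (j + 1) * coef[j + 2]
                coef[j] = rhs / A
        else:                                 # B != 0 here for D >= 3 and l >= 0
            coef = [Fraction(0)] * (b + 2)
            for j in range(b, -1, -1):
                rhs = Fraction(c) if j == b else Fraction(0)
                if j + 2 <= b + 1:
                    rhs -= (j + 2) * (j + 1) * coef[j + 2]
                coef[j + 1] = rhs / ((j + 1) * B)
        return [(q, s, j, l) for j, q in enumerate(coef) if q != 0]

    def check(term):
        total = {}
        for g in inv_laplace(term):
            for c2, s2, j2, l2 in laplace(g):
                total[(s2, j2, l2)] = total.get((s2, j2, l2), Fraction(0)) + c2
        assert ({k: v for k, v in total.items() if v != 0}
                == {(term[1], term[2], term[3]): Fraction(term[0])})

    check((1, -4, 0, 0))   # plain power, non-resonant
    check((1, -4, 1, 0))   # with a logarithm
    check((1, -2, 0, 0))   # resonant: the inverse produces log r (D = 3)
    check((1, 0, 0, 2))    # resonant: source r^0 Y_2 in D = 3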
Ambiguities
In the usual approaches to renormalization, certain ambiguities are typically present. It is therefore natural to ask "how unique" the results obtained from our method are. As mentioned above, our algorithm consists of two computational steps: Solving the differential equation 3.2.8 and applying the consistency condition (eq. 3.2.16). The latter uniquely determines the coefficients (C_i)^b_{ϕ^k a} from the coefficients (C_i)^b_{ϕ^{k−1} a}, so possible ambiguities can only arise in the differential equation. In appendix E we present a special solution to this equation, which is used throughout this thesis, and the freedom in the choice of this solution is also briefly discussed. To sum up, we may add to any solution Y_i(ϕ, x) of the field equation an End(V)-valued harmonic polynomial in x, i.e. a sum of terms of the form K_J r^J Y_{JM}(x̂) with K_J ∈ End(V). Our choice of left representatives Y_i(ϕ, x) is further restricted by the axioms of our framework (see section 2.2). Most importantly, the scaling dimension constraint, axiom 6, restricts Y_i(ϕ, x) to be of the general form
3.2.21
where | · | denotes the canonical dimension of the vectors in V as defined in eq. 3.1.9. The fact that in the above condition only an inequality is required to hold means that we may add contributions of lower scaling dimension to any OPE coefficient. Thus, the grading of the OPE coefficients by dimension, which held in our construction of the free theory in the previous chapter, is replaced by a filtration at higher perturbation orders. The convention used for the solution of the differential equation within this thesis is justified by the following proposition:

Proposition 2
If the OPE coefficients at order i − 1 satisfy the axioms of section 2.2, then the solution Y_i(ϕ, x) obtained with the help of □^{−1} satisfies these axioms as well (except possibly the factorization axiom). Further, if the OPE coefficients at order i − 1 are graded by dimension (i.e. if the equality holds in the scaling dimension axiom), then this grading is preserved by the solution obtained with the help of □^{−1}.
Remark: The factorization axiom cannot be checked with the knowledge of Y_i(ϕ, x) alone, see eq. 2.5.6. Recall from the previous discussion that we assume this property to hold, which allows us to determine the other i-th order left representatives by the algorithm outlined above.
Proof: Axioms 1, 3, 4 and 7 are satisfied trivially, simply because they were also satisfied by the zeroth order coefficients and since the operator □^{−1} does not change any relevant properties for these axioms. Similarly, the zeroth order coefficients satisfy the Euclidean invariance property. Since □^{−1} acts rotationally invariantly (i.e. it does not change the spherical harmonics Y_JM), our choice is also consistent with this axiom. Further, the constraint implied by axiom 8 is precisely the field equation as a differential equation on R^D. Since □^{−1} yields a solution to that equation, it also satisfies this axiom. It remains to discuss the scaling axiom and the claim that the grading by dimension is conserved. Assume the axiom and the grading to hold at order i − 1. Then sd[(C_{i−1})^b_{ϕ^k a}] = |a| + |ϕ^k| − |b| holds. Using our solution to the field equation, which increases the scaling degree by 2, eq. 3.2.22 then follows. Now we claim that in a renormalizable theory |ϕ^k| − |ϕ| = 2 always holds. This can be deduced from the field equation as follows: Comparing the dimensions of both sides of the equation using eq. 3.1.9, we can solve for the dimension⁴ of the coupling constant λ.
By definition, a theory is renormalizable if |λ| = 0, super-renormalizable if |λ| < 0 and non-renormalizable if |λ| > 0. Since we are interested only in the first case, the relation does indeed hold. Using this relation in eq. 3.2.22 we find the desired equation, confirming axiom 6 and preserving the grading. Since both the axiom and the grading hold in the free theory, they hold to all perturbation orders using our solution.
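The dimension count behind this classification is short enough to script. Assuming the field equation □ϕ = λϕ^k and |ϕ| = (D − 2)/2 from eq. 3.1.9 (with derivatives raising the dimension by 1), matching the two sides gives |λ| = |ϕ| + 2 − k|ϕ|:

    from fractions import Fraction

    def coupling_dimension(D, k):
        """|lambda| from Box(phi) = lambda phi^k: the left side has dimension
        |phi| + 2, the right side |lambda| + k |phi|, with |phi| = (D-2)/2."""
        phi = Fraction(D - 2, 2)
        return phi + 2 - k * phi

    assert coupling_dimension(3, 5) == 0   # phi^6 theory in D = 3: dimensionless
    assert coupling_dimension(4, 3) == 0   # phi^4 in D = 4
    assert coupling_dimension(6, 2) == 0   # phi^3 in D = 6
    # generally |lambda| = 0 exactly for k = (D + 2)/(D - 2)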
Since the OPE is a short distance expansion, the following theorem should be of interest:

Theorem 3 (Ambiguities at short distances)
Let (C_{i−1})^b_{ϕ^k a} ≠ 0. Then the most rapidly divergent part of the OPE coefficient (C_i)^b_{ϕa}(x) for x → 0 is free of the ambiguities discussed above.

Proof: In order to find the part of an OPE coefficient with the strongest divergence, we first extract the contribution of maximal scaling degree. From the scaling axiom we know that this is the contribution proportional to r^{|c|−|a|−|b|} for a coefficient (C_i)^c_{ab}. Let us denote the projection on the contribution of scaling degree d by Sc_d; the projections Sc_d(C_i)^c_{ab} then vanish for all d exceeding this maximal scaling degree. As the OPE coefficients may also contain powers of logarithms, it is not guaranteed that all terms in Sc_{|a|+|b|−|c|}(C_i)^c_{ab}(x) have the same divergent behavior for x → 0. In order to find the dominating contribution to Sc_{|a|+|b|−|c|}(C_i)^c_{ab}(x) at short distances, we have to pick out the terms including the highest powers of log r.
⁴ Recall that the derivations ∂_μ on V increase the dimension by 1.
Now the claim is that this contribution containing the highest power of log r among the terms of maximal scaling degree is uniquely determined by □^{−1}(C_{i−1})^b_{ϕ^k a}(x). To finish the proof, we show in the following that □^{−1}(C_{i−1})^b_{ϕ^k a}(x) always contains higher powers of log r than any possible ambiguity. Recall from the discussion at the beginning of this section that the ambiguities have to be in the kernel of □ and in addition have to be compatible with the axioms of section 2.2. The elements in the kernel of the Laplacian are the harmonic polynomials, hence the ambiguities have to be of the harmonic polynomial form given above. Since we only consider ambiguities of maximal scaling degree, we are only interested in J = |b| − |ϕ| − |a|. We distinguish two cases: If a harmonic polynomial of this degree is incompatible with the Euclidean invariance axiom, then no terms of maximal scaling degree may be added to □^{−1}(C_{i−1})^b_{ϕ^k a}(x) and we are done. If, on the other hand, such a polynomial itself also satisfies the Euclidean invariance axiom, we argue as follows: It follows from the definition of □^{−1} in eq. E.7 that any contribution to □^{−1}(C_{i−1})^b_{ϕ^k a}(x) proportional to a harmonic polynomial includes at least one power of log r. Since the ambiguities contain no logarithms, they diverge less rapidly.
Remark: The theorem does not imply that □^{−1} determines the short distance behavior of the complete OPE, since we excluded coefficients (C_i)^b_{ϕa} with (C_{i−1})^b_{ϕ^k a} = 0. It has been mentioned a few times that our construction does not rely on any renormalization prescription. Nevertheless the ambiguities in our approach share some intriguing similarities with the well known renormalization ambiguities of standard quantum field theory.
• Renormalized composite quantum fields are objects filtered by scaling dimension (as opposed to graded). Similarly, our OPE coefficients are filtered by scaling dimension in the interacting case.
• One may change the definition of □^{−1} by introducing a complex parameter µ in every logarithmic contribution, i.e. log r → log(µr). This solution also satisfies proposition 2.
The free parameter µ is reminiscent of the choice of renormalization scale in the standard formulation of quantum field theory (see also section 3.6).
A deeper understanding of the ambiguities and their relation to renormalization theory may be a topic for future research (see chapter 4).
Construction of Y(|v_a⟩, x)
As already promised in the previous section, we conclude with the construction of the general left-representative Y(|v_a⟩, x) in the free theory, i.e. we give a proof of eq. 3.1.27 (in this section we should actually write Y_0(|v_a⟩, x), indicating zeroth perturbation order, but for convenience we will not do this in the following discussion). Our starting point is the left representative Y(ϕ, x), see eq. 3.1.26, which has been "derived" from standard quantum field theory. Remembering the identity [Y(|v_a⟩, x)]^c_b = C^c_{ab}, it is clear that the iterative scheme of this section, in particular eq. 3.2.16, is exactly the needed tool for our purpose, as it allows for the construction of Y(ϕ^k, x) starting from Y(ϕ, x). It is then possible, by taking the appropriate derivatives and multiplying by the right constants, to determine Y(|v_a⟩, x) from Y(ϕ^k, x). Thus, the following calculation is also good practice for later applications of the consistency condition, eq. 3.2.16. Let us begin with the first application of this equation:
3.2.30
This is just eq. 3.2.13 for i = 0 with an additional counterterm due to eq. 3.2.12. We can rewrite this equation as
3.2.31
By the definition of Y(ϕ, x) in eq. 3.1.26, we can write this product as
3.2.32
where r_x = |x| and r_y = |y|, respectively. Let us focus on the following partial sum
3.2.33
This sum is of particular interest, because it is the only partial sum of eq. 3.2.32 with infinitely many non-vanishing contributions after taking matrix elements ⟨v_b| · |v_a⟩. This is due to the fact that the successive application of a creation and an annihilation operator with the same indices on some vector |v_a⟩ ∈ V gives, according to the definitions 3.1.11 and 3.1.12, just a prefactor of (a_lm + 1), where a_lm is the "occupation number" of the respective mode. It is easy to convince oneself that all other contributions to Y(ϕ, x)Y(ϕ, y) have finite matrix elements. First, consider the partial sum of eq. 3.2.32 with two annihilation operators. Again, according to eq. 3.1.12, the action of an annihilation operator b_lm on a vector |v_a⟩ gives a prefactor of (a_lm)^{1/2} and reduces the index a_lm by 1. Recall that in the definition of our basis of V we demanded the multi-indices a of the basis elements |v_a⟩ to contain only a finite number of non-zero entries. Hence, the aforementioned prefactor resulting from the application of an annihilation operator makes sure that a sum of the type Σ_{l,m} b_lm |v_a⟩ is always finite. So if we sum over the product of two annihilation operators, both of these sums will be finite. Concerning the case of two creation operators, we can deduce from orthogonality of the basis elements that ⟨v_b|b†_lm|v_a⟩ = 0 for a + e_lm ≠ b, so the sum Σ_{l,m} ⟨v_b|b†_lm|v_a⟩ has at most one non-zero summand and is thus finite. Finally, let us come to the partial sums with a pair of one annihilation and one creation operator. If the former stands to the right of the latter, we can simply use the same argumentation as in the case of two annihilation operators and find that also here no infinite sums appear. This only leaves the case where the creation operator acts before the annihilator. However, if the indices on these two operators do not coincide, it follows from the commutation relations 3.1.14 that we may simply exchange their order, which leads us to the case that we just argued to be finite. Hence, the only possibility for infinities to appear in an arbitrary matrix element of eq. 3.2.32 is the matrix element ⟨v_a| · |v_a⟩ of the partial sum in eq. 3.2.33.
We can further simplify the analysis of this infinite sum by exploiting the commutation relation of the ladder operators also in this case, which suggests that in addition to the term where the order of the two operators is switched we pick up a term with the identity operator replacing the pair of ladder operators, i.e.
3.2.34
Then, the first term is finite again, since now the annihilation operator acts first, and the remaining infinite sum has the form of eq. 3.2.33 with the ladder operators replaced by the identity. We now want to find a closed form expression for this sum. Using the addition theorem for the D-dimensional spherical harmonics (see appendix D.1),
Σ_m Y_lm(x̂) Y̅_lm(ŷ) = (N(l, D)/σ_D) P_l(D, x̂ · ŷ),
where P_l(D, t) is the D-dimensional Legendre polynomial and N(l, D) and σ_D are defined as in section 3.1, we can perform the sum over m and arrive at the formula
3.2.36
In view of eq. 3.2.31, we are interested in the limit y → x in this expression, so for convenience we may choose x and y collinear, i.e. x̂ = ŷ. As the Legendre polynomials are normalized, P_l(D, 1) = 1, the equation further simplifies to
3.2.37
Here C^ν_l(x) are the Gegenbauer polynomials (see e.g. [31] or [32]) and should not be confused with an OPE coefficient. It is useful to write the equation in this form, because the generating function of the Gegenbauer polynomials is well known:
Σ_{l=0}^∞ C^ν_l(t) s^l = (1 − 2ts + s²)^{−ν} ,   |s| < 1 .
3.2.38
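This identity is easily checked numerically; the following snippet (using scipy, as an aside) also makes the divergence at s → 1 explicit, which is the source of the singular behavior found below:

    import numpy as np
    from scipy.special import eval_gegenbauer

    nu, t, s = 0.5, 0.3, 0.4    # nu = (D - 2)/2 with D = 3; requires |s| < 1
    series = sum(eval_gegenbauer(l, nu, t) * s ** l for l in range(200))
    closed = (1 - 2 * t * s + s ** 2) ** (-nu)
    assert np.isclose(series, closed)
    # collinear case t = 1: (1 - 2s + s^2)^(-nu) = (1 - s)^(-2 nu) = (1 - s)^(2 - D),
    # which blows up as s -> 1, i.e. as y -> x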
Since we know that r_x > r_y in our case, we can apply this identity to eq. 3.2.37 with the result
3.2.39
so this partial sum of eq. 3.2.32 is in fact divergent in the limit y → x. Hence, there should either be another divergent part of this sum with opposite sign, or the "counterterm" has to cancel the divergence. Our formula for Y(ϕ, x), eq. 3.1.26, yields for the counterterm an expression which is indeed equal to eq. 3.2.39 in the limit y → x. As there are no additional counterterms, and as the rest of eq. 3.2.32 is finite, the intrinsic renormalization procedure has indeed worked out. Note that we can reduce the calculational effort drastically if we just drop the counterterm together with the partial sum considered above from eq. 3.2.31. This can be achieved in a very elegant manner, namely by normal ordering, i.e. by simply rearranging the order of the ladder operators in eq. 3.2.32 in such a way that the annihilation operators always act first. If these operators commute, nothing changes in this trivial process. Hence the only case where this procedure actually manifests itself is if we put products of the form b_lm b†_lm into normal order. Usually we would have obtained an additional term with the ladder operators replaced by the identity if we were to exchange the order of this expression by hand, due to the commutation relations. If we require normal ordering, this extra term is neglected. Now recall from eq. 3.2.33 and the subsequent discussion that it was precisely this extra term that was responsible for the divergence which canceled against the counterterm. Hence, we may simply forget about this extra term and the counterterm altogether, which is expressed in the following formula:
3.2.41
As above, double dots denote normal ordering. This procedure can again be iterated. Let us proceed with the analog of equation 3.2.31 for the next left representative
3.2.42
which follows from eq. 3.2.15. The sum over e can be further specified by noting that the restrictions |e| ≤ |ϕ³| and e ≠ ϕ³ constrain e to be of the form 1, ϕ_lm with l ≤ D − 2, or ϕ_lm ϕ_{l'm'} with l + l' ≤ (D − 2)/2. Eq. 3.2.41 suggests that Y(ϕ², x) acts by two ladder operators, so it follows that only the option e = ϕ_lm gives a non-vanishing OPE coefficient C^e_{ϕ²ϕ}, since in the other two cases no combination of the two ladder operators can transform ϕ into e. Therefore, the above equation may be simplified to
3.2.43
Now let us see what happens if we again just prescribe normal ordering for the product of the left representatives Y(ϕ², y) = :[Y(ϕ, y)]²: and Y(ϕ, x). This time we encounter products of three ladder operators, two of which are already normal ordered. Normal ordering is trivial except for terms in which an annihilation operator stands to the left of a creation operator carrying the same indices. Ignoring the sum over the primed indices in these expressions, we can carry out the sums over m and l just as we did in the calculations leading to eq. 3.2.41. Therefore, the partial sum containing all expressions of the form just mentioned can be simplified to
3.2.44
where the term in brackets results from the sum over l and m, Y(ϕ, y) represents the sum over the primed indices l' and m', and the additional factor 2 enters the equation because the two sums over such expressions are the same due to normal ordering. Since Y(ϕ, x) is analytic around y = x, we may perform a Taylor expansion
3.2.45
But this is exactly equal to the counterterms C^{ϕ_lm}_{ϕ²ϕ}(x − y) Y(ϕ_lm, x) in the limit x → y, so we can drop all these terms, require normal ordering and write
3.2.46
The generalization of this result to arbitrary powers uses the same argumentation, and it is then a straightforward calculation to verify the formula for the general left representative Y(|v_a⟩, x) given in eq. 3.1.27.
Diagrammatic notation
The algorithm presented in the previous section can be neatly visualized in terms of rooted trees, which should not be confused with the trees we used in chapter 2.3. These trees will help us keep track of all the infinite sums and subsequent inversions of the Laplace operator, which are the main steps in the scheme of the previous section. They might further be helpful to discern emerging patterns in our construction. In this section we introduce this diagrammatic notation, defining the explicit correspondence between parts of the graphs and the expressions appearing in the scheme of section 3.2. Its usefulness will become clear in the following sections, where it will be widely applied.
Let V be the Fock space defined in section 3.1 and let Y(x) be the ring of End(V)-valued functions defined in eq. E.2 (see also appendix E). Then the following linear map defines our diagrammatic notation:
Definition 3.2 (Diagrammatic notation)
Let T be a rooted tree. To each leaf of this tree assign a label of the form ±lm, where l ∈ N and m ∈ {1, . . . , N(l, D)} and N(l, D) is defined as in section 3.1. Let us call these labeled trees T_l. Further, let x ∈ R^D, r = |x| and x̂ = x/r as in the previous section. Then we define the linear map G_x : T_l → Y(x) by the rules listed in the following (see also the sketch after this definition). Let us explain these relations: By the first two assignments, the leaves of the tree are closely related to Y_0(ϕ, x), see eq. 3.1.26. In fact, they are either the part of Y_0(ϕ, x) containing the annihilation operator, or the part containing the creation operator. This is where one starts to calculate the expression corresponding to a tree. Whenever edges meet at a vertex, the outgoing (parent) edge is recursively determined by its children, i.e. the ingoing edges, by an inversion of the Laplace operator on the expressions associated to these ingoing edges (see appendix E for more information on this inversion). It should be noted that the root is not a vertex (although incoming edges meet there), so no differential equation needs to be solved at the root.
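The recursive structure of definition 3.2 maps directly onto a small data type. The sketch below (illustrative Python; the leaf-ordering count is my reading of the symmetry factor s[A] used later, not a definition from this thesis) encodes labeled rooted trees and extracts the leaf-label multiset:

    from dataclasses import dataclass
    from typing import Tuple, Union
    from collections import Counter
    from math import factorial, prod

    @dataclass(frozen=True)
    class Leaf:
        sign: int   # +1: creation part of Y_0(phi, x); -1: annihilation part
        l: int
        m: int

    @dataclass(frozen=True)
    class Vertex:   # stands for an inversion of the Laplace operator
        children: Tuple['Tree', ...]

    Tree = Union[Leaf, Vertex]

    def leaf_multiset(t):
        if isinstance(t, Leaf):
            return Counter([(t.sign, t.l, t.m)])
        c = Counter()
        for child in t.children:
            c += leaf_multiset(child)
        return c

    def leaf_orderings(t):
        """k! / prod(multiplicities!): distinct orderings of the leaf labels of
        a k-leaf tree -- a natural candidate for the symmetry factor s[A]."""
        ms = leaf_multiset(t)
        return factorial(sum(ms.values())) // prod(factorial(n) for n in ms.values())

    # the five-leaf star of Y_0(phi^5, x), with one triply repeated label:
    star = Vertex((Leaf(+1, 0, 1),) * 3 + (Leaf(-1, 1, 1), Leaf(-1, 1, 2)))
    print(leaf_multiset(star), leaf_orderings(star))   # 5!/3! = 20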
As an example, the left representative Y(ϕ², x) can be written in this notation (compare eq. 3.2.41). The following conventions further simplify the diagrammatic notation:
• from here on we use the following shorthand notation (analogously for more complicated trees);
• whenever we draw a tree with unlabeled leaves, it is understood that we sum over all possible labels for those leaves;
• whenever two leaves are connected ("from below") by a line with an arrow, the labels of these leaves are contracted; more precisely, we attach the label −(lm) to the leaf that is pointed at by the arrow, attach the label +(lm) to the leaf at the other end of the line, and sum over l and m;
• if two leaves are connected by a line from below and this line is dashed, then we contract the labels of these leaves, but replace the corresponding ladder operators by the identity operator id on V.
These conventions substantially simplify the diagrammatic notation, e.g. eq. 3.3.2 may now be expressed as Y(ϕ², x) = : x :
3.3.5
More complicated examples can be found in the following sections.
Low order computations: Next to leading order
From now on we focus on 3-dimensional scalar, massless quantum field theory with ϕ⁶-interaction (see eq. 3.2.3). This allows for more explicit results and facilitates calculations. The choice to work on a 3-dimensional spacetime was made in order to keep the representation theory simple, as the 3-dimensional rotation group SO(3) is familiar and well understood. The exponent of the interaction term in the field equation was chosen in such a way that the coupling constant is dimensionless.
Construction of Y_1(ϕ, x)
Recall the results of the previous two sections: In sec. 3.1 we constructed the complete left representation, and hence the complete quantum field theory, for the non-interacting model obeying a field equation of the form 3.1.2. Secondly, in sec. 3.2, we presented an iterative scheme that allows for the perturbative construction of an interacting quantum field theory, with a non-homogeneous field equation of the form 3.2.3, given that the corresponding free theory is known (more precisely, we actually only need the left representative Y_0(ϕ, x) as the starting point of the iteration). Combining the information from these chapters, it is clear that we should be able to construct perturbations of the free theory in our setting. The first step beyond leading perturbation order is the calculation of the left representative Y_1(ϕ, x) according to our algorithm, which is the aim of this section. Remember from section 3.2 that the field equation can be exploited in order to find relations between OPE coefficients of different perturbation orders, see eq. 3.2.8. The corresponding relation connecting first order coefficients to those of the free theory is of the form
3.4.1
As mentioned above, for calculational ease we will in the following consider the specific theory with ϕ⁶-interaction, i.e. k = 5 in the above formula, in 3 dimensions. In this context we have eq. 3.4.2, where ∆_x is the familiar three-dimensional Laplace operator. In terms of left representatives, this equation implies a Poisson-type equation for Y_1(ϕ, x) with source Y_0(ϕ⁵, x) (eq. 3.4.3). Therefore, to reach the aim of this section, all we have to do is solve this differential equation.
Let us first recall our results for the free theory from section 3.1. In D = 3 dimensions, the leading order of our theory is characterized by eq. 3.4.4, which is the D = 3 case of the general expression for Y_0(ϕ, x) in eq. 3.1.26, and
3.4.5
An even more compact form of eq. 3.4.4 can be obtained with the help of the modified spherical harmonics S_lm, defined as S_lm(x) = √(4π/(2l + 1)) Y_lm(x̂) (see [33] and eq. D.2.3), which gives
3.4.7
In view of eq. 3.4.3, the left representative Y_0(ϕ⁵, x) is of special interest to us. According to the equations above, it takes the form
3.4.8
Here normal ordering has already been carried out, which explains the numerical prefactors in lines 2-5, as e.g. the sums over :b†_{l_1 m_1} · · · b†_{l_4 m_4} b_{l_5 m_5}: and :b†_{l_1 m_1} · · · b_{l_4 m_4} b†_{l_5 m_5}: are equivalent after normal ordering. In our graphical notation, this could simply be written as a single five-leaf tree, Y_0(ϕ⁵, x) = : x :
3.4.9
where we remind the reader that summation over all possible labels for the 5 unlabeled leaves is implicitly assumed.
Before solving the differential equation 3.4.3, we want to put Y_0(ϕ⁵, x) into a form better suited for this task. The main obstacle in trying to invert the Laplace operator on eq. 3.4.8 is the product of the spherical harmonics. The differential equation is simplified decisively if these products are decomposed into their irreducible parts, i.e. if we "couple" the spherical harmonics using as intertwiners the familiar Clebsch-Gordan coefficients (or, equivalently, Wigner 3j-symbols). For details on the representation theory of the 3-dimensional rotation group, see appendix D.2 and references therein. The identity we are looking for is eq. D.2.22, which was derived in the appendix, and we repeat it here for reference:
3.4.10
Here T is a tensor which contains the consecutive application of the intertwiners as defined in eqs. D.2.20 and D.2.23. Before we apply this equation to the product of the spherical harmonics in eq. 3.4.8, let us first introduce some abbreviations:
Definition 3.3 (Abbreviated indices)
Let q, i, l_i ∈ N and m_i ∈ {−l_i, . . . , l_i}. Then we define the abbreviated notation
3.4.11
Further, we set e_{±(lm)} = ±e_lm (recall that e_lm is the multi-index with unit entry at "position" (l, m)).
With this notation and representation theoretic machinery, we can put eq. 3.4.8 into the convenient form
3.4.12
which is much better suited for our purpose, i.e. for solving the differential equation 3.4.3, since now we have expressed Y_0(ϕ⁵, x) as an element of the ring of functions Y(x) defined in eq. E.2. Assuming that any left representative can be written as an element of this ring, a solution to the differential equation is simply found by application of the operator □^{−1} ∈ End(Y(x)), which we call ∆^{−1} in three spacetime dimensions. Recall, however, that this solution is not unique. For a discussion of these ambiguities see section 3.2.1.
Before we give the result for the left representative Y_1(ϕ, x), let us first introduce some more notation for convenience:
Definition 3.4 (Gradings of left representatives)
The decomposition of an arbitrary left representative Y_n(|v_a⟩, x) with respect to its dimension is written as Y_n(|v_a⟩, x) = Σ_d Y_n(|v_a⟩, x; d), while the grading by irreducible representations of the rotation group SO(3) is introduced as
3.4.15
where we defined (see also eq. E.8)
3.4.16
for the sake of brevity. The diagram corresponding to this equation is simply
3.4.17
The difference to the diagram for the left representative Y_0(ϕ⁵, x) in eq. 3.4.9 is the appearance of a vertex below the root, which according to the rules of def. 3.2 stands for the inversion of the Laplace operator. In the second equality we dropped the double dots, as from here on we implicitly assume normal ordering whenever 5 leaves meet at a vertex. In summary, we have successfully determined the first NLO left representative.
We end this section with the computation of the explicit form of the first order OPE coefficient (C_1)^b_{ϕa}. We will proceed as follows: After introducing some additional notation to keep the equations at a reasonable length, we first give a general result for the OPE coefficients of the free theory. Then the differential equation 3.4.2 will be applied in order to find the desired first order coefficients.
Definition 3.5 (Metric on V )
For two vectors |v_a⟩, |v_b⟩ ∈ V we introduce the metric⁵
g(a, b) = Σ_{l,m} |a_lm − b_lm| ,
3.4.18
where | · | denotes the usual absolute value of an integer. This notion measures how much |v_a⟩ differs from |v_b⟩, in the sense that at least g(a, b) ladder operators are needed to transform one of the vectors into the other. Note that g(a, b) is always finite, since we required the multi-indices labeling our basis elements to have only finitely many non-zero entries (see eq. 3.1.8).
As we will see below, OPE coefficients (C_1)^b_{ϕ^k a} with equal values of g(a, b) exhibit structural similarities, so we will often classify OPE coefficients by this value. Further, the notion of a multiset will be frequently used (see appendix B). In this context, we make the following definitions: Let l^q_i be defined as in def. 3.3 and let |v_a⟩, |v_b⟩ ∈ V. We define the following three sets:
3.4.21
The set I may be interpreted as the set of all possible labels for any number of leaves of a tree (see section 3.3). Then I(n) is the subset of I containing all labels for trees with n leaves. Recall from section 3.3 that we associate ladder operators to the leaves of a tree. The set I^b_a(n) is the restriction of I(n) to the labels of those trees that transform the vector |v_a⟩ into |v_b⟩ (recall that our basis is orthogonal, thus ⟨v_b|v_a⟩ = 0 for a ≠ b) after normal ordering of the corresponding ladder operators.
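Both the metric g and the sets I^b_a(n) are finite, combinatorial objects, so they can be enumerated by brute force over a finite alphabet of modes. The sketch below (hypothetical helper names, restricted to a toy alphabet) also exhibits the emptiness pattern proved in the following proposition:

    from itertools import combinations_with_replacement
    from collections import Counter

    def g(a, b):
        """The metric of def. 3.5 on multi-indices given as dicts mode -> occupation."""
        return sum(abs(a.get(k, 0) - b.get(k, 0)) for k in set(a) | set(b))

    def index_set(a, b, n, modes):
        """Brute-force I^b_a(n): multisets of n signed labels +-(l, m) whose net
        occupation change turns a into b (toy alphabet, illustrative only)."""
        found = []
        signed = [(s, lm) for lm in modes for s in (+1, -1)]
        for A in combinations_with_replacement(signed, n):
            delta = Counter()
            for s, lm in A:
                delta[lm] += s
            keys = set(a) | set(b) | set(delta)
            if all(delta.get(k, 0) == b.get(k, 0) - a.get(k, 0) for k in keys):
                found.append(A)
        return found

    modes = [(0, 1), (1, 1)]
    a = {(0, 1): 1}                     # one quantum in the mode (0, 1)
    b = {(0, 1): 2, (1, 1): 1}
    assert g(a, b) == 2
    assert len(index_set(a, b, 2, modes)) == 1   # unique multiset at n = g(a, b)
    assert index_set(a, b, 3, modes) == []       # empty for g(a, b) + n odd
    assert len(index_set(a, b, 4, modes)) > 1    # contracted pairs are added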
Proposition 3
From the definitions above it follows that
|I^b_a(n)| = 1 if g(a, b) = n ,   (3.4.22)
I^b_a(n) = ∅ if g(a, b) > n or g(a, b) + n is odd .
3.4.23
Proof: The first statement, eq. 3.4.22, follows straightforwardly from the definition of g(a, b): The condition g(a, b) = n tells us that at least n ladder operators are needed to transform |v_a⟩ into |v_b⟩. In other words, the multi-indices a and b differ by n in total. Therefore the multiset A = {l^q_1, . . . , l^q_n} ∈ I^b_a(n) is uniquely fixed by the condition ⟨v_b| : ∏_{l^q_i ∈ A} b_{l^q_i} : |v_a⟩ ≠ 0, so I^b_a(n) contains only one element if g(a, b) = n.
Let us come to the second part of the proposition. For g(a, b) > n the set I^b_a(n) is empty, because more than n ladder operators are needed to transform |v_a⟩ into |v_b⟩. Thus, the condition on the multisets A in the definition of I^b_a(n) cannot be satisfied due to orthogonality of our basis, hence I^b_a(n) = ∅ in this case (eq. 3.4.24). Similarly, the set is empty for g(a, b) + n odd, which can be seen as follows: If g(a, b) is an even number, then we need an even number of ladder operators to transform |v_a⟩ into |v_b⟩ (the minimum number g(a, b) plus pairs of contracted operators, i.e. creation and annihilation operators with the same indices). Thus, I^b_a(n) vanishes if n is odd in this case. Similarly, the set is empty for n even if g(a, b) is odd (eq. 3.4.25).

Definition 3.7 (Shorthand notation for dimension of OPE coefficients)
Given any three basis vectors |v_a⟩, |v_b⟩, |v_c⟩ ∈ V, we define
3.4.26
(recall the definition of | · | for multi-indices from eq. 3.1.9 and the definition of the scaling degree from eq. 2.2.22). Further, given a multiset A = {l^q_1, . . . , l^q_n} ∈ I(n) and two basis vectors,
3.4.27
Definition 3.8 (Shorthand notation for coupling tensors) Let A ∈ I. Then we define
3.4.28
Note that on the left hand side we suppressed the magnetic quantum number M, which is uniquely encoded in the multiset A as the sum m_1 + . . . + m_n, for the sake of brevity. The definition of T[A] is motivated by the fact that in many expressions we are free to choose a coupling order for the spherical harmonics appearing in the construction. For such cases we introduce the convention that always the spherical harmonics of highest degree are coupled first. Then the coupling tensor is uniquely characterized by a multiset of indices instead of an (ordered) tuple. Note that there seems to be an ambiguity in this prescription, e.g. if we consider the multiset A = {+(l_1 m_1), +(l_2 m_2), −(l_1 m_1)}. In this case, it is not clear whether to couple +(l_1 m_1) before −(l_1 m_1) or the other way around. However, in such cases, i.e. whenever l_i = l_j with i ≤ q < j, we may apply the addition theorem of the spherical harmonics, eq. D.2.2, and omit the indices l^q_i and l^q_j altogether.
3.4.29
This resolves the apparent ambiguity. Another property worth remarking is that zero entries may be dropped from the coupling tensor, i.e.
3.4.30
This may be seen either from the property
3.4.31
of the Clebsch-Gordan coefficients, or from the fact that the coupling of the spherical harmonic S_00(x) = 1 is trivial. It also follows that
Definition 3.10 (Prefactors from ladder operators)
Let f : V ⊗ V ⊗ I(n) → R be the map defined by the following property: Given two vectors |v_a⟩, |v_b⟩ ∈ V, a number n ∈ N and a multiset A ∈ I^b_a(n), let
3.4.36
Explicit expressions for f^b_a(A) can be obtained using eqs. 3.1.11 and 3.1.12.

Result 3.4.2 (OPE coefficients (C_0)^b_{ϕ^k a})
Using the notation introduced above, the zeroth order OPE coefficients take the compact form
3.4.38
Remark: Here we also suppressed the magnetic quantum number of the spherical harmonic S_JM(x). As in the definition of the coupling tensor T[A]_J, the corresponding number M can always be uniquely retrieved from the elements of the multiset A.
Proof: Using the graphical notation of section 3.3, we may express the left representative Y_0(ϕ^k, x) by a sum over labeled trees with k leaves. If we take the matrix elements ⟨v_b| · |v_a⟩ of this expression, then all contributions to this sum which do not transform |v_a⟩ into |v_b⟩ vanish due to orthogonality of the basis. Thus, the labels l^q_1, . . . , l^q_k of the k leaves have to satisfy b = a + Σ_{i=1}^{k} e_{l^q_i}. Further, recall that the operators corresponding to the leaves of the tree are normal ordered, so the order of the labelings does not matter. In other words, we obtain the same result for all permutations of the leaves. So instead of summing over all possible trees satisfying the mentioned property, we might as well restrict the sum to those trees that cannot be transformed into one another by permutations of the leaves and multiply by a symmetry factor of the respective tree. These arguments imply eq. 3.4.39, since the elements of the multisets in I^b_a(k) satisfy the desired property, since the order of the entries in a multiset does not matter, and since the factor s[A] was defined to be the mentioned symmetry factor of the diagram. We found in proposition 3 that the set I^b_a(k) is empty for g(a, b) > k or g(a, b) + k odd, which implies (C_0)^b_{ϕ^k a}(x) = 0 in those cases, just as we have claimed in the result. Now it remains to translate these diagrams into explicit formulae using the rules of section 3.3. To begin with, the action of the ladder operators associated to the leaves of the tree on the vector |v_a⟩ gives a numerical prefactor, which by definition 3.10 is denoted by f^b_a[A]. Further, for each leaf we obtain a spherical harmonic whose indices are determined by the label of the leaf. "Coupling" of these spherical harmonics, i.e. a decomposition into irreducible parts, yields a factor Σ_J T[A]_J S_J(x). It remains to determine the power of r, which by definition is just d = |b| − |a| − k/2, so we indeed arrive at eq. 3.4.37. With this result at hand, it is an easy task to find the first order coefficients.

Result 3.4.3 (OPE coefficients (C_1)^b_{ϕa})
Using the right inverse ∆^{−1} to solve the differential equation 3.4.2, one obtains
3.4.40
with
Proof: The differential equation to be solved is eq. 3.4.42. Inserting the concrete form of the zeroth order coefficient, we arrive at the equation 3.4.43.
Application of the operator ∆^{−1} to the right side of the equation yields (C_1)^b_{ϕa}(x) = 0 for g(a, b) > 5 or g(a, b) even, and otherwise the expression in eq. 3.4.44. Note that in the case at hand d = d_A, so the result is confirmed.
In summary, we have found explicit expressions for all OPE coefficients of the type (C_1)^b_{ϕa}. Some particularly simple coefficients are tabulated in appendix A. As an illustration of how to obtain these expressions from result 3.4.3, we close this section with an example computation.
Example: We want to determine the OPE coefficient (C_1)^{ϕ^{p+2} ∂^l ϕ}_{ϕ ϕ^p} with p, l ≠ 0. As a first step, let us determine the set I^{ϕ^{p+2} ∂^l ϕ}_{ϕ^p}(5).
Construction of Y_1(ϕ², x)
According to our algorithm of section 3.2, it is now possible to construct all first order left representatives Y_1(|v_a⟩, x) (and thus the complete quantum field theory at first perturbation order) just from Y_1(ϕ, x) by application of the associativity condition. In this section we take the first step in this direction by constructing the left representative Y_1(ϕ², x). As we will see in the following, this task turns out to be comparably simple, as no "renormalization" is needed at this stage in our toy model. More serious calculational effort will be needed in the next section.
The relation between Y_1(ϕ, x), which is known from the previous section, and the desired left representative Y_1(ϕ², x), is
3.4.52
which is just eq. 3.2.13 at first order, expressed in terms of left representatives. Let us take a closer look at the counterterms, i.e. the expressions subtracted on the right side. The fact that (C_1)^b_{ϕa} vanishes for g(a, b) even, together with the restriction |e| ≤ |ϕ²|, suggests that only the terms including (C_1)^1_{ϕϕ}(x − y) or (C_1)^{ϕ²}_{ϕϕ}(x − y) survive the limit. Due to the specific form of Y_1(ϕ, x) from the previous section, however, these matrix elements, or OPE coefficients, both vanish, see eq. 3.4.40 (the set I^b_a(5) is empty in those cases) or table A. The absence of counterterms from eq. 3.4.52 implies that the remaining expressions are finite, so we may simply perform the limit and write
3.4.53
In our diagrammatic notation of section 3.3 this equation takes the form
3.4.54
where again summation over all combinations of labels for the leaves is understood. Let us investigate this expression more closely and try to understand why no counterterms are necessary. As could be seen in section 3.2.2, infinities occur whenever infinite sums over a product of the form b lm b † lm are performed, i.e., diagrammatically speaking, whenever the indices of two leaves are contracted, with the leaf on the right corresponding to a creation operator. In the free theory computations we then made use of the identity b lm b † lm = b † lm b lm + id, which follows from the commutation relation 3.1.14, to separate these expressions into a normal ordered (finite) part and a divergent part. We then saw that this divergent part precisely cancels with the counterterms, which suggests that the renormalization procedure is equivalent to normal ordering. Obviously, it would be convenient to carry over this approach to the first order calculations and to include also the sixth operator in the above diagrams into the normal ordering process. However, since no counterterms are present in eq. 3.4.53, it is not clear what to make of the terms which are neglected if we demand normal ordering. As it turns out, the infinite parts of these extra terms cancel out, but there is a finite remainder, which we call (R 1 ) ϕ 2 . Introducing the
Definition 3.11 (Additional grading of left representatives)
By Y n (|v a , x; d, q) we denote the contribution to Y n (|v a , x; d) comprised of q ∈ N annihilation operators.
we obtain eq. 3.4.55.

Result 3.4.4 : Using the left representative Y 1 (ϕ, x) given in the previous section, one obtains
3.4.59
Here R (q=2) (d, j, r) is a finite, real valued function defined below in eq. G.1.22. By sign(x) we denote the sign function
3.4.60
Proof : As mentioned above, we want to exploit the commutation relations of the ladder operators to bring the operators in eq. 3.4.54 into normal order. This is trivial, unless we have to switch the order of a pair of the form b lm b † lm . Since the operators associated to the left representative Y 1 (ϕ, x) are already normal ordered (see section 3.4.1), only the two diagrams contain expressions of this kind (recall from chapter 3.3 that the arrows denote contraction). Here we have to apply b lm b † lm = b † lm b lm + id and obtain, in addition to the normal ordered products, a sum where the two contracted operators are replaced by the identity. In diagrams, this may be written as (recall that dashed lines denote replacement of the corresponding ladder operators with the identity)
3.4.62
This formula is exactly equivalent to the first line of eq. 3.4.55 if we define (R 1 ) ϕ 2 to be equal to the two diagrams including the dashed contractions. The second line of eq. 3.4.55 simply follows from the fact that the order of ladder operators inside normal ordering signs does not matter, so the two normal ordered diagrams in the equation above are in fact equal. The equation above may also be written in terms of the left representatives, where the line connecting the left representatives denotes contraction:
Definition 3.12 (Contraction of left representatives)
The commutators in eq. 3.4.65 are called "contractions" of the left representatives Y 0 (ϕ, x) and Y n (|v a , y), where Y ± 0 (ϕ, x) is the part of Y 0 (ϕ, x) containing only creation (+) or only annihilation (−) operators.
It remains to determine (R 1 ) ϕ 2 . This rather involved computation can be found in appendix G.
To sum up the results of the above discussion, we have most importantly verified that, in accordance with eq. 3.4.53, no subtraction of counterterms is needed in order to cure possible divergences in the calculation of Y 1 (ϕ 2 , x). Further we have succeeded in writing Y 1 (ϕ 2 , x) purely in terms of normal ordered expressions. As we will see in the following, this is particularly helpful in the calculation of matrix elements of this left representative, since no infinite sums have to be performed.
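The normal ordering step at the heart of this construction can be made concrete with a small symbolic computation. The following sketch is my own illustration, not part of the thesis: it uses sympy's boson operators for a single mode, suppressing the mode indices (l, m) of the operators b lm .

```python
# Single-mode sketch of the normal ordering mechanism used above,
# assuming the commutation relation [b, b^dagger] = 1 (cf. eq. 3.1.14
# with the mode indices suppressed). Requires sympy.
from sympy.physics.quantum import Dagger
from sympy.physics.quantum.boson import BosonOp
from sympy.physics.quantum.operatorordering import normal_ordered_form

b = BosonOp('b')

# b b^dagger = b^dagger b + 1: the identity term is what produces the
# divergent sums and, after their cancellation, the finite remainder.
print(normal_ordered_form(b * Dagger(b)))

# With two contractions, lower order terms accumulate accordingly.
print(normal_ordered_form(b * b * Dagger(b) * Dagger(b)))
```

In the full computation each such identity term carries an infinite sum over the mode indices, which is where the remainder (R 1 ) ϕ 2 originates.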
We conclude this section with explicit results for OPE coefficients obtained from the left representative Y 1 (ϕ 2 , x) by taking the according matrix elements. Again it is helpful to introduce some additional notation in order to keep equations short.
Definition 3.13 (Partitions of multisets)
By P i,j (A) we denote the set of partitions of any multiset A of cardinality i + j into two submultisets of cardinality i and j respectively, whose sum is A, i.e.

P i,j [A = ⟨a 1 , . . . , a i+j ⟩] = { (P 1 = ⟨a n 1 , . . . , a n i ⟩, P 2 = ⟨a n i+1 , . . . , a n i+j ⟩) | P 1 ⊎ P 2 = A }
3.4.67
Result 3.4.5 (OPE coefficients of the type (C 1 ) b ϕ 2 a ) : (C 1 ) b ϕ 2 a (x) = 0 for g(a, b) > 6 or g(a, b) odd, and otherwise
3.4.69
Proof : The first two statements, i.e. vanishing of the OPE coefficient for g(a, b) > 6 and g(a, b) odd, follow simply from proposition 3. For the remaining values of g(a, b) we first note that
3.4.70
Here the second equality holds for the following reasons: For the matrix element not to vanish, the labels of the six leaves above have to be in I b a (6) by definition, which explains the first sum on the right side. Now we have to attach five of those labels to leaves entering the vertex, and one to the leaf directly connected to the root. Therefore, we split the multiset A into two submultisets: the multiset P 1 of cardinality 5 and the multiset P 2 of cardinality 1. The labels in P 1 are then attached to the five leaves entering the vertex in any order and we have to multiply by the symmetry factor s[P 1 ] in order to account for all the diagrams that may be obtained by permutations of those five leaves. Analogously, we attach the one label in P 2 to the remaining leaf and multiply by s[P 2 ], obtaining the right side of the equation above.
3.4.71
Recalling the definitions of ∆ 0 [A] J and ∆ 1 [A, r] J , we indeed verify the first line of eq. 3.4.68. Now consider the contribution from the remainder term. Here we find the expression in eq. 3.4.71, where in the first equality eq. 3.4.56 was used. The second equality follows from result 3.4.4 and establishes the second line of eq. 3.4.68.
To finish the proof we have to show that the OPE coefficient vanishes for g(a, b) = 0, i.e. for a = b. We start the computation in the first line of eq. 3.4.68 and want to determine the set of multisets I a a (6). The condition a = a + Σ 6 i=1 e l q i then leads to eq. 3.4.77. Here no spherical harmonics and coupling tensors appear, as we may simply apply the addition theorem three times. Further we used the fact that
3.4.78
Similarly, we find for the second type of decomposition, eq. 3.4.76, an analogous expression (with a sum over J, J 1 , J 2 ), where again we applied the addition theorem of the spherical harmonics and where we used
3.4.80
Thus, the contributions from the two decompositions of any multiset A ∈ I b a (6) cancel each other, implying that the first line in eq. 3.4.68 vanishes. Further, one can see from eq. G.1.22 that the corresponding remainder contributions vanish, which follows from inspection of the summation limits in those expressions. Therefore, we also have Λ 1 [ϕ 2 , B, r] J = 0 for all B ∈ I a a (4), so the second line of eq. 3.4.68 vanishes as well. Hence, we have found that (C 1 ) a ϕ 2 a (x) = 0.
Again we compute a specific example OPE coefficient in order to illustrate the application of result 3.4.5.
Example : Consider the coefficient (C 1 ) ϕ p ϕ 2 ϕ p+3 ∂ l ϕ for l = 0. We start with the determination of the multisets in I ϕ p ϕ p+3 ∂ l ϕ (6). Again this set consists of only one element. The numerical prefactor then becomes
3.4.83
Since all zero entries in the coupling tensors T may be dropped, only the product T [−(lm)] J S J (x) = S lm (x) remains in the case at hand (equivalently, the coupling of S 00 · · · S 00 S lm = S lm ). The multiset A may be split into two submultisets of cardinality 5 and 1 in three different ways. Note, however, that all indices in this multiset correspond to annihilation operators (i.e. they have a negative sign), so we have to determine remainder terms of the form (R 1 ) ϕ 2 (d B − 3/2, J = l, q = 4), which vanish by eq. 3.4.57.
Putting all the pieces together (recall also the factor 2 in eq. 3.4.68), we arrive at the result in eq. 3.4.91, in accordance with appendix A.
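The splitting of the multiset A into submultisets of fixed cardinalities used in this example, i.e. the set P 5,1 (A) of definition 3.13, is straightforward to enumerate mechanically. A small sketch of my own; the element labels are arbitrary stand-ins for the index pairs (l, m):

```python
# Sketch of definition 3.13: enumerate all partitions of a multiset A
# (stored as a sorted tuple) into submultisets of cardinality i and j
# whose multiset sum is A. Duplicate index choices collapse because the
# order inside a multiset does not matter.
from itertools import combinations

def multiset_partitions_ij(A, i, j):
    assert len(A) == i + j
    seen = set()
    for idx in combinations(range(i + j), i):
        P1 = tuple(sorted(A[k] for k in idx))
        P2 = tuple(sorted(A[k] for k in range(i + j) if k not in idx))
        seen.add((P1, P2))
    return sorted(seen)

# A cardinality-6 multiset with three distinct entries splits into
# submultisets of cardinality 5 and 1 in three different ways,
# matching the counting in the example above.
A = (1, 1, 1, 2, 2, 3)
assert len(multiset_partitions_ij(A, 5, 1)) == 3
```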
Construction of Y 1 (ϕ 3 , x)
In the previous section we were able to perform the first iteration step at next to leading order, i.e. we constructed Y 1 (ϕ 2 , x) out of Y 1 (ϕ, x) and Y 0 (ϕ, x), without the need to subtract any counterterms, see eq. 3.4.52. The first true example of our analog of renormalization will be encountered in the present section. The iteration step yielding Y 1 (ϕ 3 , x) is expressed by the formula (see 3.2.15)
3.4.94
The counterterms appearing in this formula are divergent in the limit y → x. Therefore, in our analysis of the first two summands on the right side of eq. 3.4.92 we have to find terms that precisely cancel these infinities, and the remaining expressions should then be finite. This procedure might be interpreted as a sort of renormalization of the products Y 0 (ϕ, x)Y 1 (ϕ 2 , y) and Y 1 (ϕ, x)Y 0 (ϕ 2 , y). Note that the two counterterms have different divergent behavior: polynomial and logarithmic. We have already encountered polynomial counterterms in our constructions at zeroth order, and we will see in the following that indeed the polynomial divergences of the present section are very closely related to those of the free theory computations. The logarithmic counterterm, however, is a completely new feature at first perturbation order, and it will turn out to be considerably more complicated to find the corresponding expressions in eq. 3.4.92 canceling this term. Our plan for this section is the following: As in previous calculations, it is the aim to write Y 1 (ϕ 3 , x) as a normal ordered product of left representatives plus some finite remainder (R 1 ) ϕ 3 , similar to eq. 3.4.55. Basically, the procedure is analogous to the Y 1 (ϕ 2 , x) case: We exploit the commutation relations for the ladder operators, eq. 3.1.14, to bring all expressions into normal order, picking up additional terms whenever we have to exchange the order of a creation and annihilation operator with the same index. These additional terms include infinite sums, whose divergent behavior has to be analyzed. In contrast to the previous section, we expect these sums to be divergent in the limit y → x. This divergence should be cured by the subtraction of the counterterms, see eq. 3.4.94. Hence, the above mentioned remainder (R 1 ) ϕ 3 is precisely the difference of the additional terms we pick up in the process of bringing all expressions into normal order, and the counterterms, where we take the limit y → x. The section is again concluded by a discussion of the OPE coefficients obtained from Y 1 (ϕ 3 , x) by taking matrix elements.
To begin with, we present Result 3.4.6 (Left representative Y 1 (ϕ 3 , x) in normal ordered form)
3.4.96
Proof : Starting at eq. 3.4.92, we can bring Y 1 (ϕ 3 , x) into the desired form simply by applying the results of the previous chapters, and without the need to perform any new calculations. This can be seen from the following observations: First, we substitute eq. 3.4.55 for the left representative Y 1 (ϕ 2 , x) into eq. 3.4.92, obtaining
3.4.97
Now let us try to bring the operators in the first line of this equation into normal order. For the first term we find eq. 3.4.98, where we have already performed the limit in the (finite) normal ordered term. Similarly, we may write the last term in the first line of eq. 3.4.97 as
3.4.99
Finally, remembering the definition of the remainder term (R 1 ) ϕ 2 , eq. G.1.1, the last remaining term in the first line of eq. 3.4.97 may be put into the form
3.4.100
Now recall from our construction of the free theory left representatives, in particular from the computation of Y 0 (ϕ 2 , x) in section 3.2.2, the identity in eq. 3.4.101, which implies for the last summand in eq. 3.4.98
3.4.102
A Taylor expansion of Y 1 (ϕ, y) around x shows that this contribution cancels with the polynomial counterterms in the second line of eq. 3.4.92. Furthermore, we can write the sum of the remaining expressions with one contraction in compact form, where the definition of the remainder (R 1 ) ϕ 2 has been used. Thus, if we plug eqs. 3.4.98, 3.4.99 and 3.4.100 into eq. 3.4.97 and further use the two identities above, we end up with result 3.4.6 as claimed.
The first two terms in eq. 3.4.95 are known, so it remains to find an explicit formula for the new remainder term (R 1 ) ϕ 3 . Unfortunately, we have not been able to determine this operator completely, but we have found the following partial results.
3.4.105
and
3.4.106
where L 5 is the non-divergent part of the hypergeometric series 6 F 5 , see eq. C.7.
The derivation of this result can be found in appendix G.
Remark: For d = j = 0 we obtain the simple results
3.4.108
This may be seen as follows: The second line in eqs. 3.4.105 and 3.4.106 reduces to
3.4.109
Since there is no finite contribution to this hypergeometric series, we have L 5 = 0 in the case at hand. Concerning the logarithmic contribution, we derive the results given above using ⟨0000|00⟩ = 1. It remains to determine the operators (R 1 ) ϕ 3 (x; q = 1) and (R 1 ) ϕ 3 (x; q = 2). Unfortunately, in this case we have neither been able to find a closed form expression, nor have we been able to verify the cancellation of infinities. The additional difficulty here is due to the lack of relations like d ≥ j or −d − 3 > j, which were a result of the fact that the three ladder operators in (R 1 ) ϕ 3 (x, q = 0) and (R 1 ) ϕ 3 (x, q = 3) were either all creators, or all annihilators. Thus, we cannot use the simplifications discussed in appendix F.
The end of this section is again devoted to matrix elements of the left representative Y 1 (ϕ 3 , x), i.e. to OPE coefficients of the form (C 1 ) b ϕ 3 a . The matrix elements of the left representative Y 1 (ϕ 3 , x) given in result 3.4.6 are
3.4.110
and
3.4.112
Remark: The family of coefficients with g(a, b) = 1 is not considered here, since all these coefficients necessarily involve contributions from remainder terms (R 1 ) ϕ 3 (x, q = 1) or (R 1 ) ϕ 3 (x, q = 2), which are unknown as mentioned above.
Proof : The argumentation here is basically the same as in the previous section. Eq. 3.4.110 is just a consequence of proposition 3. Eq. 3.4.111 may be derived analogously to eq. 3.4.68 of the previous section, so we will discuss it only briefly. In the first line the only difference is that the submultiset P 2 now has cardinality 2. This is due to the fact that in the case at hand we have one additional power of Y 0 (ϕ, x), so we sum over diagrams of the analogous form with one additional leaf. The additional leaf directly attached to the root is the cause of the additional element in P 2 . The second line of the result accounts for the contribution from the product 3 : (R 1 ) ϕ 2 (x)Y 0 (ϕ, x) :. In total this product consists of 5 ladder operators, so we have to sum over all multisets B ∈ I b a (5). Then these multisets are split into a submultiset of cardinality 4, which is associated to the contribution Λ 1 from the remainder operator, and a submultiset of cardinality 1, which is the argument of the contribution ∆ 0 from the zeroth order term. Coupling all spherical harmonics, we obtain the second line of eq. 3.4.111. The third line of that equation follows if we insert eq. 3.4.104 for the remainder operator and use result 3.4.2 in order to determine the matrix elements of this expression.
Construction of Y 1 (ϕ 4 , x)
Next in line is the left representative Y 1 (ϕ 4 , x), which can be determined from known expressions by the formula (see 3.2.16)
3.4.114
The counterterms here take the values (see appendix A)
3.4.115-3.4.117
3.4.118
Recall that we have not found results for the OPE coefficients (C 1 ) b ϕ 3 a with g(a, b) = 1, so the two coefficients in eqs. 3.4.116 and 3.4.117 are unknown. Clearly this is a problem if one wants to determine Y 1 (ϕ 4 , x) completely. However, in this section our modest aim is to determine only the contribution acting by more than two ladder operators. As we shall see below, this can be achieved without knowledge of the mentioned counterterms.
By now the general procedure should be familiar: We exploit the commutation relations of the ladder operators (and possibly results of the previous sections as shortcuts) to bring the desired left representative into the form
3.4.120
Remark: According to the scheme outlined in section 3.2 all matrix elements of (R 1 ) ϕ 4 (x) should be finite. It should be noted that we have not been able to check this convergence in this thesis.
Proof : The shortest way to this result is again to first substitute Y 1 (ϕ 3 , x) in the form of eq. 3.4.95 into eq. 3.4.114.
Now let us bring all expressions with a positive sign in this equation into normal order:
3.4.122-3.4.125
Many of the contracted operator products in these equations have already been analysed in the previous chapters. To begin with, recall from eq. 3.4.101 the expression in eq. 3.4.126, which appears in eq. 3.4.122 and eq. 3.4.124. With the definitions of (R 1 ) ϕ 2 and Y 1 (ϕ 2 , x), see eqs. G.1.1 and 3.4.55, it follows that eq. 3.4.127 holds, which cancels with the polynomial counterterms in the last line of eq. 3.4.114 after a Taylor expansion in y around x. Further, we find eq. 3.4.128 for the sum of the products with one contraction in eqs. 3.4.122 and 3.4.123, where we again applied the results of section 3.4.2. Remembering the definition of (R 1 ) ϕ 3 , eq. 3.4.96, we find eq. 3.4.129 for the corresponding expressions from eqs. 3.4.123 and 3.4.124. Here the divergence cancels with the logarithmic counterterm, eq. 3.4.115. We have now dealt with all divergent expressions in eqs. 3.4.122-3.4.125, except for the products including three contractions in eq. 3.4.123 and the contraction with the remainder term (R 1 ) ϕ 3 in eq. 3.4.125 (which is essentially also a threefold contraction, since (R 1 ) ϕ 3 itself includes two contractions). Further, the only remaining counterterms are the ones in the second line of eq. 3.4.114. Thus, we have found eq. 3.4.130, and we may verify eq. 3.4.119 by insertion of the above results into eq. 3.4.121.
Although we do not determine the concrete form of the new remainder term (R 1 ) ϕ 4 in this thesis, results for a wide class of OPE coefficients (C 1 ) b ϕ 4 a can be obtained. Namely
3.4.131
and
Remark: The remaining classes of coefficients, namely (C 1 ) b ϕ 4 a with g(a, b) = 2 and g(a, b) = 0, include contributions from (R 1 ) ϕ 4 and are thus not treated here. Otherwise the derivation of the above result is analogous to previous sections, so we do not bother to give a "proof" here.
Construction of Y 1 (ϕ 5 , x)
The last left representative at first perturbation order to be discussed in this thesis is Y 1 (ϕ 5 , x). This is the final step in our algorithm before we can proceed to second order by the field equation. The general outline of this section is similar to the previous ones. As always, our starting point is the expression of Y 1 (ϕ 5 , x) in terms of known left representatives,
3.4.133
which is a consequence of our consistency condition. It should be remarked that in the last line of this equation no counterterms of the form (C 1 ) ϕ lm ϕ 4 ϕ with l = 0 appear, because this coefficient is zero according to the results of the previous section (since here g(a, b) is odd). The counterterms in the above equation take the values
3.4.134
3.4.139
Now let us come to the coefficient (C 1 ) ϕ 3 ϕ 4 ϕ . Here the first two lines of eq. 3.4.132 vanish, because the sets I ϕ 3 ϕ (8) and I ϕ 3 ϕ (6) are empty as well. Therefore, the coefficient at hand contains the following contributions:
3.4.140
In the second line we decomposed the left representative Y 0 (ϕ, x) into a creation and an annihilation part. In addition, the results (C 0 ) ϕ 3 ϕ ϕ 2 (x) = 1 and [(R 1 ) ϕ 3 (x)] ϕ 3 1 = 0 from the previous sections were applied. The final equality then follows from eq. 3.4.139 and confirms eq. 3.4.136. Eq. 3.4.137 was derived in a similar manner.
The strategy is again to write Y 1 (ϕ 5 , x) as a sum of some known normal ordered (and thus finite) expressions and an additional remainder term (R 1 ) ϕ 5 . The computation of this remainder term is the main effort that goes into the construction of the desired left representative.
3.4.142
Remark: Again, we have not been able to verify convergence of this limit explicitly. However, our renormalization procedure implies

Result 3.4.12 (Constraints on remainder terms)
For the consistency condition 3.2.16 to hold, it is necessary that
3.4.143
and
3.4.144
with c ∈ C.
Proof of results 3.4.11 and 3.4.12 : Insertion of Y 1 (ϕ 4 , x) in the form 3.4.119 into our equation for Y 1 (ϕ 5 , x) yields eq. 3.4.145. The next step is to bring these expressions into normal order and to keep track of the additional terms generated in the process. To begin with, we pick up the additional expressions in eq. 3.4.146 from normal ordering of the first and the last product in eq. 3.4.145. Further, the contractions in eq. 3.4.147 result from normal ordering of the first three summands in eq. 3.4.145. The result cancels with the polynomial counterterms in the second line of 3.4.133. The remaining expressions with two contractions are

6 lim y→x : Y 0 (ϕ, x)(R 1 ) ϕ 2 (y)Y 0 (ϕ 2 , y) : + : Y 1 (ϕ, x)Y 0 (ϕ, y)Y 0 (ϕ, y)Y 0 (ϕ 2 , y) :   (3.4.148)

This divergence cancels with the logarithmic counterterm in eq. 3.4.134. Next consider the products with three contractions, eq. 3.4.149, which follows from the definition of (R 1 ) ϕ 4 , see eq. 3.4.120. Here we encounter the problem that neither the OPE coefficients in the expression above, nor the counterterms in eqs. 3.4.136 and 3.4.137 are explicitly known. Thus it seems difficult to verify the cancellation of infinite terms in the limit above. This is not really a problem, however, since Y 1 (ϕ 5 , x) is finite by its very construction (see section 3.2). Thus, we may change our point of view and from now on assume that the counterterms render the left representative finite, instead of trying to show this with the help of results from previous sections. This yields the constraints in eqs. 3.4.150 and 3.4.151. These conditions were derived as follows: We performed a Taylor expansion of the operators Y 0 (ϕ, y) in eq. 3.4.149 around the point x and neglected all terms with positive scaling dimension in |x − y|, since these terms vanish in the limit. Eq. 3.4.150 then follows if we demand that the resulting expressions proportional to Y 0 (ϕ 3 , x) are rendered finite by the corresponding counterterm, eq. 3.4.136. Similarly, eq. 3.4.151 collects all terms multiplying the left representative Y 0 (ϕ 2 ϕ 1m , x) and requires that subtraction of the corresponding counterterm, eq. 3.4.137, yields a finite result.
Substitution of eq. 3.4.136 into the first condition above yields eq. 3.4.152. Since the OPE coefficient (C 1 ) ϕ 3 ϕ 4 ϕ has scaling degree 1, this is also true for the contribution from the remainder term, where c 1 , c 2 ∈ C are constants. This fact together with the condition above uniquely determines eq. 3.4.153, which confirms the first half of result 3.4.12. Now let us come to eq. 3.4.151. Here substitution of eq. 3.4.137 leads to
3.4.154
Dimensional analysis of the expressions in brackets suggests that all summands are at most logarithmically divergent. Therefore, the above constraint is not as strong as the first constraint, in the sense that it will not allow for a unique determination of [(R 1 ) ϕ 4 ] ϕ 2 ϕ1m ϕ . The term [(R 1 ) ϕ 3 (x − y)] ϕ 2 ϕ1m 1 may be determined with the help of result 3.4.7. As mentioned above, we are only interested in the logarithmic contribution to this matrix element. We find eq. 3.4.155, which upon insertion into the constraint above yields eq. 3.4.156, where c ∈ C is some constant. Thus, we have confirmed result 3.4.12. It remains to analyze the genuinely new contributions containing four contractions and the remaining counterterm
3.4.157
Restoring all the normal ordered products obtained in eqs. 3.4.146-3.4.149, one verifies eq. 3.4.141. The OPE coefficients (C 1 ) b ϕ 5 a with g(a, b) > 3 can be determined without any knowledge of (R 1 ) ϕ 4 and (R 1 ) ϕ 5 , so we will focus on these cases.
3.4.158
and
The result may be derived from the form of the left representative Y 1 (ϕ 5 , x) in analogy to the previous sections.
Construction of Y 2 (ϕ, x)
According to the algorithm outlined in section 3.2 it is possible to construct the second order left representatives Y 2 (ϕ, x), or equivalently the OPE coefficients (C 2 ) b ϕ a , from the knowledge of the first order left representatives Y 1 (ϕ 5 , x). In the previous chapters we have presented the iteration up to this point, so we are finally ready to exploit the field equation and proceed to second perturbation order. This process will be carried out in the present chapter.
The central equation of this chapter is
3.4.160
which follows from eq. 3.2.8. Since we do not know the concrete form of the operators (R 1 ) ϕ 4 and (R 1 ) ϕ 5 , we will only be able to analyze the contributions from the first three terms on the right side of the above equation. As we have seen in the previous chapter, this still allows for the computation of a large class of OPE coefficients (C 2 ) b ϕ a , namely those with g(a, b) > 3. We proceed with the help of the equation above and the following definition (here we implicitly assume that at n-th perturbation order logarithms up to the power n may appear, which will be proven in section 3.5).

Definition 3.14 (Gradings by powers of the logarithm)
The gradings of the vertex operators Y n (|v a , x) and the remainder terms (R n ) ϕ k (x) by scaling dimension d, "spin" J and powers of logarithms p are denoted by Y n (|v a , x; d) p J and
3.4.161
and
3.4.163
and
3.4.164
we obtain the following partial result.

Result 3.4.14 : Using the operator ∆ −1 to solve the differential equation 3.4.160, we find
3.4.165
where the terms on the right side are concretely
3.4.168
with D (q) (d, J, r) defined as in eq. E.8 (the special cases q = 0 and q = 1 can be found in eqs. E.10 and E.11).
Proof : We simply have to use the gradings introduced above, the coupling rules of the spherical harmonics as discussed in appendix D.2 and the solution to the resulting differential equation from appendix E.
As in the previous chapter, this knowledge allows for the computation of the OPE coefficients (C 2 ) b ϕ a with g(a, b) = 9 and g(a, b) = 7 in full generality, and for g(a, b) = 5 in the cases where (R 1 ) ϕ 3 is known.
The matrix elements of the left representative Y 2 (ϕ, x) given in result 3.4.14 are

(C 2 ) b ϕ a (x) = 0 for g(a, b) > 9 or g(a, b) even
3.4.169
and
Proof : All we have to do is invert the Laplace operator on the OPE coefficients (C 1 ) b ϕ 5 a , see result 3.4.13. This effectively means that we have to multiply logarithmic expressions by D (1) and polynomial expressions by D (0) . This procedure yields eq. 3.4.170.
Some higher order results
One aim of this thesis is to recognize patterns in our iterative scheme and in this way to extrapolate our knowledge of low perturbation orders to gain some insight into higher orders. The present section, which is dedicated to precisely this topic, is structured as follows: First we extend our results for the simplest class of OPE coefficients to arbitrary order in perturbation theory (still in the 3-dimensional model considered in the previous sections). Then the general structure of more complicated higher order coefficients is discussed.
We begin our discussion of results for arbitrary orders with the analysis of vanishing OPE coefficients. We would first like to show

Proposition 4
The left representative Y n (ϕ k , x) contains products of no more than 4n + k ladder operators.
Proof : For the left representatives of the free theory this follows simply from eq. 3.1.27. Now suppose we know the claim holds at order n − 1. Then we know that the left representative Y n−1 (ϕ 5 , x) is related to Y n (ϕ, x) by the field equation 3.2.5. Hence Y n (ϕ, x) contains at most 4(n − 1) + 5 = 4n + 1 ladder operators, just as we claimed. In order to construct the other n-th order left representatives we use the consistency condition. Let us now assume the claim holds for Y n (ϕ k−1 , x). Then the consistency condition yields eq. 3.5.1. Since the left representatives up to Y n (ϕ k−1 , x) fulfill the proposition by assumption, it is easy to check that the product of the left representatives on the right side of this equation contains at most 4i + 1 + 4(n − i) + k − 1 = 4n + k ladder operators. Further, the OPE coefficient (C i ) c ϕ k−1 ϕ vanishes in the limit y → x if |v c contains higher powers than ϕ k . Thus, the left representative Y n−i (|v c , x) multiplying this coefficient contains at most 4(n − i) + k ladder operators, where i > 0. It remains to discuss the left representatives in the second line, which also fulfill the desired property by assumption. Therefore, our claim also holds for Y n (ϕ k , x) and, by iteration of this procedure, for arbitrary left representatives.
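The counting in this induction can be mirrored in a few lines of code. The following toy sketch is my own illustration; it models only the dominant product term of the consistency condition, which bounds the counterterm contributions as argued above:

```python
# Toy sketch of the counting in proposition 4: bound(n, k) = maximal
# number of ladder operators in Y_n(phi^k, x).
from functools import lru_cache

@lru_cache(maxsize=None)
def bound(n: int, k: int) -> int:
    if n == 0:
        return k                  # free theory, eq. 3.1.27
    if k == 1:
        return bound(n - 1, 5)    # field equation 3.2.5: Y_n(phi) from Y_{n-1}(phi^5)
    # consistency condition: products Y_i(phi) * Y_{n-i}(phi^{k-1})
    return max(bound(i, 1) + bound(n - i, k - 1) for i in range(n + 1))

assert all(bound(n, k) == 4 * n + k for n in range(6) for k in range(1, 8))
```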
In a similar manner, it can be shown that
Proposition 5
The left representative Y n (ϕ k , x) contains only products of an even number of ladder operators if k is even, and an odd number of ladder operators if k is odd.
Proof : Again there is nothing to show at zeroth perturbation order, due to eq. 3.1.27. Also, assuming the claim holds at order n − 1, it holds for Y n (ϕ, x) if we use ∆ −1 to solve the field equation. Thus, it remains to check whether the consistency condition respects our proposition. Assuming the left representatives up to Y n (ϕ k−1 , x) satisfy the proposition, we can deduce that both factors in the product of the left representatives on the right side of eq. 3.5.1 satisfy our claim, so the product as a whole does so as well. It remains to investigate the counterterms. Let for the moment k be even. Then, since Y i (ϕ k−1 , x) acts by an odd number of ladder operators for i ≤ n, one can show (see result 3.5.1) that the OPE coefficient (C i ) c ϕ k−1 ϕ vanishes if g(|v c , ϕ) is even, i.e. if |v c is constructed from ϕ by an even number of ladder operators. In other words, for the coefficient not to vanish, |v c has to be of the form ϕ l1m1 · · · ϕ lj mj where j is an even number. Therefore, the left representative Y n−i (|v c , x), which multiplies this OPE coefficient, is obtained from Y n−i (ϕ j , x) by taking the appropriate derivatives, see eq. 3.1.27. For j even this left representative also acts by an even number of ladder operators due to our assumption, so the proposition does indeed hold. On the other hand, if k is odd, one can follow the same argumentation to show that Y n−i (|v c , x) acts by an odd number of ladder operators. As the left representatives in the second line of eq. 3.5.1 fulfill the desired property by assumption, the iteration is complete.
As a simple conclusion from these propositions, we find

Result 3.5.1 (Vanishing OPE coefficients)
At arbitrary perturbation order n ∈ N and for any exponent k ∈ N the equation

(C n ) b ϕ k a (x) = 0 for g(a, b) > 4n + k or g(a, b) + k odd

holds.
Proof : By proposition 4 the left representative Y n (ϕ k , x) acts by at most 4n + k ladder operators. Further, g(a, b) > 4n + k means that more than 4n + k ladder operators are needed to transform |v a into |v b . Thus, the matrix element v b |Y n (ϕ k , x)|v a = (C n ) b ϕ k a (x) vanishes, due to orthonormality of our basis. Now we come to the second part of the result. Assume g(a, b) is even for the moment, i.e. we need an even number of ladder operators to transform |v a into |v b . Then only the part of Y n (ϕ k , x) that acts by an even number of ladder operators contributes to the coefficient (C n ) b ϕ k a . Proposition 5 tells us that for k odd, this left representative does not contain any contribution of this kind, so for g(a, b) + k odd the OPE coefficient under consideration vanishes. If on the other hand g(a, b) is odd, we find by the same arguments that the coefficient vanishes for k even, which finishes the proof.
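Expressed as a predicate, result 3.5.1 can be cross-checked against the special cases derived in the earlier sections. A small sketch of my own:

```python
# Result 3.5.1 as a predicate: (C_n)^b_{phi^k a} vanishes if
# g(a,b) > 4n + k or g(a,b) + k is odd.
def vanishes(n: int, k: int, g: int) -> bool:
    return g > 4 * n + k or (g + k) % 2 == 1

# Cross-checks against the low order statements quoted in the text:
# n = 0: vanishing for g > k or g + k odd (proposition 3),
# n = 1, k = 1: vanishing for g > 5; n = 1, k = 2: for g > 6 (result 3.4.5),
# n = 2, k = 1: vanishing for g > 9 (eq. 3.4.169).
assert vanishes(0, 1, 2) and vanishes(1, 1, 7) and vanishes(1, 2, 8)
assert vanishes(2, 1, 11) and not vanishes(2, 1, 9)
```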
This result implies that only the coefficients (C n ) b ϕ k a with g(a, b) = 4n + k − 2i where i ∈ N are non-zero. The difficulty in the computation of these remaining coefficients depends strongly on the value of g(a, b), as we have also seen in the constructions of the previous sections. This is due to the fact that for g(a, b) = 4n + k − 2i we have to contract i pairs of ladder operators, which essentially means that we have to solve an i-fold infinite sum. Thus, it is natural to first consider the coefficients (C n ) b ϕ k a with g(a, b) = 4n + k, since here no infinite sums appear. In this simple case it is possible to give a closed form expression with the help of the following generalizations of our notation:
Definition 3.16 (Notation for higher orders)
For n > 0 we define ∆ n recursively as in eq. 3.5.4, where D (n) [d A , J, r] is defined as in eq. E.8. As in previous sections, let
3.5.6
Remark: This definition of ∆ p 1 is consistent with the formula given in eq. 3.4.163. This can be seen as follows: Note that for n = 1 the parameters n 1 , . . . , n 5 are all restricted to be equal to zero. Thus, eq. 3.5.4 takes a simplified form, where we also used the fact that ∆ 0 [A = ⟨l q i ⟩] J = δ l i ,J according to the definition above. The sum over partitions of A into submultisets of cardinality 1 is equivalent to a sum over permutations of the elements of A. The right side of the above equation is invariant under such permutations (the submultisets P i do not appear), so we may replace this sum by a symmetry factor, which by definition is just s[A]. Therefore, eq. 3.4.163 is equivalent to the definition above.
Result 3.5.2 (The simplest class of non-vanishing OPE coefficients)
For g(a, b) = 4n + k, b = a + 4n+k i=1 e l q i and A = ⟨l q 1 , . . . , l q 4n+k ⟩, equation 3.5.8 holds.
Before we give the proof of this result, let us first present the following

Lemma 3
The counterterms appearing in the construction of an arbitrary left representative Y n (ϕ k , x) act by less than 4n + k ladder operators.

Proof of result 3.5.2 : We obtain eq. 3.5.9, since the matrix elements of all the counterterms in eq. 3.5.1 vanish, which also allows us to perform the limit y → x. Repetition of this procedure yields eq. 3.5.10. This process can be further iterated to obtain the factorized form, eq. 3.5.11. Hence, we can reduce the problem to finding an expression for (C n ) b ϕ a with g(a, b) = 4n + 1. Recall that we may use the field equation in order to determine this coefficient from eq. 3.5.13; in the second step we again used the factorization property, eq. 3.5.11. With the help of this relation we can establish an iteration: We start at n = 1 with the formula in eq. 3.5.14 with g(a, b) = 5, which is familiar from section 3.4.1. There we have found the result in eq. 3.5.15 with A = ⟨l q 1 , . . . , l q 5 ⟩ and b = a + 5 i=1 e l q i (recall that for g(a, b) = n the set I b a (n) consists of only one element). This is in accordance with eq. 3.5.8. Now suppose eq. 3.5.8 holds for all OPE coefficients up to (C n−1 ) b ϕ a . Then the right side of eq. 3.5.13 can be written as in eq. 3.5.16, where g(a, b) = 4n + 1 and A = ⟨l q 1 , . . . , l q 4n+1 ⟩ with b = a + 4n+1 i=1 e l q i . In the second step we used the definition of D (n) , eq. E.8, in order to solve the differential equation, and in the last line we used the definition of ∆ n , see eq. 3.5.4. Therefore, eq. 3.5.8 holds for all coefficients of the form (C n ) b ϕ a with g(a, b) = 4n + 1, and hence for all coefficients (C n ) b ϕ k a with g(a, b) = 4n + k due to the factorization property, eq. 3.5.11. As mentioned above, the construction of OPE coefficients (C n ) b ϕ k a becomes increasingly difficult for decreasing values of g(a, b), so it will be considerably more complicated to extend the above result to smaller values of g(a, b). Thus, instead of trying to determine the concrete form of these coefficients, we will spend the rest of this section discussing some general properties of arbitrary OPE coefficients, which follow from the patterns observed in our low order computations.
Powers of logarithms
Here we want to prove the familiar claim
Proposition 6
At n-th order in perturbation theory, OPE coefficients (C n ) c ab (x) contain at most the n-th power of log r.
Proof : We prove this statement iteratively. At zeroth order it is obviously true, as can be seen from our explicit construction of the general left representative Y 0 (|v a , x) of the free theory. Matrix elements of this normal ordered operator contain only finite sums of polynomial terms, and hence no logarithms. Now suppose the proposition is true at order n. Then according to our algorithm we proceed to order n + 1 by inverting the Laplace operator on (C n ) b ϕ 5 a . By assumption, this coefficient contains no higher powers than (log r) n . Now according to eq. E.8, inversion of the Laplace operator on such an expression can increase the power of log r at most by one, which implies that our claim also holds for (C n+1 ) b ϕ a . The next step in our scheme is to determine (C n+1 ) b ϕ 2 a using the consistency condition, eq. 3.5.17. All expressions in this formula are known, i.e. only coefficients up to (C n+1 ) b ϕ a appear. Thus, we know that each summand on its own fulfills our claim, so the only possible source for an additional power of the logarithm is the infinite sum over c. Despite our lack of knowledge of the explicit form of the coefficients in this sum, dimensional analysis allows us to put it into the form of eq. 3.5.18. From this estimate we can see that if the infinite sum over c is to produce additional powers of logarithms, the argument of this logarithm will clearly be 1 − |y|/|x|. However, in the limit y → x this expression is divergent and thus has to be cured by subtraction of an appropriate counterterm. Let us suppose the sum over c diverges as (log(1 − |y|/|x|)) r ; then the counterterm has to be proportional to (log |x|) p+q (log |x − y|) r |x| |b|−|a|−1 . After cancellation of the infinite parts, we are left with a finite contribution of the form (log |x|) p+q+r |x| |b|−|a|−1 . Now recall that every counterterm is a product of two OPE coefficients of order i and n + 1 − i respectively, which both satisfy our proposition. In other words, the combined power of logarithms in this product may not exceed n + 1. Therefore, for our counterterm of the form (log |x|) p+q (log |x − y|) r |x| |b|−|a|−1 we find the condition p + q + r ≤ n + 1, and it follows that also the finite result will fulfill our proposition.
This argumentation can be straightforwardly generalized to show that all coefficients (C n+1 ) b ϕ k a , and thus also the general coefficient (C n+1 ) c a b , fulfill our proposition, which completes the iteration.
Comparison to customary method
In the standard approach to quantum field theory OPE coefficients are determined via certain renormalized Feynman integrals [13]. In this section an exemplary computation of this type for a first order coefficient is presented. It will be shown that our method, i.e. the scheme outlined above, does indeed yield an equivalent result.
We want to determine the three-point coefficient (C 1 ) ϕ 3 ϕ,ϕ,ϕ (x 1 , x 2 , x 3 ) again in our three dimensional toy model with ϕ 6 interaction. In the usual approach this means that we have to perform the integrals in eq. 3.6.1, where the kernel appearing in the integrands is the propagator in our theory. Here we used the Feynman rules corresponding to the Lagrangian, see eqs. 3.1.1 and 3.2.2. In the last step of eq. 3.6.1 we simply shifted the integration variable y → y + x 3 . Power counting suggests that the integrals are logarithmically infrared-divergent. Therefore, we introduce a cutoff as regularization and treat the integrals separately. In the end, as the cutoff is removed, we will obtain a finite result for eq. 3.6.1.
Let us start with the first integral in eq. 3.6.1 and assume without loss of generality r 13 ≤ r 23 , where r ij = |x ij |. Then we can solve the integral using the Gegenbauer polynomial technique [34, 35]. Let r y = |y|, dΩ = sin Θ dΘ dφ and Λ ∈ R. Then we find a sequence of intermediate expressions, and finally we are ready to perform the radial integration, which is trivial in our present form of the integral. Now consider the other integral in eq. 3.6.1. In addition to the infrared-divergence, which we will again control using a cutoff, this integral is also ultraviolet-divergent. This divergence may be cured using differential renormalization [36,37,38], which works as follows: We may replace the integrand using an identity involving some renormalization parameter µ ∈ C, which holds for r ≠ 0. Thus, we obtain for the integral under consideration an expression where Gauss' theorem was applied in the second step. Subtraction of this result from eq. 3.6.8 shows that the logarithmic divergences cancel out. Hence we may safely remove the cutoff, i.e. take the limit Λ → ∞, and arrive at the result in eq. 3.6.11, where the abbreviations

s := r 13 /r 23 , c := x̂ 13 · x̂ 23   (3.6.12)

were used. Now let us compute the same coefficient in our framework. First, the coherence theorem, thm. 1, states that the desired three-point coefficient can be uniquely determined just from the knowledge of the two-point coefficients. This can be seen by application of the factorization axiom (assuming r 13 ≤ r 23 , i.e. s ≤ 1), eq. 3.6.13, where the sums go over all basis elements |v c ∈ V . The coefficients on the right side have been determined in section 3.4.1, see results 3.4.2 and 3.4.3. With the help of these results we can reduce the sums to the form of eq. 3.6.14, since the coefficients vanish for all other (linearly independent) choices of v c . Explicitly, we find for the coefficients on the right side (using the mentioned results from section 3.4.1 or appendix A) expressions that distinguish the cases n > 0 and n = 0, the latter proportional to log r 12 :
3.6.18
Thus we have by substitution into eq. 3.6.14
3.6.19
Finally, using the addition theorem of the spherical harmonics and the abbreviations s and c as introduced above, we arrive at the formula in eq. 3.6.20. Comparing this result to eq. 3.6.11, we find that the difference can be absorbed in the arbitrary choice of the renormalization parameter µ. Therefore, the results obtained from the two methods are indeed equivalent.
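The differential renormalization step used above rests on an identity of the form 1/r³ = −∆(log(µr)/r) for r ≠ 0, which is the standard three-dimensional variant; since the explicit equation is not reproduced here, I assume this is the identity meant. It is easy to verify symbolically:

```python
# Symbolic check of the 3d differential renormalization identity
#   Delta( log(mu*r)/r ) = -1/r^3   for r != 0,
# assumed to be the identity referred to in the text. The renormalization
# parameter mu drops out of the Laplacian, as it must.
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
f = sp.log(mu * r) / r

# Radial part of the 3d Laplacian on a rotation invariant function f(r).
laplacian_f = sp.diff(f, r, 2) + 2 * sp.diff(f, r) / r
print(sp.simplify(laplacian_f))   # -> -1/r**3
```

The logarithm trades the non-integrable short-distance singularity 1/r³ for a total derivative, which is what allows Gauss' theorem to be applied in the computation above.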
Conclusions and outlook
This thesis contains the first concrete results on the construction of an interacting, perturbative quantum field theory in the framework of [1] (see also chapter 2). Adopting a Fock-space representation and diagrammatic notation from Hollands and Olbermann [2] for the free theory, we have explicitly constructed all OPE coefficients of the form (C 1 ) b ϕ a and (C 1 ) b ϕ 2 a , as well as a large class of coefficients (C 1 ) b ϕ k a , k > 2 and (C 2 ) b ϕ a in a model theory with ϕ 6 -interaction on 3-dimensional Euclidean space.
These results were obtained with the help of an iterative algorithm first proposed in [1] (see also section 3.2), which contains an inherent analog of renormalization via subtraction of counterterms. It was found by Hollands and Olbermann that this procedure could be neatly replaced in the free theory by a normal ordering prescription on the ladder operators in the mentioned Fock space. As one might expect, however, the process of renormalization in interacting theories is considerably more complicated, in analogy to usual formulations of perturbative quantum field theory. The constructions of section 3 constitute the first explicit example of this non-trivial process.
We have found that one can again incorporate renormalization by bringing all ladder operators into normal order. However, in the interacting case there is a finite difference between this procedure and the subtraction of counterterms arising from the general algorithm (see e.g. eq. 3.2.16). In order to compensate for this difference, we have to add additional remainder operators, which in a sense contain all the non-trivial information on the renormalization procedure, and are the main computational obstacle to proceeding with the iterative scheme. In order to find explicit expressions for these remainder terms, one has to perform divergent multiple series, subtract divergent counterterms and extract the finite difference. In result 3.4.4 we have defined the first operator of this kind, and in result 3.4.7 the second one is partially given. The computational machinery applied to obtain these results consists of identities of special functions and of hypergeometric series. Most notably, the results of [39] and [40] have been of great help in the analysis of the mentioned infinite sums. At the heart of these identities lies Dougall's formula, see eq. F.4, which may be generalized to arbitrary dimension and should thus be of importance in more general models [2]. These findings constitute the first insights into the structure of the infinite sums appearing in the framework.
Apart from these specific results on the OPE coefficients at first and second perturbation order, we have observed some general structures in the construction, which also apply to higher orders. These results are presented in section 3.5. We were able to give a result for a particularly simple class of OPE coefficients to arbitrary orders by extrapolating the knowledge gathered in first order computations. In addition, a general statement about the powers of logarithms appearing in arbitrary OPE coefficients was made. Finally, in section 3.6 we have shown in an example that our approach does indeed yield the same results as standard ones.
Future research will be aimed at a better conceptual understanding of the underlying mathematics of the framework, e.g. the interpretation of the left representatives as vertex operators [2], but also at a better understanding of the explicit computational obstacles. In particular, one would like to find ways to treat the infinite sums inherent in the framework in a general way. The results of this thesis may provide a first step into this direction, and future work could extend the results to higher orders and also to more general models in terms of spacetime dimension and type of coupling. Furthermore, a generalization of the framework to arbitrary (globally hyperbolic) background manifolds would be of interest, since the OPE is expected to play a fundamental role in the formulation of quantum field theory on curved spacetimes [14]. It may also be fruitful to study the relation to standard renormalization theory more deeply, and possibly to incorporate renormalization group techniques.
B Multisets
A multiset is a generalization of the notion of a set, in which elements may occur finitely many times. Equivalently, one may view a multiset as an unordered tuple. The concept was first used by Dedekind in 1888 and has since found applications in various fields of applied mathematics [41,42]. A formal definition is given by

Definition B.1 (Multiset)
A multiset is a pair (D, f ), where D is a set and f : D → N is a function assigning to each element of D its multiplicity.

Remark: Any set A is a multiset (A, χ A ), where χ A is the characteristic function. Throughout this thesis, we denote multisets by capital fraktur letters A, B, C, . . ..
One may also define a multiset by giving the list of its elements. In order to avoid confusion, we will use ⟨ · ⟩ as brackets. Then for example the multiset A = (D = {a, b, c}, f ) with a given multiplicity function f may equivalently be written as a list of its elements, see eq. B.2. Many properties of sets can be naturally generalized to multisets. Here we only need the notion of cardinality. As in the case of ordinary sets, the notions of union and intersection are commutative, associative, idempotent and distributive. One can illustrate these operations on multisets with the following example: Let A be the multiset given in eq. B.2 and further let B be a second multiset; then the definitions above imply the corresponding explicit unions, intersections and sums (cf. the sketch below).
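In code, multisets are conveniently represented by collections.Counter, which realizes exactly the multiplicity function of definition B.1. A short sketch with illustrative elements of my own choosing:

```python
# Multisets as Counters: the Counter stores the multiplicity function f
# of definition B.1. The operations below mirror cardinality, union,
# intersection and multiset sum.
from collections import Counter

A = Counter({'a': 2, 'b': 1})          # the multiset <a, a, b>
B = Counter({'a': 1, 'b': 2, 'c': 1})  # the multiset <a, b, b, c>

cardinality = sum(A.values())          # |A| = 3
union = A | B                          # elementwise max of multiplicities
intersection = A & B                   # elementwise min of multiplicities
msum = A + B                           # multiset sum: add multiplicities

print(cardinality, dict(union), dict(intersection), dict(msum))
```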
C Hypergeometric series
Due to their appearance in calculations in all fields of physics, hypergeometric series have received increasing attention over the last decades. In this chapter, after a brief introduction into the notation, definitions and basic results on hypergeometric series, we state the identities used in the computations of this thesis. For proofs and additional results on the topic we refer the reader to the literature [31,43]. The series

2 F 1 (a, b; c; z) = Σ n≥0 [(a) n (b) n / (c) n ] z n /n!   (C.1)

is called hypergeometric series or also Gaussian hypergeometric series, as it was introduced into analysis by Gauss in 1812. Here the Pochhammer symbol (a) n = Γ(a + n)/Γ(a), where Γ(z) is the Gamma function, was employed for convenience. We call a, b and c the parameters of the hypergeometric series and z its argument. A natural generalization of eq. C.1 is

p F q (α 1 , . . . , α p ; β 1 , . . . , β q ; z) = Σ n≥0 [(α 1 ) n · · · (α p ) n / ((β 1 ) n · · · (β q ) n )] z n /n!

with α i , β j ∈ C for all i ∈ {1, . . . , p}, j ∈ {1, . . . , q} and p, q ∈ N, which is known as the generalized hypergeometric series.
If ω := Σ j β j − Σ i α i ∈ Z, then it is customary to refer to the corresponding class of hypergeometric series as ω-balanced hypergeometric series. A one-balanced series is also called Saalschützian series.
It is often necessary in the calculations of chapter 3.4 to analyze the divergent behavior of zero-balanced hypergeometric series in the limit z → 1. Therefore, the asymptotic formula for such series will be of great value. Here the non-divergent part L is defined (see [43]) via a hypergeometric series with shifted parameters a j + 1, 1, 1, . . . (eq. C.7).
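The logarithmic blow-up of a zero-balanced series at z → 1 can be observed numerically. A sketch for the simplest case 2 F 1 (a, b; a + b; z); the asymptotic formula used for comparison is the standard one (Abramowitz & Stegun 15.3.10), quoted here as an assumption since the thesis's own formula is not reproduced above:

```python
# Numerical illustration of the z -> 1 log divergence of a zero-balanced
# hypergeometric series, 2F1(a, b; a + b; z). Requires mpmath.
from mpmath import mp, hyp2f1, gamma, digamma, log, euler

mp.dps = 30
a, b = mp.mpf(1) / 3, mp.mpf(1) / 2

for eps in (1e-4, 1e-6, 1e-8):
    z = 1 - mp.mpf(eps)
    exact = hyp2f1(a, b, a + b, z)
    # Leading asymptotics: -(Gamma(a+b)/(Gamma(a)Gamma(b))) *
    #   (log(1-z) + psi(a) + psi(b) + 2*EulerGamma)
    asym = -gamma(a + b) / (gamma(a) * gamma(b)) * (
        log(1 - z) + digamma(a) + digamma(b) + 2 * euler)
    print(eps, exact, asym)   # agreement improves as z -> 1
```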
D Spherical symmetries in Euclidean spaces
As probably the most intuitive kind of symmetry that is frequently present in various problems in physics, rotational symmetries have naturally been studied extensively, as documented in the vast amount of literature on the topic. By axiom 2, symmetries of this kind will also appear in our framework, and should thus be expected to play a prominent role in simplifications of explicit calculations. In the following two subsections, we want to recall some basic results from the analysis of such symmetries, first for the general case of arbitrary dimensional space and then for the special 3-dimensional case of relevance for the calculations of this thesis.
The first subsection will mainly be concerned with some special functions related to spherical symmetries, most notably the spherical harmonics in D dimensions [44,30]. After stating the definitions and basic properties of these functions, we will derive some results needed in section 3.1, in particular concerning the relationship between a basis of spherical harmonics and traceless, totally symmetric tensors. Then additional properties of the special D = 3 dimensional case are recalled [33,45,46]. Emphasis will be shifted to group theoretical results, which were first introduced into physics in the context of quantum mechanics.
D.1 Euclidean spaces of arbitrary dimension D
In a setting where spherical symmetries are present, it is often desirable to express formulas in terms of functions which are invariant under these symmetries. The following three definitions (of which the first, definition D.1, introduces the notion of an invariant space of functions) describe convenient properties we would like these functions to possess.
Definition D.2 (Irreducibility )
An invariant space I is called reducible if it can be split into two nontrivial invariant subspaces I 1 , I 2 with I 1 ⊥ I 2 . Otherwise, the space is called irreducible.
Definition D.3 (Primitive spaces)
A space I is called primitive if it is invariant and irreducible.
In the following we turn to the explicit construction of a primitive space in arbitrary dimension D, namely the space of spherical harmonics. The latter are most straightforwardly introduced with the help of the linear space H n (D) of homogeneous polynomials of degree n in D dimensions, which consists of elements of the form

Σ |i (D) |=n a i 1 ,...,i D x 1 i 1 x 2 i 2 · · · x D i D

with a i 1 ,...,i D ∈ C, i j ∈ {1, . . . , n} and |i (D) | = i 1 + i 2 + · · · + i D . Furthermore, a homogeneous polynomial H n of degree n ≥ 2 that satisfies ∆H n = 0 is called homogeneous or solid harmonic. Alternatively one might give a more abstract, but also more easily extendible definition of spherical harmonics. Namely, we can define the space Y n (D) as the space of eigenfunctions of the Beltrami operator ∆ * (D−1) on S D−1 , defined via ∆ (D) = ∂ 2 r + ((D − 1)/r) ∂ r + (1/r 2 ) ∆ * (D−1) , to the eigenvalue −n(n + D − 2).
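The dimension of Y n (D), denoted N (n, D) below, follows from this polynomial picture by standard counting: dim H n (D) = C(n + D − 1, D − 1), and the harmonicity condition removes dim H n−2 (D) constraints. A quick numerical cross-check (this counting argument is textbook material, not spelled out in the text):

```python
# Dimension of the space of spherical harmonics of degree n in D
# dimensions: N(n, D) = dim H_n(D) - dim H_{n-2}(D), with
# dim H_n(D) = C(n + D - 1, D - 1) homogeneous polynomials.
from math import comb

def dim_H(n: int, D: int) -> int:
    return comb(n + D - 1, D - 1) if n >= 0 else 0

def N(n: int, D: int) -> int:
    return dim_H(n, D) - dim_H(n - 2, D)

# D = 3 reproduces the familiar 2n + 1 used in appendix D.2:
assert all(N(n, 3) == 2 * n + 1 for n in range(10))
print([N(n, 4) for n in range(5)])   # D = 4: [1, 4, 9, 16, 25]
```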
In the following, we sum up some basic properties of spherical harmonics without giving any proofs, which may be looked up in the cited literature:
• from here on, we will denote by Y nm (D; x̂) an orthonormal basis of Y n (D);
• the following addition theorem holds for the basis elements defined above (eq. D.1.8), where P n (D; t) is the Legendre polynomial in D dimensions, defined as follows: L n (D; x) is called the Legendre harmonic of degree n in D dimensions, and the Legendre polynomial P n (D; t) with t ∈ R is defined via L n (D; x̂) = P n (D; t), where polar coordinates x̂ = t ê D + √(1 − t 2 ) x̂ D−1 were used.
Remarks:
• P n (D; t) is a polynomial of degree n in t with the properties P n (D; 1) = 1 and P n (D; −t) = (−1) n P n (D; t);
• the generating function of the Legendre polynomials is given in eq. D.1.9, where D ≥ 3, 0 ≤ x < 1 and −1 ≤ t ≤ 1.
• Legendre polynomials obey the orthogonality relation in eq. D.1.10.

Of course, many additional properties and alternative definitions of spherical harmonics and Legendre polynomials can be found in the literature on the subject [44,30]. In the context of this thesis, however, the above general relations should be sufficient, and we now focus on some specific results that are needed in the calculations of our framework. First, in section 3 we made use of the isomorphism (eq. D.1.11) between spherical harmonics Y nm and totally symmetric, traceless, orthonormal tensors t lm . Here we want to derive the normalization factor c l . Orthonormality of the t lm means

t lm t lm = 1 .   (D.1.12)

As mentioned above, the space of spherical harmonics for given l has dimension N (l, D), which implies by the isomorphism, eq. D.1.11, that this is also true for the t lm . With this information we can determine c l as follows: The third equality holds because of rotational invariance, i.e. we simply performed the angular integral over y by replacing y → ê, where ê is any unit vector, and multiplying with the surface area of the (D − 1)-dimensional sphere. We also abbreviated x · ê by x e . Then the addition theorem, eq. D.1.7, was used and the integral over the (D − 2)-dimensional sphere was performed. Finally, by the orthogonality relation for the Legendre polynomials, eq. D.1.10, and the expansion (see [44]), we arrive at the normalization factor c l of section 3.1. In another calculation of that section we used the identity in eq. D.1.16, which we show to be valid now. First observe eq. D.1.17. Successive application of this relation yields eq. D.1.18. From eq. D.1.11 one can easily derive (as also noted in eq. 3.1.20)

x {µ 1 · · · x µ l } = c −1 l (t lm ) µ 1 ...µ l r l Y lm (x̂) .   (D.1.19)

Substitution of this formula into eq. D.1.18 and simple algebraic manipulation gives the proclaimed result.
D.2 3-dimensional Euclidean space
Having discussed the special functions appearing in the analysis of spherical symmetries in an Euclidean space of arbitrary dimension D in the previous section, we from here on focus on the case D = 3, which describes the space our toy model of section 3 lives on. First, some of the above general results are repeated in the special D = 3 version, before we come to additional group theoretic results familiar from the analysis of angular momenta in quantum mechanics. Let us rewrite the central formulas of the previous section for D = 3. First, the space Y l (D) of spherical harmonics of degree l has dimension N (l, 3) = 2l + 1. The basis elements Y lm of this space are parametrized by m ∈ Z, with −l ≤ m ≤ l. Further, Y l (D) is an irreducible, O(3)-invariant, space of functions for any l, and the spherical harmonics are complete in C(S 2 ). In the following, we omit the label D and implicitly assume D = 3, e.g. we write Y lm (x̂) instead of Y lm (3; x̂). The addition theorem then takes the form of eq. D.2.2. Here P l is the usual Legendre polynomial, with generating function as given in [31]. As mentioned in section 3.2, we chose a toy model on 3-dimensional Euclidean space, because the representation theory of the corresponding symmetry group is comparably simple and familiar from the quantum mechanics of angular momentum [33,45,46], where spherical harmonics are the eigenfunctions of the operator of orbital angular momentum. In the remainder of this section, we will be concerned with the decomposition of products of spherical harmonics into irreducible parts. By this procedure, we can put our OPE coefficients into a convenient form that simplifies the differential equations 3.2.8. For this purpose, let us briefly recall the addition (or coupling) of angular momenta from quantum mechanics. Given two systems with angular momentum quantum numbers (j 1 , m 1 ) and (j 2 , m 2 ) and corresponding eigenstates |j 1 m 1 and |j 2 m 2 of the angular momentum operators J 2 1 , J 1z and J 2 2 , J 2z , there are different ways to express the combined system. On the one hand, one may use the direct product |j 1 m 1 ⊗ |j 2 m 2 = |j 1 j 2 m 1 m 2 of the constituent states, which is an eigenstate of all four operators J 2 1 , J 1z , J 2 2 and J 2z . It is easy to see that this product state is reducible, despite the irreducibility of both |j 1 m 1 and |j 2 m 2 . Alternatively, one may choose the system to be described by eigenstates |j 1 j 2 JM of the operators J 2 1 , J 2 2 , J 2 = (J 1 + J 2 ) 2 and J z = J 1z + J 2z . Contrary to the direct product case above, these states are also eigenstates of the total angular momentum operator J 2 and hence irreducible. It is an important fact for quantum mechanics that there exists a unitary transformation between the two mentioned sets of states describing the coupled system, which has the form of eq. D.2.5, where the expansion coefficients ⟨j 1 j 2 m 1 m 2 |j 1 j 2 JM ⟩ = ⟨j 1 j 2 JM |j 1 j 2 m 1 m 2 ⟩ are called Clebsch-Gordan coefficients (CG coefficients). For the sake of brevity, we will often use the notation ⟨j 1 j 2 m 1 m 2 |JM ⟩ instead. The Wigner 3j-symbol, defined by eq. D.2.6, is also widely used because of its additional symmetry properties.
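Both the D = 3 addition theorem and the CG machinery below can be spot-checked numerically. A small sketch for the addition theorem in its standard form P l (x̂ 1 · x̂ 2 ) = 4π/(2l + 1) Σ m Y lm (x̂ 1 ) Y lm (x̂ 2 )* (assumed to coincide with eq. D.2.2), using sympy; the sample angles are arbitrary:

```python
# Numerical spot check of the D = 3 addition theorem of the spherical
# harmonics. sympy's Ynm(l, m, theta, phi) uses theta = polar angle.
import sympy as sp
from sympy import Ynm, legendre, cos, sin

def Y(l, m, theta, phi):
    # expand(func=True) rewrites Ynm explicitly so it evaluates numerically
    return sp.N(Ynm(l, m, theta, phi).expand(func=True))

l = 3
t1, p1, t2, p2 = 0.4, 1.1, 1.3, 2.5
cos_gamma = cos(t1) * cos(t2) + sin(t1) * sin(t2) * cos(p1 - p2)

lhs = sp.N(legendre(l, cos_gamma))
rhs = sp.N(4 * sp.pi / (2 * l + 1)) * sum(
    Y(l, m, t1, p1) * sp.conjugate(Y(l, m, t2, p2)) for m in range(-l, l + 1))

print(lhs, sp.re(rhs))   # both equal P_l(cos gamma); Im(rhs) ~ 0
```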
Let us briefly recall some basic features of CG coefficients:

• CG coefficients satisfy the orthogonality relations;
• they vanish unless the triangle inequality D.2.9 and the condition m₁ + m₂ = M are fulfilled.

Before we come to the desired relation transforming a product of spherical harmonics into irreducible parts, we study the rotation matrices D^J_{M′M}(α, β, γ), defined by

ψ_JM(x′) = Σ_{M′} D^J_{M′M}(α, β, γ) ψ_JM′(x) ,

where ψ_JM is the wavefunction of a quantum mechanical system with angular momentum quantum numbers J and M, and x′ is obtained from x by a rotation of the coordinate system by the Euler angles (α, β, γ). Thus, rotating the coordinate system on both sides of eq. D.2.5 by ω = (α, β, γ) and using orthogonality of the states, we arrive at the Clebsch-Gordan series

D^{j₁}_{m₁m₁′}(ω) D^{j₂}_{m₂m₂′}(ω) = Σ_{J,M,M′} ⟨j₁j₂m₁m₂|JM⟩ ⟨j₁j₂m₁′m₂′|JM′⟩ D^J_{MM′}(ω)    (D.2.11)

and, equivalently,
D^J_{MM′}(ω) = Σ_{m₁,m₂,m₁′,m₂′} ⟨j₁j₂m₁m₂|JM⟩ ⟨j₁j₂m₁′m₂′|JM′⟩ D^{j₁}_{m₁m₁′}(ω) D^{j₂}_{m₂m₂′}(ω) .    (D.2.12)
Now let us draw the connection to the coupling of spherical harmonics and consider the rotation of a spherical harmonic, where the second equality in that computation is a standard identity from the theory of special functions. Comparing this equation to the addition formula, eq. D.2.2, we immediately obtain the connection between rotation matrices and spherical harmonics, eq. D.2.14. Substituting this connection into eqs. D.2.11 and D.2.12, we finally arrive at the desired coupling rule for spherical harmonics with the same argument.
S_{l₁m₁}(x̂) S_{l₂m₂}(x̂) = Σ_l ⟨l₁l₂m₁m₂|lm⟩ ⟨l₁l₂00|l0⟩ S_lm(x̂) ,  with m = m₁ + m₂ .    (D.2.15)

Here we used the unnormalized spherical harmonics for convenience. The CG coefficient with magnetic quantum numbers equal to zero, i.e. the coefficient of the form ⟨l₁l₂00|l0⟩, is sometimes called the parity coefficient in the literature (see e.g. [46]), because it vanishes unless the sum of its entries is an even number, i.e.

⟨l₁l₂00|l0⟩ = 0  if  l₁ + l₂ + l = 2n + 1 with n ∈ N .    (D.2.16)

Further, this coefficient is related to the Legendre polynomials by eq. D.2.17 and explicitly takes the values given in eq. D.2.18. Eqs. D.2.15 are the central formulae of this section. It is obvious that by successive application of these equations, one may decompose products of any number of spherical harmonics.
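Both the parity property D.2.16 and the Legendre-integral relation for the parity coefficient can be checked symbolically. In the sketch below (our addition), the identity ∫₋₁¹ P_{l₁} P_{l₂} P_l dy = 2⟨l₁l₂00|l0⟩²/(2l + 1) is the standard Gaunt-type form, which we take to be the content of eq. D.2.17:

from sympy import Integer, integrate, legendre, simplify, symbols
from sympy.physics.wigner import clebsch_gordan

y = symbols('y')
for l1 in range(3):
    for l2 in range(3):
        for l in range(abs(l1 - l2), l1 + l2 + 1):
            cg = clebsch_gordan(Integer(l1), Integer(l2), Integer(l), 0, 0, 0)
            if (l1 + l2 + l) % 2 == 1:
                assert cg == 0                       # parity property, eq. D.2.16
            lhs = integrate(legendre(l1, y) * legendre(l2, y) * legendre(l, y),
                            (y, -1, 1))
            assert simplify(lhs - 2 * cg**2 / (2 * l + 1)) == 0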
E The characteristic differential equation
As we discussed in section 3.2, our iterative scheme for the construction of OPE coefficients consists basically of two steps: perform infinite sums of the form of eq. 3.2.16 to determine all coefficients at a given order, and use the field equation, or more precisely eq. 3.2.8, to proceed to the next perturbation order. As mentioned in that section, most calculational effort goes into the former step, i.e. the infinite sums. In the present section we show how to perform the latter step, i.e. solving the differential equation (E.1) for any i, k ∈ N and in arbitrary spacetime dimension D. We assume that any left representative takes values in a ring of functions Y; thus, an arbitrary element Y_i(|v⟩, x) ∈ Y is a combination of basis functions with coefficients A_{i,d,q,J,M} ∈ End(V). Note that this assumption is consistent with our free-theory results. Now, in order to find a solution to the differential equation E.1, we define a right inverse to the Laplacian, i.e. an operator Δ⁻¹ ∈ End(Y) satisfying

Δ (Δ⁻¹ Y) = Y  for all Y ∈ Y .

A solution to the differential equation is then simply found by the application of this operator to Y_{i−1}(ϕᵏ, x): since Δ(Δ⁻¹ Y_{i−1}(ϕᵏ, x)) = Y_{i−1}(ϕᵏ, x), the function Δ⁻¹ Y_{i−1}(ϕᵏ, x) is the desired solution. For our explicit choice of Δ⁻¹, expressing the Laplace operator in spherically symmetric form, i.e. using the form of Δ given below def. D.4, one can check by straightforward calculation that this definition does indeed yield a right inverse to the Laplacian. Note, however, that this choice is not unique, since we may add any A ∈ Y with A ∈ ker Δ and would again obtain a right inverse. These functions A are the harmonic polynomials in x with values in End(V).
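The check that such a Δ⁻¹ is a right inverse boils down to the radial identity Δ(r^a Y_lm) = [a(a + 1) − l(l + 1)] r^{a−2} Y_lm, quoted here in its D = 3 form. A sympy spot-check (our addition) for l = 1, m = 0, where Y₁₀ is proportional to cos θ and the azimuthal term of the Laplacian drops out, reads:

from sympy import symbols, sin, cos, diff, simplify

r, th, a = symbols('r theta a', positive=True)
f = r**a * cos(th)                      # r^a * Y_10, up to normalization
lap = (diff(r**2 * diff(f, r), r) / r**2
       + diff(sin(th) * diff(f, th), th) / (r**2 * sin(th)))
assert simplify(lap - (a*(a + 1) - 1*(1 + 1)) * r**(a - 2) * cos(th)) == 0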
In the computations of section 3.4
F The characteristic sum
Due to the iterative nature of the construction described in section 3, one expects certain patterns to appear in the calculation of OPE coefficients. In this section we analyze one such expression, which characteristically shows up in first order calculations. Namely, as we saw in section 3.4, sums of the general form S(l₁, l₂; a), defined in eq. F.1 with l₁, l₂, a ∈ N, are typically present at first perturbation order. The denominator is familiar from the solution of the differential equation relating the coefficients of the free theory to the first order coefficients, see eq. E.10, while the CG coefficient results from the coupling of angular momenta as discussed in appendix D.2. In the following we will first give some general simplifications of this sum, and afterwards distinguish different special cases of the parameters. This analysis is based on the results of [39,47,48] and [40].

First note that we may extend the summation limits arbitrarily, as the CG coefficients automatically vanish if the triangle inequality D.2.9 is not satisfied. Further, we may express the CG coefficients through an integral over Legendre polynomials by eq. D.2.17. In order to get rid of the sum over J we would now like to apply Dougall's formula (see [31]), but as a ∈ Z, this is clearly not possible at this stage. Hence, we first have to use the little trick of writing our sum as a limit ν → a (eq. F.5), in which the denominator a(a + 1) − J(J + 1) is replaced by ν(ν + 1) − J(J + 1), with the Legendre function P_ν(y) and a compensating term P_a(y)/(2a + 1) appearing under the integral against P_{l₁}(y) P_{l₂}(y). Here we may use eq. F.4, and after carrying out some derivatives and the limit we arrive at the convenient form F.6. The derivative of the Legendre function with respect to its degree has been discussed in [40], where its explicit form was derived; there R_n is a polynomial and ψ is the digamma function [31,32] with the Euler-Mascheroni constant γ. Using this explicit form of the derivative in eq. F.6, we obtain yet another expression for our sum.
F.10
This concludes our discussion of the general form of the sum S, and we will now use these results in order to further simplify the sum for special choices of the parameters l₁, l₂ and a.
F.1 The cases a < |l₁ − l₂| and a > l₁ + l₂
Let us first consider the case a < |l₁ − l₂| in eq. F.10. First we note that there is no sum over k in this case. Further, all expressions containing the integral ∫₋₁¹ P_{l₁}(y) P_{l₂}(y) P_a(y) dy vanish, as the triangle inequality D.2.9 is not satisfied. Therefore, only the term containing the logarithm remains:
S(l₁, l₂; a < |l₁ − l₂|) = ½ ∫₋₁¹ P_{l₁}(y) P_{l₂}(y) P_a(y) log(1 − y) dy .    (F.1.1)

Similarly, the integral ∫₋₁¹ P_{l₁}(y) P_{l₂}(y) P_a(y) dy also vanishes if a > l₁ + l₂. In this case, the sum over k in eq. F.10 runs from |l₁ − l₂| to l₁ + l₂. Recalling the original form of our sum S from eq. F.1, we notice that the sum over k is just 2·S in the case at hand. Subtracting 2·S on both sides of eq. F.10 and multiplying by (−1), we obtain the simple form

S(l₁, l₂; a > l₁ + l₂) = −½ ∫₋₁¹ P_{l₁}(y) P_{l₂}(y) P_a(y) log(1 − y) dy .    (F.1.2)

F.2 The case a + l₁ + l₂ = odd

If the sum of the parameters l₁, l₂ and a is an odd number, the sum S simplifies drastically. This is also the case that has been studied most extensively in the literature (see [39,47,48]). The simplification is most easily derived from the form F.6 of our sum. To begin with, note that the second summand, i.e. the term without the derivative, vanishes due to the parity requirement of the CG coefficients, eq. D.2.16 (alternatively, one may deduce this result from the fact that we integrate over a function of odd degree). Additionally, as we will show in the following, the derivative of the Legendre function in the remaining term may be written in a convenient form in the case at hand.
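The vanishing of the plain triple-Legendre integral outside the triangle range, used in both cases above, is easy to confirm symbolically; the following two instances (our addition) violate the triangle inequality D.2.9 from above and from below:

from sympy import integrate, legendre, symbols

y = symbols('y')
# a = 5 > l1 + l2 = 3, and a = 0 < |l1 - l2| = 1, respectively:
assert integrate(legendre(2, y) * legendre(1, y) * legendre(5, y), (y, -1, 1)) == 0
assert integrate(legendre(4, y) * legendre(3, y) * legendre(0, y), (y, -1, 1)) == 0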
G Proofs
Here we gather some lengthy, more involved computations for the sake of readability of the main text.
G.1 Proof of result 3.4.4
Translating the corresponding diagrams into an explicit equation using the rules of section 3.3, we arrive at the formula G.1.1. We are especially interested in the sum over the contraction index l, since only this sum may contain infinite expressions after taking matrix elements. Therefore we omit the first line of the above equation in the following calculations; it may easily be restored in the end. By a straightforward computation one can check that if we replace d + 2 → −d − 2, the expression in brackets stays the same except for the sign in front of the logarithmic term in the last line, which then becomes a minus. Thus, it is sufficient to consider only the case d + 2 ≥ 0, since the other values of d can then simply be determined by the mentioned symmetry.
The logarithmic terms
Let us first focus on the logarithmic terms in eq. G.1.1, just as we claimed in eqs. 3.4.58 and 3.4.59. Note that this expression is zero when d + j = even, since in this case the 3j-symbol vanishes due to the parity condition.
The remaining expressions in eq. G.1.1 are of the typical form discussed in appendix F, where it was observed that sums of this type behave very differently for varying choices of the parameters l, j and d (corresponding to l₁, l₂ and a in eq. F.1). For that reason, we distinguish different cases in the analysis of the above formula: the case d + 2 > j, the case d + 2 ≤ j and d + j = odd, and the case d + 2 ≤ j and d + j = even (still assuming d + 2 ≥ 0). These cases are related to the number q of annihilation operators in the left representative of eq. G.1.1, as we will see in the following.
The case q = 2

The fact that the result for (R₁)_{ϕ²}(q = 2) is much simpler than for (R₁)_{ϕ²}(q ≠ 2) is related to the following lemma, which cannot be applied in the latter case: the sums in question vanish

if |d + 2| > j and d + j = even .    (G.1.7)

Proof: We assume d + 2 > 0 for convenience (recall that the results for negative values of d + 2 may be obtained from the symmetry around d = −2). Let us first consider the case d + 2 > j. Then also l + d + 2 > l + j, which according to eq. F.1.2 gives

−½ ∫₋₁¹ P_l(y) P_j(y) P_{l+d+2}(y) log(1 − y) dy    (G.1.8)

for the first sum we want to discuss. Note that here the denominator does not vanish for any value of J, due to our assumption on d. Now let us come to the second sum and first consider only the case l ≥ d + 2, neglecting the remaining finite sum. Then it is evident that the inequality l − d − 2 < |l − j| holds, so that we can apply eq. F.1.1, obtaining

½ ∫₋₁¹ P_{l+d+2}(y) P_j(y) P_l(y) log(1 − y) dy .    (G.1.9)

Comparing the two equations above, we see that these infinite sums cancel (G.1.10), which together with the symmetry around d = −2 confirms the first statement of our lemma.
Eq. F.2.2 then yields the simplification.
We proceed with the case d + j = odd, which is just the type of sum considered in section F.2 of the appendix. Therefore we may now use eq. F.2.4, which leads to the following simplifications (again assuming d + 2 > 0 for the moment), involving integrals of the form

∫₋₁¹ P_{l+d+2}(y) P_j(y) Q_l(y) dy .    (G.1.12)

As mentioned in the appendix, these integrals vanish if the triangle inequality is satisfied by the parameters (see eq. F.2.5), and they cancel each other if this inequality is not satisfied (see eq. F.2.6). Therefore, also in this case we observe that the infinite sums cancel, and we again obtain the result G.1.7, as claimed in the lemma.
It remains to show that the sum under investigation vanishes if |d + 2| > j and d + j = even. In view of the previous results, this suggests that we have to show that the corresponding sum equals 0 (eq. G.1.13) if |d + 2| > j and d + j = even. Let us again assume d + 2 > 0 for convenience. Since d + j + 1 = odd, we may apply eq. F.2.4, which produces integrals of the form

∫ dy P_j(y) P_l(y) Q_{d+1−l}(y) .    (G.1.14)

According to eq. F.2.5, this integral vanishes if the inequality

|l − j| ≤ d + 1 − l ≤ l + j    (G.1.15)

is satisfied. This implies that only two partial sums remain, and these add up to

∫ dy P_j(y) Q_l(y) P_{d+1−l}(y) = 0 .    (G.1.16)

The second line follows if we change the summation index l → d + 1 − l, and in the last equality we used eq. F.2.6. Thus, the proof of the lemma is complete.
Applying eq. F.2.4, we obtain the following.
We are now ready to prove our results for (R₁)_{ϕ²}(q ≠ 2), eqs. 3.4.58 and 3.4.57. Let us start with the case q ∈ {0, 4}, i.e. in G.1.1 we consider only the contribution including four creation or four annihilation operators, b_{±l₁} ⋯ b_{±l₄}. The specific form of the left representative Y₀(ϕ⁴, x) = :(Y₀(ϕ, x))⁴: then suggests that the power of r in these contributions is d = l₁ + … + l₄ for the contribution containing only creation operators, and d = −l₁ − … − l₄ − 4 for the part including four annihilation operators. Further, the coupling of the spherical harmonics S_{l₁m₁} ⋯ S_{l₄m₄} restricts the possible values of the spin j in eq. G.1.1 to j = l₁ + … + l₄ − 2k for k ∈ N (due to the parity condition and the triangle inequality satisfied by the intertwiners). The resulting form, eq. G.1.18, despite its complicated appearance, allows us to analyze the divergent behavior of the infinite series by simply investigating the parameters of the hypergeometric functions involved.
As mentioned in the appendix, hypergeometric series of the general form

p+1F_p(α₁, …, α_{p+1}; β₁, …, β_p; 1)    (G.1.19)

converge for Σ_j β_j − Σ_i α_i =: k > 0. Thus, we immediately see that the hypergeometric functions ₄F₃ in eq. G.1.18 are convergent, as they are 1-balanced, i.e. k = 1. The series of the type ₃F₂, on the other hand, are 0-balanced and should thus not be expected to converge. The precise divergent behavior can be analyzed with the help of eq. C.6 from the appendix, which tells us that these hypergeometric series approach infinity as

Γ(α₁)Γ(α₂)Γ(α₃)/(Γ(β₁)Γ(β₂)) · ₃F₂(α₁, α₂, α₃; β₁, β₂; 1 − ε) = 2ψ(1) − ψ(α₁) − ψ(α₂) − log ε    (G.1.20)

in the limit ε → 0, if the series on the left is zero-balanced and if Re(α₃) > 0. Applying this formula to eq. G.1.18 and extracting just the divergent part (i.e. the part proportional to the logarithm), we obtain eq. G.1.21. Thus, we have indeed verified that all infinities cancel without the need of any exterior renormalization procedure. However, we do not obtain a result as simple as eq. G.1.7 in this case, as the remaining hypergeometric functions of the type ₄F₃ in eq. G.1.18 do not seem to cancel in general, leading to the complicated result G.1.22. This completes the proof of result 3.4.4. It should be noted that the above discussion, especially eq. G.1.22, holds in general, i.e. in all three cases we distinguished, since we did not put any restrictions on d or j. Thus, we could have shown finiteness for all choices of j and d in just this one step. However, the particularly simple result G.1.7 does not seem to follow so easily from eq. G.1.22. Furthermore, we wanted to emphasize the contrast between the calculational simplicity of the first two cases (due to the identities of sections F.1 and F.2) and the complicated expressions needed in the analysis of the last case.
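The logarithmic blow-up of zero-balanced series is easy to observe numerically. In the sketch below (our addition), the parameter set (1/2, 3/2, 2; 1, 3) is an arbitrary zero-balanced choice, not one appearing in eq. G.1.18:

import mpmath as mp

a1, a2, a3, b1, b2 = 0.5, 1.5, 2.0, 1.0, 3.0          # b1 + b2 - a1 - a2 - a3 = 0
pref = mp.gamma(a1)*mp.gamma(a2)*mp.gamma(a3) / (mp.gamma(b1)*mp.gamma(b2))
for eps in (1e-3, 1e-5, 1e-7):
    f = mp.hyp3f2(a1, a2, a3, b1, b2, 1 - eps)
    print(eps, pref*f + mp.log(eps))                   # tends to a finite constant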
G.2 Proof of result 3.4.7

Translating the three diagrams in eq. 3.4.96 into an equation, we find an expression containing the terms

η^{l′+1} / [(l + l′ + d + 2)(l + l′ + d + 3) − J₂(J₂ + 1)]  for J₂ ≠ m(l + l′ + d) ,
η^{l′+1} log r_y / [2(l + l′ + d) + 5] · δ_{J₂, m(l+l′+d)} ,
η^{l′+1} / [(l − l′ + d + 1)(l − l′ + d + 2) − J₂(J₂ + 1)]  for J₂ ≠ m(d + l′ − l − 1) ,
η^{l′+1} log r_y / [2(l − l′ + d) + 3] · δ_{J₂, m(d+l′−l−1)} .
Further, under this assumption the above equation simplifies accordingly. Comparing the two equations above, it is easy to verify that these sums cancel in the limit η → 1 (i.e. y → x), so that only the finite sum remains, which is in accordance with the logarithmic terms in eqs. 3.4.105 and 3.4.106.
Vanishing partial sum
Now consider the following partial sum of eq. G.2.1 (eq. G.2.7). As in the calculation of the remainder (R₁)_{ϕ²} in the previous section, this expression again behaves very differently for varying values of d and j. Hence, we again distinguish different values of the number q of annihilation operators among the three operators constituting Y₀(ϕ³, x) in eq. G.2.1, where the notation Y₀(ϕ³, x; q) will be used in order to indicate this grading of the left representative. Let us start with the case q = 0, which implies d ≥ j and d + j = even. Thus, we may use eq. F.1.2 from the appendix in order to simplify
G.2.19
Here we expressed the 3j-coefficient and the integral through Γ-functions with the help of eqs. D.2.18 and F.2.6, and wrote the infinite sum over l as a hypergeometric series. Taking a closer look at the parameters of this hypergeometric series, we find that it is zero-balanced, and therefore logarithmically divergent in the limit η → 1, as expected. Eq. C.6 from the appendix allows for a more detailed characterization of this divergence: we find for the prefactor of the diverging logarithm
G.2.20
Finally, we have found the divergence that cancels against the logarithmic counterterm (see eq. 3.4.93). Since there is no further counterterm, and as we have now brought all expressions into normal order without any remaining divergences, we have verified that the renormalization procedure also works in the case at hand.
Result
Summing up all the results of the preceding discussion, we can now give the remainder for q = 0 as | 2009-06-30T11:23:26.000Z | 2009-06-30T00:00:00.000 | {
"year": 2009,
"sha1": "dd946824c8f2b237d5805e54f870486f026c2d75",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0545c7ee8b384d0b23111a5fa1fc133799dafa07",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
238478342 | pes2o/s2orc | v3-fos-license | AREAdata: A worldwide climate dataset averaged across spatial units at different scales through time
In an era of increasingly cross-discipline collaborative science, it is imperative to produce data resources which can be quickly and easily utilised by non-specialists. In particular, climate data often require heavy processing before they can be used for analyses. Here we describe AREAdata, a continually updated, free-to-use online global climate dataset, pre-processed to provide the averages of various climate variables across different administrative units (e.g., countries, states). These are daily estimates, based on the Copernicus Climate Data Store’s ERA-5 data, regularly updated to the near-present and provided as direct downloads from our website (https://pearselab.github.io/areadata/). The daily climate estimates from AREAdata are consistent with other openly available data, but at much finer-grained spatial and temporal scales than available elsewhere. AREAdata complements the existing suite of climate resources by providing these data in a form more readily usable by researchers unfamiliar with GIS data-processing methods, and we anticipate these resources being of particular use to environmental and epidemiological researchers.
Value of the Data
• AREAdata provides estimates of daily climate data, population density, and future climate forecasts, averaged across different spatial units at different scales, distributed in easy-to-use file formats.
• We believe these data are of wide use, but specifically we see use-cases for ecologists and epidemiologists. In particular, researchers untrained in GIS methods would benefit from the accessible nature of how we distribute these data.
• We have already used these data to investigate the seasonality of SARS-CoV-2 (the causative agent of COVID-19) [1,2] and envisage further use of these data for understanding the seasonal responses of infectious diseases. Furthermore, the continually updating nature of this dataset makes it particularly useful for rapid analyses in response to new disease emergence.
• Many other researchers have applied similar methods to the same underlying data in order to quantify climate variables, resulting in a mass duplication of effort [3][4][5][6][7][8]. By using AREAdata, this duplication of effort could be reduced.
• Climate datasets are essential for researchers across many disciplines; however, they are generally available only in formats that require extensive processing and specialist knowledge to use. AREAdata makes climate data accessible and open to non-specialists.
The AREAdata outputs are distributed both as .RDS files for use in the R statistical programming environment and as zipped tab-delimited files for other uses. Details of each file are given in Table 1. The daily climate files consist of a matrix of point estimates of an environmental variable (either temperature, specific humidity, relative humidity, UV or precipitation), with rows representing each spatial unit that the variable was averaged across and columns representing the date. These daily files are periodically updated by automatically downloading and processing new data as they become available. The population density files consist of a matrix with a single column of population density point estimates, with rows for each spatial unit. The climate forecast files consist of a matrix of point estimates for annual mean temperatures, with rows representing each spatial unit, and columns representing the combination of global climate model (GCM) and shared socio-economic pathway (SSP), and the year range of the projection. Column headers for the forecasting files follow the labelling convention <GCM>_<SSP>_<XXXX-YYYY>, where XXXX-YYYY specifies the date range of the forecast. These files are all distributed by the level of spatial organisation that the data have been averaged across (i.e. separate files for countries, states, counties). In the initial release, AREAdata provided daily climate estimates from 2020-01-01 to 2021-09-30.
To ensure that those who process and release the raw data going into AREAdata are properly acknowledged, a condition of use of AREAdata is the citation of the raw data, and this information is provided on the website.

Table 1: List of all files distributed by AREAdata. All files are available both in .RDS and zipped .txt formats (with filenames appended as such). The Status column shows which files are released only once with this dataset (static) or are continuously updated when new data become available (updating). For the updating files, new data are periodically downloaded and processed, and the new estimates are appended to the old files and re-published with the same file-names. Publication of these data on figshare enables previous versions to also remain online and be downloaded alongside updated versions.
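For readers working outside R, a minimal pandas sketch of loading one of these matrices is shown below; the file name and the row-label convention (ISO3 country codes for GID 0) are illustrative assumptions, not guaranteed by AREAdata, so check Table 1 and the website for the actual names.

import pandas as pd

# Load a (hypothetical) country-level daily temperature matrix:
# rows are spatial units, columns are dates, as described above.
temps = pd.read_csv("temperature-GID0.txt.gz", sep="\t", index_col=0)

# Mean January 2020 temperature for one unit, assuming ISO3 row labels.
print(temps.loc["GBR", "2020-01-01":"2020-01-31"].mean())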
Experimental Design, Materials and Methods
To produce the daily climate estimates provided in AREAdata, we gather gridded rasters describing daily climate data and average these climate variables across the geographic areas of spatial units at different levels of administrative organisation.
Below, all software packages given in italics are R packages (version 4.1.0) [9] unless otherwise specified. The code to fully reproduce this pipeline is freely available under a GPL v3.0 license and can be acquired from our GitHub repository (https://github.com/pearselab/areadata). An archived version of the code used in this publication is available on Zenodo (https://doi.org/10.5281/zenodo.5901419).
Continual updates of the output files as new climate data become available can be found on our GitHub project website (https://pearselab.github.io/areadata/) and on figshare (https://doi.org/10.6084/m9.figshare.16587311). These continual updates are automatically released monthly; however, the underlying code to run these updates locally is also shared so that users can update these data to-the-day when necessary. Output files for the county-level estimates are large (>100MB), and so are released only on figshare. Data on either platform are version-controlled with dates of submission recorded and past versions archived.
Users can also create custom downloads for the county-level (GID2) data using an R Shiny app (https://smithtp.shinyapps.io/areadata-app/). This allows finer control over which parts of the data are downloaded, rather than downloading these large files in their entirety.
Static output files for population density and future estimates of annual mean temperatures can also be found on our GitHub website and figshare (https://doi.org/10.6084/m9.figshare.16770004).
Data collection
We acquire shapefiles for worldwide administrative areas from the Global Administrative Areas (GADM) database [10] at three different spatial scales: GID 0, GID 1, and GID 2. GID 0 is equivalent to countries, and (in the USA) GID 1 and GID 2 are equivalent to states and counties respectively.
We collect hourly estimates of climatic variables for the ERA-5 reanalysis from the Copernicus Climate Change Service's Climate Data Store (CDS). Temperature (K), specific humidity (kg kg−1; mass of water vapour per kilogram of moist air), and relative humidity (%; water vapour pressure as a percentage of the air saturation value) are acquired from the pressure-levels dataset [11] at 1000 hPa (i.e., surface atmospheric pressure). Estimates of ultraviolet (UV) levels (J m−2; the amount of UV radiation reaching the surface) and precipitation (m; total precipitation, the accumulated liquid and frozen water falling to the Earth's surface as measured in metres of water equivalent) are acquired from the surface-level dataset [12].
Global population density data are acquired from the Gridded Population of the World collection, version 4, revision 11 [13] . These data consist of population density estimates based on national and sub-national censuses and population registers. They use a gridding algorithm to assign population densities to grid cells, and these data are provided as rasters at different scales. Here we use the 15 arc-minute resolution for consistency with the resolution of the ERA5 climate data.
Climate averaging pipeline
We use the Climate Data Operators program [16] to compute daily means from the hourly data for each of the climate variables acquired from the CDS. We then calculate the mean value of each environmental variable across the administrative units given in each of our acquired shapefiles (i.e. countries, states, etc.), using the exactextractr R package. Specifically, we compute the mean of all grid cells fully or partially covered by the administrative unit polygon, weighted by the fraction of each cell covered by the polygon. When new climate data become available, these are appended to the previously extracted data to produce a single, live, updated output file for each administrative level and environmental variable combination. The data produced are simple files containing the daily climate estimates by spatial unit (e.g. country) and by date, which we output as .RDS files for use in R and as zipped tab-delimited text files for other applications. We use an automated pipeline to produce new estimates on a monthly basis, which updates these files and automatically publishes new versions to GitHub and figshare (the links for which remain constant).
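The published pipeline relies on R's exactextractr for coverage-weighted zonal means; the snippet below is only a rough Python analogue (simple, unweighted zonal means via rasterstats) with hypothetical file paths, included to illustrate the averaging step for non-R users.

import geopandas as gpd
from rasterstats import zonal_stats

units = gpd.read_file("gadm_level0.shp")           # hypothetical GADM shapefile path
stats = zonal_stats(units, "t2m_daily_mean.tif",   # hypothetical daily-mean raster
                    stats=["mean"])
units["t2m_mean"] = [s["mean"] for s in stats]     # one estimate per spatial unit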
We use the same methods to process the gridded population density data, which we provide similarly with a single population density estimate for each spatial unit. We process annual mean temperatures from the climate forecast data, and again provide estimates by spatial unit for each combination of GCM and SSP. The population density and temperature forecast output files are static (not continually updated). Our website provides an easy interface to download these data; however, users can also run the provided code locally to make adjustments to the calculations and generate their own files.
Ethics Statement
Not applicable -no human or animal subjects used in the generation of this dataset.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships which have, or could be perceived to have, influenced the work reported in this article. | 2021-10-09T13:15:59.854Z | 2021-10-06T00:00:00.000 | {
"year": 2022,
"sha1": "1dd26df8ce91a9619d7a2e35dbcb9230d294c388",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2022.108438",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d5cebb5692f33513e461b2f6a164d7f6f501b0b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
233388197 | pes2o/s2orc | v3-fos-license | Weakly-supervised Multi-task Learning for Multimodal Affect Recognition
Multimodal affect recognition constitutes an important aspect for enhancing interpersonal relationships in human-computer interaction. However, relevant data is hard to come by and notably costly to annotate, which poses a challenging barrier to build robust multimodal affect recognition systems. Models trained on these relatively small datasets tend to overfit and the improvement gained by using complex state-of-the-art models is marginal compared to simple baselines. Meanwhile, there are many different multimodal affect recognition datasets, though each may be small. In this paper, we propose to leverage these datasets using weakly-supervised multi-task learning to improve the generalization performance on each of them. Specifically, we explore three multimodal affect recognition tasks: 1) emotion recognition; 2) sentiment analysis; and 3) sarcasm recognition. Our experimental results show that multi-tasking can benefit all these tasks, achieving an improvement up to 2.9% accuracy and 3.3% F1-score. Furthermore, our method also helps to improve the stability of model performance. In addition, our analysis suggests that weak supervision can provide a comparable contribution to strong supervision if the tasks are highly correlated.
Introduction
Deep learning involving multiple modalities can be seen as a joint field of computer vision and natural language processing, which has become much more popular in recent years (Vinyals et al., 2015; Goyal et al., 2017; Sanabria et al., 2018). For human affect recognition tasks (e.g. emotion recognition, sentiment analysis, sarcasm recognition, etc.), more modalities can provide complementary and supplementary information (Baltrušaitis et al., 2018) and help to recognize affect more accurately. Prior works mainly focus on two major directions: 1) modeling the intra-modal dynamics within each modality; and 2) modeling the inter-modal dynamics across modalities.
Figure 1: An example of multi-task learning (MTL) when emotion recognition is the main task. Given the input (positive text, skeptical face and hesitating voice, e.g. "Wow, you're very nice and helpful!" spoken with a high pitch at a slow rate), it is not clear what emotion the woman has, resulting in a positive prediction due to the strong signal from the textual modality. However, if we could know that she is actually being sarcastic or that her sentiment is negative, the prediction would lean towards negative emotions. MTL is a way to eavesdrop on external information by leveraging more useful supervision. In this paper, we explore weakly-supervised MTL, which enables getting more supervision with zero extra human labor.
However, existing datasets for multimodal affect recognition are small compared to those for many unimodal tasks (2009; Lin et al., 2014a; Rajpurkar et al., 2016) or other multimodal ones (Lin et al., 2014b; Johnson et al., 2017; Sidorov et al., 2020), which poses two potential problems: 1) models are easy to overfit and cannot generalize well; and 2) instability of performance (e.g. altering a random seed for weights initialization can lead to a salient change of the model performance). To study and validate the severity of these problems, we benchmark 12 different models on two commonly used multimodal emotion recognition datasets (Table 3 and Appendix A). The experimental results show that recent state-of-the-art (SOTA) models do not have an obvious advantage over simple baselines. By constructing a naive hybrid fusion mechanism, the baselines can easily surpass the SOTA models. Furthermore, to investigate the instability issue, we run each model with five different random seeds (0-4) on those two datasets. On average, the standard deviations of the weighted accuracy (Tong et al., 2017) and F1 are 1.8% and 1.7%, respectively. The performance improvement gained by just altering a random seed is similar to or even larger than the improvement gained by using a more advanced method from the prior work (Zadeh et al., 2018a; Tsai et al., 2019; Wang et al., 2019b; Dai et al., 2020a).
Given the fact that there are many different multimodal datasets related to affect recognition and most of them are labeled for a single task, to mitigate the aforementioned problems, we propose to conduct weakly-supervised multi-task learning (MTL) to improve the generalization performance and stability of models (Zhang and Yang, 2017) (Figure 1). Compared to MTL with strong supervision, weakly-supervised MTL does not require the datasets to have multiple labels at the same time, which makes it much cheaper and more flexible. It can be seen as an implicit way of achieving data augmentation without any additional human labor (more details in Section 5). According to our experiments, we get an improvement of up to 2.9% accuracy and 3.3% F1-score by leveraging weakly-supervised MTL.
The contributions of this paper are summarized as follows:

• We thoroughly benchmark 12 models on two widely used multimodal affect recognition datasets. Based on this, we further propose a simple but effective hybrid model-agnostic modality fusion method, which performs equally well or even better than previous state-of-the-art models.
• We show the effectiveness of weakly-supervised MTL on relevant multimodal affect recognition tasks. We achieve an improvement of up to 2.9% accuracy and 3.3% F1-score on three multimodal affect recognition tasks. Furthermore, our results demonstrate that weak labels can bring comparable improvement to strong labels.
Related Works
Multimodal affect recognition. Multimodal affect recognition has attracted increasing attention in recent years. It can be seen as a family of tasks, including multimodal emotion recognition, sentiment analysis, sarcasm recognition, etc. There are two major focuses in this research field (Baltrušaitis et al., 2018): 1) how to better model intra-modal dynamics, i.e. improving the representation learning of a single modality; and 2) how to improve the inter-modal dynamics, i.e. the interactions across different modalities. Diverse methods have been proposed to improve these two parts. For example, quite a few works focus on the fusion of modalities, such as the Tensor Fusion Network, Memory Fusion Network (Zadeh et al., 2018a), and Multimodal Adaptation Gate (Rahman et al., 2020). Additionally, the Multimodal Transformer (Tsai et al., 2019) was introduced to handle unaligned data, Dai et al. (2020a) proposed to use emotional embeddings to enable zero-/few-shot learning for low-resource scenarios, and Dai et al. (2021) introduced sparse cross-attention to improve performance and reduce computation. Despite the remarkable progress that has been made, we find that most models suffer from the small scale of data on these tasks. For example, the Multimodal Transformer (Tsai et al., 2019) has a dimension of only 40, meanwhile with a large dropout value around 0.3.
Multi-task Learning (MTL). MTL has been widely used in numerous tasks to improve the performance of models. It can be seen as an implicit data augmentation and an eavesdropping of extra supervision to improve the generalization ability of models (Zhang and Yang, 2017). For example, in computer vision, Xu et al. (2018) proposed PAD-Net, which tackles depth estimation and scene parsing in a joint CNN with four intermediate auxiliary tasks. Moreover, Kokkinos (2017) invented UberNet, which solves seven different tasks simultaneously. In natural language processing, MTL is also leveraged in various tasks, such as offensive text detection (Abu Farha and Magdy, 2020; Dai et al., 2020b), summarization (Yu et al., 2020), question answering (McCann et al., 2018), etc. For multimodal affect recognition tasks, not much work has been carried out to incorporate multi-tasking. Of the few works that have considered multi-tasking, Akhtar et al. (2019) proposed to tackle sentiment analysis and emotion recognition jointly. However, their method is only verified on one dataset, as it requires the dataset to have multiple human annotations at the same time. Also, Chauhan et al. (2020) studied the relationship between sentiment, emotion, and sarcasm by manually annotating a sarcasm dataset (Castro et al., 2019), which we believe does not scale. In addition, they only labeled a few hundred samples, which limits the persuasiveness of their results. Differently, our work explores weakly-supervised multi-task learning, which is much cheaper and more scalable. Moreover, our experiments are done in a wider range with three different datasets.
Data and Evaluation Metrics
In this section, we first introduce three datasets used for model benchmarking and weakly-supervised multi-task learning. Then, we discuss the feature extraction algorithms to pre-process the data. Finally, we illustrate the metrics we use to evaluate models.
IEMOCAP. The Interactive Emotional Dyadic Motion Capture (IEMOCAP) (Busso et al., 2008) is a dataset for multimodal emotion recognition, which contains 151 videos along with the corresponding transcripts and audios. In each video, two professional actors conduct dyadic dialogues in English. Although the human annotation has nine emotion categories, following the prior works (Hazarika et al., 2018; Wang et al., 2019a; Tsai et al., 2019; Dai et al., 2020a), we take four categories: neutral, happy, sad, and angry.
CMU-MOSEI. The CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) (Zadeh et al., 2018b) is a dataset for both multimodal emotion recognition and sentiment analysis. It comprises 3,837 videos from 1,000 diverse speakers and is annotated with six emotion categories: happy, sad, angry, fearful, disgusted, and surprised. In addition, each data sample is also annotated with a sentiment score on a Likert scale [-3, 3].
MUStARD. The Multimodal Sarcasm Detection Dataset (MUStARD) (Castro et al., 2019) is a multimodal video corpus for sarcasm recognition. The dataset has 690 samples with an even number of sarcastic and non-sarcastic labels.
We show the data statistics of three datasets in Table 1.
Data Feature Extraction
Feature extraction is done for each modality to extract high-level features before training. For the textual modality, we use the pre-trained GloVe (Pennington et al., 2014) embeddings to represent words (glove.840B.300d). For the acoustic modality, COVAREP (Degottex et al., 2014) is used to extract features of dimension 74 from the raw audio data. The features include fundamental frequency (F0), voiced/unvoiced feature (VUV), quasi open quotient (QOQ), normalized amplitude quotient (NAQ), glottal source parameters (H1H2, Rd, Rd conf), maxima dispersion quotient (MDQ), parabolic spectral parameter (PSP), tilt/slope of wavelet response (peak/slope), harmonic model and phase distortion mean and deviation (HMPDM and HMPDD), and Mel cepstral coefficients (MCEP). For the visual modality, 35 facial action units (Ekman et al., 1980) are extracted from each frame of the video with OpenFace 2.0 (Baltrusaitis et al., 2018). Following previous works (Tsai et al., 2018; Zadeh et al., 2018b), word-level alignment is done with P2FA (Yuan and Liberman, 2008) to achieve the same sequence length for each modality. We reduce multiple feature segments within one aligned word into a single segment by taking the mean value over the segments.
Evaluation Metrics
Overall, we use three metrics across the different datasets. For the emotion recognition task, as we evaluate on each emotion category and there are many more negative samples than positive ones, we use the weighted accuracy (WAcc) (Tong et al., 2017) to mitigate the class imbalance issue. The formula of WAcc is

WAcc = (TP × N/P + TN) / (2N) ,

where N, TN, P, TP mean total negative, true negative, total positive, and true positive, respectively. For sentiment analysis and sarcasm recognition, we just use the normal accuracy, as the data is more balanced. In addition, we also use the F1-score for all the tasks.
However, we do not use the binary weighted F1-score (WF1) as some previous works (Zadeh et al., 2018a,b; Tsai et al., 2019; Rahman et al., 2020) did. The formula of WF1 is shown below,

WF1 = (P × F1_p + N × F1_n) / T ,

in which F1_p is the F1 score that treats positive samples as positive, while F1_n treats negative samples as positive, and they are weighted by their portion of the data (T is the total number of samples). We think that WF1 makes the class imbalance issue even severer, as F1_p only contributes a small portion of the total WF1. According to our experiments on IEMOCAP, when using WF1 to evaluate models, increasing the classification threshold increases the WF1 as well. Even when the threshold is 0.9 (which means most of the samples will be classified as negative), the average WF1 can still be higher than 0.7 while the WAcc is already below 0.5. A similar phenomenon is also observed by Dai et al. (2020a). Therefore, we just use the normal unweighted F1-score, which we think can better reflect the model's performance.
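To make the two metrics concrete, here is a short Python sketch implementing the WAcc formula as reconstructed above, alongside the plain F1-score; it assumes binary labels encoded as 0/1 with both classes present.

import numpy as np
from sklearn.metrics import f1_score

def weighted_accuracy(y_true, y_pred):
    # WAcc = (TP * N / P + TN) / (2 * N), with P/N the total positives/negatives.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    P, N = (y_true == 1).sum(), (y_true == 0).sum()
    TP = ((y_true == 1) & (y_pred == 1)).sum()
    TN = ((y_true == 0) & (y_pred == 0)).sum()
    return (TP * N / P + TN) / (2 * N)

y_true, y_pred = [1, 0, 0, 0, 1, 0], [1, 0, 1, 0, 0, 0]
print(weighted_accuracy(y_true, y_pred), f1_score(y_true, y_pred))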
Model Benchmarking
As mentioned in Section 1, we conjecture that the data scarcity issue of multimodal affect recognition tasks causes two problems: 1) models tend to be overfitted; and 2) instability of performance. To verify the severity of them, we benchmark 12 different models on two commonly used multimodal emotion recognition datasets: IEMOCAP (Busso et al., 2008) and CMU-MOSEI (Zadeh et al., 2018b). The models include six baselines, three recently proposed SOTA models, and three advanced baselines with the hybrid fusion method. All the models are listed in the first column of Table 3. The implementation details are discussed in Section 6.
Baselines
For the baselines, we use two commonly used model-agnostic fusion methods (Figure 2): 1) early fusion (EF), which can capture low-level cross-modal interactions; and 2) late fusion (LF), which can capture high-level ones. They are used to build simple baselines. In addition, based on EF and LF, we propose to combine them into a hybrid fusion method (EF-LF) to construct strong baselines.
Within each fusion method, we apply three different model architectures, namely average of features (AVG), bi-directional Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), and Transformer (Vaswani et al., 2017), to process sequences of feature vectors. As the simplest baseline, the AVG model just takes the mean of the input vectors as the output vector. For LSTM, we take the output vector at the last time step as the representation of the whole input sequence. For Transformer, following the practice of previous work (Vaswani et al., 2017; Devlin et al., 2019; Liu et al., 2019), we prepend a [CLS] token to the input sequence and use the output embedding at that position.
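A minimal PyTorch sketch of the EF-LF hybrid with LSTM encoders is given below. The feature dimensions follow Section 3 (300/74/35 for text/audio/video); the hidden size, bidirectionality and the use of the last time step are illustrative choices, not the exact configuration tuned in Section 6.

import torch
import torch.nn as nn

class EFLF_LSTM(nn.Module):
    def __init__(self, dims=(300, 74, 35), hidden=128, n_classes=4):
        super().__init__()
        # Early fusion: one LSTM over the feature-concatenated sequence.
        self.ef = nn.LSTM(sum(dims), hidden, batch_first=True, bidirectional=True)
        # Late fusion: one LSTM per modality, fused at the representation level.
        self.lf = nn.ModuleList(
            nn.LSTM(d, hidden, batch_first=True, bidirectional=True) for d in dims)
        self.head = nn.Linear(2 * hidden * 4, n_classes)   # EF + 3 LF encoders

    def forward(self, text, audio, video):
        # Last-time-step representation of the early-fused sequence.
        reps = [self.ef(torch.cat([text, audio, video], dim=-1))[0][:, -1]]
        for lstm, x in zip(self.lf, (text, audio, video)):
            reps.append(lstm(x)[0][:, -1])
        return self.head(torch.cat(reps, dim=-1))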
SOTA Models
Apart from the baselines, we also select three recently proposed state-of-the-art models on this task for comparison: 1) the Memory Fusion Network (MFN) (Zadeh et al., 2018a), which has a multi-view gated memory for storing cross-modal interactions over time; 2) the Multimodal Transformer (MulT) (Tsai et al., 2019), which fuses each pair of modalities with self-attention and can handle unaligned sequences; and 3) the model proposed by Dai et al. (2020a), which leverages the information inside emotion embeddings (EMO-EMB) and can handle zero-/few-shot scenarios.
Weakly-supervised Multi-task Learning
In this section, we first formally define the problem settings. Then, we explain our method for multi-task learning and how we generate weak labels.
Problem Definition
We define a multimodal affect recognition dataset with data samples as = {( , , , )} =1 , in which ∈ R × is a sequence of word embeddings to represent a sentence, ∈ R × denotes an aligned sequence of audio features, Table 2: Emo, Sen, and Sar are abbreviations of Emotion, Sentiment, and Sarcasm, respectively. In the second column, denotes the existence of human annotated strong labels, and -denotes the absence. In the third column, we show the existence of model generated weak labels for each dataset, and also which dataset the model is trained on.
∈ R × denotes an aligned sequence of facial action units (FAUs) extracted from the video frames, and ∈ R denotes the golden label. Each modality has the same sequence . For different datasets, the inputs , , have the same dimension as we use the same feature extraction across all datasets, only the dimension of labels might be different, depending on how many classes the dataset has.
Multi-task Learning (MTL) for Multimodal Affect Recognition
As mentioned in Section 1, datasets for multimodal affect recognition are relatively small in nature, which causes two problems. To mitigate them, we propose to utilize MTL for two reasons. Firstly, MTL can potentially improve the generalization performance when tasks are relevant (Zhang and Yang, 2017). Secondly, there are many existing datasets related to multimodal affect recognition, even though each can be small, and their data come in a similar format (text, audio, video) and can be pre-processed in the same way. Specifically, we leverage three relevant multimodal tasks: 1) emotion recognition; 2) sentiment analysis; and 3) sarcasm recognition.
Weak Label Acquisition. Given two separate datasets D₁ and D₂ for two different tasks T₁ and T₂, we generate weak labels of T₂ for D₁ in two steps: 1) first, train a model on the data of D₂; and 2) use the trained model to infer predictions on the data of D₁, treating the predictions as weak labels of T₂. Specifically, in this paper, we train a dedicated EF-LF-LSTM model to generate weak labels on each of the three tasks following the aforementioned procedure. The details of what kinds of labels each dataset has after weak label acquisition are shown in Table 2.

Table 3: Performance evaluation of 12 models (six simple baselines, three strong baselines, and three SOTA models) on the IEMOCAP (Busso et al., 2008) dataset. We report the weighted accuracy (WAcc) and the F1-score on four emotion categories: neutral, happy, sad, and angry. In addition, we also report the average of them as an overall measurement. For each model, we run five random seeds {0, 1, 2, 3, 4} and report the mean ± standard_deviation. The best performance is decorated in bold.
For the sentiment task, following previous works (Zadeh et al., 2018b; Liu et al., 2018; Rahman et al., 2020), we simplify it to a two-class classification problem (either positive or negative) and generate binary labels. For the other tasks, we follow the original categories of the datasets. In addition, we also store the accuracy of the EF-LF-LSTM model on each dataset as a confidence score for the generated weak labels.
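In code, the weak-label acquisition step amounts to running the dedicated model over the target dataset. In this sketch, model and loader_d1 are placeholders for an EF-LF-LSTM trained on the auxiliary dataset D₂ and a data loader over the target dataset D₁; hard argmax labels are one possible choice.

import torch

@torch.no_grad()
def generate_weak_labels(model, loader_d1):
    model.eval()
    weak = []
    for text, audio, video in loader_d1:
        weak.append(model(text, audio, video).argmax(dim=-1))
    return torch.cat(weak)   # one weak label per target-dataset sample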
Weakly-supervised MTL. After getting the weak labels, we can conduct weakly-supervised learning for different tasks. The training procedure is shown in Eqs. 1 and 2:

h = f_θ(t, a, v) ,    (1)
L = Σ_{k=1}^{K} λ_k · L_k(W_k h, y_k) ,    (2)

where f_θ is the backbone model with weights θ shared by all tasks, which generates a representation h of the input data, and K is the number of tasks. For each task k, there is a linear layer with parameters W_k to perform an affine transformation on h to get the desired output dimension. The overall objective is to minimize the total loss of all tasks, and each task k has a loss function L_k with a weighting factor λ_k. This weighting factor can be either the confidence score mentioned in the last paragraph or a hyper-parameter searched manually. For the strong labels of the main task, the confidence score is 1.
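The objective above can be sketched as follows, assuming a shared backbone returning one representation and one head per task; since Eqs. 1 and 2 were reconstructed from the surrounding text, treat this as a schematic rather than the authors' exact implementation.

def mtl_loss(backbone, heads, loss_fns, lambdas, inputs, labels):
    h = backbone(*inputs)                          # shared representation h
    total = 0.0
    for head, loss_fn, lam, y in zip(heads, loss_fns, lambdas, labels):
        total = total + lam * loss_fn(head(h), y)  # weighted per-task loss
    return total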
For datasets that contain multiple labels, we can directly apply this MTL procedure, while for unlabeled tasks, we perform MTL in a weakly-supervised way. For example, we can train a model on sentiment analysis and use it to predict sentiment scores for a sarcasm recognition dataset. Then, we jointly use the predicted weak labels with the original labels of the dataset for MTL.
Experimental Settings
To ensure we make a fair comparison of the models, we perform an elaborate hyper-parameter search with the following strategies. Firstly, for each model, we try the combinations of four learning rates {1e-3, 5e-4, 1e-4, 5e-5} and six batch sizes {16, 32, 64, 128, 256, 512}, resulting in 24 experiments. Among their hyper-parameter settings achieving top-5 performance, we further try different model-dependent hyper-parameters, such as the hidden dimension, feed-forward dimension, number of layers, dropout values, etc. For the previous state-of-the-art models, we also conduct a similar hyper-parameter search based on the numbers reported in their papers. To test the stability of models and eliminate possible contingency caused by weights initialization, for the best setting of each model, we run five different random seeds {0, 1, 2, 3, 4} and report the mean and standard deviation. The best hyper-parameters are shown in Appendix B.

Table 4: In the first column, we show the main target task and the corresponding dataset. In the second column, we show the tasks used in the training process. The symbol + means adding an auxiliary task in the training, besides the main target task. All means using all the tasks above for training. For each auxiliary task, we also indicate whether this external supervision is strong or weak. For emotion recognition, Avg. means the average of all emotion categories. In the bottom block, we also compare the effectiveness of using strong and weak labels when conducting MTL.
For the emotion recognition, we use the binary cross-entropy loss as the data are multi-labeled (a person can have multiple emotions), with a loss weight for positive samples to alleviate the data imbalance issue. For the sentiment prediction and sarcasm recognition, we use the cross-entropy loss. The Adam optimizer (Kingma and Ba, 2015) is used for all of our experiments with β₁ = 0.9, β₂ = 0.999 and a weight decay of 1e-5. Our code is implemented in PyTorch (Paszke et al., 2019) and run on a single NVIDIA 1080Ti GPU.
Benchmarking Results Analysis
To investigate the two problems mentioned in Section 1 and to get an overall understanding of the performance of various models on multimodal affect recognition, we benchmark 12 models (Section 4) on CMU-MOSEI and IEMOCAP. The results on IEMOCAP are shown in Table 3, and the results on CMU-MOSEI are included in Appendix A. As explained in Section 3.3, we use slightly different evaluation metrics compared to previous works (Zadeh et al., 2018a; Tsai et al., 2019) to better reflect the model performance. First of all, we discover that SOTA models do not have an obvious advantage over the baselines on these two datasets. The hybrid modality fusion (EF+LF) with a simple architecture can surpass the SOTA models. We conjecture that the data scarcity issue prevents complex architectures from showing their full capacity. For example, the Multimodal Transformer (MulT) (Tsai et al., 2019) uses a hidden dimension of only 40 and a dropout value of 0.3 to achieve its best performance (i.e. to avoid overfitting). Secondly, we find that the performance is unstable given the small size of the data. For example, by altering a random seed, the WAcc can change by up to 4.7%. On average, we get a standard deviation of 1.8% for WAcc and 1.7% for F1-score on IEMOCAP. Besides the dataset size, we speculate that another reason for the overfitting is that there is a feature extraction step before training to get high-level features from the raw data. Thus, the information in the resulting input features is highly concentrated, especially for the audio and video features, which makes the problem of data scarcity even severer.
Effects of Weakly-supervised MTL
Experimental results of MTL are shown in Table 4. We evaluate weakly-supervised MTL using three models for four target tasks across three datasets. Generally, by incorporating auxiliary tasks with weak labels, we can achieve better performance, and this kind of improvement is consistently observed on four target tasks across all datasets. We use the weak labels as soft labels by weighting down the loss from the auxiliary tasks. As mentioned in Section 5.2, the loss weights can be the confidence scores or manually searched. The best loss weights are reported in Appendix B. Additionally, MTL also mitigates the instability problem. Still on the emotion recognition task, the standard deviations of WAcc and F1-score are lowered by 0.4% and 0.3%, respectively. On the multimodal emotion recognition task, we observe that either sentiment or sarcasm weak labels can help to increase the performance of the models. On the sarcasm recognition task, the improvement gained by MTL is even larger. For example, the EF-LF LSTM model can achieve a 2.9% accuracy and 3.3% F1-score improvement when trained with an additional sentiment or emotion task. This fact shows that even though the auxiliary labels are noisy, MTL is still effective in helping the model generalize better, especially when the target dataset has an insufficient number of samples. However, when we try to add two weak labels together to the main task, we do not observe further improvement compared to one auxiliary task. We speculate that this is because more kinds of weak labels introduce more noise from different domains, which makes it harder for the model to learn and generalize.
On the multimodal sentiment analysis task, MTL with emotion labels consistently makes a greater contribution than with sarcasm labels. This aligns with our label analysis in Table 5 that sentiment has a higher correlation with emotion than with sarcasm. Furthermore, we also compare the effectiveness of strong and weak emotion labels on the CMU-MOSEI dataset when sentiment analysis is the main task. We observe that weak emotion labels can contribute similarly or even slightly better than the strong emotion labels. One of the reasons is that they contain different emotion categories, and the weak emotion labels have the neutral class, which could be more helpful for recognizing sentiment.
We think this is beneficial for future work, as it provides the alternative of using weak supervision as a cheaper solution for performance improvement compared to manually annotating the data.
Conclusion and Future Work
In this work, we prove the effectiveness of weakly-supervised multi-task learning (MTL) for multimodal affect recognition tasks. We show that it can significantly improve the performance, especially when the dataset size is small. For instance, on the sarcasm recognition task, the weakly-supervised MTL approach can improve the performance by up to 2.9% accuracy and 3.3% F1-score. We further conduct an empirical analysis on the effects of strong and weak (noisy) supervision, and show that weak supervision can help to boost performance almost as well as strong supervision. It is also a more flexible and cheaper way to incorporate more related supervision. Additionally, we introduce a simple but very effective hybrid modality fusion method by combining early fusion and late fusion. Its performance is on par with or even better than previous state-of-the-art models, which we believe can be used as a strong baseline for future work on multimodal affect recognition tasks.

Table 6: Performance evaluation of different models on the CMU-MOSEI dataset. We report the weighted accuracy (WAcc) and the F1-score on six emotion categories: angry, disgust, fear, happy, sad and surprised.

Table 9: The loss weights we used for each multi-tasking setting. Each entry has 3 numbers, which are the loss weights for multimodal emotion recognition, sentiment analysis, and sarcasm recognition, respectively. Therefore, 0.0 means that task is not used in that experiment. | 2021-04-26T01:15:50.440Z | 2021-04-23T00:00:00.000 | {
"year": 2021,
"sha1": "6f1096ebcdb9bfcf66a722a3300da772628d4321",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6f1096ebcdb9bfcf66a722a3300da772628d4321",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
118918058 | pes2o/s2orc | v3-fos-license | First principles study on small ZrAln and HfAln clusters: structural, stability, electronic states and CO2 adsorption
We report a first principles study based on density functional theory of the structural and electronic properties of transition metal Zr and Hf doped small aluminum clusters with 1 to 7 aluminum atoms. We have used the B3PW91 functional with the LANL2DZ basis set in the Gaussian 09 package. The stability analysis reveals that the ZrAl4 and HfAl4 structures with C2v symmetry and square pyramid geometry are the lowest-energy structures. The most stable structures of ZrAl5 and HfAl5 are distorted tetrahedron-type structures with C1 symmetry. The binding energies per atom of the transition metal doped Aln clusters increase with cluster size, while the second order difference in total energy shows oscillatory behavior between even and odd cluster sizes. The HOMO-LUMO gap of ZrAln is larger than that of the HfAln clusters except for n = 1 and 3. HfAl6 has the greatest tendency to accept or give away electrons. Negative charge resides on Zr and Hf, indicating that electrons transfer from the Al atoms to the transition metal, Zr or Hf. The thermodynamic analysis suggests that the HfAl6 cluster has the highest exothermicity compared not only with all considered Al clusters but also with other transition metal doped Al clusters reported in J. Phys. Chem. C, 120, 10027, 2016.
Introduction
In recent years a great deal of attention has been devoted to clusters, owing to the fundamental concepts of electronic and geometrical stability and to potential applications in a variety of fields such as catalysis, hydrogen storage 1-4, the formation of metallic glasses 5,6 and, most importantly, as alternate building blocks of materials in the form of superatoms 7. Among the variety of clusters, binary metallic clusters have attracted much attention due to the wide range of properties that can be tailored through their size, shape and chemical composition 8-12. Furthermore, binary metallic clusters are used to control the synthesis and chirality of single-walled carbon nanotubes (SWCNT) 13. As typical examples of atomic and doped atomic clusters, aluminum and transition metal (TM) doped aluminum clusters have been the subject of many investigations 14-29. The display of large local magnetic moments by transition metal doped aluminum clusters is one of the main reasons for the increasing focus on them 29. The existence of large local magnetic moments is attributed to the interaction of d electrons with the nearly free electron gas when the TM element is present in an sp metal host or at surfaces, as studied by Wang et al. 29. There exist a few studies of Zr-Al compounds based on DFT calculations 30-35. Wang et al. 30 studied the phase stability and the mechanical and thermodynamic properties of Zr-Al binary substitutional alloys and found that ZrAl2 is the most stable with excellent mechanical properties, while ZrAl3 possesses better thermal conductivity and a higher melting temperature 30. Arikan 31 reported the elastic, electronic and phonon properties of the Zr3Al compound. The stability and thermodynamic properties of Zr2Al under high pressure have been reported using the DFT method within the GGA for exchange and correlation 35. De Souza et al. 32 performed a basin-hopping Monte Carlo investigation within the embedded-atom method of the structural and energetic properties of bimetallic ZrCu and ZrAl nanoclusters with 55 and 561 atoms and found that the unary systems adopt the well-known compact icosahedron (ICO) structure. However, the ICO structure changes to a nearly spherical shape, due to strong minimization of the surface energy, when the proportions of the two chemical species move toward more balanced compositions. A DFT study of the structural and electronic properties of ZrnAl±m (n = 1-7, m = 0, 1) clusters shows that all stable structures are three dimensional for n > 3 and that binding increases with n for all considered ZrnAl±m 33.
In recent times the sequestration of the CO2 emitted from industrial manufacturing plants has become one of the most pressing issues in environmental protection. An ideal CO2 sequestration material should have a large surface area and strong adsorption sites. Many CO2 adsorbents, including metal organic frameworks (MOFs), carbon and silicon carbide, and boron-rich boron nitride, have limitations in terms of interactions 36-40. Clusters have been found to be good absorbers not only of CO2 but also of H2, O2, and N2. Sengupta et al. 7 studied CO2 absorption over the transition metal series from Sc to Zn doped into aluminum clusters (Al5 and Al7) and found that clusters doped with the starting elements of the series, Sc and Ti, are good absorbers. Therefore, it would be of interest to examine the absorption capability of aluminum clusters doped with Zr and Hf, which are non-magnetic and close to Sc and Ti.
In the present paper, on the basis of first principles calculations using plane-wave and localized atomic orbital approaches, we report the relative stability, binding energy and electronic properties of Zr and Hf doped Al clusters. Furthermore, we report that the Zr and Hf doped Al clusters can capture CO2 more strongly than the other reported transition metal doped Al clusters. The structures of the optimized clusters and of the CO2-cluster conformers are found to be consistent with similar CO2 cluster conformers available in the literature.
Computational method
In the present study all calculations were performed within density functional theory on TmAln clusters using the GAUSSIAN 09 package 41. Becke's three-parameter (B3) functional and the Perdew-Wang (PW91) GGA functional, together called B3PW91, were used for exchange and correlation, respectively, along with the Los Alamos double-zeta (LANL2DZ) 42-44 basis set, to carry out a complete geometrical optimization of the transition metal Zr and Hf doped Aln (n = 1-7) clusters. Accurate determination of the ground state geometry is essential in the chemistry of clusters. Initially, different isomeric arrangements were taken for each cluster to obtain the ground state conformers. After optimization of all these isomers, the energetically lowest structure was taken as the ground state geometry for each cluster. The binding energy per atom between the transition metal and the Aln cluster is calculated using the following formula:

Eb(n) = [n E(Al) + E(Tm) − E(TmAln)] / (n + 1),    (1)

where E(Al) and E(Tm) are the total energies of one Al atom and one transition metal atom (Zr, Hf), respectively, and E(TmAln) is the total energy of the transition metal doped Aln cluster. The relative stability of a cluster is characterized by the second order difference in the total energy, calculated as

Δ2E(n) = E(TmAln+1) + E(TmAln−1) − 2 E(TmAln).    (2)

Chemical hardness, which quantifies the tendency to gain or give up electrons, is calculated as 45,46

η = (IP − EA) / 2,    (3)

where IP and EA are the ionization potential and electron affinity of the cluster.
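As a concrete illustration of these three definitions, the following minimal sketch evaluates them from a set of total energies; the function names and any numerical inputs are illustrative assumptions, not values or code from the paper.

```python
def binding_energy_per_atom(E_TmAln: float, E_Al: float, E_Tm: float, n: int) -> float:
    """Eb(n) = [n*E(Al) + E(Tm) - E(TmAln)] / (n + 1); larger values mean stronger binding."""
    return (n * E_Al + E_Tm - E_TmAln) / (n + 1)

def second_order_energy_difference(E_prev: float, E_curr: float, E_next: float) -> float:
    """Delta2E(n) = E(TmAl_{n+1}) + E(TmAl_{n-1}) - 2*E(TmAl_n); a peak marks extra stability."""
    return E_next + E_prev - 2.0 * E_curr

def chemical_hardness(ionization_potential: float, electron_affinity: float) -> float:
    """eta = (IP - EA) / 2, the Pearson definition of chemical hardness."""
    return 0.5 * (ionization_potential - electron_affinity)
```

For example, comparing `second_order_energy_difference` across n = 3, 4, 5 is how a local stability maximum such as the one reported for HfAl4 would be identified.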
Structural Geometry:
For the calculation of any ground state property one first needs to optimize the structures and obtain the equilibrium geometries. Figs. 1 and 2 present the fully optimized ground state and low-lying structures of the transition metal doped Aln clusters (TmAln; Tm = Zr and Hf, n = 1 to 7).
The point group and the multiplicity are listed in Table I.
Stability
For the analysis of the relative stability of the TmAln clusters we have calculated the binding energy per atom Eb(n) and the second order difference in the total energy Δ2E using equations (1) and (2) of Section 2, respectively. The second order difference in the total energy is the quantity that defines the relative stability of clusters 49. The calculated values of Eb(n) and Δ2E for the ground state structures are listed in Table I. The binding energy per atom increases with increasing cluster size, as can be seen from Fig. 3(a). Table I and Fig. 3(b) show that Δ2E behaves in an oscillatory manner with cluster size. The fluctuations of Δ2E are more pronounced for HfAln than for ZrAln. We observe that Δ2E is higher for clusters with an even number of atoms, similar to the ZrnAl±m clusters 33. The maximum peak observed at n = 4 for HfAl4 suggests that HfAl4 is more stable than its nearest neighbors (n = 3, 5). This can also be seen from Fig. 3(a), where HfAl4 has the highest binding energy. As far as ZrAln is concerned, n = 6 is the most stable, which can also be seen from Fig. 3(a), where the binding energy is highest for ZrAl6. All calculations presented in the following sections are performed for the most energetically stable structures.
Electronic properties
The electronic properties of clusters can be analyzed through the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO). Fig. 4(a) shows the HOMO-LUMO gap (GHL) of the TmAln clusters as a function of cluster size n. The HOMO-LUMO gap of ZrAln is larger than that of the HfAln clusters except for n = 1 and 3. The maximum and minimum gaps are found for HfAl and HfAl4, with values of 2.214 eV and 0.159 eV, respectively, indicating that the HfAl cluster has relatively higher chemical stability while HfAl4 is chemically more reactive than the others. Chemical hardness (η) as a function of cluster size (n) is presented in Fig. 4(b) and listed in Table II. Chemical hardness measures a cluster's tendency to accept or give away electrons, i.e. a larger η indicates a smaller tendency and a smaller η a larger tendency to accept or give away electrons 50. Table II shows that HfAl6 has the minimum η, 1.825 eV, indicating that HfAl6 has the greatest tendency to accept or give away electrons.
NBO analysis
The natural electronic configurations and the charge transfer between the considered transition metals and the Al clusters can be studied by natural bond orbital (NBO) analysis. The natural electronic configuration and the atomic charge on each atom in the ZrAln and HfAln clusters are presented in supplementary Tables I and II.
Adsorption of CO2 on TmAln Clusters
We have computed the thermodynamic data for the adsorption of CO2 on the Hf and Zr doped Aln (n = 4-7) clusters and compared them with the available data on other transition metal doped Al clusters. The results are listed in Table III, which shows that the HfAl6 cluster has the highest exothermicity among the Hf and Zr doped Aln clusters. It is interesting to note that the exothermicity for HfAl6 is even higher than that of the Sc and Ti doped aluminum clusters 7. Furthermore, one can observe that in all cases the bond angle of CO2 is changed and the bond length between CO2 and the cluster is increased, as observed in the case of other transition metal doped Aln clusters.
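As a point of reference for the exothermicity values discussed above, the adsorption strength of CO2 on a cluster is conventionally quantified by an enthalpy change of the form ΔH_ads = H(TmAln-CO2 complex) − H(TmAln) − H(CO2); this is the standard definition rather than an expression quoted from the paper, and a more negative ΔH_ads corresponds to a more exothermic, more strongly bound CO2.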
Conclusions
In summary, we have performed density functional theory calculations with the LANL2DZ basis set for two transition metals (Zr and Hf) doped into aluminum clusters. We report the energetically stable geometries, electronic structure and relative stability of Zr/Hf doped Aln (n = 1-7) clusters. All calculations are performed for the most energetically stable structures. The Zr and Hf doped Aln clusters favor 3D spatial structures already at smaller numbers of Al atoms. The natural electronic configuration analysis shows that electrons move from the Aln cluster to the transition metal Zr or Hf. We found that the binding energy of the considered clusters increases with cluster size, and that HfAl4 is a relatively stable structure. From the HOMO-LUMO analysis we found that ZrAl is relatively chemically more active, with a HOMO-LUMO gap of 2.124 eV. The HfAl6 cluster has the greatest tendency to accept or give away electrons, the highest enthalpy difference, and the strongest adsorption of CO2.
Supplementary Materials
See the supplementary material for the natural bond orbital (NBO) analysis. The natural electronic configuration and the atomic charge on each atom in the ZrAln and HfAln clusters are presented in supplementary Table I.
16. B. D. Leskiw and A. W. Castleman, Jr., "The interplay between the electronic structure and reactivity of aluminum clusters: model systems as building blocks for cluster assembled materials," Chem. Phys. Lett. 316, 31 (2000).
"year": 2020,
"sha1": "6790cb88701a9cfdc30c36068e69e518a4348988",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1806.10399",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6790cb88701a9cfdc30c36068e69e518a4348988",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
24702277 | pes2o/s2orc | v3-fos-license | Effect of phospholipids transesterified enzymatically with polyunsaturated fatty acids on gelatinization and retrogradation of starch.
The effects of phospholipids (PLs) transesterified with polyunsaturated fatty acids (PUFAs) by lipase (Aspergillus niger) on the gelatinization and retrogradation of starch during storage were studied by differential scanning calorimetry (DSC). The resulting transesterified PLs were rich in PUFAs and linoleic acid, and the total percentage of PUFAs incorporated was 20.2%. The addition of PLs or PLs enzymatically transesterified with PUFAs (PUFA-PLs) to the starch sample decreased the gelatinization enthalpy of starch (Δh(g)) slightly, but clearly increased the enthalpy of starch-lipid complexes (Δh(s-l)) by DSC. After 21 days of storage, the percent retrogradation of starch was lower with the addition of 4% PLs or 4% PUFA-PLs to the starch sample compared with the control. These results suggest that PLs retard the retrogradation of starch during storage, and PUFA-PLs retard it greatly. The addition of PLs or PUFA-PLs increased Δh(s-l), while the re-gelatinization enthalpy decreased during storage, which suggests that PLs and PUFA-PLs retard the retrogradation of starch.
Polyunsaturated fatty acids (PUFAs) of the n-3 series play a key role in the prevention and treatment of a wide range of human diseases and have been recognized as important dietary compounds. An imbalance of these compounds in the body is believed to cause a variety of diseases such as cardiovascular disease, hypertension, inflammatory and autoimmune disorders, depression and certain disrupted neurological functions (1)(2)(3). Long-chain PUFAs such as eicosapentaenoic acid (EPA), docosapentaenoic acid (DPA) and docosahexaenoic acid (DHA) are mainly obtained from seafoods and originate from linolenic acid by a series of chain elongation and desaturation steps (3). Therefore, high consumption of marine lipids faithfully reflects sufficient dietary PUFA intake.
Though PUFAs can be emulsified with the aid of emulsifiers, their application to processed foods is considerably restricted because of their low solubility in water. Recently, phospholipids (PLs) enzymatically transesterified with PUFAs (PUFA-PLs) have been reported (4). Since PLs have been extensively used as emulsifiers, it can be expected that such esterified PUFA-PLs may serve to improve the nutritive quality of foodstuffs and to supply people who dislike fish with PUFAs for good health.
PLs are used throughout the world to improve food processing, for example by clarifying starch pastes (5), enhancing the viscosity of starch, increasing dough tolerance, and improving baking properties (6)(7). For this reason, the present study focused on the preparation of PUFA-PLs by lipase-catalyzed transesterification between purified egg PLs and concentrated PUFAs from menhaden oil, and on their usefulness for starch processing in relation to gelatinization and retrogradation.
MATERIALS AND METHODS
Materials. Wheat starch (type "Hermes", provided by Okumoto Flour Milling Co., Ltd., Osaka) was defatted with a hot butanol and water (3:1, v/v) mixture and re-extracted three times in a screw-capped tube. To remove the remaining butanol, the starch sample was washed repeatedly with deionized water and then freeze-dried. PLs of egg lecithin origin (Wako Pure Chem. Ind., Osaka) were purified according to the method described by Sridhar and Lakshminarayana (8), and the PUFA concentrate was obtained from menhaden fish oil (Sigma Chemical Co., USA) according to the urea-complex method described by Wanasundara and Shahidi (9).
RESULTS AND DISCUSSION
Transesterification efficiency of PUFA-PLs
The fatty acid compositions of menhaden oil after saponification, with or without additional urea treatment, are compared in Table 1. The DHA and EPA contents in the PUFA concentrate were approximately 26.1 and 37.9%, respectively. The fraction rich in EPA and DHA prepared by the urea-complex method has been reported to serve as a good substrate for transesterification (17). Table 2 shows the fatty acid compositions of the purified PLs and of the PLs transesterified using immobilized lipase. Although the arachidonic acid (AA) content was 6.6% in the purified egg PLs, it increased about 2-fold after transesterification with lipase. Likewise, the EPA and DHA contents increased from 1.5 and 4.1% before transesterification to 3.8 and 11.8% after transesterification, respectively. Although the transesterification efficiency was only 20.2%, the proportion of polyunsaturated fatty acids became appreciably higher, as reported by Totani and Hara (4).
"year": 2000,
"sha1": "ca4d617146a5c6a90e31f13f45f10329f588e7e8",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jnsv1973/46/5/46_5_252/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "fe3fcd592a327f2ab140403c11316ba29a567a45",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
247605773 | pes2o/s2orc | v3-fos-license | Bronchoesophageal fistula: An unusual manifestation of lung cancer
Bronchoesophageal fistula (BEF) is a rare condition caused by a fistulous connection between the bronchus and the esophagus. BEF can be acquired or congenital; congenital BEFs are rarely encountered in adults. Acquired BEF can be due to either a benign or a malignant process. Acquired BEF due to primary lung cancer is a life-threatening and usually a terminal complication. Unlike tracheoesophageal fistula, this condition is much rarer. Patients usually present with symptoms related to recurrent aspiration. Barium esophagogram is the initial diagnostic modality of choice. Treatment is primarily palliative. We are presenting a case of a bronchoesophageal fistula caused by non-small cell lung cancer that was successfully treated with concurrent chemoradiation therapy.
Case presentation
A 65-year-old male smoker (50 pack-years) with a history of chronic bronchitis, HIV infection (on Genvoya, diagnosed more than 20 years ago, recent CD4 count of 244, no prior opportunistic infections), hypertension, paroxysmal atrial fibrillation, and diastolic heart failure presented with intermittent fevers, worsening dyspnea, a cough productive of purulent sputum, and generalized weakness. Physical examination revealed a thin, pale male who appeared older than his stated age, in mild respiratory distress. His BP was 102/60 mmHg, HR 102/min, RR 30/min, temperature 38 °C, and SpO2 92% on ambient air at rest. Auscultation revealed bilateral expiratory wheezes and coarse crepitations at the right lung base. His white blood cell count was 14,000 with a left shift, and his lactate was 3.2. A chest X-ray revealed a consolidative opacity in the right lower lobe [Fig. 1]. He was hospitalized with a diagnosis of community-acquired pneumonia with sepsis and started on intravenous ceftriaxone and intravenous azithromycin. While in the hospital, he developed atrial fibrillation with a rapid ventricular response; synchronized cardioversion was attempted but unsuccessful. He was then started on IV heparin and IV esmolol, but he developed hemoptysis. His respiratory status worsened with an increasing oxygen requirement, necessitating broadening of the antibiotic coverage with intravenous vancomycin and piperacillin/tazobactam. He was evaluated by a pulmonologist, and based on his recommendations, IV heparin was immediately discontinued and a stat CT of the chest was obtained, which revealed a large, irregular, thick-walled cavitary lesion in the right lower lobe with a fistulous tract connecting the lung cavity to the distal third of the esophagus [Fig. 1]. His sputum was negative for AFB but grew Enterobacter cloacae and Pseudomonas; his antibiotics were accordingly switched to IV meropenem based on sensitivity results.
Esophagogastroduodenoscopy confirmed a small fistula in the distal third of the esophagus [Fig. 2]. Flexible bronchoscopy revealed a destructive cavitary lesion in the right lower lobe that resembled a limestone cave. The individual segmental bronchi of the right lower lobe could not be clearly identified due to the widespread tissue destruction [Fig. 3]. Bubbles could be seen in one of the bronchi, which we believe is the bronchus connected to the esophagus. Biopsy of the cavitary lesion revealed fibrotic lung parenchyma infiltrated by irregular nests and cords of malignant cells with high mitotic activity, Napsin negative, weakly TTF-1 positive, and staining strongly positive for p63, p40, and CK5/6, consistent with squamous cell carcinoma [Fig. 4]. Immunostaining for EGFR was negative, and PD-L1 expression was only 1%. Staging with PET-CT showed a large hypermetabolic mass in the right lower lobe invading the mediastinum, with hypermetabolic ipsilateral mediastinal lymph nodes and a left hilar lymph node [Fig. 1]. He was made NPO, and a PEG tube was placed for enteral nutrition. After the staging workup was completed, he received a diagnosis of stage IIIC NSCLC (squamous cell carcinoma). He was started on a combination of carboplatin and paclitaxel with radiation therapy. Six months later, repeat imaging showed resolution of the fistula with a residual but non-PET-avid mass in the right lower lobe [Fig. 5]. He tolerated the chemoradiation therapy and is currently on durvalumab consolidation therapy. His symptoms have resolved; he is tolerating a regular diet and gaining weight.
Introduction
Bronchoesophageal fistula is a rare clinical entity, and its incidence in adults is unknown. Acquired BEFs in adults are usually due to esophageal and lung cancers. By the time the diagnosis is made, the underlying cancer is usually advanced. Treatment is mostly palliative and depends on the location of the fistula, the severity of the symptoms, the associated complications, and the performance status of the patient.
Discussion
Esophagorespiratory fistula (ERF) refers to the existence of an abnormal connection between the esophagus and the respiratory tract and includes tracheoesophageal (TEF) and bronchoesophageal (BEF) fistulas. TEF is more common than BEF; information on ERF is derived chiefly from the existing literature on TEFs. BEFs are rare, and the available literature consists primarily of periodic case reports and small observational studies. Most reported cases of BEF have involved the main bronchi, not the distal airways as in the present case.
Diagnosis is often delayed because of the condition's rarity and the non-specificity of the symptoms. The diagnosis of BEF/TEF should be entertained in patients with predisposing conditions who develop coughing spells with oral intake (Ono's sign), recurrent purulent bronchitis or pneumonia, unexplained weight loss, or respiratory failure. Patients with esophageal and lung cancers can present with dysphagia and hemoptysis, respectively [1-9].
The initial diagnostic modality of choice is barium esophagography, which demonstrates displacement of contrast into the lungs and is diagnostic in over 65% of cases [1]. Gastrografin studies should be avoided due to concerns about the development of pulmonary edema, pneumonitis, respiratory failure, and death. Contrast-enhanced CT with three-dimensional reconstruction is a suitable option for patients who cannot swallow; CT also has the added advantage of providing more information on abnormalities of the surrounding structures. Endoscopy (esophagogastroduodenoscopy and bronchoscopy) is probably the most helpful test and can help confirm the diagnosis, obtain biopsies, and devise appropriate treatment plans. Oral administration of methylene blue or indocyanine green followed by diagnostic bronchoscopy can sometimes identify small fistulas [16].
For obvious reasons, bronchoesophageal fistulas due to benign causes have a better prognosis than malignant ones. A multidisciplinary approach involving thoracic surgery, interventional pulmonology, gastroenterology, oncology, and nutrition is required to manage these complex cases. Simultaneous treatment of the fistula and the underlying etiology should be started as soon as possible. It is essential to focus on patients' nutritional needs and to treat superimposed infections aggressively. Supportive care should be offered, including cessation of oral intake, elevating the head of the bed to 45°, anti-reflux medication, and PEG or jejunostomy tube placement for enteral nutrition. Depending on the location of the fistula and the severity of the underlying condition, patients can be considered for esophageal or bronchial stenting, or both [14,15]. Stenting can immediately palliate symptoms and increase overall quality of life. Metallic stents are preferred over silicone stents for malignant fistulas [17]. However, stenting is an option only for proximal lesions. Our patient was unsuitable for bronchial stenting because of the location of the fistula, and fortunately the esophageal lesion was small. Without treatment, the life expectancy of these patients is measured in weeks. Surgical intervention is not an option for these sick and terminal patients, but if their performance status is reasonable, palliative chemoradiation therapy can be considered.
Declaration of competing interest
We have no conflicts to declare. | 2022-03-23T15:30:36.621Z | 2022-03-21T00:00:00.000 | {
"year": 2022,
"sha1": "c08b34864b7eab58b2f7911c8d176a8fdcdd42d2",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rmcr.2022.101634",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3999dd0cf371be18c8560e2b2a60c0bd56dc4c4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243836572 | pes2o/s2orc | v3-fos-license | Public Health Communication in France during the Spanish Flu and the COVID-19 Pandemic: The Role of Experts
In times of crisis, a government's communication with the public is fundamental, as one of the government's main tasks is to provide critical information to protect the population. In the current context of the COVID-19 pandemic, public health communication has been paramount because of the elevated risk of contagion. Moreover, in public health communication, experts play a pivotal role by providing reliable information on the basis of their technical expertise. The impact of the COVID-19 pandemic is often compared to that of the Spanish flu, a pandemic of 1918-1919 whose global spread killed tens of millions of people. This contribution aims to assess the role of experts in the two crises by highlighting the differences in France's public health communication during the two events. Assuming that the objectives of public health communication during the two pandemics were more or less identical, i.e. to prevent the spread of disease and to inform and protect the public, the paper inquires into the means used to achieve them, focusing on the contribution of experts. The main characteristics of public health communication during the Spanish flu are investigated by analysing articles published between 1918 and 1919 in two French newspapers, Le Matin and Le Petit Parisien. For the current COVID-19 pandemic, this paper probes articles published since December 2019 in the newspaper Le Monde.
Introduction
The serious pandemic situation brought about by the spread of COVID-19 has highlighted the crucial role of public health communication in state crisis management. In this context, only clear and precise communication can provide the information needed to protect the public and consequently limit the spread of the contagion as much as possible. The provision of up-to-date information on the development of the epidemic, alongside scientifically based prognoses of its probable course, gives the public valuable information for planning their future actions. Finally, another important function of public health communication is to reassure the public about the measures taken to ensure their safety and thus avoid confusion and panic. To achieve these objectives, experts play a significant role in public health communication, especially but not exclusively in the field of health, as their professional competence makes the information provided more reliable. Nevertheless, it must be pointed out that the role of experts is very difficult because they deal with phenomena that science cannot yet explain. In such a situation, providing information that is necessarily incomplete can provoke extreme reactions from the public (Raoult, 2020). The magnitude of the current COVID-19 pandemic is comparable to that of another global health crisis, the Spanish flu of 1918-1919, whose global spread killed tens of millions of people. Assuming that the objectives of public health communication remain essentially identical, it is interesting to examine how public health communication during the first half of the last century differs from that in the current context. In particular, this paper aims to highlight the differences in public health communication in France during the two pandemics by analysing a set of contemporary newspaper articles, with particular attention to the role of experts. The paper is divided into three main sections. The first section describes the methodology used and the construction of two corpora: one containing newspaper articles published in 1918-1919 on the Spanish flu and the other consisting of newspaper articles on the COVID-19 pandemic published from December 2019 to June 2020. The following two sections present the results of the analysis aimed at identifying the main characteristics of public health communication in France during the Spanish flu (§2) and the COVID-19 pandemic (§3), with a focus on the contribution of experts. Finally, the conclusion highlights the major differences in French public health communication during the two crises.
Methodology and construction of the corpora
The main characteristics of French public health communication at the time of the Spanish flu were observed by analysing the corpus Grippe espagnole, which contains articles on the subject published during the period 1918-1919 in two French newspapers, Le Matin and Le Petit Parisien; these are available online on the Gallica website, the digital library of the BnF (Bibliothèque nationale de France). Le Matin and Le Petit Parisien were chosen because they were two of the four largest French newspapers on the eve of the First World War. At the time, both newspapers sold more than a million copies (Allemand & Oullion, 2005, p. 35). The corpus Grippe espagnole was formed by searching for all articles in the period considered containing the word 'grippe' and then selecting those relating to the flu epidemic. A total of 214 articles were identified, 70 of which were published in Le Petit Parisien and 144 in Le Matin. Public health communication in France during the current health crisis was examined by analysing articles published from December 2019 to June 2020 in the French newspaper Le Monde. The articles in question are available to subscribing readers in the newspaper's archive, which is equipped with a search engine allowing readers to carry out thematic searches. A total of 6,779 articles relating to the COVID-19 pandemic were identified by searching the keyword 'coronavirus'. To facilitate the analysis, the number of articles was reduced by selecting those containing not only the keyword 'coronavirus' but also the term 'expert', as the focus of the current study is the contribution of experts. A total of 139 articles related to the pandemic were obtained, covering the whole period under examination. In the following phase, the corpus Covid was created, for which 62 articles whose main objective was to inform about the coronavirus pandemic were selected from the previous corpus. The texts removed from the corpora because they did not have the Spanish flu or COVID-19 epidemic as their main subject are of the following types:
- advertisements for medicines against the flu, also mentioning the Spanish flu (only in the corpus Grippe espagnole)
- advertisements with offers of medical consultations (only in the corpus Grippe espagnole)
- obituaries (only in the corpus Grippe espagnole)
- articles providing information on the state of health or death of public personalities
- articles containing non-experts' opinions on the pandemic
- other articles whose main topic is not the health crisis but its consequences (e.g. limited services, cancelled events, impact on the economy)
The aim of the analysis of the selected articles is to identify the characteristics of public health communication in France during the two pandemics, taking into account the following:
- the types of articles providing information on the epidemic
- the characteristics of the experts involved
- the type of information provided
The term 'Spanish flu' was attributed to the epidemic because the first reports on its emergence concerned the situation in Spain. However, at the time, it was known that the first outbreak of the epidemic did not occur in Spain. The article La prétendue grippe espagnole viendrait d'Allemagne [The alleged Spanish flu comes from Germany] of 7 July 1918 (Le Petit Parisien) reports that the epidemic started in Germany a few months earlier and denounces how the local authorities managed to hide it.
However, the press at the time was unanimous in stating that the name of the epidemic had no particular meaning; it was not a new disease but another flu epidemic with serious consequences: Anxious to reassure the public, the medical authorities in Berlin state that the Spanish flu, which has just appeared in Germany, is not dangerous. According to the directors of a medical bacteriological institute, the epidemic is reminiscent of those that occurred throughout Europe between 1889 and 1893. The members of the Koch Institute, who have already treated a number of cases, say that the elderly are less affected than young people and that the disease, characterised by high fever and inflammation of the mucous membranes, develops in two or three days. They consider the epidemic harmless to the population. There have been no fatal cases.
Types of articles providing information on the epidemic
Regarding the articles providing information on the Spanish flu, the following types were identified:
- articles providing statistical data on the course of the crisis in France (e.g. number of deaths, sick people)
- reportages describing the situation in France and, to a lesser extent, that in other countries
- reportages providing information on the proceedings of the meetings of scientific institutions (Académie de médecine, Académie des sciences), public administration bodies (e.g. city councils) and other institutions dealing with public health (hygiene committees)
- articles providing the results of scientific studies on the Spanish flu
- interviews with experts and public health officials
2 Characteristics of the experts involved
The experts in the field of health are mainly doctors and scientists (university professors and members of scientific institutions). With a few exceptions (e.g. Professor Vincent, 'the learned bacteriologist and renowned master of epidemiology', Le Petit Parisien, 26/02/1919), the specialisation of the experts is not specified. Very often, these experts in the field of health are members of scientific institutions and professional associations. Among scientific institutions, the Académie de médecine plays the leading role; in the daily newspaper Le Matin, a special section is often dedicated to the news of this institution. Other scientific institutions whose members informed French society about the development of the epidemic are the Académie des sciences and the Institut Pasteur. Professional associations include, for example, the Société médicale des hôpitaux and the Syndicat des médecins de la Seine. In addition, individual cases were found in which an article provided information from a foreign scientific institution, such as the Koch Institute (Germany) or the London School of Hygiene (Great Britain).
The experts in the field of health played an important role in French public health institutions, such as the Epidemics Commission, the High Council of Hygiene, the Departmental Council of Hygiene, the City Council, the Medical Inspection Service and the Naval Health Service. The analysis revealed that health experts also intervened during debates on the Spanish flu in the French Chamber of Deputies. Finally, it should be stressed that the management of the Spanish flu crisis was carried out at two levels: the civilian population level (e.g. Service des épidémies pour la population civile) and the military level (e.g. Service de santé militaire).
3 Type of information provided
The information provided by the experts mainly concerns the following:
Results of the analysis: The case of the COVID-19 pandemic
In a more globalized context, the Covid-19 pandemic is not only a health crisis, but a phenomenon with a considerable impact on the economy, social system and political situation of the countries concerned. Moreover, the epidemic also has an important scientific dimension. This has been analysed in detail by Jullien (2020). Among the scientific disciplines whose experts play a significant role in crisis management and public health communication, Julien (2020, pp. 287-297) highlights the contribution of epidemiology, molecular biology, genetics and the theory of evolution. Currently, the coronavirus pandemic represents a challenge for scientific institutions in the field of health because they have to react quickly, seeking to answer the major scientific and health questions of today's world. Therefore, the strategic plans of scientific institutions, such as that of the Institut Pasteur (Institut Pasteur, n.d), are developed with the aim of boosting research and increasing its impact on health issues.
During the period from December 2019 to June 2020, the first article providing information on the coronavirus-related disease was published in the newspaper Le Monde on 9 January 2020 under the title Une pneumonie d'origine inconnue en Chine [Pneumonia of an unknown origin in China]. The analysis of the corpus Covid, which contains 62 articles on the pandemic, revealed that in 48 articles, the information is provided by experts. The other 14 articles contain information of various kinds (particularly statistics on deaths and sick people) without the contribution of experts.
Types of articles providing information on the epidemic
The articles informing about the COVID-19 pandemic are of the following types:
- articles providing very precise statistical data on the course of the crisis in France and in other countries around the world
- reportages describing the situation in France and in other countries, also providing testimonies of the local population and of doctors helping in the fight against the virus
- reportages about the proceedings of meetings of certain institutions, particularly those of the World Health Organization
- articles providing the results of scientific studies on the coronavirus pandemic, including studies conducted abroad
- interviews with experts and public health officials
- articles reporting health experts' opinions on the health crisis
2. Characteristics of the experts involved
In the management of the COVID-19 pandemic in France, an important role is played by the 11 experts who form the Conseil scientifique COVID-19, established on 10 March 2020 with the task of providing relevant information to the French President. In managing the crisis, the French government is also supported by experts from the Haut Conseil de la santé publique (Zanola, 2020, p. 86). As regards the corpus Covid, the analysis revealed a wide variety of health experts providing the public with information on the pandemic. The specialisation of the doctors (general practitioner, pathologist, neurologist, resuscitation doctor, etc.) is usually specified, as is that of the scientists consulted (immunologist, virologist, infectiologist, microbiologist, professor of emergency medicine, professor of public health, infectious disease expert, bioengineer, epidemiologist, professor of chemical biology, etc.). The health experts who provide information to the public are members of French scientific institutions (especially the Académie nationale de médecine, Inserm - Institut national de la santé et de la recherche médicale, and the Institut Pasteur) and of several foreign scientific institutions (e.g. the Chinese Centre for Disease Control and Prevention, the German Hospital Federation and the London School of Hygiene and Tropical Medicine). The analysis also showed that alongside experts in health and biology, experts from other domains, such as anthropology, sociology and mathematics, also play an important role in crisis management and public information dissemination.
3 Type of information provided
The types of information provided in the articles on the COVID-19 pandemic broadly correspond to those found in the articles about the Spanish flu. Compared with the corpus Grippe espagnole, the corpus Covid does not include articles comparing the coronavirus pandemic with previous influenza epidemics. The statistics on the course of the epidemic (number of deaths, sick people, etc.) are very precise and concern both France and other countries. More often, scientists deal with the question of the origin of the virus. Many articles also contain experts' reflections on the likely course of the pandemic and on life in the future after the virus. Attention should be drawn to articles aiming to disprove fake news about the origin of the virus, such as the following: Non, cette vidéo virale ne prouve pas que le coronavirus est une « arme biologique militaire » [No, this viral video does not prove that the coronavirus is a 'military bioweapon'] (Le Monde, 18/03/2020).
Conclusion
The differences in French public health communication during the two pandemics, separated by 100 years, are mainly due to their different political contexts, technological developments and increased globalisation. Whereas in 1918-1919 the newspapers Le Petit Parisien and Le Matin were published in paper form with a reduced number of pages (4), the newspaper Le Monde has many more pages and is also published in a digital version. For this reason, more articles report on the COVID-19 pandemic than on the Spanish flu. It should also be noted that in 1918-1919, events related to the end of the First World War were the focus of attention, placing less emphasis on news about the Spanish flu epidemic. Indeed, a recent study analysing the Parisian press during the Spanish flu has shown that in almost all cases, articles on the flu epidemic were published inside the newspaper and not on the front page (Bar-Hen & Zylberman, 2015, p. 35).

During both pandemics, experts played a significant role in providing reliable information on the situation and explaining to the public the necessity of the measures taken by the authorities, measures which restricted the freedom of the people (Baverez, 2020) but helped reduce the spread of the disease. In the case of the Spanish flu, public health communication took place almost exclusively in France; the data, provided by French experts, focused on the situation in the country. The national character of the management of this health crisis is also reflected in the fact that the term 'pandémie' is never used in the articles of the corpus Grippe espagnole. On the other hand, the information provided during the current COVID-19 pandemic, as well as the management of this crisis, has a more global character. The French public is provided with information not only on the development of the epidemic in France but also on the health situation worldwide. The experts providing information on the pandemic come from different scientific institutions around the world and, compared with the experts of 1918-1919, have a wide variety of specialisations. Moreover, the analysis of the corpus Covid revealed that the main reference institution for pandemic information is no longer a French institution (such as the Académie de médecine during the Spanish flu) but the World Health Organization, founded in 1948 (Organisation mondiale de la Santé, 2020). It should also be noted that after 100 years, the perception of experts has changed, especially because the figure of the unreliable expert, who spreads false information about the pandemic, is now much more visible. Therefore, in the current context of the coronavirus pandemic, public health communication and genuine experts also have the task of helping the public interpret properly the large amount of information provided, especially by identifying and disproving fake news. Finally, the analysis revealed that scientific institutions play an important role in public health communication. That is why, in order to contribute effectively to crisis management, scientific institutions should be increasingly committed to communicating research results to the public in a clear and appropriate way.
During the two pandemics, experts play a significant role in providing reliable information on the situation and explaining to the public the necessity of the measures taken by the authorities-measures which restrict the freedom of the people (Baverez, 2020) but help reduce the spread of the disease. In the case of the Spanish flu, public health communication takes place almost exclusively in France; the data, which are provided by French experts, focus on the situation in the country. The national character of the management of this health crisis is also reflected in the fact that the term 'pandémie' is never used in the articles of the corpus Grippe espagnole. On the other hand, the information provided during the current COVID-19 pandemic, as well as the management of this crisis, has a more global character. The French public is provided with information not only on the development of the epidemic in France but also on the health situation worldwide. The experts providing information on the pandemic come from different scientific institutions around the world, and compared with the experts in 1918-1919, they have a wide variety of specialisations. Moreover, the analysis of the corpus Covid revealed that the main reference institution for pandemic information is no longer a French institution (such as Académie de médecine during the Spanish flu) but the World Health Organization, which was founded in 1948 (Organisation mondiale de la Santé, 2020). It should be also noted that after 100 years, the perception of experts has changed, especially because the figure of an unreliable expert, who spreads false information about the pandemic, is now much more visible. Therefore, in the current context of the coronavirus pandemic, public health communication and real experts also have the task of helping the public interpret properly the large amount of information provided, especially by identifying and disproving fake news. Finally, the analysis revealed that in public health communication, an important role is played by scientific institutions. That's why in order to contribute effectively to crisis management, scientific institutions should be more and more committed to communicating research results to the public in a clear and appropriate way. | 2020-10-01T22:25:27.081Z | 2020-10-15T00:00:00.000 | {
"year": 2020,
"sha1": "750286ac7f439faf4da1d9cc5eb1e027371232c0",
"oa_license": "CCBYSA",
"oa_url": "https://revistia.org/index.php/ejls/article/view/5374/5226",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "750286ac7f439faf4da1d9cc5eb1e027371232c0",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
209445534 | pes2o/s2orc | v3-fos-license | Global Statement on Air Pollution and Health: Opportunities for Africa
This editorial speaks to the Global Statement on Air Pollution and Health and how it may assist African countries in eliminating air pollution-related health impacts.
Introduction
The human health and economic costs of air pollution in Africa are high and rising. Between 1990 and 2013, deaths from ambient particulate matter on the African continent rose by more than one third, and by 2013 was costing the African economy approximately USD 215 billion annually [1]. Similarly, premature deaths associated with domestic fuel combustion rose by 18% between 1990 and 2013 and cost the African economy approximately USD 232 billion in 2013 [1].
Sources of human exposure to air pollution in Africa include anthropogenic and natural sources and occur in urban, rural, industrial and residential settings. The main contributors are industry, power generation, agricultural burning, transport and traffic, the combustion of wood, coal, paraffin and dung for household energy needs (Figure 1a, b), unpaved roads and burning of household solid waste in areas not provided with regular residential waste collection services. Desert dust and wildfires are sources of particulate matter of natural origin [2].
Certain vulnerable groups may be simultaneously exposed to air pollution from multiple sources, sometimes at highly elevated concentrations [3]; for example, people living in informal settlements without a connection to the electricity grid, located close to mine tailings facilities or industrial sites, and in areas with unpaved roads (Figure 1c). In countries with high unemployment, many households generate livelihoods in the informal sector, including cottage industries (Figure 1d) such as vehicle spray painting, which may lead to increased levels of air pollutants in the immediate vicinity that frequently exceed the air quality guidelines of the World Health Organization [4,5]. A study undertaken in Accra, Ghana showed that biomass burning accounted for 39-62% of total PM2.5 mass in the cooking area, road dust and vehicle emissions comprised 12-33% of PM2.5 mass, and solid waste burning was also a significant contributor to household PM2.5 in low-income settlements [6]. Patriarchal systems and household power dynamics may play a role in women and children being particularly vulnerable to exposure to noxious air pollutants [7].
The Challenge of Addressing Air Pollution in Africa
Tackling air pollution in Africa is undoubtedly important but constitutes a uniquely complex challenge. Unlike in many OECD countries, African efforts to curb exposure to air pollution need to be implemented alongside actions to address competing health and economic challenges, including poverty and inequality, the process of rapid urbanization currently underway, housing shortages, unsafe water, inadequate sanitation and major epidemics such as HIV/AIDS. Poor people are exposed to higher concentrations of air pollutants, have access to inferior health services, and tend to suffer disproportionately from the effects of air pollution [5]. In settings of poverty where safe energy alternatives are not available, legislation to curb household solid fuel combustion would place additional hardship and financial burdens on poor households. While Africa is home to 16% of the world's population, only 3% of the world's vehicle fleet is found there [1,8]. Traffic-related air pollution may gain in importance as a source of air pollution, given the predicted population and income increases in Africa in the coming decades and the process of rapid urbanization currently underway. The current African population of approximately 1.2 billion is predicted to rise to 2.5 billion by 2050, and to 4.4 billion (or 40% of the world's population) by 2100 [8].
In this regard, a fundamental concern is that air quality monitoring capacity in Africa is weak. One study reported that only 41 cities across 10 African countries measured ambient air pollution levels, and knowledge of the sources and pathways of human exposure to air pollution across the continent is limited to well-resourced countries, providing a weak base for policy development and priority setting [9].
Harnessing opportunities from the Global Statement on Air Pollution
As stated in the July 2019 Air Pollution and Health Statement jointly issued by the Academy of Science of South Africa (ASSAf), the Brazilian Academy of Sciences (ABC) and the German National Academy of Sciences Leopoldina, as well as both the US National Academies of Medicine and Science (USNAM and USNAS), "the costs of air pollution to society and the economies of low- and middle-income countries are enormous" and "can undercut sustainable development" [10]. African countries thus have much to lose from limited action on air pollution, but much more to gain from heeding the Joint Call for investment in air pollution reduction.
Signs of Hope and Success
While the challenges are complex, some novel solutions are emerging to overcome air pollution in some African countries. In South Africa, at a regional scale, the Greenhouse Gas and Air Pollution Interactions and Synergies ('GAINS') model is a useful framework being considered to identify strong linkages between air quality and climate-relevant measures [12]. The results would provide evidence for improving understanding of the cost-efficiency of air pollution policies, in line with the Statement's recommendation to 'identify co-benefits among policy instruments'. Renewable energy technologies that are increasingly deployed at grid and household level present another opportunity for policy co-benefits by reducing air pollution and greenhouse gas emissions from large fossil fuel-fired power stations and household fuel burning.
Box 1: Case study
The Integrated High-Speed Train Network is a flagship project of the African Union's Agenda 2063. The project aims to connect all African capitals and commercial centres through an African high-speed train network, thereby facilitating the movement of goods, factor services and people. The increased rail connectivity holds the potential for reducing transport costs, relieving traffic congestion and lowering traffic emissions [11]. Emerging low-cost air quality monitoring technologies also provide hope for more extensive air quality monitoring systems and the generation of improved information for future decision-making.
Funding Statement
Funding for this article was provided by the US National Academy of Sciences and the US National Academy of Medicine. | 2019-12-19T09:22:01.046Z | 2019-12-16T00:00:00.000 | {
"year": 2019,
"sha1": "b63908ca2eae7d89f53d4273f6d87ce76c25c235",
"oa_license": "CCBY",
"oa_url": "http://www.annalsofglobalhealth.org/articles/10.5334/aogh.2667/galley/2809/download/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "10cfd3d0ad119d4459a8d8dd2e16d466b3bd159c",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
248969769 | pes2o/s2orc | v3-fos-license | Ostomy Does Not Lead to Worse Outcomes After Bowel Resection With Ovarian Cancer: A Systematic Review
Background Debulking cytoreductive surgery with bowel resection is a common intervention for ovarian cancer. It is controversial whether ostomy leads to worse survival outcomes, and it is unclear how clinical physicians should choose which patients should undergo ostomy. In this study, we performed a systematic review to determine whether ostomy leads to worse outcomes after bowel resection compared with anastomosis. We also summarized the possible indications for ostomy. Methods We searched PubMed, Embase, and Cochrane for articles containing the phrase "ovarian cancer with bowel resection" that were published between 2016 and 2021. We included studies that compared primary anastomosis with ostomy. We mainly focused on differences in the anastomotic leakage rate, length of hospital stay, overall survival, and other survival outcomes associated with the two procedures. Results and Conclusion Of the 763 studies, three were ultimately included in the systematic review (N=1411). We found that ostomy did not contribute to worse survival outcomes and that the stoma-related complications were acceptable. The indications for ostomy require further study; bowel resection segment margins and the distance from the anastomosis to the anal verge require consideration.
INTRODUCTION
Bowel metastasis frequently occurs in advanced ovarian cancer, and debulking cytoreductive surgery, especially en bloc cytoreduction, is recommended (1)(2)(3). Bowel resection is performed when bowel metastasis is observed before or during surgery, and it is followed by primary anastomosis or ostomy (4)(5)(6)(7). Primary anastomosis is the first choice after bowel resection, and ostomy is the alternative. Ostomy is not preferred because it has been shown to worsen quality of life in ovarian cancer (8,9).
Ostomy is the creation of an artificial anus in the abdomen. An ostomy may be permanent or temporary, as it can be reversed. For ovarian cancer patients, ostomy is performed to divert the feces so that the anastomosis can heal well. Anastomotic leakage (AL) is one of the most important complications of anastomosis because it can cause abdominal inflammation and other problems (10). After bowel resection, ostomy can be performed instead of primary anastomosis to prevent AL. Diverting ileostomy is one of the common choices because it is believed to decrease the AL rate in colorectal cancer (11). The AL rate has not been shown to differ between permanent and temporary ostomy in colorectal cancer (12). However, patients with a stoma experience complications such as skin irritation and prefer to avoid ostomy to ensure a better quality of life (13)(14)(15). Physicians need to weigh the problems and benefits before deciding whether to perform ostomy.
No guidelines specifically recommend follow-up interventions after bowel resection in ovarian cancer patients (1,16,17). Clinical physicians make decisions based on experience. There is no clear answer as to whether ostomy will benefit patients with ovarian cancer. Few studies have concentrated on bowel surgery and its outcomes when ostomy or anastomosis is performed in ovarian cancer patients. It remains controversial whether ostomy causes worse survival outcomes and how clinical physicians should choose which patients should undergo ostomy. This review analyzed studies from the past 5 years to determine whether ostomy leads to worse outcomes than anastomosis. The possible indications for ostomy are also summarized.
MATERIALS AND METHODS
We searched PubMed, Embase, and Cochrane using the following MeSH terms and keywords in articles published during the past 5 years: [ovarian cancer] AND [ostomy] OR [anastomosis]. After excluding duplicate studies, we screened all articles based on the title, abstract, and full text (Figure 1).
According to the PICO principle, P was primary or relapsed ovarian cancer, I was ostomy after bowel resection, C was primary anastomosis after bowel resection, and O comprised the survival and perioperative outcomes described below. All included studies were required to meet the following criteria: patients had primary or relapsed ovarian cancer; all patients underwent bowel resection during primary debulking surgery or interval debulking surgery; and the data of ostomy patients were separated from those of anastomosis patients.
Age, body mass index, American Society of Anesthesiologists score, and medical history were recorded as baseline data, and surgical information was recorded as a variable. We mainly focused on short-term and long-term outcomes such as the anastomotic leakage (AL) rate, length of hospital stay, 30-day readmission, overall survival (OS), progression-free survival (PFS), and other survival outcomes. This systematic review was not registered.
RESULTS
Of the 763 studies found during the database search, 112 remained after screening for duplicates. After screening the title, abstract, and full text based on the criteria, three studies were finally analyzed. Twenty-two other studies were used as references to support our interpretations.
Major Findings of the Three Studies
Three retrospective studies directly compared the outcomes of ostomy and primary anastomosis. These were the most important references in our analysis (Figure 1 and Table 1).
The study by Canlorbe et al. included stage IIIB to IV ovarian cancer, based on the International Federation of Gynecology and Obstetrics (FIGO) classification, with anterior bowel resection during complete cytoreductive surgery (18). Patients were divided into groups based on whether they underwent ileostomy/colostomy (without stoma group, N=90; with stoma group, N=9). Of the nine patients with a stoma, one had two stomas and one was scheduled to undergo left colostomy that was changed to ileostomy. Some baseline data, such as age and BMI, were reported, along with some surgical information, including the number of stomas and whether small bowel resection was performed. The authors compared the baseline data and surgical information and found no difference between groups. A reversal rate of 88.9% and an overall AL rate of 7.1% (6.7% in the primary anastomosis group and 11.1% in the stoma group; not statistically compared) were reported. Three adverse events (lower than Clavien-Dindo grade III) caused by the stoma were reported. The median OS was 31 months and the median PFS was 17 months. In the univariate analysis, patients with a stoma had a longer hospital stay and worse OS and PFS (log-rank test). A multivariate analysis was then performed, in which ileostomy and lymph node involvement were found to be risk factors for relapse. The authors presented OS and PFS curves for the two groups, but these were not statistically compared. This retrospective study reported worse outcomes for the stoma group; however, it did not report multivariate analysis results regarding survival outcomes and did not compare the curves of the two groups using multivariate statistics. The sample size was small, especially that of the stoma group, which made the results less reliable. The association between the surgery type and the outcomes was not reported.
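To make the two analysis stages concrete, the sketch below runs a univariate log-rank comparison and a multivariate Cox model of the kind described above, using the Python lifelines package. The data frame, column names, and covariates are hypothetical illustrations, not values from Canlorbe et al.

```python
# Minimal sketch of the univariate (log-rank) and multivariate (Cox)
# survival comparisons discussed above. All data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "stoma": rng.integers(0, 2, n),          # 1 = ostomy group (assumed name)
    "ln_involved": rng.integers(0, 2, n),    # lymph node involvement
})
# Toy OS times in months: shorter when lymph nodes are involved.
df["os_months"] = rng.exponential(30 - 10 * df.ln_involved)
df["death"] = (rng.random(n) < 0.7).astype(int)  # 1 = event observed

# Univariate comparison between the stoma and no-stoma groups.
g0, g1 = df[df.stoma == 0], df[df.stoma == 1]
lr = logrank_test(g0.os_months, g1.os_months,
                  event_observed_A=g0.death, event_observed_B=g1.death)
print("univariate log-rank p =", lr.p_value)

# Multivariate adjustment: Cox proportional hazards with both covariates.
cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="death")
cph.print_summary()   # adjusted hazard ratios for stoma and ln_involved
```

The point of the second step is that the covariate-adjusted hazard ratio for the stoma variable, not the univariate log-rank p-value, is what decides whether ostomy is an independent risk factor.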
The study by Fleetwood et al. included ovarian cancer patients who underwent colon resection, divided into a primary anastomosis group (N=453) and an end ostomy group (N=586) (19). Some basic information was reported, and preoperative comorbidities were compared between groups (no significant difference). The primary anastomosis group tended to have more disseminated cancer, and the stoma group experienced more preoperative weight loss and received more neoadjuvant chemotherapy; however, these differences were not significant. Lower preoperative albumin and platelet levels and significantly higher preoperative leukocyte counts were observed in the ostomy group. Surgical information was not recorded in detail. Postoperative complications did not differ significantly between groups, although the ostomy group tended to have worse complications. Severe adverse events (Clavien-Dindo grades III and IV) were similar between groups, but the ostomy group had more grade II adverse events. The 30-day mortality rate was higher in the ostomy group (3.1% in the primary anastomosis group versus 6.2% in the stoma group). However, ostomy was not an independent risk factor when the preoperative laboratory values were controlled for in the logistic regression; the blood urea nitrogen, creatinine, and preoperative albumin levels were the factors that contributed to death. This retrospective study initially suggested worse outcomes for the stoma group, but the results changed after the multivariate analysis. The comparison of baseline data and the multivariate analysis made the results more reliable; however, the surgical information was not recorded in detail, which weakened them, and it is unknown whether equal numbers of each surgery type were performed. Furthermore, the reversal rate and AL rate were not reported.
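The covariate adjustment that changed Fleetwood et al.'s conclusion can be illustrated with a small logistic regression sketch; the data are simulated and the variable names are assumptions, not the study's dataset.

```python
# Sketch of a 30-day mortality model that controls for preoperative labs,
# illustrating how "ostomy" can stop being an independent risk factor once
# albumin, BUN, and creatinine are adjusted for. Data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "ostomy":     rng.integers(0, 2, n),
    "albumin":    rng.normal(3.5, 0.6, n),   # g/dL
    "bun":        rng.normal(18, 6, n),      # mg/dL
    "creatinine": rng.normal(1.0, 0.3, n),   # mg/dL
})
# Toy outcome driven by the labs, not by ostomy itself.
logit = -2 + 1.5 * (3.5 - df.albumin) + 0.05 * (df.bun - 18)
df["death30"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["ostomy", "albumin", "bun", "creatinine"]])
model = sm.Logit(df["death30"], X).fit(disp=0)
print(model.summary())   # the ostomy coefficient should be near zero here
```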
The study by Lago et al. included patients with FIGO stage II to IV disease (20). These patients underwent colorectal resection with anastomosis and were divided into three groups (wait-and-see group, N=72; diverting ileostomy group, N=19; ghost ileostomy group, N=42). Ghost ileostomy involved the cutaneous placement of a portion of the terminal ileum: if no AL was observed by 7 days after surgery, the ileum was reversed, and if AL occurred, the ileal loop was opened without repeat laparotomy. Colonoscopy was performed at 3 and 7 days after surgery to detect AL. Some baseline information, including the albumin level, was recorded, with no statistical differences among the three groups. Surgical information was recorded in detail. Statistical differences were observed in the estimated blood loss and intraoperative transfusion rate among groups (both were higher in the ileostomy group). The AL rate was similar among the three groups (5.6% in the primary anastomosis group, 5.3% in the ileostomy group, and 4.8% in the ghost ileostomy group). The ileostomy group had a reversal rate of 73.7%, whereas the ghost ileostomy group had a reversal rate of 100%. The median hospital stay and the interval between surgery and chemotherapy did not differ between the ileostomy and ghost ileostomy groups, but the ileostomy group had a higher rate of stoma-related complications. This retrospective study compared baseline data and surgical information among three groups, making its results more reliable, and colonoscopy helped identify asymptomatic AL, giving a more reliable AL rate. However, no multivariate analysis was performed and the sample size was small, which made the results less reliable. Survival outcomes were not reported by this study.
These three studies reported different AL rates and survival outcomes for the primary anastomosis and ostomy groups. To analyze them further, we used other studies as references.
Interpretation of the Major Findings
Higher AL rates contributed to worse OS (21). Hypoalbuminemia was an independent risk factor for AL, as reported by many studies (21)(22)(23)(24), and clinicians have indeed regarded the albumin level as an indicator of whether ostomy should be performed (18)(19)(20). This suggests that preoperative hypoalbuminemia might contribute to worse OS. According to the studies by Canlorbe et al. and Fleetwood et al., OS was worse in the ostomy group (18,19). However, the study by Canlorbe et al. did not mention the albumin levels of the groups (18), and the study by Fleetwood et al., after controlling for the albumin level, reported no difference in OS between the primary anastomosis and ostomy groups (19). The study by Lago et al. also reported no differences in the AL rates and albumin levels between the groups, although the survival outcomes were not directly compared (20). Therefore, ostomy itself does not appear to contribute to worse OS in ovarian cancer.
The study by Canlorbe et al. reported that ileostomy and lymph node involvement were risk factors for relapse and observed worse PFS in the stoma group (18). Gallotta et al. reported that lymph node involvement was associated with high rates of isolated aortic and celiac trunk lymph node recurrences (25). However, we did not find any other studies that supported ostomy as an independent risk factor for relapse. Canlorbe et al. explained that patients with a stoma received fewer cycles of adjuvant chemotherapy because of poorer compliance (18); however, their small sample size and lack of a multivariate analysis made the results less reliable. Many stoma-related complications, such as dehydration and malnutrition, decrease the quality of life (26), and many patients prefer to avoid ostomy because it is associated with a worse quality of life. However, stoma-related complications rarely cause Clavien-Dindo grade III or higher adverse events in ovarian cancer patients, which is acceptable (18)(19)(20). The reported reversal rate varies from 43.3% to 88.9% (18,20,27). Enhancing postoperative care and increasing the reversal rate might make patients more receptive to ostomy.
We summarized and analyzed three studies that directly compared primary anastomosis and ostomy. We found that ostomy alone did not contribute to worse survival outcomes and that the stoma-related complications were acceptable.
Indications for Ostomy
Ostomy, especially diverting ileostomy, is believed to reduce the AL rate and may result in better survival outcomes; however, these results were not reported by the three aforementioned studies (18)(19)(20). The indications for ostomy might themselves influence the AL rate, thereby rendering the difference between primary anastomosis and ostomy non-significant. A decreased AL rate was reported for ostomy by Kalogera et al., who compared the AL rate before and after establishing indications for ostomy (26). For ovarian cancer, there are no guidelines that specifically recommend ostomy after bowel resection (1,14,15). The indications that lead physicians to perform ostomy have been partially different from the real risk factors for AL (28). Some studies have reported that hypoalbuminemia (<3.5 g/dL), additional bowel resection, more extensive rectosigmoid resection, previous treatment with bevacizumab, longer operative time, and intraoperative red blood cell transfusion might lead physicians to perform ostomy (27)(28)(29). However, in other studies of the risk factors for AL, age, preoperative albumin level, small intestine resection, positive resection margins, additional bowel resection, manual anastomosis, and the distance from the anastomosis to the anal verge were independent risk factors for AL (21)(22)(23)(30)(31)(32)(33). Clinicians chose patients with a worse preoperative status, such as malnutrition and older age, to undergo ostomy, and these factors were proven to be risk factors for AL (22,28). Regarding the surgical factors, additional bowel resection and additional small intestine resection were proven to be risk factors for AL and were evaluated by physicians when deciding whether to perform ostomy (21,22,27,31,32). However, physicians in the gynecology field might overlook some surgical factors, such as bowel resection margins and the distance from the anastomosis to the anal verge. Furthermore, the risk factors for AL were not unanimous across these studies; different potential factors were included in each. Whether the risk factors for AL affect the outcomes of ovarian cancer should be tested by comparing them directly using multivariate analysis.
No clear indications for ostomy have been suggested by the guidelines for ovarian cancer. We suggest that clinicians evaluate the preoperative status and surgical information of patients before deciding whether to perform ostomy. The preoperative albumin level and age are the most important preoperative characteristics; in addition, additional bowel resection, additional small bowel resection, bowel resection margins, and the distance from the anastomosis to the anal verge should be considered. Furthermore, a multivariate analysis of all possible indications is needed to confirm the conclusions of these studies.
DISCUSSION
The aforementioned studies did not allow us to perform a meta-analysis to obtain a more precise, statistical conclusion. We were only able to review the literature and report our suggestions.
Ostomy itself did not contribute to worse outcomes, and the AL rate decreased after the indications for ostomy were considered (26). Further investigations of the indications for ostomy in ovarian cancer are needed. Because the AL rate is reportedly low in ovarian cancer (approximately 5%), larger sample sizes are needed for future studies. Some studies of the risk factors for AL showed that postoperative colonoscopy might be useful for identifying asymptomatic AL after surgery (34-36). Moreover, studies of colorectal cancer might suggest factors that could affect decision-making in ovarian cancer. For example, the distance from the tumor to the anal verge is associated with the AL rate and survival outcomes in colon cancer; this has been proven but is not usually considered an indication by physicians treating ovarian cancer (23,37).
CONCLUSION
Ostomy did not contribute to worse survival outcomes after bowel resection compared with primary anastomosis in ovarian cancer, and the stoma-related complications were considered acceptable. The basic characteristics of patients (age and preoperative albumin level) and surgical information (operative time, intestine resection, additional bowel resection, manual anastomosis, bowel resection margins, and the distance from the anastomosis to the anal verge) should be considered before performing ostomy and require further investigation.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
XH searched the database, performed the statistical analysis, and wrote the manuscript. ZY provided practical suggestions and critically revised the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This study was funded by the Sichuan Province Science and Technology Support Program (grant number 2019YJ0072).
The role of iron oxidizing bacteria to the quality of leachate on acid sulphate soil
The problem encountered in acid sulphate soil is the presence of pyrite (FeS2), which causes the soil to become highly acidic when the pyrite is oxidized. The decline in quality occurs not only in the soil but also in the quality of the surrounding waters. One way to improve the quality of the leachate is to drain it through biofilter plants, purun tikus (Eleocharis dulcis) and bulu babi (Eleocharis retroflaxa), which can absorb or neutralize these elements. The purpose of this research was to determine the influence of an iron oxidizing bacteria inoculant on leachate quality in acid sulphate soil. The research was conducted on a pot scale in a greenhouse, using a randomized complete block design (RCBD) with three factors and three replications. The first factor was the inoculant, the second factor was water management, and the third was the phytoremediation material (E. dulcis and E. retroflaxa). The results showed that plant height in the inoculant + wood charcoal treatment was in the range of 89.33-95.33 cm, while that in the inoculant + husk charcoal treatment was in the range of 89.50-93.00 cm. Meanwhile, the rice yield with the iron oxidizing bacteria inoculant + wood charcoal was higher, at 6.77 ton.ha-1, than with the iron oxidizing bacteria inoculant + husk charcoal, which was only 5.95 ton.ha-1. Received: 14th April 2018; Revised: 16th December 2019; Accepted: 11th February 2020
INTRODUCTION
The utilization of sub-optimal lands such as tidal swampland is one way to increase the capacity of the national rice production system in support of the food self-sufficiency program. Swampland is spread over Indonesia with an area of about 34 million hectares, consisting of 20.707 million hectares of tidal swampland and 13.296 million hectares of inland swampland (Balai Penelitian dan Pengembangan Pertanian, 2013).
The main problems of acid sulphate soil lie in its physical and chemical properties (Pusparani, 2018). Tidal acid sulphate swampland is a sub-optimal land type that needs special handling because its main constraints, such as pyrite mineral (FeS2), low soil pH, iron toxicity, and poor nutrient status, limit plant growth. The pyrite (FeS2) widely contained in acid sulphate soils is stable under reductive conditions, but when the soil is drained the pyrite is oxidized, forming H2SO4, which increases soil acidity (Susilawati and Fahmi, 2013).
The quality of water in acid sulphate land is low, indicated by a pH of <3.5 and the presence of soluble ions such as Fe, Al, and SO4 at toxic levels (Asikin and Thamrin, 2012). Oxidation of pyrite in acid sulphate soil produces H+ and SO42-, increasing the solubility of toxic elements such as Al and Fe. Under inundated conditions, Fe can be lost from the soil solution in several ways, among others by precipitation, by adsorption on the clay surface or on Fe3+ oxide, by oxidation to Fe3+, and by being carried along with the drainage water (Napisah, 2019). The quality of the contaminated water can be improved by draining it through biofilter plants, purun tikus (E. dulcis) and bulu babi (E. retroflaxa), which can absorb or neutralize these ions. According to Jumberi et al. (2004), the purun tikus plant ecologically acts as a biofilter that can neutralize toxic elements and acidity in acid sulphate soil by absorbing 1.559 ppm of Fe and 13.68 ppm of SO4. Alia et al. (2013) reported that water plants, in association with microorganisms, transform harmful pollutants into less harmful or harmless forms. The roots of aquatic plants provide space for microorganisms to grow on the stems and roots (Lu et al., 2015). Based on the research conducted by Nurseha and Djajakirana (2004), one iron oxidizing bacterium is Thiobacillus ferrooxidans. T. ferrooxidans acts as an iron oxidizer that can metabolize metal ions such as ferrous iron and can grow in soil undergoing oxidation or reduction. Its response to oxygen shows that T. ferrooxidans is microaerophilic (an aerobic organism that develops well at low oxygen content). T. ferrooxidans is able to oxidize Fe2+ to Fe3+, to oxidize reduced sulfur compounds, and to utilize these oxidations as energy sources (Mariana et al., 2012). The aim of this research was to determine the effect of inoculant formulas on the quality of leachate water in acid sulphate soil.
MATERIALS AND METHODS
The research was conducted on a pot scale in the greenhouse and laboratory of the Indonesian Swampland Agricultural Research Institute (ISARI) in Banjarbaru, South Kalimantan, from July to November 2017. The design used was a factorial randomized complete block design (RCBD) with 3 factors and 3 replications. The first factor was the inoculant (T), with various inoculant formulas: T1 = no formulation, T2 = wood charcoal, T3 = husk charcoal, T4 = iron oxidizing bacteria inoculant + wood charcoal, and T5 = iron oxidizing bacteria inoculant + husk charcoal. The second factor was water management (P): P1 = not leached (closed system), P2 = leached (open system). The third factor was the phytoremediation material (A): A1 = purun tikus (E. dulcis) and A2 = bulu babi (E. retroflaxa).
Without prior drying, samples of acid sulphate soil were taken from the field at the selected site, placed into 12 kg pots, and then limed at the optimum dose. After incubation for 2 × 24 hours, 25-day-old seedlings of the superior tidal swamp rice variety Inpara 2 were transplanted at 2 clumps per pot. Basal fertilizers were given to all treatments according to the doses recommended for tidal land; the recommended basal dose of urea : SP-36 : KCl was 200 : 100 : 100 kg.ha-1. Selected purun tikus (E. dulcis) and bulu babi (E. retroflaxa) plants of the same age and performance were planted in gutters with the same plant population for all channels, which were simulated in the greenhouse. The first factor, the inoculant formula with and without a carrier, was applied to the phytoremediation plants (Figure 1). Leachate water coming from the rice pots was collected in pots treated with the aquatic plants as phytoremediation materials. After inoculation with the two formulas and incubation for one month, the rice crops were maintained until harvest. Observations were made on the dry grain yield at harvest (ton.ha-1) and on leachate water quality, including pH, Eh, TDS, Fe, and SO4 (Balai Penelitian Tanah, 2012). Data were analyzed to determine the effect of the treatments; significant differences among factor levels were tested using Duncan's Multiple Range Test (DMRT, α = 5%). All statistical tests were performed with SAS version 9.1.3 for Windows.
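For readers without SAS, a minimal sketch of how such a 5 × 2 × 2 factorial RCBD could be analyzed in Python is shown below; the yield values are invented, and Tukey's HSD is used as a stand-in for DMRT, which statsmodels does not provide.

```python
# Sketch of a three-factor RCBD analysis (inoculant T, water management P,
# phytoremediation plant A, with block as the replication). Data invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
levels = [(t, p, a, b) for t in ["T1", "T2", "T3", "T4", "T5"]
                        for p in ["P1", "P2"]
                        for a in ["A1", "A2"]
                        for b in [1, 2, 3]]          # 3 blocks (replications)
df = pd.DataFrame(levels, columns=["T", "P", "A", "block"])
df["yield_t_ha"] = 5.5 + rng.normal(0, 0.4, len(df))  # placeholder yields

model = ols("yield_t_ha ~ C(block) + C(T) * C(P) * C(A)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))               # three-way ANOVA table

# Post-hoc separation of inoculant means (Tukey HSD as a DMRT stand-in).
print(pairwise_tukeyhsd(df["yield_t_ha"], df["T"], alpha=0.05))
```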
RESULTS AND DISCUSSION
Characteristics of acid sulphate soil
According to soil classification, sulfidic material (pyrite) is the defining characteristic of acid sulphate soil. The results showed that the soil belonged to the Typic Sulfaquent subgroup, with a soil pH of 4.16 (very acid), an organic C content of 3.699% (high), and a total N content of 0.248% (medium). The available P was 10.596 cmol.kg-1 (low), with K-dd of 0.352 cmol.kg-1 (medium) and Al-dd of 9.774 cmol.kg-1 (high) (Table 1).
Leachate water pH
Water quality in acid sulphate soil is influenced by soil biogeochemical processes. The data showed that the pH of the leachate water continued to increase during plant growth up to 12 weeks after planting (WAP) (Figure 2). The increase in leachate pH was accompanied by a decrease in the Eh value from 8 WAP to 12 WAP (Figure 3). At the same time, the TDS value continued to decrease until 8 WAP and increased again from 10 WAP to 12 WAP (Figure 4).
Figure 2 shows that the pH of the leachate water in the iron oxidizing bacteria inoculant + husk charcoal treatment (5.16) was higher than in the iron oxidizing bacteria inoculant + wood charcoal treatment (only 5.02). This is in accordance with Hazra and Widyati (2007), who reported that iron oxidizing bacteria inoculants grow well on husk charcoal media; husk charcoal is dominated by an aromatic carbon structure, making it stable and highly resistant to chemical and biological degradation in the soil. The increase of pH due to the husk charcoal and inoculant treatment occurs through several mechanisms, including (1) OH release in the oxidation of organic acid anions, (2) proton consumption during decarboxylation of organic acid anions, (3) OH ion release during organic N mineralization, (4) OH release as the effect of specific adsorption of humic material and/or organic molecules onto the Al and Fe hydroxides, and (5) an increase in the Ca, Mg, and K base cation content from the added organic fertilizer (Haynes and Mokotobate, 2001). T. ferrooxidans bacteria are able to use organic carbon in a limited way (Bacelor and Johnson, 1999). According to Mariana et al. (2012), changes in soil acidity (pH) are also strongly determined by the activity of iron oxidizing bacteria in acid sulphate soils.
Leachate Water Redox Potential (Eh)
It can be seen in Figure 3 that the Eh of the leachate water increased at the 6 WAP observation. The increase in redox potential at the beginning of the observations was due to the oxidation of Fe2+ to Fe3+ with the help of T. ferrooxidans bacteria, which made the soil conditions oxidative so that the redox potential increased. On the other hand, at the 8 to 12 WAP observations, the Eh of the leachate water decreased. Redox potential (Eh) also has ecological significance with respect to its effect on the balance of nutrient availability and Fe toxicity (Notohadiprawiro, 2000). Becker and Ash (2005) reported that microbial reduction processes change insoluble iron (Fe3+) into soluble iron (Fe2+), and the decrease of Eh increases the supply of N, P, K, Fe, Mn, Mo, and Si (Ponnamperuma, 1978). The Eh of the leachate water in the iron oxidizing bacteria inoculant + wood charcoal treatment was higher (72.1 mV) than in the iron oxidizing bacteria inoculant + husk charcoal treatment (only 69.5 mV). Fluctuations of Eh also affect the ionic composition, including Fe and other mineral elements associated with sulfur oxidation and reduction (Su et al., 2017).
TDS of leachate water
Figure 4 shows that the total dissolved solids (TDS) of the leachate water decreased at the 6 to 8 WAP observations. This was because purun tikus and bulu babi took up dissolved solids from their surroundings for growth. At the 10 to 12 WAP observations, the leachate TDS in the iron oxidizing bacteria + wood charcoal treatment increased to 276 ppm, higher than the leachate TDS in the iron oxidizing bacteria + husk charcoal treatment, which was only 223 ppm. TDS is an indicator used to measure the content of dissolved solids in ground water. A high TDS value in the soil indicates the presence of fractures in the parent rock, which causes the quality of the leachate water to decrease.
Relationship between the Eh, pH, and TDS of leachate water and water quality
Figures 5 and 6 show the relationships among the Eh, pH, and TDS of the leachate water. As seen in Figure 5, more oxidative environmental conditions, indicated by an increase in leachate Eh, were followed by a decrease in leachate pH, whereas the TDS value of the leachate water was directly proportional to its Eh value (Figure 6). TDS is generally caused by inorganic materials in the form of ions commonly found in waters. Changes in soil pH and Eh affect the stability and solubility of metal minerals (Satawathananont et al., 1991). Yuliana (2012) highlighted that measurements of ferric iron at the end of drying showed an increase from 424.73 ppm to 448.52 ppm, and soluble sulphate (SO42-) also increased from 348.64 to 436.18 ppm. Oxidized pyrite, besides releasing H+ and SO42- ions, also releases Fe3+. Under reductive conditions, thermodynamics does not adequately explain Fe3+ reduction because other non-enzymatic mechanisms are associated with the Fe3+ reduction process; thus, in addition to organic material, sulphate also determines the intensity of Fe3+ reduction, owing to the utilization of sulphate as the primary electron acceptor by microbes in the Fe3+ reduction process (Annisa, 2014).
Plant growth
The agronomic observations in Tables 2 and 3 showed a positive interaction among the three factors (inoculant formula, water management, and phytoremediation material) on the height and tiller number of the rice plants. This was related to the quality of the leachate water produced: leaching rinses away iron (Fe) in the form of Fe2+, so the soil conditions become more suitable for root development. Plant height in the inoculant + wood charcoal treatment was in the range of 89.33-95.33 cm, while in the inoculant + husk charcoal treatment it was in the range of 89.50-93.00 cm. Meanwhile, the tiller numbers in the inoculant + wood charcoal and inoculant + husk charcoal treatments were in the ranges of 23.33-28.00 and 21.67-25.33, respectively (Tables 2 and 3). In general, plant height and tiller number were better with the inoculant + wood charcoal treatment than with the inoculant + husk charcoal treatment. Table 4 shows a positive interaction among the three treatment factors. The combination of wood charcoal, closed water management, and E. retroflaxa gave the lowest shoot weight, 26.00 g, while the combination of the inoculant formula + husk charcoal, open water management, and E. dulcis gave a higher weight, 32.67 g. This was because the pH of the soil and water in the pots where the rice was planted was higher, so the ability of the rice plants to absorb nutrients was greater. According to Masganti (2011), straw weight is an indicator of the ability of rice to absorb nutrients, and the level of nutrient availability, which is influenced by soil acidity, is one of the determinants of the plants' ability to absorb nutrients (Marschner, 1986). Figure 8 shows that the treatment with the iron oxidizing bacteria inoculant + wood charcoal yielded 5.64 ton.ha-1, higher than the iron oxidizing bacteria inoculant + husk charcoal treatment, which yielded only 5.11 ton.ha-1. This was supported by the high pH of the leachate water approaching harvest (12 WAP); the pH in the iron oxidizing bacteria inoculant + wood charcoal treatment was 5.02 (Figure 2). Soil acidity (pH) can affect soil nutrient availability and can be a factor related to soil quality and a limiting factor for plant growth and production (Aksani, 2016). The high TDS value in the iron oxidizing bacteria inoculant + wood charcoal treatment was also influential because of the availability of dissolved solid elements in the leachate water; this was related to the pH of the leachate water, since increased pH can increase the availability of nutrients easily dissolved in water (Sulistiyani et al., 2014). Application of the pyrite-oxidizing bacterial inoculant and leaching eight times gave the best influence on the growth and yield of the rice plants (Maftuah and Susilawati, 2018).
CONCLUSIONS
The inoculant treatment of iron oxidizing bacteria with a charcoal carrier material can improve the water quality. The Fe concentration in the iron oxidizing bacteria inoculant + husk charcoal treatment was only 9.750 mg.kg-1, lower than in the iron oxidizing bacteria inoculant + wood charcoal treatment, which was 10.060 mg.kg-1. The highest grain yield was obtained in the iron oxidizing bacteria inoculant + wood charcoal treatment, at 6.77 ton.ha-1.
Table 4. Shoot weight of rice plant (g) by treatment of inoculant formula, water management, and phytoremediation materials.
"year": 2020,
"sha1": "b7db6b32e32433665d6f6cd9683ac0c25279885e",
"oa_license": "CCBYSA",
"oa_url": "https://jurnal.ugm.ac.id/jip/article/download/34731/27259",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "66afcd1879eb868c95f3f984e966a0ea28c58446",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Loss Reduction in Distribution System Using Fuzzy Techniques
In this paper, a novel approach using approximate reasoning is used to determine suitable candidate nodes in a distribution system for capacitor placement. Voltages and power loss reduction indices of distribution system nodes are modeled by fuzzy membership functions. A fuzzy expert system (FES) containing a set of heuristic rules is then used to determine the capacitor placement suitability of each node in the distribution system, and capacitors are placed on the nodes with the highest suitability. A new design methodology for determining the size, location, type, and number of capacitors to be placed on a radial distribution system is presented. The objective is to minimize the peak power losses and the energy losses in the distribution system while considering the capacitor cost. Test results are presented along with a discussion of the algorithm.
I. INTRODUCTION
The efficiency of a power system depends on its distribution system, which provides the final link between the high voltage transmission system and the consumers. A distribution circuit normally uses primary or main feeders and lateral distributors. The main feeder originates from the substation and passes through the major load centers. Lateral distributors connect the individual load points to the main feeder, with distribution transformers at their ends. Many distribution systems used in practice have a single-circuit main feeder and are defined as radial distribution systems. Radial systems are popular because of their simple design and generally low cost [4].
The capacitor placement problem has been extensively discussed in the technical literature, especially since the 1980s, as distribution system planning and operation started receiving renewed focus. Since then, many solution techniques have been suggested, identifying the problem as a complex, large-scale mixed-integer non-linear programming problem.
Artificial intelligence techniques have been tried in recent years in the search for a superior solution tool. With the rapid growth of computing power, a new class of search techniques capable of handling large and complex problems has been developed during the last few decades, and these techniques have also been explored for the solution of the capacitor placement problem. Among them, evolutionary computing methods such as genetic algorithms [14], [15] and ant colony optimization [9] have been reported to produce superior results. Simulated annealing [10] and tabu search [11] have also been very successful. However, one common drawback of these techniques lies in the huge computing task involved in obtaining the solution. On the other hand, there have always been efforts by system engineers to avoid computation-intensive complex solution processes and to use simple, physically understandable logic to solve the problem, though such simplified solutions occasionally cannot find the best one. Fuzzy-based approaches [9]-[12] involve less computational burden.
The power loss in a distribution system is significantly high because of the lower voltage, and hence higher current, compared to a high-voltage transmission system [5]. The pressure to improve the overall efficiency of power delivery has forced power utilities to reduce losses, especially at the distribution level. In this paper, a radial distribution system is taken because of its simplicity.
Fuzzy-based solution methods use fuzzy membership functions to model the actual system. Identification of proper membership functions is the most challenging task in developing fuzzy-based solution techniques. Node voltage measures and power loss in the network branches have been utilized as indicators for deciding the location and size of the capacitors in fuzzy-based capacitor placement methods.
II. FRAMEWORK OF APPROACH
For capacitor placement, the general considerations are: (1) the number and location; (2) the type (fixed or switched); and (3) the size. When capacitors are placed, both the power loss and the energy loss are reduced. Both these factors contribute to increasing the profit, while the cost of the capacitors decreases it. So the profit is weighed against the cost of capacitor installation [1]. The whole saving can be given as follows [3]:
S = K_P Δp + K_E ΔE - K_C C    (1)

where
K_P = per-unit cost of peak power loss reduction ($/kW)
K_E = per-unit cost of energy loss reduction ($/kWh)
K_C = per-unit cost of capacitor ($/kVar)
Δp = peak power loss reduction (kW)
ΔE = energy loss reduction (kWh)
C = capacitor size (kVar)
S = saving in money per year ($/year)

Then, by optimizing the profit S due to capacitor placement, the actual capacitor size is determined, i.e., by setting ∂S/∂C = 0 and solving for C, the capacitor size. The above procedure is repeated until no additional savings from the installation of capacitors are achieved.
For each solution, the voltage constraint must be satisfied: the voltage (pu) should lie between the minimum (0.9) and the maximum (1.1), i.e., V_min ≤ V ≤ V_max. In this paper, shunt (fixed) capacitors are used. A simple 10-bus radial distribution system is taken as the test system; it has only a main feeder and no branches. To determine the location and size of the capacitors to be installed, a load flow program was executed in MATLAB, which gave the locations most suitable for capacitor placement. Shunt capacitors placed at the nodes of the system are represented as reactive power injections [3].
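A minimal sketch of equation (1) and the capacitor-size search it implies is given below; the loss-reduction model Δp(C) and all cost constants are assumptions standing in for the MATLAB load-flow results, not values from this paper.

```python
# Sketch of the yearly-saving function S(C) from equation (1) and a grid
# search for the capacitor size that maximizes it. The loss-reduction
# model and all cost constants below are placeholders, not paper values.
import math

K_P = 120.0     # $/kW of peak power loss reduction (assumed)
K_E = 0.05      # $/kWh of energy loss reduction (assumed)
K_C = 5.0       # $/kVar of installed capacitor (assumed)
HOURS = 8760.0  # hours per year
LSF = 0.3       # loss factor linking peak loss to energy loss (assumed)

def saving(c_kvar: float) -> float:
    # Toy saturating model of peak-loss reduction vs. capacitor size; in
    # the paper this would come from repeated MATLAB load-flow runs.
    dp = 60.0 * (1.0 - math.exp(-c_kvar / 2000.0))   # kW
    de = dp * LSF * HOURS                             # kWh per year
    return K_P * dp + K_E * de - K_C * c_kvar         # equation (1)

# Scan standard bank sizes (100 kVar steps) for the maximum yearly saving.
best_c = max(range(0, 6001, 100), key=saving)
print(f"best size = {best_c} kVar, yearly saving = ${saving(best_c):,.0f}")
```

Scanning a discrete grid of standard bank sizes is the practical analogue of setting ∂S/∂C = 0, since capacitors come in standard steps.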
III. ALGORITHM ADOPTED FOR LOAD FLOW SOLUTION
A balanced three-phase radial distribution network is assumed and can be represented by its equivalent single-line diagram [2]. Line shunt capacitance is negligible at distribution voltage levels. The algorithm for capacitor location finding and sizing is as follows:
1. Perform the load flow program to calculate bus voltages and segment losses.
2. Find the membership functions of voltage drops, power loss, and capacitor-node suitability, and the decision for the fuzzy sets of voltage drops, power loss, and capacitor nodes.
3. Identify the node having the highest suitability ranking.
4. Install a capacitor at the optimal node(s); select the capacitor that has the lowest cost and size.
5. Check whether the voltage constraint is satisfied. If yes, go to the next step; otherwise, go to step 9.
6. Compute the benefits due to the reduction in peak power loss and energy loss, the cost of the capacitor banks, and the net savings.
7. Check whether the net savings are greater than zero. If yes, go to the next step; otherwise, go to step 9.
8. Increment the size of the capacitor bank and go to step 2.
9. Reject the installation.
Each bus's reactive power demand is compensated by placing a capacitor, after which the power loss reduction and voltage are calculated. The highest power loss reduction is assigned "1" and the lowest "0"; all other power loss reductions are placed between 0 and 1. Voltage is also given in pu values [6].
IV. CAPACITOR LOCATION FINDING USING FUZZY TECHNIQUES
For the capacitor allocation problem, rules are defined to determine the suitability of a node for capacitor installation. Such rules are expressed in the following form:
IF premise (antecedent), THEN conclusion (consequent)
For determining the suitability of capacitor placement at a particular node, a set of multiple-antecedent fuzzy rules has been established. The inputs to the rules are the voltage and power loss indices, and the output consequent is the suitability of capacitor placement, as given in Table I.
The consequents of the rules are in the shaded part of the matrix. The fuzzy variables (power loss reduction, voltage, and capacitor placement suitability) are described by the fuzzy terms high, high-medium/normal, medium/normal, low-medium/normal, or low [2].
These fuzzy variables, described by linguistic terms, are represented by membership functions, shown graphically in Figs. 1, 2, and 3. The membership functions describing the voltage have been created based on Ontario Hydro standards for acceptable operating voltage ranges in distribution systems [6]. The membership functions for the PLRI and CPSI indices are created to provide a ranking; therefore, the partitions of the membership functions for the power and suitability indices are equally spaced apart.
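As an illustration, the sketch below builds five-term membership functions for the two inputs and the output and evaluates two example rules with the scikit-fuzzy control API; the universes, term names, and rules shown are illustrative placeholders rather than the paper's exact Table I rule base.

```python
# Sketch of the fuzzy inference described above using scikit-fuzzy's
# control API. The universes, five-term partitions, and the two example
# rules are illustrative placeholders, not the paper's exact Table I.
import numpy as np
from skfuzzy import control as ctrl

plri = ctrl.Antecedent(np.linspace(0, 1, 101), "plri")      # loss index, pu
volt = ctrl.Antecedent(np.linspace(0.9, 1.1, 101), "volt")  # bus voltage, pu
cpsi = ctrl.Consequent(np.linspace(0, 1, 101), "cpsi")      # suitability

# Five equally spaced triangular terms per variable, as in Figs. 1-3.
plri.automf(5, names=["low", "low-med", "med", "high-med", "high"])
cpsi.automf(5, names=["low", "low-med", "med", "high-med", "high"])
volt.automf(5, names=["low", "low-norm", "norm", "high-norm", "high"])

rules = [
    # e.g., a big loss reduction at a low-voltage node -> highly suitable
    ctrl.Rule(plri["high"] & volt["low"], cpsi["high"]),
    ctrl.Rule(plri["low"] & volt["high"], cpsi["low"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["plri"], sim.input["volt"] = 0.9, 0.93
sim.compute()
print("CPSI =", sim.output["cpsi"])   # defuzzified suitability ranking
```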
V. IMPLEMENTATION OF FUZZY ALGORITHM FOR CAPACITOR SIZING
A 10-bus radial distribution feeder rated at 23 kV is taken as the main system. The first bus is the source bus and the other 9 buses are load buses. All 9 load buses were first fully compensated by placing capacitors, and the power loss reduction in the entire system was calculated by the load flow program in MATLAB. Both the power loss reduction index (PLRI) and the voltage sensitivity index (VI) are scaled in pu values. Based on these two values, the capacitor placement suitability index (CPSI) for each bus is determined using the fuzzy toolbox in MATLAB, as shown in Table 4. The bus in most urgent need of compensation gives the maximum CPSI, while buses which are already balanced give lower values. The bus with the highest CPSI is considered first for capacitor placement, and then the value of the capacitor to be placed is determined. From the load flow program in MATLAB, the relevant data are obtained and a graph between C and S for bus 4 is plotted. S is maximum for C = 3400 kVar, so a capacitor of this value is installed on bus 4. After bus 4, the same process is repeated: first the location is determined by fuzzy techniques, then the saving is calculated for different capacitor values, C-S graphs are plotted for the other buses, and the capacitor corresponding to the maximum saving is the required capacitor.
VII. CONCLUSION
An approach incorporating fuzzy set theory has been presented in this paper to determine the optimal number, locations, and ratings of capacitors to place in a distribution system. In choosing the ideal locations for capacitor placement, a compromise between the reactive losses and the voltage sensitivity is determined. Application of this method to a sample test system has shown its effectiveness in reducing peak power and energy losses and in improving voltage regulation. The same procedure, with some additional considerations, can be successfully applied to complex systems having sub-feeders or more buses. In addition, this algorithm can easily be adapted for capacitor allocation in distribution system planning, expansion, or operation.
Figure 4. Curve of C vs. S for bus 4.
Table 2. Load data of the test system.
Table 3. Bus data of the test system.
Table 4. Bus location finding for capacitor placement. Bus 4 has the highest CPSI, so it is selected for capacitor placement; the value of the capacitor is then found, with equation (1) used for the saving calculation.
Table 5.
B. Voltage stabilisation: There is a considerable improvement in the voltage profile after compensation of the system; it satisfies the voltage constraint.
"year": 2010,
"sha1": "d4715bcd245c78b61208c8589967a3fe50926b62",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume1No3/Paper%203-Loss%20Reduction%20in%20Distribution%20System%20Using.pdf",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "d4715bcd245c78b61208c8589967a3fe50926b62",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Electrochemical Mechanism of Molten Salt Electrolysis from TiO2 to Titanium
The electrochemical mechanisms of molten salt electrolysis from TiO2 to titanium were investigated by potentiostatic electrolysis, cyclic voltammetry, and square wave voltammetry in NaCl-CaCl2 at 800 °C. The composition and morphology of the products obtained at different electrolysis times were characterized by XRD and SEM. A CaTiO3 phase was found during the electrochemical reduction of TiO2. The electrochemical reduction of TiO2 to titanium is a four-step process, which can be summarized as TiO2→Ti4O7→Ti2O3→TiO→Ti. Spontaneous and electrochemical reactions take place simultaneously in the reduction process. The electrochemical reduction steps TiO2→Ti4O7→Ti2O3→TiO were diffusion-controlled and irreversible.
Introduction
Titanium is considered a rare metal because it is dispersed in nature and difficult to extract; however, it is relatively abundant, ranking tenth among all elements. Titanium ores, mainly ilmenite and rutile, are widely distributed in the earth's crust and lithosphere. Titanium and its alloys have been widely used in the aerospace, national defense, ocean, energy, transportation, medical, and other fields due to their low density, high specific strength, good heat resistance, and corrosion resistance [1][2][3]. Titanium has therefore earned the reputation of being a "21st century metal", an "all-round metal", and a "modern metal" [4].
Because titanium has a strong affinity for oxygen, nitrogen, carbon, hydrogen, and other elements, its production process is complex and long, with high energy consumption and high cost, which limits the application of titanium in many industries. To reduce the production cost of titanium, researchers continue to improve the traditional process and to develop new extraction methods. At present, the Kroll process is the most important industrial route for titanium production, but its complexity, long process chain, high energy consumption, and high cost remain limiting [5,6]. Researchers have therefore developed many new processes, among which molten salt electrolysis has attracted much attention worldwide because of its short process, low energy consumption, and simplicity [7][8][9][10][11][12]. Using an alkali metal or alkaline earth metal salt as the electrolyte, TiO2 as the cathode, and graphite as the anode, titanium is prepared by direct electro-deoxidation of TiO2 in the molten salt; titanium can be obtained in a one-step reduction process [13,14]. At present, the electrochemical method has already been intensely studied for the preparation of alloys [15][16][17][18][19] and carbides [20].
In order to clarify the deoxidation process of TiO 2 in molten salt electrolysis, the preparation of titanium by direct electro-deoxidation of TiO 2 in NaCl-CaCl 2 binary molten salt system was carried out in this work. The reduction process and electrochemical mechanism of the molten salt electrolysis from TiO 2 to titanium were studied by potentiostatic electrolysis and electrochemistry analysis in detail.
Raw Materials and Cathode Precursor Preparation
TiO2 (96 wt.%) and carbon (4 wt.%) powders, 2 g in total, were used as raw materials and mixed homogeneously. The mixed powders were die-pressed at 20 MPa in a cylindrical mold (30 mm in diameter). The die-pressed bodies were sintered at 353 K for 8 h; the sintered disc was then tied to a titanium electrode rod with a nickel wire as the cathode.
Electro-Deoxidation Process
Anhydrous NaCl and CaCl2 salt (500 g, molar ratio 0.48:0.52) were placed in a graphite crucible and dried in the steel reactor at 473 K for 8 h to remove moisture from the salt. At a molar ratio of NaCl:CaCl2 = 0.48:0.52, the lowest eutectic temperature of the binary salt is 762 K [21]; to ensure that the molten salt system had low viscosity and high conductivity, the experiment was conducted at 1073 K. The temperature of the binary salt was programmatically raised in the reactor to 1073 K while argon was continuously pumped into the reactor. The anode was the graphite crucible, which was connected by a titanium electrode rod. The electro-deoxidization experiment was conducted at a constant potential of 3 V for 6 h. The schematic diagram of the experimental device is shown in Figure 1. The obtained cathodic products were washed with deionized water in an ultrasonic cleaner and vacuum dried at 333 K.
Electrochemical Test
The electrochemical deoxidation process from TiO2 to titanium was evaluated in a three-electrode electrochemical cell with a PARSTAT 2273 electrochemical workstation. Pt wire (99.99%, φ = 0.5 mm), Mo wire (99.99%, φ = 0.5 mm), and the graphite crucible were used as the reference, working, and counter electrodes, respectively. Cyclic voltammetry (CV) and square wave voltammetry were used to analyze the reduction of TiO2 to titanium in NaCl-CaCl2 at 800 °C. A schematic diagram of the experimental platform is shown in Figure 2.
Characterization
The electrolytic voltage was supplied by a DC power supply (DP310, MESTEK, China). The phase composition of the solid precursors and cathodic products was determined by X-ray diffraction (XRD) (Noran7, Thermo Fisher, Waltham, MA, USA); each scan covered 5°-90° with a step size of 0.02°. The morphology and chemical composition of the solid precursors and cathodic products were characterized by scanning electron microscopy (SEM) (S-4800, Hitachi, Tokyo, Japan) and energy dispersive X-ray spectroscopy (EDX). The acceleration voltage of the SEM was 20 kV and the working distance (WD) was 10 mm.
Calculation of the Theoretical Decomposition Potentials
Alkaline metal molten salts with low melting points, wide electrochemical windows, and good electrical conductivity are commonly used as electrolytes for the electrochemical preparation of metals. The Gibbs free energy of the possible reactions can be calculated with HSC thermodynamics software. The theoretical decomposition potentials (E) of the metal molten salts and TiO2 were calculated by the following equation [22,23]:

E = -ΔGΘ / (nF)

where ΔGΘ (kJ/mol) is the standard Gibbs free energy change, and n and F represent the electron transfer number and Faraday's constant (96,485 C/mol), respectively. The theoretical decomposition potentials and reactions that occur in the electro-deoxidation cell from 773 K to 1273 K are listed in Figure 3. The results show that the theoretical decomposition potentials of TiO2 and the binary salt are positively correlated with temperature. The theoretical decomposition potentials of NaCl and CaCl2 are -3.29 V and -3.23 V, respectively, much higher in magnitude than that of TiO2. This indicates that the experimental voltage of 3 V, applied in a two-electrode system, is sufficient to electro-deoxidize TiO2 to titanium without decomposing the electrolyte.
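A small sketch of this calculation is given below; the ΔGΘ inputs are assumed placeholders standing in for the HSC software output, chosen only so that the NaCl and CaCl2 results echo the -3.29 V and -3.23 V quoted above.

```python
# Sketch of E = -ΔG°/(nF) for the decomposition reactions discussed above.
# The ΔG° values below are placeholders for HSC software output; only the
# -3.29 V / -3.23 V results for NaCl / CaCl2 echo the text.
F = 96485.0  # Faraday constant, C/mol

def decomposition_potential(dG_kJ_per_mol: float, n_electrons: int) -> float:
    """Return E in volts for a reaction with standard Gibbs energy ΔG°."""
    return -dG_kJ_per_mol * 1000.0 / (n_electrons * F)

# (ΔG° in kJ/mol at 1073 K, electrons transferred) -- assumed values.
reactions = {
    "NaCl -> Na + 1/2 Cl2": (317.4, 1),    # gives about -3.29 V
    "CaCl2 -> Ca + Cl2":    (623.4, 2),    # gives about -3.23 V
    "TiO2 -> Ti + O2":      (772.0, 4),    # gives about -2.00 V (assumed)
}
for name, (dG, n) in reactions.items():
    print(f"{name:24s} E = {decomposition_potential(dG, n):6.2f} V")
```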
Electro-Deoxidization of the Cathode Precursor
Figure 4 presents the XRD patterns of the products at different electro-deoxidation times. The pattern of the product electrolyzed for 0 h shows that TiO2 is the main component of the cathode precursor, indicating that the small amount of carbon did not react with TiO2 in the sintering process. The pattern of the product electrolyzed for 8 h shows that the intermediate-valence titanium oxides (Ti4O7, Ti2O3, TiO) and CaTiO3 are the main phases after 8 h of electrolysis. CaTiO3 is generated by the reaction between TiO2, the calcium ions in the molten salt, and the oxygen ions extracted from TiO2. Table 1 lists the ΔGΘ of the possible reactions in the electrolysis process at 1073 K. Reaction (1) has an extremely negative ΔGΘ (-1045.43 kJ/mol) at 1073 K, indicating that the formation of CaO between Ca2+ and the O2- extracted from TiO2 proceeds easily. The ΔGΘ of CaTiO3 generated by the reaction of CaO and TiO2 was -86.94 kJ/mol, demonstrating that this reaction can occur spontaneously. The literature shows that there is a high concentration of oxygen in the material at this stage; that is, CaTiO3 forms spontaneously when calcium ions and oxygen ions coexist in the molten salt [24]. The diffraction peak of titanium detected in the product electrolyzed for 8 h indicates that titanium metal can be reduced after 8 h of electrolysis. Compared with the product of 8 h electrolysis, the titanium diffraction peak of the 24 h product is significantly increased, indicating that the reduction to titanium metal proceeds further as the electrolysis time is extended. Figure 5 presents SEM images and EDX analyses of the products after electrolysis for 8 h and 24 h. Combined with the XRD analysis in Figure 4, they show that CaTiO3 was formed in the products of 8 h electrolysis, as shown in reaction (2).
After 24 h of electrolysis, the main phases are the intermediate-valence titanium oxides and the CaTiO3 phase almost disappears, owing to the spontaneous decomposition of CaTiO3, as shown in reaction (3). The deposited carbon can react with the metal on the cathode, resulting in a high carbon content in the cathode product. This can be explained by the following two reactions.
Electro-Deoxidation of the Cathode Precursor
In anode: C + 2O2− − 4e− → CO2

In cathode: TiO2 + 4e− → Ti + 2O2−
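As a numerical companion to the decomposition-potential relation E = −ΔG^Θ/(nF) above, here is a minimal Python sketch; the ΔG^Θ value is an illustrative placeholder, not one of the paper's HSC outputs.

# Minimal sketch of E = -dG/(nF) from the thermodynamics section above.
# The dG value here is an illustrative placeholder, not an HSC output.
F = 96485.0  # Faraday's constant, C/mol

def decomposition_potential(dg_kj_per_mol, n_electrons):
    """Theoretical decomposition potential, in volts."""
    return -(dg_kj_per_mol * 1000.0) / (n_electrons * F)

# A hypothetical 4-electron decomposition with dG = +800 kJ/mol
# gives E of about -2.07 V.
print(decomposition_potential(800.0, 4))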
Electro-Deoxidation Thermodynamics of Titanium Oxides in Molten Salt Systems
The main phases in the TiO2 electro-deoxidation products include Ti4O7, Ti2O3, TiO, and Ti. When graphite was used as the anode material, the main anode product in molten salt electrolysis was CO2 [25]. In order to simplify the calculation, CO2 was considered the only gas component in the anode product. Table 2 lists the ΔG^θ and E values of the TiO2 electro-deoxidation reactions at 1073 K. The theoretical decomposition potential for TiO2 deoxidized to Ti4O7 is 0.34 V, which is lower than those for TiO2 deoxidized to Ti2O3, TiO, and Ti. Therefore, reaction (4) proceeds preferentially under the voltage driving force, and the first step reaction controlled by electrochemistry produces Ti4O7 [26]. In the same way, the second step reaction was Ti4O7 deoxidized to Ti2O3, and the third step reaction was Ti2O3 deoxidized to TiO. Finally, TiO was deoxidized to Ti. According to the products obtained at different electrolysis times and the electro-deoxidation thermodynamics analysis, the molten salt electrolysis of TiO2 to titanium is a multi-step electrochemical reaction process, which can be summarized as: TiO2→Ti4O7→Ti2O3→TiO→Ti.
Analysis of Electrochemical Deoxidation of TiO 2 in NaCl-CaCl 2 System
To examine its dissolution behavior, 3 wt.% TiO2 was added to the NaCl-CaCl2 binary molten salt system, and samples from the upper, middle, and lower parts of the crucible were taken for XRD analysis after being held at 1073 K for 4 h. The XRD patterns (Figure 6) show that no other substances were found in the samples taken from the upper and middle parts, while TiO2 was deposited at the bottom of the crucible. This indicates that there is no chemical dissolution of TiO2 in the molten salt system. CaTiO3 cannot form spontaneously here, because no electro-deoxidation reaction is conducted to produce oxygen ions in the binary molten salt system.

Figure 7 displays the CV curves of the NaCl-CaCl2 system before and after TiO2 addition. No redox peak is found in the CV curve of the NaCl-CaCl2 system without 3 wt.% TiO2, which demonstrates that the electrochemical properties of the binary molten salt electrolyte are stable and that trace impurities in the salt have no influence on the experiment. The CV curve of the NaCl-CaCl2 system with 3 wt.% TiO2 shows four reduction peaks, a, b, c, and d, in the reduction process, and one oxidation peak, d', in the oxidation process. The asymmetric CV curve of the NaCl-CaCl2 system with 3 wt.% TiO2 and the fact that |ipa/ipc| ≠ 1 indicate that the reduction was an irreversible process. According to the four reduction peaks on the CV curve, the reduction of TiO2 to titanium metal may be divided into four steps, which is consistent with the above thermodynamic calculation results.

Figure 8 displays the CV curves of the NaCl-CaCl2-TiO2 system at different scan rates. With the increase of the scan rate, the peak currents of the four reduction peaks gradually increased. The reduction potentials corresponding to peaks a, b, and c shifted negatively with increasing scan rate, indicating that the reduction process was irreversible or quasi-reversible. Figure 9 displays the relationship between the scan rate and the peak current for peaks a, b, and c in the NaCl-CaCl2-TiO2 system. The peak currents of reduction peaks a, b, and c have a linear relationship with the square root of the scan rate, demonstrating that the reduction processes at a, b, and c are completely irreversible processes controlled by diffusion. The potential of peak d shows no obvious shift, so the reduction process corresponding to peak d is a reversible reaction. In consequence, both reversible and irreversible processes exist in the electrochemical reduction of TiO2 to titanium metal in the NaCl-CaCl2 binary system.
For the irreversible processes in the potentiodynamic scanning, the peak potential and the logarithm of the scan rate have the following relation, shown in Equation (2). When E_pc and ln v are in a linear relation, the electron transfer number (n) in the process can be calculated from the slope (k) of the fitting curve, as shown in Equation (3):

E_pc = constant − (RT/(2αnF)) ln v (2)

n = −RT/(2αFk) (3)
where E is the peak potential (V); R, T, n, v, α, and F represent the ideal gas constant (8.314 J/(mol·K)), the absolute temperature (K), the electron transfer number, the scan rate (V/s), the charge transfer coefficient, and Faraday's constant (96,485 C/mol), respectively. According to the CV curves, the reduction potential difference between peaks a and b is 0.15 V, which is consistent with the theoretical decomposition potential difference of 0.14 V between reactions (4) and (8). Figure 10 shows the fitting curves of the peak potential (E_pc) against the logarithm of the scan rate (ln v). According to the slope of the fitting line, the electron transfer number in the combined process of peaks a and b was calculated to be 1.303, approximately 1, although non-stoichiometric Ti4O7 was also present in the reduction of TiO2 to Ti2O3. Because the theoretical decomposition potential difference is small, the two independent peaks a and b can be approximately regarded as one peak. Peaks a and b represent the reduction from TiO2 to Ti2O3, either directly or step by step, with an electron transfer number of 1, and the reduction of Ti4O7 to Ti2O3 was also controlled by diffusion [25]. The electron transfer number of peak c was calculated to be 1.298, approximately 1. With these electron transfer numbers, the diffusion coefficients of the diffusion-controlled processes corresponding to peaks a-b and c are 0.349 × 10⁻⁵ cm²/s and 0.2352 × 10⁻⁴ cm²/s, respectively, calculated from Equation (4) [27]:

i_p = 0.4958nFAC₀D₀^(1/2)(αnFv/(RT))^(1/2) (4)

where i_p is the peak current (A); C₀, A, and D₀ represent the concentration of the reactants (mol/cm³), the working electrode area (1.95465 cm²), and the diffusion coefficient (cm²/s), respectively.
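The slope-to-n calculation of Equation (3) is easy to sketch in Python. The scan rates, peak potentials, and the charge transfer coefficient α = 0.5 below are illustrative assumptions, not values taken from the paper.

# Hedged sketch of the Equation (3) workflow: fit E_pc against ln(v) and get
# n from the slope. All input values here are illustrative assumptions.
import numpy as np

R, F, T = 8.314, 96485.0, 1073.0      # J/(mol*K), C/mol, K
alpha = 0.5                           # assumed charge transfer coefficient

v = np.array([0.05, 0.1, 0.2, 0.4, 0.8])               # scan rates, V/s
E_pc = np.array([-0.48, -0.51, -0.54, -0.57, -0.60])   # peak potentials, V

k, _ = np.polyfit(np.log(v), E_pc, 1)  # slope k of the E_pc vs ln(v) line
n = -R * T / (2 * alpha * F * k)       # Equation (3)
print(f"slope k = {k:.4f} V, n = {n:.2f}")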
Figure 10. Fitting curves of the peak potential (E_pc) and the logarithm of the scan rate (ln v).

Figure 11 shows the square wave voltammetry curve of the NaCl-CaCl2-TiO2 system. Three obvious reduction peaks between −1.5 V and 0 V can be seen from the curve. The first peak, corresponding to processes a and b, is near −0.5 V; the second peak, corresponding to process c, is near −1.0 V; and the third peak, corresponding to process d, is near −1.4 V, roughly the same as the reduction peak potentials of the CV curve. The irreversibility of part of the reduction process is the main reason for the small shift of the reduction peaks. Process d is a reversible process, so the relationship between the half-peak width (W₁/₂) and the electron transfer number can be expressed as Equation (5) [28]:

W₁/₂ = 3.52RT/(nF) (5)

The electron transfer number in process d calculated by Equation (5) is 2.324, approximately 2, which corresponds to reaction (13). The reduction process of TiO2 to titanium was thus further confirmed as TiO2→Ti4O7→Ti2O3→TiO→Ti.
Figure 11. Square wave voltammetry curves of the NaCl-CaCl2-TiO2 system with 4 V/s scan rate.
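A one-line check of Equation (5), with the half-peak width as a hypothetical input (the excerpt does not report the measured width):

# Quick check of Equation (5): n = 3.52*R*T / (F * W_1/2). The half-peak
# width below is a hypothetical value, not a measurement from Figure 11.
R, F, T = 8.314, 96485.0, 1073.0

def n_from_half_peak_width(w_half_volts):
    return 3.52 * R * T / (F * w_half_volts)

print(n_from_half_peak_width(0.135))   # about 2.4 for a ~135 mV width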
Conclusions
Titanium metal was prepared by electrochemical reduction in NaCl-CaCl2 binary molten salt at 1073 K, and the reduction process of TiO2 to titanium can be summarized as TiO2→Ti4O7→Ti2O3→TiO→Ti. As an intermediate product in the deoxidation process of TiO2, CaTiO3 can be generated spontaneously among Ca2+, O2−, and TiO2 in the NaCl-CaCl2 system. The dissolution behavior of TiO2 showed that there is no chemical dissolution of TiO2 in the NaCl-CaCl2 molten salt system at 1073 K. Electro-deoxidation thermodynamics and electrochemical studies further confirmed that the reduction of TiO2 to titanium proceeds in four steps, and that the processes are controlled by diffusion. | 2022-06-04T15:19:11.807Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "505a11bc37f6b5628c746d99b43165b1f0105972",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/11/3956/pdf?version=1654144077",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0dfab1e2c9821f843d05c1fd36057780cb259ec5",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
11537591 | pes2o/s2orc | v3-fos-license | Toward Measuring the Scaling of Genetic Programming
Several genetic programming systems are created, each solving a different problem. In these systems, the median number of generations G needed to evolve a working program is measured. The behavior of G is observed as the difficulty of the problem is increased. In these systems, the density D of working programs in the universe of all possible programs is measured. The relationship G ~ 1/sqrt(D) is observed to approximately hold for two program-like systems. For parallel systems (systems that look like several independent programs evolving in parallel), the relationship G ~ 1/(n ln n) is observed to approximately hold. Finally, systems that are anti-parallel are considered.
INTRODUCTION
Most genetic programming experiments appear to evolve solutions to very small problems: small in terms of program size and/or the number of variables used. For example, people have evolved sorting programs. Nobody has evolved an operating system, a database, or a working air-traffic control system for the United States.
It seems, then, that genetic programming doesn't scale well to larger, more difficult problems. But how do you measure how large or how difficult a problem is?
My initial approach was to keep the problem constant, and vary the set of statements that programs were implemented in. (It is a common observation that it is more difficult to write programs in a low-level language than in a high-level one. That is, the same problem is "harder" or "more complicated" when written in a low-level language, and the source code for the program to solve the problem is larger.) The difference in how many generations it took to evolve a working program would then be entirely due to the change of difficulty of implementing the program in the statement set, because the problem would be constant. This approach was used for the first two systems. As research progressed, it became apparent that other, sometimes system-specific parameters gave more precise control over the difficulty of the problem.
The rest of this paper is organized as follows: Section 2 describes the first system. Section 3 presents the results in the form of a number of datasets, each of which contains only one varying parameter. Section 4 demonstrates the relationship between the density of working programs and the median number of generations needed to evolve a working program. Section 5 describes a second system that exhibits similar scaling behavior. Section 6 describes two parallel systems -systems that have multiple dimensions, where each dimension can be optimized independently. Section 7 describes three anti-parallel systems -systems that have multiple dimensions, but none of the dimensions can be optimized independently. Section 8 presents some conclusions, and section 9 presents some open questions.
THE FIRST SYSTEM: LINEAR PROGRAMS, SORTING INTEGERS
I chose sorting a list of integers as the first problem.
This system had a fixed number v of writable variables, numbered 1 through v. ("Fixed" here means that it did not evolve; however, it could be changed between runs via a command-line parameter.) It also contained two read-only "variables". Variable 0 always contained 0, and variable v + 1 always contained the number of integers in the list being sorted.
A program consisted of a series of statements. Within one run, all programs of all generations had the same length.
The initial programs contained random statements. The default population size was 20 programs. The most fit programs (default 4) were chosen to produce the next generation. If there was a tie among programs for which was most fit (or, more importantly, 4th-most-fit), a winner was randomly selected from the tied programs.
Fitness was tested by having each program attempt to sort three lists of numbers, which respectively contained 10, 30 and 50 values. The lists contained the values from 1 to the size of the list, in random order. After a program attempted to sort a list, the forward distance was computed as follows: For each location in the list, the absolute value was taken of the difference between the value at that location in the list as sorted by the program, and the value that would be at that location if the list were perfectly sorted. A perfectly sorted list therefore had a forward distance of zero. The reverse distance was identical, except that the "perfectly sorted" list was replaced by one that was perfectly sorted in reverse order. In general, the forward and backward distances were larger for the longer lists. To address this, a normalized metric was created for each list, which was the reverse distance minus the forward distance, divided by the sum of the forward and reverse distances. This evaluated to 1 for a perfectly sorted list, and to −1 for a list that was perfectly sorted in reverse. Finally, the program's fitness function was the average of the normalized metrics for the three lists.
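The metric above is concrete enough to render in code. A minimal Python sketch follows; the function names and the callable program_sort are my own stand-ins for the system's internals.

# Hedged sketch of the sorting fitness: forward/reverse distances, the
# normalized per-list metric, and the average over lists of 10, 30, and 50.
import random

def normalized_metric(attempt):
    """+1 for a perfectly sorted list, -1 for perfectly reverse-sorted."""
    n = len(attempt)
    forward = sum(abs(v - (i + 1)) for i, v in enumerate(attempt))
    reverse = sum(abs(v - (n - i)) for i, v in enumerate(attempt))
    return (reverse - forward) / (forward + reverse)

def fitness(program_sort, sizes=(10, 30, 50)):
    metrics = []
    for n in sizes:
        lst = list(range(1, n + 1))
        random.shuffle(lst)                # new random lists each generation
        metrics.append(normalized_metric(program_sort(lst)))
    return sum(metrics) / len(metrics)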
A program was considered to be terminated when the last statement was executed, if the last statement was not a jump, or when a jump was executed to one past the last statement.
(This is equivalent to saying that all programs had an End as the assumed last statement, and the End could not mutate.) If the program executed 10 times as many statements as were required for a bubble sort for the same list, the program was considered to be in an infinite loop, and terminated. No fitness penalty was imposed for this condition.
Cross-breeding was done by choosing two programs, randomly choosing a location within the list of statements of the programs (the same location for both), cutting each program into two pieces at that location, and swapping the pieces to create two child programs. This ensured that the children were the same length as the parents. Also, during this process, a statement in a child program could randomly mutate into another statement with some probability (default 0.2).
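A hedged sketch of this crossover-plus-mutation step; the representation details (lists of statements, a pool of candidate statements to mutate into) are my assumptions about the implementation.

# Hedged sketch of the described crossover: one shared cut point, swapped
# tails, and per-statement mutation at probability 0.2. `statements` is my
# stand-in for the system's pool of random statements.
import random

def crossover(parent_a, parent_b, statements, mutation_prob=0.2):
    cut = random.randrange(1, len(parent_a))       # same cut point for both
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]
    for child in (child_a, child_b):
        for i in range(len(child)):
            if random.random() < mutation_prob:    # statement-level mutation
                child[i] = random.choice(statements)
    return child_a, child_b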
Programs were composed of statements (instructions) that were members of a set of statements.
Statement set 1 contained two statements: CompareSwap (compare two numbers in the list, and swap them if they are out of order), and For (a C-style for loop with a loop variable, a variable from which to initialize the loop variable, and a limit variable to compare the loop variable to). Programs with this statement set defaulted to 5 statements long, even though a bubble sort can be written with three such statements (two For statements and one CompareSwap statement). This "slack" in the number of statements gave more rapid evolution than a length that had no more statements than were absolutely necessary.
Statement set 2 contained IfVarLess (if the value in one register is less than the value in another register, execute the next statement), IncrementVar (increment a register), AssignVar (copy the value from one register to another), GoTo, and CompareSwap. With this statement set, I could write a bubble sort in 11 statements, but programs created from statement set 2 evolved better with 25 statements per program.
Statement set 3 contained IfVarLess, IncrementVar, AssignVar, GoTo, IfListLess (if the list entry at the index contained in the first variable is less than the list entry at the index contained in the second variable, execute the next statement), and Swap (an unconditional swap). With this statement set, I could write a bubble sort in 12 statements, but programs created from statement set 3 evolved better with 30 statements per program.
The unsorted lists of numbers were randomly created. New lists were created for each generation. (If the same lists were used for all generations, statement set 2 would sometimes be unable to evolve a working program.) An evolution started with a random collection of programs, and proceeded until a program evolved that worked. An evolution was characterized by the number of generations required to evolve a working program. However, since evolution is a random process, a repeat of the evolution would take a completely different number of generations.
A run was a number of evolutions, all with the same statement set and the same parameters. It was characterized by the median of the number of generations required for each evolution in the run. (The distribution of the number of generations had a very long tail. The presence or absence of one anomalous evolution could significantly shift the average, so the median was the appropriate choice here.) For statement set 1, runs consisted of 1000 evolutions, and the results were quite repeatable (within 10%, and often closer to 2%). For statement sets 2 and 3, runs were reduced to 100 evolutions, because the evolutions took far longer (both because it took more generations to evolve a solution, and because each program could execute many more statements before it was declared to be in an infinite loop). Re-runs of statement sets 2 and 3 could give results that differ by as much as 30% from the first run.
I also measured the density of working programs in the universe of all possible programs, by generating a large number of random programs and seeing how many of them worked "as is", that is, with no evolution. When measuring density for statement set 1, I made sure that the sample was large enough to contain at least 1000 working programs. For statement sets 2 and 3, I only tried for a sample large enough to contain 100 working programs, because otherwise the density run times became extremely long.
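A hedged sketch of this density measurement; the generator and tester are placeholders for the system's own routines.

# Monte Carlo density measurement: generate random programs and count how
# many work with no evolution at all. random_program and works are
# placeholders for the system's own generator and tester.
def measure_density(random_program, works, min_hits=1000, batch=100_000):
    hits = 0
    trials = 0
    while hits < min_hits:                 # sample until enough working hits
        for _ in range(batch):
            if works(random_program()):
                hits += 1
        trials += batch
    return hits / trials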
DATA AND ANALYSIS
Changing number of variables

Adding variables increased the size of the solution space. As the solution space got larger (and the number of programs that work increased, too, but not as fast, as we will see in the next section), the number of generations climbed dramatically. As the statements became simpler, the problem became more complex in terms of the solution language, and the number of generations exploded. (Statement set 2 takes 11 statements to write a bubble sort in, versus 3 statements for statement set 1; that is, statement set 2 takes 3.67 times as many statements to implement one particular algorithm to solve this problem. But it took 124 times as many generations to evolve a working program with 2 variables, and 57 times as many generations to evolve one with 10 variables.) This is quite surprising! Even though the problem grew more complex in terms of the statements, the median number of generations went down dramatically, especially with a larger number of variables.
A possible explanation would be the program length. (Statement set 3 defaulted to 30 statements per program, and statement set 2 to 25.) But with 10 variables and 25 statements per program, statement set 2 took 152464 generations, and statement set 3 took 115857. So program length doesn't seem to be the reason that statement set 2 took more generations than statement set 3.
I see another possible explanation, however: In statement set 2, it was hard to build a loop -it took 5 statements. But one CompareSwap could give you, on average, some improvement. So a mutation from some other statement to a CompareSwap statement could destroy a working (or almost working) loop and actually improve the program's score. Statement set 1 didn't have this problem, since loops were only one statement long. Statement set 3 didn't have this problem, either, since an unconditional Swap, on average, would not cause any improvement.
This may not be the correct explanation of this anomaly. But we are going to see in the next section that something is very wrong with statement set 2.
Since statement set 2 is somewhat suspect, let us repeat the previous comparison with statement set 3. Statement set 3 takes 12 statements to write a bubble sort in, versus 3 statements for statement set 1, so statement set 3 takes 4 times as many statements to implement one particular algorithm to solve this problem. But it took 144 times as many generations to evolve a working program with 2 variables, and 48 times as many generations to evolve one with 10 variables.
Changing Population Size (number of programs and number of parents)

Doubling the number of programs in statement set 2 seemed to give much less than 50% improvement in the number of generations. The only exception was going from 160 programs to 320, where the number of generations was reduced by 58%. Here we see that the optimal length of the program increased slowly as the number of variables increased.
RELATIONSHIP BETWEEN SOLUTION DENSITY AND NUMBER OF GENERATIONS
By density, we mean the fraction of working programs within the universe of all possible programs for that statement set and number of variables.
Density data for statement set 1 were tabulated by number of variables; representative values were 3.23 × 10⁻⁸ and 1.867 × 10⁻⁸. (All of the above densities were with the default number of programs, and with the default program length for the statement set.) Combining these densities with the median number of generations to reach a working program, we observe a pattern: when we hold everything else constant and change the number of variables, the median number of generations needed to evolve a working program is almost proportional to the reciprocal of the square root of the density. That is, if G is the median number of generations and D is the density of working programs, then K = G × √D is almost constant. This value (K) rises slowly as the number of generations increases and the density decreases. Note how high the K values are for statement set 2 compared with either statement set 1 or 3. This is why I said that something was wrong with statement set 2.
Statement set 2 didn't completely follow the pattern. Looking more closely, we see an anomaly: 7 variables required more generations than 10 variables did. I reran both the evolution runs and the densities, and extended the data to 12 variables. In the repeated run, the anomaly is gone, and the regularity we observed before is seen to approximately hold.
However, the same regularity did not hold for changing the program length. Here, though the density actually increased as the program length increased, the number of generations increased anyway.
For statement set 1 with 2 variables, it appears, then, that we can say that K = G × √D is at best almost constant, but the number of generations could be considerably higher if the program length was not optimal.
Earlier, we saw that the optimal length of program for statement set 1 increased as the number of variables increased. What happens if, for each number of variables, we take the optimal length?

THE SECOND SYSTEM: TREE-STRUCTURED PROGRAMS, n-BIT PARITY

For the second system, I changed nearly everything. Instead of sorting integers, I changed the problem to calculating n-bit parity. The fitness function was, out of all possible inputs of n bits, the number of inputs for which the program computed the correct parity. Rather than demanding perfection, a program was regarded as working if the fitness function equaled or exceeded a threshold value (called the termination condition). Each parent was chosen by randomly selecting seven programs and choosing the most fit from among the seven to be the parent. (This approach was also used for all subsequent systems.) The mutation probability was decreased to 1% (also for all subsequent systems). Instead of a linear list of statements, a program was represented as a LISP-like tree structure. Programs were variable-length rather than fixed-length. Mutations could alter a whole sub-tree, rather than a single node. The default population size was 1000, rather than 20. And, optionally, subroutines could be automatically generated. (This is similar to the system in [1]. I also used the same probabilities of creating subroutines and other program-transforming events as his system used.) The difficulty was changed by increasing the number of bits, and by increasing the termination condition.
This system presented a new problem when measuring densities, because the universe of all possible programs was not a simple n-dimensional cube as it was in the first system. Instead, due to the variable length of the programs, and the fact that almost all non-leaf nodes took two arguments, there were about twice as many possible programs with 200 nodes (the maximum length for the system) as there were possible programs with 199 nodes. In turn, there were twice as many possible programs with 199 nodes as there were with 198 nodes, and so on. In fact, about half of the programs in the universe of all possible programs had the maximum length.
But the evolved programs had a very different length distribution, with nothing below a length of about 10, then a relatively uniform distribution up to about 50 nodes, then slowly tailing off, with only about 10% (range 0% to about 40%) having a length greater than 100 nodes. As noted in [1], programs of exceptional length rarely contribute much to the solution of a genetic programming problem. In fact, the longer programs had a lower density of working programs than shorter programs did.
As an evolution proceeds, the length distribution of the population of programs should become more and more similar to the distribution of working programs, and less and less similar to the distribution of the universe of all possible programs. Given, then, that the universe of all possible programs is structurally very different from both the working programs that are evolved and from the population during an evolution, how can we get meaningful density data? I chose the approach of trying to create self-consistent population distributions -that is, population distributions such that, when populations with that length distribution were evolved, the resulting working programs had the same distribution of lengths. (In practice, this could only be approximately achieved.) If we measure the density of a population of programs with the same length distribution as the working programs, we obtain density data that we can meaningfully combine with the median number of generations, to see if the relationship observed with the first system also holds here. (The alternative -the density data coming from populations that are unlike the population of working programs -clearly is less likely to provide meaningful data.) The same approach -finding self-consistent distributions -was also applied to the number of subroutines, when subroutines were allowed.
Statement set 1
Statement set 1 contained the following node types: Xor, And, Or, and Not. Also, there were constant nodes, which contained either 0 or 1. Statement sets 2, 3, 4, and 5 seemed to scale better than 1/√D, rather than worse. But a look at the evolved programs revealed that in each case, their length distribution departed from the expected length distribution (the distribution that was used to generate the programs). This is not surprising, since the expected length distribution of generated programs was created to match the distribution of working programs shown by statement set 1. Also, statement sets 2 and 4 departed from the expected distribution for the number of subroutines. It must be noted, however, that at higher termination conditions, the length profile of the working programs still did not match the new length profile of the generated programs, so the validity of this data is suspect.
PARALLEL SYSTEMS
The first two systems (sorting and parity) are classic computer science problems. In each case, the output is a very complex function of the input. (Actually, in the sorting case, correct output is always the same, but the transformation from input to output is very complex.) For the third system, I chose a very smooth function -an n-dimensional Gaussian curve, centered at the origin in the n-dimensional cube which extended over the interval [−1, 1) in each dimension. (The exclusion of 1.0 was an artifact of the means of generating random real numbers; it does not seem possible for it to affect the results.) "Programs" were really data, represented as an n-vector lying within the n-dimensional cube.
Obviously, programs were of fixed length. Unlike the previous system, this system (and all subsequent ones) mutated at most one element of the vector.
The first two systems were program-like -the programs looked like statements to be executed. In contrast, the third system was data-like -programs looked like coordinates at which a function was to be evaluated.
A program's fitness function was e^(−r²), where r was the Euclidean distance from the program's vector to the origin. If the fitness function was equal to or greater than the termination value, the program was considered to be fully working. (Perfection - a fitness function of 1.0 - was not realistically achievable for this system.) Unfortunately, sometimes the density became so low that it could not be measured by the standard Monte Carlo method that was used on previous systems (at least not within a reasonable amount of CPU time). However, the density can be calculated. If the termination threshold is t, then all points in the n-dimensional space that lie inside the sphere with radius r = √(−ln t) meet the termination condition. For n dimensions, the volume V of the sphere is given by V = (2π)^(n/2) r^n / (2 × 4 × ⋯ × n) for even n, and V = 2(2π)^((n−1)/2) r^n / (1 × 3 × ⋯ × n) for odd n. The volume of the entire space is 2^n, since it extends from −1 to 1 in all n dimensions. The density of working programs is then D = V/2^n. These results use the calculated density exclusively. Obviously, this system did not demonstrate the "slightly worse than 1/√D" behavior that the first two systems showed! Other than smoothness, the Gaussian system has another difference from the first two systems: if all but one of the variables are held constant, the result is a one-dimensional Gaussian curve in the remaining variable. Further, the one-dimensional Gaussian curve and the multidimensional Gaussian curve are centered at the same value of the non-constant variable. This means that the Gaussian system can optimize each variable independently. In general, the first two systems could not do this.
The Gaussian system can therefore be considered a parallel system, in that it is conducting n essentially independent evolutions in parallel, with the results all multiplied together into one fitness function.
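The calculated density above is compact enough to sketch. This minimal Python version uses the gamma-function closed form, which is equivalent to the even/odd product formulas in the text; the caveat in the comment is mine.

# Density of the Gaussian system: the working region is the n-ball of radius
# sqrt(-ln t) inside the cube [-1, 1)^n of volume 2^n. Valid while r <= 1
# (t >= 1/e), so the ball fits entirely inside the cube.
import math

def gaussian_density(n, t):
    r = math.sqrt(-math.log(t))
    v_ball = math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)
    return v_ball / 2 ** n

print(gaussian_density(3, 0.9))   # density for 3 dimensions, termination 0.9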
To explore such systems further, I built a second parallel system. The "program" for this system consisted of n variables, each of which could range from 1 to p. The fitness function was the number of variables that had value p (hence this system may be called the "highest" system). No termination condition was used for this system; an n-dimensional program had to have a fitness function equal to n to be considered working. The density is therefore D = (1/p)^n. Once again, this system did not demonstrate the "slightly worse than 1/√D" behavior that the first two systems showed. But there is more to see here. Looking just at the median number of generations to evolve a working program, we see that the ratio between the number of generations for p = 2000 and the number of generations for p = 50 stayed remarkably consistent as the number of dimensions changed. It looks like G might be separable into a component that depends on p and a component that depends on the number of dimensions (n), that is, G = f₁(p) × f₂(n), for some f₁ and f₂.
Empirically, f₂(n) ≅ 1/(n′ × ln(n′)), where n′ = n + 0.6. If all the variation for the number of dimensions is in f₂, then f₁ must be the same for all values of n. In particular, it must be the same for n = 2. So it seems reasonable for f₁ to depend only on the density for n = 2. Is f₁ the familiar "slightly worse than 1/√D" behavior?
To check, we can compute K′ = G × √(D₂) × (n + δ) ln(n + δ), where D₂ is the density for n = 2. That is, 1/√(D₂) is a candidate for f₁. For the "highest" system, the K′ values are consistent with this. Turning back to the Gaussian system, we find that it has a good fit with f₂(n) ≅ 1/(n″ × ln(n″)), where n″ = n + 0.05. The results for the Gaussian system are similar to those of the "highest" system, with the K′ values relatively flat for the same termination condition, independent of dimension. Also, as the problem gets harder, K′ exhibits the "slightly worse than 1/√D" behavior - though it seems steeper than the first two systems for low termination conditions and flatter for high termination conditions. At higher termination conditions, the usual "slightly worse than 1/√D" behavior is clearer if we use the density from n = 1 (one dimension) rather than from n = 2. But the argument for why we can use the density from n = 2 may not be valid when applied to n = 1, since a system with only one dimension can only evolve by mutation. While I suspect that this makes no difference for the density data, it seems safer to use the data from n = 2.
ANTI-PARALLEL (TWISTED) SYSTEMS
A parallel system is one where the fitness function can be optimized for each dimension independently. In contrast, a system where the fitness function must be optimized for all dimensions simultaneously may be called anti-parallel. (Attempting to optimize just one variable gives the wrong answer for that variable.) I built three such systems.
The "binary" system had n dimensions and b bits. Each dimension had an integer variable, with a range from 1 to 2 b − 1. (In practice, it was implemented with the range from 0 to 2 b − 2, to be able to use a 0-based lookup table for the fitness.) The fitness value of each dimension was the value of that dimension's variable, reduced to just the least set bit. For example, with b = 4, the values were {1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1}. The value of the fitness function was the sum of the fitness values for each dimension. This system is a parallel system.
To make a parallel system into an anti-parallel system, we perform a rotation of axes. Given that in the binary system, the variables must be integers in the correct range, I could not use the obvious transformation, which is u = (x + y)/√2, v = (x − y)/√2. Instead, I used u = (x + y)/2 (integer division, that is, discarding any remainder), and v = |x − y|. (This is another place where using a range that starts at 0 helped the implementation.) I paired the dimensions to do this; this required that n be even. This means that this system was only anti-parallel for pairs of dimensions. As the investigation of this system did not proceed beyond n = 2, this was not an issue.
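A small Python sketch of this twisted fitness, under my reading of the description; the names and the exact 0-based bookkeeping are my assumptions.

# Twisted binary fitness: pair the dimensions, rotate (x, y) to
# u = (x + y) // 2 and v = |x - y|, and score each rotated coordinate by its
# least set bit (on 0-based values, hence the +1).
def least_set_bit(value):
    return value & -value

def twisted_binary_fitness(vars_0_based):
    total = 0
    for i in range(0, len(vars_0_based), 2):       # n must be even
        x, y = vars_0_based[i], vars_0_based[i + 1]
        u, v = (x + y) // 2, abs(x - y)            # rotated coordinates
        total += least_set_bit(u + 1) + least_set_bit(v + 1)
    return total

# Reproduces the examples in the text: (0, 3) scores 6, (2, 5) scores 8.
print(twisted_binary_fitness([0, 3]), twisted_binary_fitness([2, 5]))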
This system could not evolve. If it didn't have a working program in the first generation, it almost always got permanently stuck. With 20 programs, about 1/3 of the time it could not evolve n = 2, b = 3 with a termination value of 8, even though there were only 49 programs possible with those values, and two of them met the termination criterion! ("Permanently stuck" is impossible to prove. However, it got stuck with the best program having a fitness value of 6 for 100,000,000 generations; at that point the evolution was terminated. That was as close to "permanent" as I had patience for.) It is easy to see why the twisted binary system got stuck. With two dimensions and three bits, to get a fitness function of 6, (x, y) must be one of (0, 3), (3, 0), (3, 4), (4, 3), (1, 6), or (6, 1). For a fitness function of 8, (x, y) must be (2, 5) or (5, 2). (Note that these are 0-based x and y, not 1-based.) If the entire population reaches a fitness function of 6, the evolution is stuck. It must change both x and y to reach a fitness function of 8, it cannot get either x or y to the right value by any combination of parents, it can only mutate x or y (but not both) in one generation, and the result of mutating either x or y is a less-fit offspring. Also, a fitness value of 7 is not possible in this system, so there is no way to reach 8 by taking two steps.
The only possible way forward would be for there to be at least seven programs with a mutation in the previous generation (probability 10⁻² for each one, so 10⁻¹⁴ for all seven, but it could be any seven out of the population of 20, which gives us 77520 ways that it could happen, for a total probability of 7.752 × 10⁻¹⁰). Then all seven mutated programs would have to be chosen for the competition for a parent of a program in the next generation (one out of 77520, but there are 40 such competitions, so the probability is 40/77520 - though this is not quite exact). Then the mutation would have to be passed on to the next generation (probability 0.5). Then there would have to be another mutation in the child program (probability 0.01). Finally, the mutations would have to give rise to the right values so that the resulting fitness function was 8 (probability 2/7 for the first mutation, and 1/7 for the second). This combination of events is immensely unlikely (total probability 8.16 × 10⁻¹⁷ per generation).
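The stated probabilities multiply out as claimed; a quick check in Python, with the numbers taken directly from the paragraph above:

# Verifies the escape-probability arithmetic (population 20, mutation 0.01).
from math import comb

p_seven_mutations = (0.01 ** 7) * comb(20, 7)   # any 7 of 20 mutate: 7.752e-10
p_chosen          = 40 / comb(20, 7)            # all seven picked ("not quite exact")
p_passed_on       = 0.5
p_second_mutation = 0.01
p_right_values    = (2 / 7) * (1 / 7)

total = (p_seven_mutations * p_chosen * p_passed_on
         * p_second_mutation * p_right_values)
print(f"{total:.3g}")  # ~8.16e-17 per generation, matching the text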
The second anti-parallel system I built was based on the "linear" system. This system had n dimensions. Each dimension had a real variable, in the range [−1.0, 1.0). The fitness function was 1, minus the sum of the absolute values of the variable for each dimension. This "linear" system was a parallel system.
To convert the linear system into an anti-parallel system, I rotated it through 45 degrees in each of the Euler angles. I then scaled the rotated variables by different amounts: the first rotated variable was scaled by 1.0, the second by 1.5, the third by 1.5², and so on. The fitness function was 1, minus the sum of the absolute values of the rotated and scaled variables. This created, essentially, a diagonal "ridge", with the fitness function falling away more steeply in other directions. This "twisted linear" system was anti-parallel.
It may be easier to see why this system is anti-parallel in the two-dimensional case. Holding one variable constant defines a line; optimizing the other variable means finding the highest point on the line, which is where the line intersects the ridge. But because the line is parallel to one of the axes and the ridge is diagonal, this gives a value for the optimized variable that is different from the coordinate of the peak (the highest point on the ridge).
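In two dimensions, the geometry is easy to render in code. The following sketch uses one plausible rotation/scaling convention; the paper's exact convention is not given in this excerpt.

# Hedged 2-D sketch of the twisted linear fitness: rotate (x, y) by 45
# degrees, scale the across-ridge coordinate by 1.5, score 1 - (|u| + |v|).
import math

def twisted_linear_fitness_2d(x, y):
    u = (x + y) / math.sqrt(2)          # along the ridge, scaled by 1.0
    v = 1.5 * (x - y) / math.sqrt(2)    # across the ridge, scaled by 1.5
    return 1.0 - (abs(u) + abs(v))

# From a point on the ridge (x = y), changing x alone changes both u and v,
# so the fitness drops; reaching the peak requires moving x and y together.
print(twisted_linear_fitness_2d(0.2, 0.2), twisted_linear_fitness_2d(0.25, 0.2))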
This system could barely evolve at all. Starting at two dimensions with termination value 0.7, it occasionally took half a million generations to evolve a solution, even though the solution has a density that was greater than 1%. At termination value 0.8, it once only made it to fitness = 0.67 in 112 million generations. The evolution was terminated at that point. This was slightly better than the twisted binary system, but it still essentially could not evolve anything more than the most trivial problems. This happens because of the shape of the fitness function. The absolute values cause a discontinuous first derivative at the ridge. From a point on the ridge, moving in any direction parallel to an axis (that is, changing any one variable) reduces the fitness function. Also, from a point near the ridge, the only way to improve the fitness function is to move closer to the ridge.
The third anti-parallel system was created by applying the rotations and scaling of the twisted linear system to the Gaussian system. The fitness function was e^(−R²), where R was the Euclidean length of the vector composed of the rotated and scaled variables. Like the Gaussian system, the density of this system was easy to calculate. It was the same as the density of the Gaussian system, except for the scaling. These results use the calculated density exclusively.
The results for this system show that it clearly scaled better than the first two systems. Even without knowing the appropriate value of δ to use, we can see that this system scaled worse than a parallel system.
Finally, it scaled much better than the twisted binary and twisted linear systems. This is because the twisted Gaussian system has continuous derivatives. The gradient is nonzero everywhere except at the peak. This means that, unlike the twisted linear and twisted binary systems, the twisted Gaussian system could always make progress by changing only one variable.
CONCLUSIONS
Genetic programming scales very well for data-like problems with continuous first derivatives (except for the problem of getting stuck on a sub-peak). But for program-like problems, genetic programming doesn't seem to scale very well to larger, more difficult problems. As the size of the solution space increases, the number of working programs also increases, but more slowly. So the density of working programs decreases, and the number of generations required to evolve a working program increases.
For example, let us suppose that we have a simple programming language in which there are only ten possible statements - not types of statements, but ten statements total. Also, let us suppose that the number of working programs increases as the square root of the total number of possible programs. (My first system was rather different, in that the number of possible statements increased as the number of variables increased. But using my system as a rough guide, for a program that is 3 statements long - the minimum needed for statement set 1 - the density of solutions D was proportional to 1/V⁴ for large V, where V is the number of variables. The size of the solution space was proportional to V⁹, again for large V. So for large V, the number of working programs must be proportional to V⁵. This is slightly more than the square root of the total number of possible programs.) Then, in our hypothetical example, if the program is 20 statements long, the total solution space is 10²⁰, there are about 10¹⁰ possible working programs, and the density of working programs is 10⁻¹⁰. The number of generations needed to evolve a solution is of the order of 10⁵, which is quite doable. But if the problem requires a program that is only twice as long (40 statements), there are 10⁴⁰ possible programs, only about 10²⁰ of them work, and the number of generations is of the order 10¹⁰. At this point, you need either a cluster of machines, or an uninterruptible power supply and some patience. Make the problem harder again, so that the program needs 80 statements, and the size of the solution space is 10⁸⁰, there are about 10⁴⁰ working programs, and it will take of the order of 10²⁰ generations to evolve a solution. Now you need a big cluster and a lot of patience. Make the problem harder once more, so that the program needs 160 statements, and the size of the solution space is 10¹⁶⁰, there are about 10⁸⁰ working programs, and it will take of the order of 10⁴⁰ generations to evolve a solution. This is hopeless - genetic programming simply isn't a reasonable way of solving a problem of this size, on any hardware. And this is for a program that is only 160 statements long! As programs go, this is still a very small one. A competent programmer can write such a program in a day or two - if the problem is within the scope of the programmer's competence. If it is a problem that the programmer has no idea how to approach, the literature will help - if the answer has been published.
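The arithmetic in this example follows mechanically from G ~ 1/√D; a short script reproducing the orders of magnitude (a sketch of the hypothetical example, not of any measured system):

# Hypothetical scaling: 10 possible statements, solution space 10^L for
# program length L, working programs ~ sqrt(total), and G ~ 1/sqrt(density).
import math

for length in (20, 40, 80, 160):
    density = 10.0 ** (-length / 2)       # sqrt(total) / total
    generations = 1 / math.sqrt(density)
    print(f"L={length}: density ~ 1e{round(math.log10(density))}, "
          f"G ~ 1e{round(math.log10(generations))}")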
It seems, then, that genetic programming is best for smaller problems that we don't yet know how to solve. If the problem is a standard one, like parity or sorting, a human programmer will run rings around genetic programming. But for problems where the solution is not yet known to mankind, genetic programming beats both brute-force search and (at least sometimes) human ingenuity. Ironically, then, humans are better at the boring parts of programs, and genetic programming is better at the really interesting problems (as long as they are small).
For program-like problems, one way to keep the problem small is to use the most powerful statements that you can. "Small" really means that the universe of all possible programs is small. This in turn means that only a small number of statements is needed to write the program. Also, it seems to help if the statement set is all at the same level of abstraction.
FURTHER QUESTIONS
What is the formula for the "slightly worse" part of "slightly worse than 1/√D"? What is the proportionality "constant"? (It's not really constant, since it varies with statement set, population size, and maybe other parameters.) Perhaps the most interesting question: What is the density of working solutions for DNA-based biological systems in the total possible DNA space? Or, on a smaller scale, what is the density of working solutions for protein sequences that will bind at a specific site, when implemented in the "statements" of DNA? | 2011-02-12T21:09:32.000Z | 2011-02-12T00:00:00.000 | {
"year": 2011,
"sha1": "810f8aeb95bb47120e4633921bf37c4e0474c054",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "810f8aeb95bb47120e4633921bf37c4e0474c054",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
231991634 | pes2o/s2orc | v3-fos-license | Effect of Spironolactone on COVID-19 in Patients With Underlying Liver Cirrhosis: A Nationwide Case-Control Study in South Korea
Purpose: On the basis that spironolactone is involved in ACE2 expression and TMPRSS2 activity, previous studies have suggested that spironolactone may influence the infectivity of COVID-19. Research has suggested that cell entry of SARS-CoV-2, the virus that causes COVID-19, is associated with the ACE2 receptor and TMPRSS2. The purpose of this study was to investigate whether spironolactone has a protective effect against COVID-19 and the development of associated complications in patients with liver cirrhosis. Methods: We conducted a nationwide case-control study on liver cirrhosis patients with or without COVID-19 using population-based data acquired from the National Health Insurance Systems of the Republic of Korea. After 1:5 case-control matching, multivariable-adjusted conditional logistic regression analysis was performed. Results: Among the patients with liver cirrhosis, the case group with COVID-19 was found to be significantly less exposed to spironolactone than the control group without COVID-19. The adjusted odds ratio (OR) and 95% confidence interval (CI) between the two groups were 0.20 (0.07–0.54). In addition, regardless of the cumulative dose of spironolactone, exposure to spironolactone was associated with lower COVID-19 infection. In terms of the development of complications due to COVID-19, spironolactone did not show any significant association between the patients with and without complications (P = 0.43). The adjusted OR and 95% CI between the two groups were 1.714 (0.246–11.938). Conclusion: We conclude that spironolactone may reduce susceptibility to COVID-19 but does not affect the development of its associated complications; however, further studies are needed to confirm the exact association between spironolactone and COVID-19 infection.
INTRODUCTION
The severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) is a novel coronavirus that causes coronavirus disease 2019 (COVID-19). COVID-19 has rapidly spread globally, and the World Health Organization declared COVID-19 a pandemic on March 11, 2020. The mortality rate based on cumulative data is around 3.4% in China and 0.4% outside of China (1). Despite the relatively low mortality rate, COVID-19 can cause severe complications such as acute respiratory distress syndrome (ARDS), with elderly patients being at particularly high risk (2).
Spironolactone is used primarily to treat heart failure, edematous conditions such as ascites in severe liver diseases, secondary hyperaldosteronism due to liver cirrhosis, and essential hypertension (3). The pharmacodynamics of spironolactone are diverse; for example, it is a mineralocorticoid receptor antagonist that tends to produce favorable patterns of renin-angiotensin-aldosterone system (RAAS) and angiotensin-converting enzyme-2 (ACE2) expression. It also reduces transmembrane serine protease 2 (TMPRSS2) activity through its antiandrogenic activity (4)(5)(6). Previous studies have noted that cell penetration of SARS-CoV-2 is associated with the ACE2 receptor and TMPRSS2 (7)(8)(9). Research has therefore suggested that spironolactone may influence the infectivity of COVID-19 (4,10,11).
In light of this theory, we have conducted a nationwide case-control study investigating whether spironolactone exposure could be associated with SARS-CoV-2's infectivity and complication rate in COVID-19 patients with liver cirrhosis. The null hypothesis was that there are no differences between patients with or without spironolactone exposure in terms of SARS-CoV-2's infectivity and complication rate of COVID-19.
Data Source and Study Population
This study was approved by the Institutional Review Board of Asan Medical Center (IRB number: 2020-1153) and written informed consent was waived by the board due to the deidentified nature of the data. The anonymized data obtained from the National Health Insurance claims of Republic of Korea were analyzed. The flow of the population in this case-control study is represented in Figure 1.
In detail, the population-based dataset comprised all patients tested for COVID-19 from January 20, 2020, when the first case of COVID-19 was observed in South Korea, to May 15, 2020, including suspected and confirmed cases, with demographic information and medical services history for the past 3 years. The analysis was performed on 234,427 patients tested for COVID-19 with the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) diagnosis codes of B342, B972, Z208, Z290, U18, U181, Z038, Z115, U071, and U072. Screening was conducted by performing polymerase chain reaction amplification of the viral E gene, and the RdRp region of the ORF1b gene was amplified to confirm COVID-19. Among the total 234,427 patients with COVID-19 screening test results, 6,462 subjects over 19 years of age were confirmed to have liver cirrhosis. The presence of liver cirrhosis was established based on ICD-10 codes for liver cirrhosis (K702, K703, K704, K717, K720, K721, K729, K740-K746, K761, K766-K767, R18, I850, I859, I864, I868, I982, I983) (12). Among patients with liver cirrhosis, there were 67 (1.0%) confirmed COVID-19 cases in the case group and 6,395 (99.0%) uninfected cases in the control group. Cases and controls were matched at a 1:5 ratio on covariates such as sex, age, region, and testing hospital, considering the explosive outbreak in the Daegu and Gyeongbuk regions (13,14). Patients were classified into either the Daegu and Gyeongbuk regions or other regions, and the hospitals in which patients had been tested were classified into tertiary hospitals and others. Covariates were matched exactly, except for age, for which nearest-neighbor matching with a caliper width of 0.1 on the propensity score was performed. The final numbers of cases and controls were 67 and 332, respectively. Then, whether the subjects had been exposed to spironolactone within 1 year before COVID-19 testing was evaluated.
Further subgroup analysis for complication rate was done on the case group. Complications due to severe COVID-19 disease were defined as cases requiring intervention, such as oxygen therapy, anti-viral therapy, vasopressors, admission to the intensive care unit, continuous renal replacement therapy, or death (15) (Supplementary Table 1). Patients were divided into two groups: those with complications and those without complications (16). There were 35 and 32 patients with and without complications, respectively.
Exposure to Spironolactone
Exposure to spironolactone was defined as the administration of spironolactone at least once within 1 year before the date of COVID-19 testing. Two additional sensitivity analyses were performed to verify the robustness of the study findings, in which exposure was redefined as at least one spironolactone prescription claim within 6 months and within 3 months, respectively. In addition, to quantify the exposure to spironolactone and to determine the dose-response association, the cumulative defined daily dose (cDDD) of spironolactone during the exposure period was calculated (≤30 cDDD or >30 cDDD) (17). The DDD is used for measuring the prescribed amount of a given drug and is considered the assumed average daily maintenance dose of the drug for its main indication in adults (determined from the ATC/DDD system of the WHO Collaborating Center for Drug Statistics and Methodology) (18). For spironolactone, the WHO DDD is 75 mg. The cDDD was calculated as the total amount of the drug divided by its DDD. An illustration of the study design and spironolactone exposure is presented in Supplementary Figure 1.
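A minimal sketch of the cDDD bookkeeping, assuming hypothetical claim records; the claims-table layout is my assumption, and only the 75 mg DDD comes from the text.

# cDDD: total prescribed amount divided by the WHO DDD (75 mg for
# spironolactone). The claim records below are hypothetical.
SPIRONOLACTONE_DDD_MG = 75.0

def cumulative_ddd(claims):
    """claims: iterable of (daily dose in mg, days supplied) per prescription."""
    total_mg = sum(dose * days for dose, days in claims)
    return total_mg / SPIRONOLACTONE_DDD_MG

cddd = cumulative_ddd([(25.0, 30), (50.0, 14)])   # 25 mg x 30 d + 50 mg x 14 d
print(cddd, "<=30 cDDD" if cddd <= 30 else ">30 cDDD")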
Definitions of Covariates
Underlying diseases were established based on ICD-10 diagnosis codes. The considered comorbidities were decompensated liver cirrhosis, diabetes, hypertension, dyslipidemia, cardiovascular disease including myocardial infarction and stroke, cancer, lung disease including chronic obstructive pulmonary disease and asthma, end-stage renal disease (ESRD) with dialysis, and immunocompromised status including autoimmune diseases and human immunodeficiency virus infection. These comorbidities were chosen based on the announcement of the Centers for Disease Control and Prevention in the U.S. that they increase the risk of severe illness from COVID-19 infection (19) (Supplementary Table 1). The Charlson Comorbidity Index (CCI) was also used as a covariate (20); a higher CCI score indicates a greater predicted likelihood of mortality.
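For readers unfamiliar with the CCI, the score is a weighted sum over a fixed list of comorbidities. The sketch below uses a subset of the commonly cited original Charlson weights; both the weights shown and the diagnosis flags are illustrative assumptions, since the paper does not reproduce its CCI coding:

# Subset of the original Charlson weights (assumed for illustration)
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1, "congestive_heart_failure": 1,
    "chronic_pulmonary_disease": 1, "diabetes": 1,
    "mild_liver_disease": 1, "moderate_severe_liver_disease": 3,
    "renal_disease": 2, "any_malignancy": 2,
    "metastatic_solid_tumor": 6, "aids_hiv": 6,
}

def charlson_index(diagnoses):
    # diagnoses: set of condition flags derived from ICD-10 codes
    return sum(w for cond, w in CHARLSON_WEIGHTS.items() if cond in diagnoses)

charlson_index({"diabetes", "moderate_severe_liver_disease"})  # -> 4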
Statistical Analysis
Baseline characteristics of the case and control groups are presented as means with standard deviations for continuous variables and as numbers with percentages (%) for categorical variables. Comparisons between the two groups were performed using Student's t-tests for continuous variables and chi-squared or Fisher's exact tests for categorical variables. After 1:5 case-control matching, odds ratios (ORs) and 95% confidence intervals (CIs) were calculated with conditional logistic regression analyses. For the multivariable-adjusted analysis of COVID-19 status, two models were used because of the limited study population. Model 1 was adjusted for hypertension, dyslipidemia, and CCI, because the CCI does not include hypertension and dyslipidemia. Model 2 was adjusted for decompensated liver cirrhosis, hypertension, cardiovascular disease, cancer, lung disease, ESRD with dialysis, and CCI, which were significant at the P < 0.10 level in the univariable analysis. Subgroup analyses for COVID-19 status were performed by dividing the study group by sex (male and female) and age (≥60 and <60 years). For the multivariable-adjusted analysis of the presence of complications, the model was adjusted for age, diabetes, hypertension, cancer, and CCI, which were significant at the P < 0.10 level in the univariable analysis. The statistical software SAS version 9.4 (SAS Institute Inc., Cary, NC, USA) was used to perform all statistical analyses. A P-value < 0.05 was considered statistically significant.
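The univariable comparisons described above map directly onto standard library routines. A minimal sketch follows (synthetic arrays and invented counts stand in for the claims data; the conditional logistic regression itself was fitted in SAS and is not reproduced here):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age_cases = rng.normal(60, 12, 67)        # hypothetical continuous covariate
age_controls = rng.normal(60, 12, 332)
t_stat, p_cont = stats.ttest_ind(age_cases, age_controls)

# 2x2 table for a categorical covariate: rows = case/control,
# columns = with/without the condition (counts invented for the example)
table = np.array([[20, 47],
                  [90, 242]])
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
if table.min() < 5:                        # small cells: Fisher's exact test
    odds_ratio, p_cat = stats.fisher_exact(table)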
Baseline Characteristics
Before matching, the numbers of patients in the case and control groups were 67 and 6,395, respectively. After matching, a total of 399 subjects were analyzed. The baseline characteristics of the study population are presented in Table 1. The mean age was 60.2 years, and the proportion of male subjects was 59.4%. The proportions of decompensated liver cirrhosis, hypertension, cardiovascular disease, cancer, lung disease, and ESRD with dialysis were significantly higher in the control group than in the case group. The CCI was higher in the control group than in the case group (6.3 vs. 4.3). The complication rate was 52.2% in the case group and 16.6% in the control group (P < 0.0001). Among complications, the proportions of oxygen therapy and antiviral therapy were significantly higher in the case group. The proportion of spironolactone exposure was 10.5% in the case group and 33.4% in the control group (P = 0.0002).
Of the patients exposed to spironolactone, four case and 60 control patients had a spironolactone cDDD of >30, whereas three case and 51 control patients had a spironolactone cDDD of ≤30.
Association Between Exposure to Spironolactone and Risk of Infection With COVID-19
The results of the logistic regression analysis for COVID-19 infection according to exposure to spironolactone are shown in Table 2. The adjusted OR (95% CI) in model 2 for COVID-19 between patients who were and were not exposed to spironolactone within 1 year was 0.20 (0.07-0.54). Additional analyses within 6 months and 3 months also showed a significant difference between the case and control groups (P < 0.05). Using non-users as the reference, the adjusted ORs for patients with a spironolactone cDDD of ≤30 and >30 were significant regardless of the definition of the timing of spironolactone exposure. However, no dose-response relationship was observed for the association between spironolactone exposure and COVID-19 (Table 2).
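For orientation, the crude exposure odds ratio can be recovered from the counts reported above (7 of 67 cases and 111 of 332 controls exposed within 1 year). The sketch uses the standard Woolf logit interval; Table 2's estimates come from conditional logistic regression and therefore differ slightly:

import math

a, b = 7, 60        # exposed / unexposed cases (from the text)
c, d = 111, 221     # exposed / unexposed controls

or_crude = (a * d) / (b * c)                       # about 0.23
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_crude) - 1.96 * se_log)  # about 0.10
hi = math.exp(math.log(or_crude) + 1.96 * se_log)  # about 0.53
print(f"crude OR {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")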
Comparison Between the Complication and No Complication Groups of Patients With Liver Cirrhosis and COVID-19
Baseline characteristics of the complication and no complication groups of patients with liver cirrhosis and COVID-19 infection are shown in Table 3. The proportions of diabetes, hypertension, and cancer were significantly higher in the complication group than in the no complication group. There was no significant difference in the proportion of patients exposed to spironolactone between the complication and no complication groups (P = 0.43). The crude and adjusted ORs (95% CIs) of spironolactone exposure for the development of COVID-19-related complications were 2.50 (0.45-13.91) and 1.71 (0.25-11.94), respectively.
DISCUSSION
To summarize, the results showed that a significantly low proportion of cirrhosis patients with COVID-19 had previous exposure to spironolactone. Spironolactone was not significantly associated with complications. The factors associated with complications in cirrhotic patients with COVID-19 were diabetes, hypertension, cancer, and CCI score. These high-risk factors coincide with those indicated in previous studies (21,22). Therefore, the null hypothesis was partially accepted and partially rejected. The value of our study is that it provides theoretical evidence for the role of spironolactone in COVID-19 susceptibility. A previous study by Cadegiani et al. (4) proposed that spironolactone may have protective effects against COVID-19 and suggested that it could be a plausible candidate for prophylactic and early treatment of COVID-19. This was based on the theory that spironolactone could prevent SARS-CoV-2 cell entry by modulating ACE2 expression, decrease viral priming by reducing TMPRSS2 activity, attenuate the damage caused by overexpression of the angiotensin II-AT-1 axis, and induce anti-inflammatory effects in the lungs through pleiotropy. Our study showed that cases with COVID-19 had statistically significantly lower exposure to spironolactone than liver cirrhosis controls without COVID-19. Considering that the proportions of decompensated liver cirrhosis, hypertension, cardiovascular disease, cancer, and ESRD, as well as the CCI, were higher in patients without COVID-19, it can be concluded that spironolactone may have protective effects against SARS-CoV-2 infectivity.
In our study, there was no statistically significant association between the complication rate and spironolactone exposure. This result could be distorted because there were only 35 patients in the complication group, which was too small, and because comorbidities were unequally distributed; in particular, the significantly higher CCI score of the complication group compared with the no complication group could raise the complication rate. When the baseline characteristics identified in previous studies as risk factors for COVID-19 complications (diabetes, hypertension, cancer, and CCI) were analyzed, they were higher in the complication group than in the no complication group (21,22). For these reasons, a protective effect of spironolactone against COVID-19 complications could be masked.
We acknowledge the limitations of our study. First, we used data from national health insurance claims, which may involve discrepancies from actual therapeutic practice. In addition, due to the observational nature of the present study, biases from the unequal distribution of comorbidities between the two groups might have affected the association between the use of spironolactone and COVID-19, despite statistical adjustments. Second, it was challenging to define ARDS, so complications were defined to include cases treated with oxygen therapy and other severe complications related to the disease. Third, susceptibility to contagious diseases can be affected by multiple factors, such as sociocultural factors, which are difficult to anticipate. We were also unable to gather information on patients' lifestyle-related factors such as smoking and alcohol consumption, which might affect the outcomes of our study. Additionally, the number of COVID-19 cases among patients with liver cirrhosis was small. Moreover, our study lacked detailed information about the severity or stage of liver cirrhosis. Therefore, our results should be interpreted with caution, because we investigated only complications in patients with COVID-19 and liver cirrhosis, and whether these patients were exposed to spironolactone. Our results should therefore be validated in a larger cohort study.
Our study is the first to investigate the impact of spironolactone on patients' susceptibility to COVID-19 and on the prevalence of its associated complications. Based on the statistical analysis, patients with underlying liver cirrhosis who were infected with COVID-19 showed a significantly lower spironolactone exposure rate than those with underlying liver cirrhosis who were not infected. Therefore, our results suggest that exposure to spironolactone may reduce susceptibility to COVID-19 in patients with liver cirrhosis. Further studies are needed to confirm the exact association between spironolactone and COVID-19.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: https://hira-covid19.net/.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Institutional Review Board of Asan Medical Center, Seoul, Republic of Korea (IRB number: 2020-1153). Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
DJ, MS, and JC were responsible for the conception and design of the study, the acquisition, analysis and interpretation of the data, and the drafting of the manuscript. MS performed the statistical analyses. All authors had full access to all data used in the study, take responsibility for the integrity of the data and the accuracy of the data analysis, and approved the final version of the manuscript. | 2021-02-23T14:15:27.563Z | 2021-02-23T00:00:00.000 | {
"year": 2021,
"sha1": "65a19fa67904bf719e964eeb37079e51afcf60c6",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2021.629176/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65a19fa67904bf719e964eeb37079e51afcf60c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234808580 | pes2o/s2orc | v3-fos-license | Depth Control of Autonomous Underwater Vehicle Using Robust Tracking Control
Since the behavior of an autonomous underwater vehicle (AUV) is influenced by disturbances and moments that are not accurately known, the depth control law of an AUV must be able to track the input signal and reject disturbances simultaneously. Here, we propose a robust tracking control for the depth of an AUV. The augmented closed-loop system is represented by an error dynamic equation, and the asymptotic stability of the overall system can easily be shown using a Lyapunov function. The robust tracking controller consists of the internal model of the command signal and a state feedback controller, and it is able to track the input signal and reject disturbances. The closed-loop control system is robust to parameter uncertainties. Simulation results showed that the control performance of the robust tracking controller is better than that of a P + PD controller.
Introduction
Recently, AUVs have frequently been used to carry out a variety of missions, including exploration of the ocean floor and military operations. However, it is difficult to control an AUV, partly because of its complex nonlinear dynamics and partly because of the severe and unpredictable ocean environment.
Generally, the control schemes of an AUV fall into three categories: heading control, dive-plane control, and speed control. Only the dive-plane dynamics are considered for depth control in this paper. Many control techniques have been proposed for the depth control of AUVs. Kadam et al. [3] linearized and approximated the overall depth control system of an AUV as an IPDT (Integral Plus Dead Time) system and tuned a PD (proportional-derivative) controller with a disturbance observer, but they did not demonstrate the disturbance rejection ability explicitly. Park et al. [4] used a PD controller to control the vehicle pitch in an inner loop and a P controller to control the depth of the AUV in an outer loop. Vahid et al. [6] proposed the same P+PD controller to improve on the control ability of a plain PD controller, but this method is difficult to tune and cannot reject disturbances. Hong et al. [8] used P+SMC (sliding mode control) as the feedback controller together with an adaptive feedforward controller to compensate the pitch angle, but this overall control system was too complicated to realize.
FLC (fuzzy logic control) [12] and neuro-fuzzy controllers [13] are adequate for complex industrial processes, but they may not give excellent control performance if the controlled plant has uncertainty and strong nonlinearity. Moreover, a traditional FLC shows a steady-state error if the controlled system is of type 0.
Ma et al. [2] proposed SMC (sliding mode control) and Kadar et al. [9] suggested DSMC (discrete SMC) to control the depth of an AUV. Although SMC is robust in the sliding mode, the equivalent control input depends on the nominal parameters of the AUV in the approaching mode, so SMC may not guarantee robustness in the case of severe parameter uncertainties.
In this paper, we propose a robust tracking control for the depth of an AUV that tackles tracking of the desired input and rejection of disturbances simultaneously. The technique is developed based on an error dynamic equation using state feedback, and the closed-loop control system is given the desired poles using pole placement theory. The asymptotic stability of the overall system is thus guaranteed by a Lyapunov function, and the system has input tracking ability, disturbance rejection, and robustness. The simulation results show that the proposed controller performs better than a P+PD controller in the presence of environmental disturbances and uncertainties.
Dynamic equations of AUV
The motion of an AUV can be described by six-degree-of-freedom differential equations of motion [1,2] using the two coordinate frames shown in Fig. 1. The linear velocity vector of the AUV with respect to the earth-fixed frame can be obtained from the time rate of change of the displacements, as in (1), where J(η) is the transformation matrix between the two coordinate frames.
Here s(·) and c(·) are abbreviations for sin and cos, respectively.
Therefore, the linear velocity vector with respect to the body-fixed frame follows as (2), and, in the same way, the angular velocity vector with respect to the body-fixed frame follows as (4). The kinematic equations of motion in the depth plane with respect to pitch and heave (assuming φ = ψ = 0) are then given as (5) and (6), obtained from (2) and (4): ż = -u sin θ + w cos θ and θ̇ = q.
Assuming the forward speed of the AUV to be constant at steady state, the kinematics can be linearized as (7) and (8), because sin θ ≃ θ and cos θ ≃ 1 for small pitch angles.
Therefore, the simplified equations of motion in the depth plane can be written, assuming the origin of the body-fixed frame to be the center of mass, as (9) and (10). Together, equations (7)-(10) can be conveniently written in matrix form as (11). The heave velocity w during diving is much smaller than the forward speed, so the terms containing w and ẇ can be neglected. The state-space model of the AUV in the depth plane can then be expressed as (12), where M_q is the pitch moment due to the pitch rate q, M_δ is the pitch moment due to the stern-plane deflection δ, I_y is the vehicle inertia around the pitch axis, and M_θ is the hydrostatic moment coefficient, as in (13).
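To make the structure of (12) concrete, the following sketch assembles a dive-plane state-space model from the coefficients named above. The numerical values are placeholders, not the paper's hydrodynamic coefficients, which are not given in the extracted text:

import numpy as np

# Placeholder dive-plane coefficients (assumed values)
Iy, Mq, Mtheta, Mdelta = 1.0, -1.2, -0.8, 2.5
U = 1.5                                   # constant forward speed

# States: pitch angle theta, pitch rate q, depth z; input: stern plane angle
A = np.array([[0.0,        1.0,   0.0],   # theta_dot = q
              [Mtheta/Iy,  Mq/Iy, 0.0],   # Iy*q_dot = Mtheta*theta + Mq*q
              [-U,         0.0,   0.0]])  # z_dot = -U*theta (heave neglected)
B = np.array([[0.0], [Mdelta/Iy], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])           # measured output: depth z

print(np.linalg.eigvals(A))  # open-loop poles; note the integrator at s = 0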
Robust Tracking Controller
In this section, we present the design of a robust tracking controller with the ability to track a step input and reject a step disturbance.
Consider a dynamic equation given by (14):

ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t),

where x is the state vector, u is the input, and y is the output.
Define the tracking error for a step input r as (15), e = r - y. Taking the time derivative of equation (15) yields (16), ė = -Cẋ, since ṙ = 0 for a step input. We define the two intermediate variables z = ẋ and w = u̇ in (17). Then an augmented system is given as (18): d/dt [e; z] = Â[e; z] + B̂w, with Â = [0 -C; 0 A] and B̂ = [0; B]. If the augmented system in equation (18) is completely controllable, the closed-loop system of equation (19) is obtained using control feedback of the form of equation (20), w = k₁e - Kz.
Here k₁ is a scalar and K is a 1 × n vector, n being the order of the system matrix A. The characteristic equation associated with equation (18) is (21), det(sI - Â + B̂[-k₁ K]) = 0. If all the roots of the characteristic equation are placed in the left half-plane using pole-placement theory, then the closed-loop system is asymptotically stable for any initial conditions, and e(t) approaches 0 as t approaches infinity.
As the augmented error equation provides tracking ability for a step input, the steady-state error is zero.
Integrating the variables of equation (17) recovers the control law u(t) = k₁∫e dτ - Kx(t); the corresponding block diagram of the closed-loop system, including the controller, is shown in Figure 2.
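A minimal numerical sketch of this design, using the augmented system (18) and SciPy's pole-placement routine. The plant matrices and the desired pole locations are illustrative assumptions, not the AUV model of the previous section:

import numpy as np
from scipy.signal import place_poles

# Illustrative second-order plant (assumed numbers)
A = np.array([[0.0, 1.0], [-0.5, -1.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Augmented system (18): state [e, xdot], input w = udot
A_aug = np.block([[np.zeros((1, 1)), -C],
                  [np.zeros((n, 1)),  A]])
B_aug = np.vstack([np.zeros((1, 1)), B])

# Desired closed-loop poles: a dominant pair plus a fast real pole (assumed)
K_aug = place_poles(A_aug, B_aug, [-2 + 2j, -2 - 2j, -8.0]).gain_matrix

k1, K = -K_aug[0, 0], K_aug[0, 1:]   # u(t) = k1 * integral(e) - K @ x(t)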
In Figure 2, the controller includes one integrator because the Laplace transform of a step input is 1/s, so this method is also called the internal model design technique.

Fig. 2 Integral controller for a step input [12]

The design procedure of the robust tracking controller for any other non-decaying input is the same as for a step input; for example, the internal model of a ramp input has two integrators.
Computer Simulation and Discussion
In this section, we simulate the depth control of the AUV under various conditions using the robust tracking control technique.
We first considered the step response of the AUV. If the desired poles of the closed-loop characteristic equation are assigned as a dominant complex-conjugate pair plus a real pole, the feedback gains follow from pole placement. As shown in Fig. 3(b), the step responses using the robust tracking controller and the P+PD controller reach their maximum overshoots at different times. Control performance measures such as rise time and settling time are almost the same, but the response using the P+PD controller is a little faster than that using the robust tracking controller. To show robustness against a step disturbance, we simulated the disturbance response when the magnitude of the disturbance is +2. The maximum value of the disturbance response using the robust tracking controller is about 0.12, and the disturbance is rejected perfectly within 3 s. Since the closed-loop transfer function from disturbance to output with the P+PD controller is of type 0, the P+PD controller has no ability to handle the disturbance; in this case the final value under the step disturbance is 5. Next, we considered robustness to parameter uncertainty. For the worst-case dynamic equation [12], with 50% variations from the nominal values, the settling time is almost the same as without parameter uncertainty. This shows that the proposed controller is robust despite severe parameter uncertainties. Finally, we show the simulation results when the disturbance and parameter uncertainties exist simultaneously. Fig. 6 shows the disturbance response simulation results in the worst case. The maximum overshoot exceeds the sum of the separate overshoots of the disturbance response and the response under parameter uncertainties, but the output tracks the desired command signal precisely without steady-state error. This demonstrates the robustness of the proposed controller.
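The disturbance-rejection behavior described above can be reproduced with a simple forward-Euler simulation of the closed loop designed in the previous sketch (again with an assumed plant; the disturbance size and timing are arbitrary):

import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-0.5, -1.2]])     # assumed plant, as before
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
A_aug = np.block([[np.zeros((1, 1)), -C], [np.zeros((2, 1)), A]])
B_aug = np.vstack([np.zeros((1, 1)), B])
K_aug = place_poles(A_aug, B_aug, [-2 + 2j, -2 - 2j, -8.0]).gain_matrix
k1, K = -K_aug[0, 0], K_aug[0, 1:]

dt, T, r = 1e-3, 10.0, 1.0
x, u, ys = np.zeros(2), 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    d = 2.0 if t >= 5.0 else 0.0             # step disturbance at t = 5 s
    xdot = A @ x + (B * (u + d)).ravel()     # disturbance enters at the input
    e = r - float(C @ x)
    u += dt * (k1 * e - float(K @ xdot))     # integrate udot = k1*e - K*xdot
    x += dt * xdot
    ys.append(float(C @ x))
# ys: the output tracks r and returns to r after the disturbance hits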
Conclusion
In this paper, we proposed depth control of an AUV using a robust tracking control technique. Since the behavior of an AUV is affected by poorly known disturbance forces and moments, the depth control law is equipped to track the command signal and reject disturbances at the same time. Since the proposed control system contains the internal model of the Laplace transform of the command signal and a state feedback controller, the overall closed-loop control system is stable and robust amidst the severe ocean environment.
The computer simulation results showed excellent control performance in tracking the command signal and robustness under severe parameter uncertainties. | 2021-05-21T16:57:43.691Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "ac094ba53d495e8b156c618173fc9db8f254f919",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.14775/ksmpe.2021.20.04.066",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e4f741874942e2651e5836074b23495b7997e3b5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213215378 | pes2o/s2orc | v3-fos-license | CONCILIATION PROCEDURES IN THEIR RELATIONSHIP WITH SCIENTIFIC CATEGORIES OF CIVIL PROCEDURE
Article info: Received – March 18, 2019. Accepted – May 20, 2019. Published online – September 12, 2019. Keywords: conciliation procedures, reconciliation, civil procedure, civil procedural legal relationship, civil procedural form, civil proceedings, justice, subject of civil procedural law. Conciliation procedures are analyzed in the context of the civil process from the standpoint of determining their relationship with the branch scientific categories of "civil procedural form", "civil proceedings", and "civil procedural legal relationship". The author supports the view that not only vertical (court – participants in the process) but also horizontal (between participants in the process) procedural legal relations may exist. From this standpoint, the legitimacy of classifying the relations that arise between participants in conciliation procedures organized within judicial proceedings as procedural is substantiated. The question of expanding the subject of legal regulation of civil procedural law is considered.
The subject of the paper is conciliation procedures as a phenomenon of the modern civil process. The purpose of the article is to confirm or disprove the hypothesis that the relations arising between the participants of judicial conciliation procedures are procedural legal relations, which leads to the expansion of the subject of legal regulation of civil procedural law. The research was carried out using general scientific methods (analysis, induction and deduction), a special (statistical) method, as well as the method of interpretation of legal acts. The main results and scope of their application. The paper offers a theoretical development of the relationship between the concept of "conciliation procedures" and such categories as "civil procedure", "civil procedural form", and "civil procedural legal relations". The solution of this scientific problem has fundamental theoretical and methodological significance for substantiating the place of conciliation procedures in the civil process system. Given the trend toward strengthening the private law elements of the civil process, the author advocates the need to revise the traditional approaches to defining the concepts of "civil procedure" and "civil procedural legal relations". The author supports the view that not only vertical (between court and trial participants) but also horizontal (between participants in the process) procedural legal relations may exist. From this point of view, the relations arising between the participants of judicial conciliation procedures can be attributed to procedural legal relations. The author also joins the position of those scholars who advocate a broad understanding of civil proceedings as a combination of various judicial procedures, not all of which must correspond to the attributes of the civil procedural form.
Conclusions. Entrusting the court with the function of reconciliation in civil and economic cases substantially changes the established concept of the civil process as an exclusively jurisdictional process. The subject of the civil process is thus no longer only the adversarial activity of the parties to a private dispute in proving the evidentiary facts required by the court to apply the law and resolve the dispute. On this basis, the author puts forward a thesis on the expansion of the object of legal regulation of civil procedural law.
Introduction
It is necessary to agree with M. S. Nakhov that, having adopted the law on mediation and having made the corresponding changes in substantive and procedural acts, the legislator set scientists a rather difficult and multidimensional task: to bring the doctrinal provisions of procedural science into line with the constructed norms [1]. On the agenda is the solution of questions about the relation of conciliation procedures to the concepts of "legal proceedings" and "civil procedure"; about the possibility of attributing the public relations arising between participants of conciliation procedures organized by the court to the category of civil procedural legal relations; and about the legitimacy of qualifying as procedural the actions for reconciliation of the parties performed by the intermediary engaged for these purposes [2; 3].
According to A. V. Treshcheva and T. S. Taranova, the nature of conciliation procedures differs significantly from the law enforcement activity of the court, which constitutes the content of the proceedings; this does not allow such procedures, even if organized by the court when considering a case, to be included among civil procedural relations [4; 5]. A similar approach can be traced in the works of G. V. Sevastyanov [6] and M. E. Morozov [7], which substantiate the concept of private procedural law (the law of alternative dispute resolution). In the works of T. V. Sakhnova [8; 9] and M. S. Fokina [10; 11], judicial conciliation procedures are, on the contrary, positioned as a kind of civil procedural legal relations.
Resolution of the controversial issue of the nature of conciliation procedures initiated by the court during the proceedings (attributing them to procedural or non-procedural phenomena) requires the analysis of a complex theoretical problem: the relationship of the concepts of "procedure", "process", "trial", "justice", "proceedings", "procedural form", and "procedural legal relation".
On the question of notions
The legal science of the Soviet period was characterized by the opposition, at the theoretical and methodological level, of the concepts of "procedure" and "process". Due to the specificity of the court as a body administering justice, the concept of "civil procedure" was introduced as a set of essential rules and guarantees that determine the special status of judicial protection. As a result, the use of the term "process" in relation to the court's activities became established, and the term "procedure" in relation to non-judicial activities. For example, E. V. Slepchenko defines judicial activity as "public law enforcement activity carried out by the court in procedural form, i.e. within the framework of a special legal procedure". According to the author, other legal procedures applied by administrative bodies, arbitration courts and other organizations do not possess such qualities [12].
The doctrine has also developed a broader approach to the concept of "process", according to which it covers the activities not only of the court but also of other jurisdictional bodies. The main feature distinguishing the process from the procedure, according to scholars who adhere to this approach, is the nature of jurisdictional activity as aimed at the protection of the law. As A. V. Yasinskaya-Kazachenko points out, procedural relations, which encompass the activities of a judicial body engaged in the protection of subjective rights based on the application of legal norms, differ from merely procedure-based ones [13]. A similar point of view is held by A. N. Manukovskaya, who justifies the possibility of singling out an independent branch of law – labor procedural law – governing social relations arising during the consideration of labor disputes by various authorized state bodies (the author separates the relevant norms from the substantive and procedural norms of labor law) [14].
In modern legal literature, the opposition between the concepts of "process" and "procedure" has softened. The view that the procedure is an integral part of any jurisdictional process (including a judicial one), forming its basis, has become widespread. In the science of administrative law, the procedure is considered the content of the administrative-procedural form – the primary element whose totality forms the internal structure of the process (R. N. Marifkhonov [15]; E. A. Degtyareva [16]). N. I. Gromoshina, drawing on the theory of regulative and protective substantive legal relations, substantiates the concept of the "procedural procedure" as a procedure aimed at the implementation of a protective substantive legal relation. Depending on the subject that performs the function of legal protection, the author divides procedural procedures into judicial and non-judicial [17].
At the same time, the view remains stable that the activity of the court is characterized by a special procedural form, distinguished by detailed legislative regulation, a high degree of guarantees, and the possibility of judicial review. This creates a certain theoretical difficulty in determining the nature of writ proceedings and other simplified procedures that do not fully meet the criteria of the civil procedural form [18]. For example, L. A. Terekhova justifies the legality of including these procedures within the boundaries of civil proceedings by treating them as "proximity justice", characterized by deferred procedural guarantees [19, p. 109].
With regard to civil procedural legal relations, the procedural doctrine is dominated by the view that the court is their mandatory participant. This approach is based on the ideas of the 19th-century German jurist O. Bülow that procedural relations are not private but public, so that their subjects are the court and the parties [20]. A number of authors consider the civil process as a system of procedural relations between the court and each individual participant in the process. Others (M. A. Gurvich, V. P. Mozolin, K. S. Judelson) argue that the procedural legal relation is unified, multi-subject and complex, representing a system of elementary relations that are classified into basic, additional and auxiliary [21; 22, p. 46; 23]. Nevertheless, the possibility of the existence of procedural relations between the participants in the process is denied: it is argued that the participants in the process have procedural rights and bear procedural obligations only in relation to the court, even when their procedural actions are externally addressed to each other [22]. The opposite position is also represented in the literature [24, p. 29], according to which procedural relations also exist between the parties to the process, for example, the plaintiff and the defendant.
The development of procedural legislation, including the emergence of simplified and conciliation procedures in civil and economic proceedings, makes it necessary to rethink the established approaches to defining the essence of the civil process and civil procedural legal relations. As T. V. Sakhnova correctly notes, the civil process is evolutionary: while legal proceedings have traditionally been connected with the public element and the civil procedural form, whose integral feature is the participation of the court in the legal relations and the exercise of power to resolve the case, the future of the civil process is associated with the development of private law elements in the methods of judicial protection [27].
From the point of view of the new approach, the civil process is beginning to be understood as a set of various "judicial procedures", some of which may not meet the criteria of the civil procedural form. For example, Yu. A. Popova supports preserving the concept of "procedural form" to characterize procedure in the field of justice, but notes that not every activity of the court should be called a process. In particular, writ proceedings and the preparation of a case for trial are defined by the author as special procedures of the court that are not related to the administration of justice and judicial proceedings [28, p. 79-80; 29].
Place of conciliation procedures in the civil process system
The position that a researcher takes as the starting point in the analysis of doctrinal concepts of the civil process predetermines his answer to the question of the place of conciliation procedures in the civil process system.
Scientists who oppose "process" and "procedure", understanding the process as exclusively the activity of the court in the administration of justice and the protection of rights, for which a special procedural form exists (regulated by law and secured by guarantees), place conciliation procedures outside the framework of the civil process. A similar conclusion follows from the view of the civil procedural legal relation as a power relation in which the court necessarily takes part. The independent settlement of the dispute by the interested parties through the coordination of interests does not fit into the concept of the civil process as an activity for the protection of the law, which involves the resolution of the dispute on the basis of legal norms by a decision of a subject with jurisdictional powers, secured by the force of enforcement. At best, if the term "judicial mediation" is nevertheless used, it covers the actions of the court initiating the negotiations of the parties (explaining to the parties their rights to conclude a settlement agreement or an agreement on the application of mediation, arranging an informational and assessment meeting with a mediator, appointing a judicial mediator) and the subsequent approval of the settlement agreement. For example, A. A. Rusman [30] and L. N. Liango [31], solving a similar problem regarding the place of conciliation procedures in the criminal process, note that the procedural form covers only the initiative of reconciliation and the procedure for terminating the case in connection with reconciliation, while the drafting of the terms of the agreement by the parties is non-procedural.
If the content of the civil process is analyzed from the position that it is not limited to the concept of justice, and the civil procedural legal relation is treated as allowing the existence of not only "vertical" (between the court and the participants in the process) but also "horizontal" (between the participants in the process) relations, then the statement about the procedural nature of the conciliation procedure (including its negotiation stage) becomes quite legitimate.
The latter approach is fully consistent with the trends of the civil process, which are manifested in the following. In its development, the idea of cooperation between the court and the parties, which is the modern value of the adversarial principle, is transformed into the cooperation of all participants in the process, including the plaintiff and the defendant; competitiveness is complemented by the principle of solidarity of the parties in their promotion of justice [21, p. 117-119]. Despite the strengthening of the procedural activity of the court, the parties retain a significant volume of procedural actions that they must perform not only in relation to the court, but also in relation to each other (exchange of procedural documents, disclosure of evidence). The leading idea is the strengthening of private, dispositive principles in the organization of the protection of the law (this trend is known in legal science under the terms "materialization" and "privatization" of the civil process) [32; 33; 34; 35]. Further reform of the civil process involves reducing the role of adversarial trials and using simpler and more diverse methods of dispute resolution, including those that do not require the direct participation of judges [36, p. 44]. In the modern sense, ADR is not an alternative to state legal proceedings but an integral part of them, ensuring increased accessibility of justice [37]. In particular, as I. A. Belskaya points out, the incorporation of the conciliation procedure into the proceedings is consistent with the private-public principles of procedural law and the imperative-dispositive method of procedural regulation, as a result of which the conciliation procedure ceases to be an alternative to the judicial process [38, p. 12, 16, 18].
In these conditions, in our opinion, the concept of the procedural legal relation as a power relation whose obligatory participant is the court requires adaptation to the new trends of the civil process. It seems correct that, with the general approach to the civil procedural legal relation as a single multi-subject relation whose main system-forming element is the court, the existence of direct links between the participants in the process should be allowed at the level of elementary procedural relations.
The thesis that relations on reconciliation of the parties within the framework of the judicial process cannot be recognized as procedural is, as a rule, based on two arguments: 1) the court is not a participant in these relations; 2) relations between the parties to the case that are not mediated by the court cannot be recognized as procedural. In our opinion, both of these statements are refutable.
Thus, the point of view of A. N. Kuzbagarov, who argued in favor of the procedural nature of conciliation procedures, deserves attention. According to the author, while recognizing the court as an obligatory participant in any civil procedural legal relations, it should be taken into account that the degree of its activity in various relations may differ. In conducting conciliation procedures, the court's participation is minimal, but it nevertheless acts as a subject of these relations; in particular, the role of the court consists in analyzing various aspects of the dispute in order to determine the possibility of conciliation procedures and in recommending that the parties participate in such procedures [39, pp. 229-231, 234].
As for the idea that the parties are not connected with each other by procedural relations and that their activity in the course of legal proceedings consists only in unilateral actions addressed by each of them to the court, it is merely a consequence of certain conceptual approaches conditioned by the corresponding cultural and historical prerequisites.
First of all, as M. A. Rozhkova correctly points out, this idea was due to an exaggeration of the adversarial element of the civil process, namely, the approach to it as a confrontation between the parties. At the same time, according to the author, the formal dispositivity of the civil process (the freedom of the parties to dispose of their procedural rights and influence the course of the process) objectively involves the procedural interaction of the parties, for example, their joint expression of will in the form of procedural agreements [20].
The historical belonging of the domestic doctrine of the civil process to the continental system of law is also important in relation to the established thesis of the absence of procedural relations between the parties. Since this system of law was initially characterized by the inquisitorial type of civil proceedings, involving the activity of the court and the passivity of the parties in the process, this contributed to the formation of a theoretical approach to the court as a mandatory subject of civil procedural relations, and to civil procedural law as a branch of law governing the activity of the court. In contrast, in the Anglo-American system of law, the civil process has historically been understood as the activity of the parties themselves [40].
With the modern understanding of adversariality as interaction between the court and the parties and an optimal ratio of their activity, it is legitimate to raise the question of reinterpreting the definition, traditional for domestic civil procedural law, of civil procedural law as a branch regulating the activity of the court in the consideration and resolution of civil cases. As T. V. Sakhnova correctly notes, the thesis, once axiomatic for the continental type of process, that "the dispute belongs to the parties, and the process to the court" is no longer relevant; the modern process is conceived as a joint procedural activity of the court and the parties, built on a private law method [8, p. 14]. In these circumstances, a more adequate definition of the branch, in our opinion, is a set of rules governing the activity not only of the court but also of the parties themselves in resolving a legal dispute, which is the basis for recognizing the existence of procedural relations between the parties. Modern procedural legislation already implements this approach, regulating the preparation of the case for trial as a stage of the process in which not only the court but also the parties carry out a number of actions provided by law in relation to each other (for example, Art. 149 CPC RF, Art. 260-1 CPC of the Republic of Belarus).
In connection with the recognition of reconciliation of the parties as a purpose (task) of the proceedings, the question of the need to expand the subject of legal regulation of civil procedural law also becomes topical. It should be agreed with D. Ya. Maleshin that the boundaries of the civil process and its structure depend on the objectives of civil proceedings [41, p. 59]. The author demonstrates the validity of this thesis using the example of two historical types of legal proceedings, continental and Anglo-American, in relation to which the purpose of the civil process is formulated differently. In the countries of continental law, the purpose of the proceedings is defined as the actual restoration of the violated right; in the Anglo-Saxon doctrine, as the resolution of the dispute. Accordingly, in the continental tradition the boundaries of the civil process include issues of execution, while English proceduralists define civil proceedings as a dispute resolution process [41, p. 13, 59-61]. Following this logic, the inclusion of reconciliation of the parties among the goals of the civil process entails the inclusion of this activity in the subject of civil procedural law. Based on the concept of the polystructurality of the civil process proposed by D. Ya. Maleshin, according to which the latter combines two types of procedural activity – dispute resolution and the execution of judicial decisions – which, in turn, include more particular activities characterized by specific purposes, subject composition and methods [41, p. 74-75], it seems legitimate to define reconciliation of the parties as a particular direction of procedural activity within these structural elements of the civil process.
In support of our point of view, we cite M. S. Nakhov, who believes that the incorrectly formulated purpose of civil proceedings (defined by law as the correct and timely consideration and resolution of the case), as well as a narrow understanding of the tasks for achieving it (as the resolution of the case on the merits by a decision), limit the possibilities of the judicial form of protection of the right. According to the author, taking into account international standards of justice and the actual needs and demands of modern society, the purpose of civil proceedings should be defined as "accessible, fair, effective and real protection of violated and (or) disputed rights, freedoms and legitimate interests of subjects of substantive law." The achievement of this purpose is seen by M. S. Nakhov through the court's realization of one of two functions: 1) justice, by pronouncing a decision, or 2) reconciliation of the parties, by confirming the result of the conciliation procedure [1].
Conclusions
Scientific categories are the backbone that gives stability to a certain area of knowledge and ensures continuity in its development. However, scientific categories are not frozen. Their content is subject to the general dialectical principle of continuous development and is conditioned both by scientific progress and by changes in the socio-economic basis. The doctrinal concepts of the science of the civil process are no exception in this respect. Their content is historical in nature and gradually changes (evolves), following the transformations that occur at the systemic, conceptual level and determine the basic principles of the civil process within the corresponding historical type of economic relations and socio-political structure of society.
The identification, traditional for domestic civil procedural law, of the concepts of "civil proceedings" and "civil justice", the operation of the concept of "civil procedural form" as a necessary feature of the court's consideration and resolution of civil cases, and the theory of the civil procedural legal relation as a relation whose mandatory subject is the court were formed at a time when, under the influence of socio-political processes of strengthening and centralization of state power, the concept of the court as a body resolving an exclusively private law dispute was transformed into the concept of the court as a body protecting the violated subjective right in the interests of the whole society [42, p. 69, 75, 77]. At the present stage, at a new point of historical development, due to the corresponding economic and socio-political transformations in the life of society, the existing ratio of private and public principles of legal proceedings is changing significantly [43], which is reflected, inter alia, in the development of the ideas of "alternative justice" and the court's assistance in the reconciliation of the parties.
Conciliation procedures receive legal regulation at the level of procedural legislation and are actively integrated into the practice of legal proceedings, which is why they are beginning to be perceived as an integral part of the civil process. At the same time, at the level of doctrine, the question of the place of conciliation procedures in the branch system of the civil process remains open, since these procedures do not "fit" into the established definitions of civil proceedings and civil procedural legal relations.
The organization of conciliation procedures within the framework of civil and economic proceedings actualizes the need to revise the content of the doctrinal concepts of civil procedural law. The modern civil process should be understood as a set of procedures, some of which are characterized by more intensive activity of the court and a strict procedural form, while others (for example, conciliation) are characterized by the activity of the parties to the case and relative informality. The content of the civil process should be analyzed from the position that it is not limited to the concept of justice, and the civil procedural legal relation should be treated as allowing the existence of not only "vertical" (between the court and the participants in the process) but also "horizontal" (between the participants in the process) links. This approach is fully consistent with the trends of the civil process and makes it possible to define more clearly the place of conciliation procedures within it. | 2020-02-13T09:23:05.305Z | 2019-09-13T00:00:00.000 | {
"year": 2019,
"sha1": "9f75fba467e11a30a9a2f67b265e9280e8495750",
"oa_license": "CCBY",
"oa_url": "https://enforcement.omsu.ru/jour/article/download/234/354",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0c6b59d470e23b60c3e3c528887eff538e878792",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |
46316827 | pes2o/s2orc | v3-fos-license | Adult Patient with Anomalous Origin of Anterior Interventricular Artery at Main Pulmonary Artery
Adult; heart defects, congenital; pulmonary artery/surgery. Among all congenital cardiopathies, anomalous origin of the anterior interventricular artery occurs once per 300,000 live births, with high reported mortality in the first year after birth. However, if good collateral circulation is available for the artery involved in the abnormality, the patient may remain asymptomatic until mature age. This is a report on the rare presentation of this pathology in a 43-year-old oligosymptomatic patient with normal ventricular function. The patient underwent surgical treatment without extracorporeal circulation.
Introduction
Anomalous origin of the anterior interventricular artery from the main pulmonary artery is a rare, low-incidence and high-mortality disease within the first year of life, responsible for approximately 0.24% of congenital cardiopathies 1 . Given the early manifestation of symptoms, which are most often quite severe, high mortality is reported in the first year of life 2 . If symptoms are not manifested in childhood, this cardiopathy may only be diagnosed in the second or third decade of life. The presence of collateral circulation in the distal anterior interventricular groove explains the small number of symptoms or their absence 3,4 . Surgical treatment is the option of choice for such cases; however, it can hardly ever be postponed until adult life 1 .
Our purpose is to report a case of anomalous origin of the anterior interventricular artery from the main pulmonary artery. The adult patient presented coronary heart disease symptoms, which disappeared after surgical treatment without extracorporeal circulation.
Case Report
A 43-year-old male patient, working as a street sweeper, was seen for complaints of precordial pain on light effort. The precordial pain had been associated with dyspnea for 8 years, with relief at rest. The patient denied any other risk factor for coronary atherosclerotic disease. He reported having been treated for urinary infection one month prior to hospitalization, as well as previous surgical correction of bilateral inguinal hernia and surgery on the right knee meniscus. The patient was on propranolol (80 mg/day), isosorbide (30 mg/day) and acetylsalicylic acid (200 mg/day). On physical examination, blood pressure was 110 x 80 mmHg and heart rate 60 beats per minute. The patient was afebrile, acyanotic, with normal facial color and good peripheral perfusion. Cardiopulmonary auscultation was normal, with no abdominal mass or visceromegaly on palpation. Peripheral pulses were symmetric and normal.
Laboratory exams showed total cholesterol of 172 mg/dl, HDL 54 mg/dl, LDL 105 mg/dl, triglycerides 93 mg/dl, glycemia 93 mg/dl, and all other results within normal range. The ECG showed sinus rhythm, with no conduction disorders or ischemic changes. Thoracic X-ray showed normal cardiac area and pulmonary vasculature. A treadmill test was positive for myocardial ischemia.
In the face of the diagnostic hypothesis of atherosclerotic coronary heart disease, coronary angiography was indicated. It gave no evidence of coronary lesions, but showed that the anterior interventricular artery originated from the main pulmonary artery; the left coronary trunk gave rise to the circumflex and a large-caliber diagonal artery, and the right coronary artery was non-dominant (Figure 1). The left ventricle was normal. By the end of contrast injection into the right coronary artery, the anterior interventricular artery could be seen filling up to the pulmonary branch through collateral circulation (both from the right coronary artery and from the circumflex artery). Systolic, mean, and diastolic pulmonary pressures were 44 mmHg, 22 mmHg, and 12 mmHg, respectively.
Once the diagnosis of anomalous origin of the anterior interventricular artery at the main pulmonary artery had been established, myocardial revascularization surgery without extracorporeal circulation (ECC) was chosen as treatment. The patient was referred to surgical treatment, which included: median sternotomy with posterior dissection of the left internal thoracic artery; on opening the pericardium, slightly enlarged cardiac chambers were seen; the anterior interventricular artery was then exposed and a stabilizer was put in place; the anterior interventricular artery was opened and an intracoronary perfusor (shunt) was deployed to maintain native blood flow; termino-lateral anastomosis of the left internal thoracic artery to the anterior interventricular artery, which was larger than 3.5 mm in diameter, was performed; the perfusor was removed before full closure of the anastomosis; the anterior interventricular artery was then ligated at its origin at the pulmonary artery, followed by hemostasis review. The post-surgical course was excellent, and the patient has been asymptomatic ever since. On June 15, 2006, 33 months after the surgery, a control heart catheterization was performed and showed excellent late surgical results (Figure 2). The patient was submitted to an effort test in January 2007, with negative results for ischemia.
Discussion
The anomalous origin of the anterior interventricular artery at the main pulmonary artery is a rare congenital abnormality. Its incidence is 0.24% of congenital cardiopathies, with occurrences of 1:300,000 live births. The mortality rate is high, ranging from 80% 2 to as high as 93% 1 in the first year after birth. This myocardial-ischemia-causing cardiopathy presents the following predominant symptoms: heart failure, typically associated with enlarged cardiac area, and signs of mitral regurgitation from papillary muscle ischemia. Symptoms usually manifest in the first weeks after birth, but may be discreet or absent, so that the cardiopathy is revealed only in adult age 3,5 .
The availability of collateral circulation to an ischemic area may protect it from a more significant muscular loss in an acute scenario. Chronic coronary occlusion may occur concurrently with totally normal ventricular function 4 . Coronary artery disease patients submitted to elective percutaneous coronary angioplasty are an excellent illustration of the relevance of collateral circulation in preserving muscle or reducing muscular loss in acute coronary occlusion. Cohen & Rentrop 4 demonstrated the direct relationship between the degree of collateral circulation, the electrocardiographic changes, and the ischemic area on left ventriculography. Likewise, patients with well-developed collateral circulation report less precordial pain on effort after coronary occlusion 4 . The anomalous origin of the anterior interventricular artery at the pulmonary branch is rare, and its manifestation at adult age is even less frequent, especially in patients with very few symptoms. Surgical treatment should be the conduct whenever necessary, provided there is myocardial viability 5 . However, when the anomaly is identified after birth, the surgical approach may be postponed, supported by drug therapy, to the period between the 18th month and 7 years of age 1 . Surgery can be done in three different ways: 1) ligation of the anterior interventricular artery at the main branch only 5 ; the pressure gradient between the coronary arterial system and the pulmonary branch leads flow from the anterior interventricular artery to the pulmonary system via well-developed collateral circulation, in which case the ligated anterior interventricular artery redirects collateral circulation to the myocardial musculature 3 , which may or may not be sufficient to eliminate symptomatology 4,5 ; 2) reimplantation of the anterior interventricular artery in the aorta; 3) a combination of anterior interventricular artery ligation and anastomosis of the left internal mammary artery or a saphenous vein graft to the anterior interventricular artery.
Myocardial revascularization with venous grafts without the use of ECC has been widely utilized in medical practice since the last decade. Acquired experience and technical improvement have contributed to its use in areas that had not been investigated before 6 . Today, left internal thoracic artery anastomosis to the anterior interventricular artery is the option of choice due to lower morbidity and mortality rates, in addition to the elimination of symptoms and long-term patency 6 . The conclusion is that, for these patients, myocardial revascularization without ECC using the left internal thoracic artery, followed by ligation of the anomalous artery at its origin in the pulmonary branch, was shown to be an excellent therapeutic method. Despite being a single case, it presented good late development.
Potential Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Sources of Funding
There were no external funding sources for this study.
Study Association
This study is not associated with any graduation program. | 2017-11-07T00:34:37.038Z | 2008-06-01T00:00:00.000 | {
"year": 2008,
"sha1": "c07ca427a9c067c0b00a4be4f832eb4f20bbff69",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/abc/a/NknbWx4sSYkNvm8Dz6NcLFC/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8ac0611e817eee9956d56cae80c08235d00d8344",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115753614 | pes2o/s2orc | v3-fos-license | Impacts of Load Profiles on the Optimization of Power Management of a Green Building Employing Fuel Cells
This paper discusses the performance improvement of a green building by optimization procedures and the influences of load characteristics on optimization. The green building is equipped with a self-sustained hybrid power system consisting of solar cells, wind turbines, batteries, a proton exchange membrane fuel cell (PEMFC), an electrolyzer, and power electronic devices. We develop a simulation model using Matlab/SimPowerSystem™ and tune the model parameters based on experimental responses, so that we can predict and analyze system responses without conducting extensive experiments. Three performance indexes are then defined to optimize the design of the hybrid system for three typical load profiles: the household, the laboratory, and the office loads. The results indicate that the total system cost was reduced by 38.9%, 40% and 28.6% for the household, laboratory and office loads, respectively, while the system reliability was improved by 4.89%, 24.42% and 5.08%. That is, the component sizes and power management strategies could greatly improve system cost and reliability, while the performance improvement can be greatly influenced by the characteristics of the load profiles. A safety index is applied to evaluate the sustainability of the hybrid power system under extreme weather conditions. We further discuss two methods for improving the system safety: the use of sub-optimal settings or additional chemical hydride. Adding 20 kg of NaBH4 can provide 63 kWh and increase system safety by 3.33, 2.10, and 2.90 days for the household, laboratory and office loads, respectively. In the future, the proposed method can be applied to explore the potential benefits when constructing customized hybrid power systems.
Introduction
Today's energy crises and pollution problems have increased the current interest in fuel cell research. One of the most popular fuel cells is the proton exchange membrane fuel cell (PEMFC), which can transform chemical energy into electrical energy with high energy conversion efficiency by electrochemical reactions. At the anode, the hydrogen molecule ionizes, releasing electrons and H + protons. At the cathode, oxygen reacts with electrons and H + protons through the membrane to form water. The electrons pass through an electrical circuit to create current output of the PEMFC. The PEMFC has several advantageous properties, including a low operating temperature and high efficiency. However, it also has very complex electrochemical reactions, so attempts to develop dynamic models for PEMFC systems have become an active research focus. For example, Ceraolo et al. [1] developed a PEMFC model that contained the Nernst equation, the cathodic kinetics equation, and the cathodic gas diffusion equation. Similarly, Gorgun [2] presented a dynamic PEMFC model that included water phenomena, electro-osmotic drag and diffusion, and a voltage ancillary. These models
System Description and Modelling
The green building, as shown in Figure 1 [34], is located in Miao-Li County in Taiwan. It was constructed by China Engineering Consultants Inc. (CECI) and was equipped with a hybrid power system that consisted of 10 kW PV arrays, 6 kW WTs, 800 Ah lead-acid batteries, a 3 kW PEMFC, and a 2.5 kW electrolyzer with a hydrogen production rate of 500 L/h. The building was autonomous and did not connect to the main grid, i.e., its electricity was supplied completely by green energy, such as solar and wind. The energy can be stored for use when the green energy is less than the load demands. These components were originally selected to provide a daily energy supply of about 20 kWh based on the National Aeronautics and Space Administration (NASA) data [34], as illustrated in Table 1. Solar energy was abundant in the summer but poor in the winter, so wind energy was expected to compensate for solar energy in the winter. However, Chen and Wang [32] applied the Vantage Pro2 Plus Stations [35] to measure the real weather data on the building site and found that the wind energy was not sufficient to compensate for the reduced solar energy in the winter. Further analyses of the energy costs also revealed that the wind energy was not economically efficient for this building, as illustrated in Table 2. Therefore, the following component selection principles were suggested to improve system performance [32]: (1) Energy sources: the use of PV and PEMFC in the green building was suggested, because solar energy was the most economical energy source and the PEMFC could guarantee energy sustainability. The PEMFC can be regarded as an energy source that provides steady energy and as an energy storage system when coupled with a hydrogen electrolyzer. Considering the transportation, storage, and efficiency of energy conversion, the PEMFC with chemical hydrogen generation by NaBH4 [36] was suggested for the system. (2) Energy storage: the lead-acid battery was suggested because of its greater than 90% efficiency [37].
Though the PEMFC with a hydrogen electrolyzer can also store energy, the conversion efficiency from electricity into hydrogen was only about 60% [33]. Therefore, the total energy storage efficiency was about 36%, because the PEMFC converted hydrogen into electricity with an efficiency of about 60% [38]. Note that the LiFe battery has a higher efficiency (more than 95%) but is much more expensive than a lead-acid battery. Therefore, the lead-acid battery was preferred for the green building.
That is, the selection of multiple energy sources and storages depended on the local conditions and load requirements.
The Hybrid Power Model
A general hybrid power model, as shown in Figure 2, was developed to evaluate system performance at different operating conditions (e.g., varying the component sizes and power management strategies) [32,33,39]. The model consisted of a PV module, a WT module, a battery module, a PEMFC module, an electrolyzer module, a chemical hydrogen generation module, and a load module. The power management strategies were applied to operate these modules based on battery state-of-charge (SOC). The module parameters were adjusted by the component characteristics and experimental responses to allow prediction and analysis of the system dynamics without the need for extensive experiments [39,40].
First, the 1 kW PV module was developed from the measured relation between solar power and irradiance [32,41], where P_PV (W) represents solar power and E (W/m²) represents irradiance (the fitted equation is given in [32,41]). Second, the WT module was presented as a look-up table, according to the relation between wind power and wind speed [33,42]. Third, the PEMFC acted as a back-up power source to guarantee system sustainability based on the following management strategies (see Figure 3a) [39]: (1) When the battery SOC dropped to the lower bound, SOC_low, the PEMFC was switched on to provide a default current of 20 A at the highest energy efficiency [41]. (2) When the SOC continuously dropped to SOC_low − 5%, the PEMFC current was increased according to the required load until the SOC was raised to SOC_low + 5%, at which point the PEMFC returned to the default current of 20 A. (3) When the battery SOC reached SOC_high, the PEMFC was switched off.
Therefore, the power management can be adjusted by tuning SOC_low and SOC_high. As a last stage, the hydrogen electrolyzer transferred redundant energy to hydrogen storage based on the following strategies (see Figure 3b) [33]: (1) When the battery SOC was higher than 95%, the extra renewable energy was regarded as redundant. (2) The electrolyzer module would wait for ten minutes to avoid chattering; if the total redundant energy increased during this period, the electrolyzer was switched on. (3) When the hydrogen tank was full or the battery SOC dropped to 85%, the electrolyzer was switched off. Thus, the electrolyzer produced hydrogen when the battery SOC was between 85% and 95%. The electrolyzer module was set to produce hydrogen at a rate of 1.14 L/min by consuming a constant power of 410 W, based on the experimental results [33].
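To make the two rule sets concrete, the sketch below restates them as plain functions. This is a minimal Python rendering of the logic described above, not the paper's Matlab/SimPowerSystem implementation; the function names, the discrete off/on/boost states, and the boolean inputs (tank_full, redundant_rising) are our own simplifications.

```python
# Minimal sketch of the rule-based power management described above.
# All names are illustrative; the paper's model is built in Matlab/
# SimPowerSystem, so this Python version only mirrors the logic.

def pemfc_current(soc, soc_low, soc_high, load_amps, state):
    """Return (current_A, new_state) for the PEMFC back-up source."""
    if state == "off" and soc <= soc_low:
        state = "on"                      # rule (1): switch on at SOC_low
    if state == "on":
        if soc >= soc_high:
            return 0.0, "off"             # rule (3): switch off at SOC_high
        if soc <= soc_low - 5:
            state = "boost"               # rule (2): follow the load
    if state == "boost":
        if soc >= soc_low + 5:
            state = "on"                  # back to the 20 A default
        else:
            return load_amps, state
    return (20.0, state) if state != "off" else (0.0, "off")

def electrolyzer_on(soc, tank_full, redundant_rising, running):
    """Hysteresis window for the hydrogen electrolyzer (85%-95% SOC)."""
    if running:
        return not (tank_full or soc <= 85)   # rule (3): stop conditions
    # rules (1)-(2): start only above 95% SOC with rising surplus energy
    return soc > 95 and redundant_rising and not tank_full
```

A call such as `pemfc_current(38, 40, 50, 35.0, "on")` would keep the default 20 A output, while an SOC at or below 35% would switch the sketch into the load-following boost mode, matching rule (2).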
Input Energy and Output Loads
We applied the historical irradiation and wind speed data [32], as shown in Figure 4, to the PV and WT modules, respectively. As shown in Figure 4, solar radiation was abundant in the summer but poor in the winter; therefore, solar energy in the summer can be stored for use in the winter. Conversely, the wind speed was high in the winter but low in the summer, so wind energy was expected to compensate for the lack of solar energy in the winter. However, the compensation effects were not as significant as originally designed because the wind was not sufficiently strong and the energy cost was much higher (see Table 2) when compared to other energy sources. Note that both solar and wind energy were concentrated in the daytime, indicating that this energy should be stored for use at night. Three standard load profiles [43,44], as illustrated in Figure 5, were applied to the load module to investigate the impacts of loads on the optimization of the hybrid power system. The 61-day historical data were used for simulation and optimization analyses. Table 3 illustrates the statistical data of these load profiles, where the household had the largest historical peak and the office had the largest daily average peak, while the laboratory load had the greatest energy consumption. Therefore, we used these three typical loads to demonstrate how load characteristics can affect the performance optimization of the hybrid power system.
Design Optimization of the Hybrid Power System
The hybrid power model was applied to predict system responses under different conditions, such as the use of varying components and loads. We defined three indexes to evaluate the performance of the hybrid power system: cost, reliability, and safety, as described in the following.

(1) System cost: the system cost J(b, s, w) consisted of two parts, J_i and J_o, as follows [39]:

J(b, s, w) = J_i(b, s, w) + J_o(b, s, w)

where J_i and J_o indicate the initial and operation costs, respectively. The subscripts b, s, and w represent the numbers of batteries, PV arrays, and WTs in units of 100 Ah, 1 kW, and 3 kW, respectively. The initial cost J_i accounted for the investment in the components, such as the PEMFC, power electronic devices, PV arrays, WT, hydrogen electrolyzer, chemical hydrogen generator, and battery set:

J_i(b, s, w) = Σ_k J_i^k(b, s, w), with k = PEMFC, DC, solar, WT, HE, CHG, and batt, respectively.

The operation cost J_o included the hydrogen consumption and the maintenance of the WT and PV arrays:

J_o(b, s, w) = Σ_l J_o^l(b, s, w), with l = NaBH4, WT, and solar, respectively.

The component costs J_i^k(b, s, w) and J_o^l(b, s, w) were calculated from C and n, the price per unit and the number of installed units of each component k, annualized by the capital recovery factor CRF, which was defined as [32,33,39]:

CRF = ir(1 + ir)^ny / ((1 + ir)^ny − 1)

where ir is the inflation rate, set as 1.26% in this paper by referring to the average annual change of the consumer price index of Taiwan [39], and ny is the expected life of the components. The prices and expected lives of the components, illustrated in Table 4, were used to calculate the system costs in the following examples.

(2) System reliability: the reliability of the hybrid system was defined by the loss of power supply probability (LPSP) [32,33,39]:

LPSP = ∫ LPS(t) dt / ∫ P(t) dt

in which LPS(t) was the shortage (loss) of power supply at time t, while P(t) was the power demand of the load profile at time t. Therefore, ∫ LPS(t) dt indicated the insufficient energy supply and ∫ P(t) dt represented the total energy demand for the entire simulation. If the power supply met the load demand at all times (i.e., LPS(t) = 0, ∀t), then the system was completely reliable with LPSP = 0.
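The cost and reliability indexes can be computed directly from these definitions. The sketch below assumes the usual convention that the CRF annualizes each component's purchase price; the dictionaries of prices, lifetimes, and operation costs are placeholders, not the values of Table 4.

```python
# Sketch of the cost and reliability indexes defined above. Component
# prices, lifetimes, and the load/supply traces are placeholders.

def crf(ir: float, ny: int) -> float:
    """Capital recovery factor: ir(1+ir)^ny / ((1+ir)^ny - 1)."""
    g = (1.0 + ir) ** ny
    return ir * g / (g - 1.0)

def system_cost(units, price, life, op_cost, ir=0.0126):
    """J = J_i + J_o: annualized initial cost plus operation cost."""
    j_i = sum(price[k] * units[k] * crf(ir, life[k]) for k in units)
    j_o = sum(op_cost.values())
    return j_i + j_o

def lpsp(supply_kw, demand_kw, dt_h=1.0):
    """Loss of power supply probability over the simulation horizon."""
    lost = sum(max(p - s, 0.0) * dt_h for s, p in zip(supply_kw, demand_kw))
    total = sum(p * dt_h for p in demand_kw)
    return lost / total if total > 0 else 0.0
```

Dividing the resulting J by the total energy delivered over the horizon gives the $/kWh figures quoted in the optimization steps below.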
(3) System safety: system safety was defined as the guaranteed sustainable period of the hybrid power system under extreme weather conditions when no solar or wind energy was available. Suppose the energy stored in the system is E_store and the average daily energy consumption is E_day; then, the system safety can be defined as follows:

Safety (days) = E_store / E_day

For example, the average daily energy demand is 19.96, 30.41, and 22.32 kWh for the household, laboratory, and office, respectively (see Table 3). Therefore, if the energy stored in the battery and hydrogen is 60 kWh, the system safety is 3.01, 1.97, and 2.69 days for the household, laboratory, and office, respectively. When a combined battery-and-inverter conversion efficiency of 90% is considered, the system safety becomes 2.70, 1.78, and 2.42 days, respectively. We applied the three typical loads to investigate their impacts on the optimization of the hybrid power system by tuning the component sizes and power management strategies.
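A quick numerical check of the safety index, using the Table 3 daily demands; note that the quoted 2.70/1.78/2.42-day figures correspond to a single 90% efficiency factor applied to the stored energy.

```python
# Worked check of the safety index S = E_store / E_day, reproducing
# the figures quoted above for 60 kWh of stored energy.

E_DAY = {"household": 19.96, "laboratory": 30.41, "office": 22.32}  # kWh/day

def safety_days(e_store_kwh: float, e_day_kwh: float, eff: float = 1.0) -> float:
    return e_store_kwh * eff / e_day_kwh

for name, e_day in E_DAY.items():
    print(name,
          round(safety_days(60.0, e_day), 2),            # 3.01 / 1.97 / 2.69
          round(safety_days(60.0, e_day, eff=0.9), 2))    # ~2.70 / 1.78 / 2.42
```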
Household Load
Applying the household load (see Figure 5) to the original system layout (b, s, w) = (8, 10, 2) and management settings of (SOC_low, SOC_high) = (40%, 50%) gave the system's reference plot shown in Figure 6a, where the system cost was estimated as J = 1.300 $/kWh with LPSP = 4.89% (see Step 1 of Table 5). From Figure 6a, the system cost can be reduced to J = 1.169 $/kWh by adjusting the components to (b, s, w) = (18, 9, 2) but with a possible power cut (LPSP = 2.61%, see Step 2 of Table 5). If the requirement was LPSP = 0, then the optimal system cost was J = 1.189 $/kWh, achieved by setting (b, s, w) = (18, 10, 2) (see Step 3 of Table 5). That is, we can reduce the system cost from J = 1.300 to 1.189 $/kWh, while improving the system reliability from LPSP = 4.89% to 0.
Because the cost of wind energy was much higher than the cost of solar energy (see Table 2) and the compensation effects were not significant (see Figure 4), the use of solar and a PEMFC with chemical hydrogen production was viewed as economically efficient for the green building [32]. Therefore, we set w = 0, and the resulting optimization showed that the system cost can be significantly reduced to J = 0.822 $/kWh by setting (b, s, w) = (15, 15, 0), as illustrated in Step 4 of Table 5. Furthermore, when we fixed the component settings of (b, s, w) = (15, 15, 0) and tuned the power management strategies to (SOC_low, SOC_high) = (30%, 40%), the system cost was further decreased to J = 0.810 $/kWh (see Step 5 of Table 5). Steps 6 and 7 illustrate the iterative tuning of component size and power management, respectively. The results indicated that the system cost converged to J = 0.794 $/kWh with (b, s, w) = (23, 15, 0) and (SOC_low, SOC_high) = (30%, 40%). Compared with the original cost, the cost was reduced by 38.9%, while maintaining complete system reliability. Note that the iterative method can greatly reduce the computation time because the simultaneous optimization of four parameters (b, s, SOC_low, SOC_high) takes much longer than iterative optimization, as indicated in [45]. Therefore, the proposed iterative optimization can be applied for a quick estimation of the system behavior; simultaneous optimization can be considered for potentially better results if time permits.
Laboratory Load
Similarly, the results of applying the laboratory load (see Figure 5) to the hybrid power model are shown in Figure 7 and Table 6. First, the original system layout (b, s, w) = (8, 10, 2) with management settings of (SOC_low, SOC_high) = (40%, 50%) resulted in a system cost of J = 1.100 $/kWh and LPSP = 26.42%. Note that the LPSP was much higher than was obtained for the household, because the laboratory load was mainly at night and the energy stored by hydrogen electrolyzation failed to provide sufficient energy. The initial component optimization can reduce the system cost to J = 0.929 $/kWh by setting (b, s, w) = (27, 15, 2) but with LPSP = 2.34% (see Step 2 of Table 6). The sub-optimal settings of (b, s, w) = (30, 16, 2) gave LPSP = 0 with J = 0.944 $/kWh (see Step 3 of Table 6), i.e., the LPSP was reduced by 26.42 percentage points, while the cost was reduced by 14.18%.

Because the WT was not economically efficient for this building, setting w = 0 can greatly reduce the system cost to J = 0.684 $/kWh with LPSP = 0 by (b, s, w) = (31, 21, 0) (see Step 4 of Table 6). The iterative procedures could then further improve the system cost to J = 0.668 $/kWh with LPSP = 0 by setting the power management as (SOC_low, SOC_high) = (30%, 40%), and the cost finally converged to J = 0.660 $/kWh with LPSP = 0 by setting (b, s, w) = (27, 21, 0) and (SOC_low, SOC_high) = (30%, 40%). When compared with the original settings, the cost was reduced by 40%, while the LPSP was reduced from 26.42% to 0.
Office Load
The analyses of the office load (see Figure 5) are shown in Figure 8 and Table 7. First, the original system layout (b, s, w) = (8, 10, 2) with management settings of (SOC_low, SOC_high) = (40%, 50%) gave a system cost of J = 1.107 $/kWh and LPSP = 5.08%. Optimizing the settings slightly reduced the system cost to J = 1.106 $/kWh with LPSP = 0 using (b, s, w) = (23, 11, 2) (see Step 2 of Table 7). Note that the system reliability was better than for the household and laboratory loads at this step, because the office load profile was basically synchronized with the irradiation and wind curves, so the solar energy could be used directly to supply the loads. Therefore, we omitted Step 3, which represented the optimization with w = 2 and LPSP = 0 in Tables 5 and 6.
Cost and Energy Distributions
The optimal system designs for the three loads, based on the reference plots, are illustrated in Tables 5-7. We further analyzed the cost and energy distributions of these systems, as shown in Table 8. First, the laboratory achieved the lowest unit energy cost because its average daily energy consumption was the largest, so the initial costs were spread over more energy; the household load showed the opposite result. Second, the laboratory used the most solar panels and batteries, while the household applied the fewest, to produce and store sufficient energy for the load requirements. Third, the optimal battery units for all loads did not differ much (23-27 units); this was not intuitive because the laboratory load was mainly at night, while the office load was mainly in the daytime. The reason was that the battery life was shortened if only a small amount of the battery energy was used. Therefore, using a large amount of the battery energy increased the initial cost, but it also helped to extend the battery life, thereby reducing the battery costs. For instance, for the laboratory load, the battery cost was the lowest even though the laboratory load used the largest amount of battery energy. Because the initial battery SOC was set as 80% in the simulation model, a negative energy supply distribution for the battery means the battery SOC is higher than 80% at the end of the simulation, i.e., the battery is charged by the renewable energy so that its final SOC is greater than the initial SOC. Fourth, the costs of the solar panels, battery, and the PEMFC system (including the chemical hydrogen production system, PEMFC, and NaBH4) are about 40%, 25%, and 20%, respectively, for all loads. That is, the cost distributions are almost the same for all systems after optimization. Finally, solar energy provided nearly 100% of the required load demands because the current high cost of hydrogen requires that the system avoid using the PEMFC unless necessary. The current optimal costs are 0.794, 0.660, and 0.791 $/kWh for the household, laboratory, and office loads, respectively. Although these costs cannot compete with grid power, the system provides a self-sustainable power solution for remote areas and islands without grid power. The energy cost can be greatly reduced when the component prices drop with wider adoption. For example, the analyses in [33] indicated that the critical hydrogen price is about 10 NT$/batch (one batch consumes 60 g of NaBH4 to produce about 150 L of hydrogen). That is, more hydrogen energy will be used in an optimal hybrid power system if the hydrogen price is less than 1/15 NT$/L.
Safety Analyses
The optimization designs illustrated in Tables 5-7 were based on historical weather data, where the solar and wind energy co-assisted the sustainability of the power system. Because the aim of the hybrid power system is to provide uninterrupted power, we further investigated its ability to perform in extreme weather conditions when no solar or wind energy is available.
We applied the optimal settings in Tables 5-7 to the hybrid power model and recorded the lowest battery SOC during the 61-day simulation to calculate the lowest remaining energy and system safety by Equation (9). The results are illustrated in Figure 9 and Table 9, where the lowest SOC (stored energy) for the household, laboratory, and office loads were 29.99% (11.03 kWh), 26.04% (7.83 kWh), and 27.18% (8.97 kWh), respectively. Therefore, the equivalent sustainable operation periods of the system are 0.49, 0.23, and 0.36 days, respectively, considering the average daily energy consumption shown in Table 3 and assuming a battery efficiency of 90%. If a longer sustainability is required, we can adopt sub-optimal settings. For example, the minimal settings and costs to sustain 1 day or 2 days are labeled in Figure 9. Suppose the safety requirement is 1 day; then, the lowest system costs to guarantee 1 day of operation are 0.8952 USD/kWh, 0.7603 USD/kWh, and 0.8735 USD/kWh, respectively, for the household, laboratory, and office loads. The corresponding component sizes are (b, s, w) = (33, 26, 0), (b, s, w) = (40, 24, 0), and (b, s, w) = (40, 17, 0), respectively.
Another way to extend the guaranteed system sustainability is to use the chemical hydrogen generation system to produce hydrogen for the PEMFC as a means of providing back-up power. Referring to [36], one mole of NaBH4 can generate four moles of hydrogen, so 20 kg of NaBH4 can produce 4.16 kg of hydrogen, which would provide 63 kWh of electricity for the system. Therefore, a further sustainability guarantee is possible by stocking more NaBH4 with the auto-batching system developed in [36], which can produce hydrogen when the system requires energy from the PEMFC. For example, if the system stores 20 kg of NaBH4, the safety indexes for the household, laboratory, and office loads can be extended by 3.33, 2.10, and 2.90 days, respectively, assuming an inverter efficiency of 90%. Installing 40 kg of NaBH4 could guarantee 6.17, 3.96, and 5.44 days of operation for the household, laboratory, and office, respectively, in the worst-case scenario.
The choice between a sub-optimal design and extra NaBH4 stock would depend on the estimated extreme weather conditions and the price of NaBH4. For instance, if the expected extreme weather happens on one day during the 61-day simulation, the total system costs are increased by NT$25.81, NT$85.70, and NT$48.47 for the household, laboratory, and office, respectively, using the sub-optimal settings. Conversely, the extra NaBH4 required to guarantee sustainability under the worst-case conditions is 3.59 kg, 8.26 kg, and 5.04 kg, respectively, assuming an inverter efficiency of 90%. This will increase the system cost by NT$35.90, NT$82.60, and NT$50.40 for the household, laboratory, and office, respectively, if the NaBH4 price is 10 NT$/kg. Therefore, the second option (using extra NaBH4) will be the better choice if the cost of NaBH4 is less than 7.19, 10.38, and 9.62 NT$/kg for the household, laboratory, and office, respectively. Note that these analyses are based on the worst-case conditions, where the battery SOC is at its lowest when the extreme weather happens. Hence, in general, the cost should be lower and more benefits are possible by storing extra NaBH4 with the auto-batching system [36].
Results and Conclusions
This paper has demonstrated the optimization of a green building that was autonomous and did not connect to the main grid. The building concept can be applied to remote stations and small islands where no grid power is available. We discussed the impacts of three typical loads on the optimization of a hybrid power system. First, we built a general hybrid power model based on a green building in Taiwan. The model consisted of PV, WT, battery, PEMFC, electrolyzer, and chemical hydrogen production modules. Second, we evaluated the system performance by applying the household, laboratory, and office load profiles to the model. The results indicated that the combination of PV, battery, PEMFC, and chemical hydrogen production can guarantee system reliability. When compared with the original settings, the total system cost was greatly reduced, by 38.9%, 40%, and 28.6% for the household, laboratory, and office loads, respectively, while the LPSP was reduced by 4.89%, 24.42%, and 5.08%, respectively, i.e., the system reliability was significantly improved. Third, the cost distribution showed similar results for the three loads: the battery, PV, and PEMFC systems accounted for about 25%, 40%, and 20% of the system costs in all three cases. Note that the current usage of the lead-acid battery is a compromise between cost and efficiency. For example, applying the hybrid system with a LiFe battery [33], the optimal system costs became 2.237, 1.846, and 1.853 $/kWh for the household, laboratory, and office loads, respectively, much higher than the current optimal costs with the lead-acid battery. Fourth, the energy distributions indicated that the PV provided nearly 99% of the required energy because of the current high price of hydrogen. As shown in [33], hydrogen energy will become competitive when the hydrogen price drops to about one third of the current price. Finally, we evaluated the safety of these systems under extreme weather conditions and proposed two methods for extending system sustainability: using a sub-optimal design or stocking more NaBH4. The latter method tended to be more flexible and better able to cope with uncertainties. For example, adding 20 kg of NaBH4 will increase the system safety by 3.33, 2.10, and 2.90 days for the household, laboratory, and office loads, respectively. These findings can be considered when developing customized hybrid power systems in the future.
"year": 2018,
"sha1": "722da434f8fb42ebdf7e607a92b845fb18f05f0e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/1/57/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "de08c403ca761a4911ed988eae642d742a52499d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250009601 | pes2o/s2orc | v3-fos-license | Understanding the Teachers’ Lived Experience in Teaching Technology and Livelihood Education (TLE) Using Modular Distance Learning in Now Normal
This qualitative study examined the lived experiences of teachers teaching Technology and Livelihood Education (TLE) using modular distance learning in the now normal. A review of literature was completed concerning teachers and the history of modular distance learning. The research questions for the study were: what are the participants' lived experiences in teaching and assessing students in the modular distance learning class of TLE in the now normal, and how do they cope with the challenges of teaching and assessing students in the modular distance learning class of TLE in the now normal? The research design for the study was phenomenological qualitative research in an interpretive design. Interviews were completed. The data were disaggregated according to themes and patterns. The participants of the study consisted of eight (8) TLE teachers with experience of modular distance learning in the Department of Education in the Division of the City of Biñan. After data collection, the researcher found six (6) common themes. Each of these themes was discussed in terms of supporting or not supporting the review of literature. The majority of the findings, supported by the participants, served as authentic guidelines for the study. A major contradiction in the findings was the fact that none of the teachers in the study felt a need for more professional development and community involvement. This study was situated within the indigenous context of the eight participants' experiences as TLE teachers using modular distance learning in the Division of the City of Biñan; eight teacher-informants from secondary schools in the division were the targeted participants in this study.
for self-employment and provides pathways for further education and training in a chosen career. The practical and relevant course offerings are responsive to individual and community needs and will provide students with opportunities to become economically productive even if they decide to leave the formal school system at any given time (Valencia, 2015).
The Covid-19 pandemic practically changed both the local and global arenas of educational systems. Countries are continuously looking for and developing different strategies for delivering quality education. Lockdowns and staying home became the main strategies to flatten the curve and control the transmission of the disease (Sintema, 2020). Reviews of existing Department of Education (DepEd) policies have led the researcher to believe that the CoVid-19 health crisis has exacerbated the delivery of education to learners in terms of quality educational access. This health issue has profoundly affected every identifiable sector: education, health, public service, and industry, among others. The impact is felt more acutely in education, particularly as children and adolescents (WHO, 2020) have fewer resources for well-being, mental health, and coping strategies than adults.
The current COVID-19 crisis has obliged most education systems to adopt alternatives to face-to-face teaching and learning. Many education systems moved activities online to allow instruction to continue despite school closures (OECD, 2020). The shift of teaching-learning delivery in schools to modular distance learning made the delivery of basic quality education more challenging on the part of school personnel. That is why DepEd leaders are always finding avenues to solve the problems and to capacitate teachers and school heads to become more effective in the field of modular distance learning (Bagood, 2020).
The Philippines' Department of Education (DepEd), in order to pursue its mandate of delivering quality education while coping with the demands of the health protocols implemented by the Inter-Agency Task Force on Emerging Infectious Diseases (IATF), adopted the Modular Distance Learning (MDL) approach across all programs. This MDL approach puts the programs of vocational education, especially the teaching of Technology and Livelihood Education (TLE) as provided for under Republic Act 10647, at a disadvantage. TLE contributes meaningfully to the solutions of societal problems such as unemployment, poverty, and malnutrition (Doucet et al., 2020). The TLE learning process, whose terminal objective is to instill skills among learners, requires traditional, face-to-face teaching. The MDL approach created two main problems among TLE teachers: primarily, how to teach skills to learners through modular distance learning, and secondly, how to evaluate objectively the growth and performance of learners. Malipot (2020) stressed that teachers also air their problems with modular distance learning. The issues in teaching TLE specialization courses through modular distance learning have been huge and debilitating. Distance learning poses a challenge for accessing teaching-learning resources that are usually found in a classroom setting (Pedragoza, 2021).
Modular distance learning poses challenges for TLE teachers and learners in teaching and in the conduct of assessment, including limitations on giving immediate feedback and the need to account for different contexts in designing, implementing, and grading assessment tasks. The grading system is divided into two components: written work at thirty percent (30%) and performance tasks at seventy percent (70%) (DepEd, 2021). It is used to keep track of the students' progress in achieving the learning standards and in the development of 21st-century skills, to promote self-reflection and personal accountability among students, and to provide a basis for the profiling of student performance.
In this study, therefore, where in-depth experiences of TLE teachers were qualitatively gathered and interpreted, the researcher hopes that this research would shed empirical light on the current Modular Distance Learning (MDL) of the Department of Education (DepEd), specifically on teaching TLE and its assessment and grading policies. The researcher further hopes that, through the lived experiences shared by seasoned TLE teachers, new teaching strategies and evaluation methods will emerge, particularly in the DepEd division in the City of Biñan, province of Laguna.
Research Design
This paper drew inspiration from a qualitative inquiry conducted during DepEd's implementation of modular distance learning. The study used a psychological phenomenology approach in interpretive qualitative analysis. Interpretivism seeks to build knowledge from understanding individuals' unique viewpoints and the meaning attached to those viewpoints (Creswell & Poth, 2017). The interests of the research are the phenomenon of modular distance learning and the lived experiences of TLE teachers of this phenomenon. Eight teachers from secondary schools in the Division of Biñan, Laguna participated in the study and shared their experiences of modular distance learning during the now normal.
This veered into their experiences through the lens of examining human lived experiences as a source of critical knowledge and understanding.
Participants of the Study
This study was situated within the indigenous context of the experiences of eight TLE teachers with modular distance learning in the Department of Education in the Division of the City of Biñan. The eight targeted teacher-informants were from Jacobo Z. Gonzales Memorial National High School, Biñan Integrated National High School, Dela Paz National High School, St. Francis Integrated National High School, Biñan City Science and Technology National High School, Biñan Secondary School of Applied Academics, Southville 5A Integrated National High School, and Mamplasan National High School.
The inclusion criteria for choosing the participants were: (1) experience in teaching TLE in a modular distance learning class; (2) more than 5 years in teaching TLE; (3) openness in sharing experiences of teaching TLE through modular distance learning; and (4) voluntarily signing the waiver of participation.
Data Analysis
The qualitative and empirical data gathered through interviews with the TLE teachers of the eight (8) identified schools were analyzed using an interpretivist lens guided by constructivist philosophy. Using thematic analysis, the researcher strove to identify patterns of themes in the interview data, following the framework of Moustakas as cited in Creswell (2017). This is generally considered to be an effective data analysis tool and a systematic approach to organizing interview data. It lessens the possibility of personal biases clouding the interpretation of data. It also makes it easier to explain and convincingly assure the reliability of the findings and conclusions from the study.
Since the data are highly qualitative in nature, the researcher considered a data analysis strategy that could appropriately forward some knowledge claims as building blocks for policy crafting. The knowledge emanating from the participants served as a guide for educational policy-makers for the new or next 'now normal'. These textual analyses gathered information about how TLE teachers make sense of the phenomenon of TLE modular distance learning. By means of the representation of TLE teachers' narratives of lived experiences, the meanings of the conversations could be deduced by the researcher to understand the challenges the teachers faced and the requirements for just and accurate evaluative assessment of the work skills acquired by the students. The researcher chose Hall's three theories of representation (as cited in Leve, 2012) in analyzing the gathered qualitative data: (a) reflective, in which language reflects an already existing meaning in the world; (b) intentional, in which language expresses the producer's intended meaning; and (c) constructionist, in which meaning is constructed in and through language.
RESULTS AND DISCUSSION
This qualitative-phenomenological study looked into the lived experiences of TLE teachers in (1) teaching and assessing students and (2) the challenges of teaching and assessing students in the modular distance learning class of TLE in the now normal.

1. What are the participants' lived experiences in teaching and assessing students in the modular distance learning class of TLE in the now normal?

Theme 1: Teaching and learning as key concept. Under this theme, the participants mentioned that utilizing the modular distance learning approach to teaching Technology and Livelihood Education (TLE) increased learners' motivation and interest, with teaching and learning as the key concept; students are provided with huge opportunities through modular instruction that makes TLE lessons more interactive, educational, and fun as an instrument of learning. The teachers also shared that through this approach, they are able to emphasize inquiry-based learning, where students investigate, explore, and discover through traditional and modern learning platforms. This in turn develops higher-order thinking skills, especially for TLE, where students need to be very curious, inquisitive, and investigative. As the teachers recalled:

Participant #2: The Modular Distance Learning modality is a way of learning in this new normal to cater for education in this time of pandemic, wherein the school provides the self-learning modules for learners to study at home and continue their education. It can be modular print or digital print; it depends upon the available resources of the school as well as the learners.
Participant #4: Modular distance learning is an alternative mode of learning delivery designed for teaching and learning continuity during the pandemic. The students are learning at their own pace remotely. They are answering their learning tasks in the comfort of their homes without the physical presence of their fellow students and their subject teachers. They are free to ask assistance from their guardians and parents or surf the web for other learning resources, but with certain limitations. They can also ask the help of their teachers via Messenger, texts, and other platforms.

Modular distance learning advocates argue further that additional reasons for embracing this medium of instruction include current technology's support of a degree of interactivity, social networking, collaboration, and reflection that can enhance learning relative to normal classroom conditions (Doucet et al., 2020; Basilaia & Kvavadze, 2020).
Modular distance learning also enables the students to become more motivated and more involved in the learning process as a key concept, thereby enhancing their commitment and perseverance (Pokhrel & Chhetri, 2021). Students have reported that the online components of modular distance learning encourage the development of critical thinking skills. Student satisfaction has also been reported to be higher in blended learning courses compared with purely face-to-face courses (Adedoyin & Soykan, 2020; Ali, 2020; Bao, 2020).
Theme 2: Learning during the pandemic. In this theme, since students nowadays, who are regarded as having advanced knowledge of technology, are constantly exposed to various learning platforms, many of which exist on the Internet, the participating Technology and Livelihood Education (TLE) teachers shared that through the modular distance learning style, the teacher becomes more of a facilitator of learning, because modular instruction becomes more dynamic, participative, and active as students maximize learning platforms on the internet, which results in the ease of their understanding. This allows teachers to think more in terms of better lessons:

Participant #7: This is the learning modality that enables education to continue, as mandated by DepEd Secretary Leonor Magtolis Briones, amidst the pandemic era. In this modality, the learners will be given activities which their parents will pick up and deliver to the school.
Participant #8: Modular Distance Learning is a type of learning modality in which learning happens without physical interaction between teachers and students. In this form of learning modality, the students can learn at home using printed modules and other learning resources. They are provided with learning materials like worksheets/activity sheets and a Learning Packet (LeaP) containing different activities based on the Most Essential Learning Competencies (MELCs). They are also given a printed weekly home learning plan as their guide in answering the learning tasks given in the module and worksheets. To facilitate the learning process, the students are allowed to ask assistance from their parent/guardian and/or from the teacher via phone call, text message, private message, etc., during the teacher's office hours. The teacher takes responsibility for monitoring the progress of the students by checking the output submitted at the end of every week.
Modular distance learning benefits students and institutions. It facilitates improved learning outcomes, access flexibility, a sense of community, the effective use of resources, and student satisfaction. Several research studies have demonstrated that modular courses as a delivery method contribute to improved learning outcomes for students (Oselumese et al., 2016).
Nessipbayeva (2018) reports on an evaluation of the use of modular distance learning that identifies positive aspects of virtual learning environments and critical issues for those considering the use of those environments as part of a lecturing module. The positive aspects of TLE include enhanced participation, increased enjoyment of learning, the ability to facilitate group work in an efficient manner, and the provision of a standardized, user-friendly interface across courses.

Theme 3: Teachers' clash of emotions. This theme refers to the participants' experiences of teaching Technology and Livelihood Education leading to a clash of emotions. For some, second and random thoughts made them question whether they could perform the job because of emotion. For others, what they felt emotionally was being challenged yet somehow disappointed, because this is not what they prepared for when they were studying. For the rest, although nervous, they felt a sense of happiness because at least they could make a difference in the students' lives by teaching and inspiring them. As can be noted, the emotions of the participants did not matter much, because what truly counted was their ability to accomplish the things expected of them by the school administration, and even more so by their students.

This theme stresses that, by means of the modular distance learning approach to teaching TLE, the teachers observed better learning engagement among their students. This is because modular instruction and interfaces result in a more developed focus on deeper learning. Additionally, this approach paved the way for a modular distance learning space where students can enjoy interactive content like codified texts, videos, relevant articles, archival documents, news, pictures, and documentaries, which make students more inquisitive and practical in learning, since these materials can be made available at their fingertips. The participants noted that:

Participant #4: Honestly, at first it didn't bother me and I thought it was the best for everyone. It was a relief at first, knowing that I'd be safe from Covid and would not face the everyday face-to-face setting. But along the way I realized that it's much harder for everyone, especially in terms of assessing the students' learning/skills emotionally. Weeks before the school opening, I got anxious, scared of the sudden changes, imagining the possible worst-case scenarios during modular classes, thinking how I could reach out to my students and provide learning materials that cater to their different needs.
Participant #5: When DepEd decided to use MDL in the classroom to teach TLE, I had some concerns. For example, how will I teach the lesson effectively if the students will use the self-learning modules? How will I emotionally assist students who are academically challenged and need someone to explain the lesson clearly in order for them to learn? How will I monitor students' learning, check and evaluate their outputs, and provide feedback on my students' performance?
Research has also shown that modular distance learning can foster a decrease in student attrition and facilitate an increase in the passing rate for student examinations (Dangle & Sumaoang, 2020). However, other studies point to the need for a more nuanced understanding of how modular distance learning delivery affects student learning. Elcano, Day et al. (2021) explored the relationships between students' perceptions of the e-learning environment, their approaches to study, and their academic performance. They found that students differed widely in their perceptions, resulting in variations in study approaches and grades: students with positive perceptions of the e-learning environment tended to obtain better grades, and vice versa.
2. How do they cope with the challenges of teaching and assessing students in the modular distance learning class of TLE in the now normal?

Theme 4: The perks of collaboration. Surfacing in this theme is the participants' receipt of administrative support to improve their working knowledge of what they teach. Although not all participants sought and received school management support, they saw to it that they consulted seasoned teachers in the field, such as those teaching TLE through modular distance learning, as well as experts in teaching methodologies, asking for their best practices to maximize student learning with an emphasis on learner-centered approaches. In so doing, they were able to compensate for their knowledge and practice gap by embracing the pieces of advice shared with them by experienced in-field educators. Continuously coping with the challenges of modular distance learning in terms of technology integration, the participating teachers always strive for a meaningful, competency-based TLE classroom where standards- and performance-based considerations are constantly applied to make sure that the combination of face-to-face classroom instruction and modular distance learning is successfully carried out to meet the desired learning outcomes. One participant disclosed:

Participant #5: Collaboration and constant communication with the school head, co-workers, parents, and our students are the factors needed in facing the new normal setup in TLE. The school head will always give us information on the DepEd policies and what to do in implementing them. The co-workers work as a team for the students' progress in this new normal. The role of the parents is to guide the students at home during study hours, a difficult task for a parent acting as a teacher in this new setup in education. Moreover, we adapt new teaching strategies fitting the new normal, learning new applications and skills by attending various seminars and trainings.

Participant #6: In traditional TLE teaching, there is direct interaction and collaboration between the teacher and students. The learners are able to understand the lesson by doing tasks guided by the teacher (through demos, etc.). In the "now normal" setting, students rely mostly on the LMs, so most of the time they cannot comprehend the lessons as effectively as in the traditional setting.
Participant #7: In the now normal, there is still the possibility of collaborative learning even though the students are not physically together. It can be self-paced, depending on their capabilities. There are different apps to use to reinforce learning. The teachers were able to produce 21st-century learners as they themselves made changes in the education process.
Teacher collaboration, when practiced with a focus on instructional strategies, curriculum, and assessment in particular, has benefits for both teachers and students. Results are even more promising when the collaboration is extensive and perceived by teachers as helpful. Collaboration among teachers even influences the results of teachers who do not directly experience the same high-quality collaboration (Pedragoza, 2021).
Theme 5: Students' evaluation. As teachers, evaluation is a must in order to arrive at a sound finalization of students' grades. Evaluation further pushes teachers to attend to the cognitive, affective and psychomotor domains as part of the process. This is so because some learners may easily absorb what teachers deliver through modular distance learning, but there are also those who are challenged in terms of technology use, in addition to the availability and accessibility of online materials and other digital content that are supposed to maximize their learning encounters in TLE. Some participants mentioned that:

Participant #4: The evaluation of students' performance in the now normal is very challenging and controversial, along with the issue of academic integrity of the students. I am not confident enough to say that I am effectively evaluating my students' performance during these times, because we all know that there is a possibility that it is not them who answered all the ...

Participant #6: The use of rubrics as a way of evaluating students' performance is likable, and in the "now normal" setting the only challenge is something to do with whether the student did the activities by himself or herself or someone did it for them (guardian, friend, etc.).
Darsih (2018) proposes a framework using a modular distance learning approach for higher education institutions faced with the challenge of developing and deploying continuing professional development in the construction industry. The framework can be used by continuing education providers to determine the most suitable combination of media for a modular distance learning intervention, taking into consideration learner and instructor characteristics, the desired instructional goals and strategies, the nature of the learning environment, and the availability of resources. Research has also been published (Tumbali, 2021) in which the key factors for successful implementation of modular distance learning are discussed. Among these key factors are the availability of financial resources, support from senior management, and access to personnel with the requisite technological capabilities and skills.
Theme 6: Resiliency and flexibility make things possible. The last theme speaks of the resiliency and flexibility that make things possible. Embracing the reality that once a teacher, forever a student, the participants clearly communicated their resiliency and flexibility in teaching TLE using modular distance learning by going the extra mile in what they do, in order to make certain that their teaching competence translates to quality learning. They hold a strong conviction that this experience can be beneficial if handled correctly, provided that resiliency and flexibility are turned into a positive influence on students' learning. As testified by some participants:

Participant #8: We came up with Project TOOLS (Technical Online Orientation and Learning System) as our resiliency and flexibility to help and assist our students on their modular distance learning journey. We are conducting FGDs to discuss issues and concerns with our subject area and to address the needs/problems during MDL and appropriate interventions for non- and low-performing students. We have our online class once a week to discuss the lessons and at the same time assist them in answering the activities in the module. We are giving our students a feedback form for their submitted output. We have practiced the reward system to encourage them to perform the tasks given weekly. We usually give the rewards to those students who did an excellent performance on the task.
Participant #4 As a teacher we are expected to be flexible, resilient, and agile. Though I am not fully successful in coping up with challenges and issues in the evaluation of performance of my students, I acknowledge the fact that those webinars and trainings are helpful. Also the comfort that I'm getting from my colleagues knowing that I am not alone in facing difficulties help me a lot in enduring, bearing and accepting these challenges. Checking of the output is also one of the issues that we're getting in these kind of setting, to protect my credibility I am checking all those outputs and keep all the records that I have, just like what we're doing during pre-pandemic. I think creating harmonious relationship and open communication with everyone including the parents and students is also one of the factors that can help us cope with the challenges.
Participant #5: Because of the pandemic, I always demonstrate my flexibility, open-mindedness and resilience when evaluating my students. I did this because my daughter experiences anxiety or depression when answering the learning tasks that have been assigned to her. This experience in my family has given me a better understanding of the situation of students. Furthermore, by being more adaptable, students can be exposed to new methods of assessing their performance.
Teachers' resiliency and flexibility show in their students' interest, especially in modular distance learning. It is a pedagogical approach that denotes the effectiveness of the e-classroom with technological enhancements (Ajzen, 2019). Contained within is a paradigm change in which the emphasis shifts from teaching to learning (Elcano, Day et al., 2021). A modular distance learning course should also increase the interaction between the instructor and students. It should furthermore enhance the mechanism for integrating formative and summative feedback in order to boost students' learning experiences (Pedragoza, 2021). Therefore, modular distance learning is a fundamental redesign, powered by teachers' resiliency and flexibility, in which students become active and interactive learners.

Essence. The reality of teachers teaching TLE using modular distance learning in the Philippines, as shared through the collective accounts of the selected participants in the public high schools of Laguna, is an affirmation of a committed and devoted community of teachers who can confidently confront and embrace a fact which they successfully dealt with through experience: welcoming various reactions, setting their minds right and coming face to face with the students. Although struggling in the early stages of the phenomenon, the participating teachers were resolute and ready to stand their ground by clinging to the power of | 2022-06-25T15:17:03.817Z | 2022-06-23T00:00:00.000 | {
"year": 2022,
"sha1": "3fc3f18d9038abf1c6034e1a7be536b2fb262f1a",
"oa_license": "CCBYNC",
"oa_url": "https://ijmra.in/v5i6/Doc/35.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0fd3a033f3a2c0f3da0b2b693111b1bacdc47c10",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
146094651 | pes2o/s2orc | v3-fos-license | Short certificates for chromatic equivalence
The chromatic polynomial gives the number of proper colourings of a graph in terms of the number of available colours. In general, calculating chromatic polynomials is #P-hard. Two graphs are chromatically equivalent if they have the same chromatic polynomial. At present, determining whether two graphs are chromatically equivalent involves computation and comparison of their chromatic polynomials, or similar computational effort. In this paper we investigate a new approach, certificates of chromatic equivalence, first proposed by Morgan and Farr. These give proofs of chromatic equivalence without directly computing the polynomials. The lengths of these proofs may provide insight into the computational complexity of chromatic equivalence and related problems, including chromatic factorisation and chromatic uniqueness. For example, if the lengths of shortest certificates of chromatic equivalence are bounded above by a polynomial in the size of the graphs, then chromatic equivalence belongs to NP. After establishing some links of this type between certificate length and computational complexity, we give some theoretical and computational results on certificate length. We prove that, if the number of different chromatic polynomials falls well short of the number of different graphs, then for all sufficiently large n there are pairs of chromatically equivalent graphs on n vertices with shortest certificates of chromatic equivalence of length Ω(n²/log n). We give a linear upper bound on shortest certificate length for trees. We designed and implemented a program for finding short certificates of equivalence using a minimal set of certificate steps. This program was used to find the shortest certificates of equivalence for all pairs of chromatically equivalent graphs of order n ≤ 7.

Submitted: November 2016. Reviewed: January 2019. Revised: January 2019. Accepted: February 2019. Final: March 2019. Published: April 2019. Article type: Regular paper. Communicated by: M. Kaufmann. Research supported in part by Australian Research Council Discovery Project DP110100957. E-mail addresses: Zoe.Bukovac@monash.edu (Zoe Bukovac), Graham.Farr@monash.edu (Graham Farr), Kerri.Morgan@deakin.edu.au (Kerri Morgan).
Introduction
The chromatic polynomial P(G; λ) of a graph G gives the number of λ-colourings of G. It was first introduced by Birkhoff as a possible algebraic approach to a proof of the Four Colour Theorem [1]. Calculating the chromatic polynomial is #P-hard [11,13], even when restricted to the family of subgraphs of square lattices [9]. In general, for all λ > 2, determining if a graph is λ-colourable is NP-complete [12].

Two graphs G and G′ are chromatically equivalent, written G ∼ G′, if they have the same chromatic polynomial. It is possible to have chromatically equivalent graphs that are not isomorphic. No good characterisation of chromatically equivalent graphs is known, but there is a wealth of research on chromatic equivalence, much of which is summarised in [5] and [7]. Research in this area focuses on either small sets of graphs that have been found to be chromatically equivalent or infinite families of chromatically equivalent graphs. A graph G is chromatically unique if the only graphs which have the same chromatic polynomial are also isomorphic to G. The idea of chromatically unique graphs was introduced by Chao and Whitehead [4].
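As a small concrete illustration (our own, and the function names below are ours): the path P4 and the star S4 are non-isomorphic trees with the same chromatic polynomial λ(λ − 1)³, which can be confirmed by brute-force counting of proper colourings.

    from itertools import product

    def colourings(n, edges, q):
        # count proper q-colourings of a graph on vertices 0..n-1
        return sum(all(c[u] != c[v] for u, v in edges)
                   for c in product(range(q), repeat=n))

    p4 = [(0, 1), (1, 2), (2, 3)]          # path a-b-c-d
    s4 = [(0, 1), (0, 2), (0, 3)]          # star with centre 0
    # Both trees satisfy P(T; q) = q(q-1)^3, yet P4 and S4 are not
    # isomorphic (their degree sequences differ).
    for q in range(1, 8):
        assert colourings(4, p4, q) == colourings(4, s4, q) == q * (q - 1) ** 3
    print("P4 and S4 are chromatically equivalent")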
Certificates to verify instances of chromatic equivalence and other algebraic properties of the chromatic polynomial were first introduced by Morgan and Farr [20] in 2009. A certificate of this type is a sequence of algebraic transformations based on identities for the chromatic polynomial and algebraic properties. As calculating the chromatic polynomial is #P-hard in general, any method that can verify information about the chromatic polynomial of a graph without needing to calculate it is of interest.

In this article we consider certificates that can be used to help verify that two graphs are chromatically equivalent. We describe the relationship between certificate length and the computational complexity of chromatic equivalence, chromatic uniqueness and chromatic factorisation. We then show that, if the number of different chromatic polynomials of degree n falls well short of the number of non-isomorphic graphs on n vertices (a plausible hypothesis, given the data), then for all sufficiently large n there are pairs of chromatically equivalent graphs on n vertices with certificates of chromatic equivalence of length Ω(n²/log n).

A program for finding short certificates of equivalence was designed and implemented [3]. In order to produce computationally feasible software, it uses a minimal set of certificate steps. This program was used to find the shortest certificates for all chromatically equivalent graphs of order n ≤ 7. These certificates can be grouped into 15 classes which we call schemas. Although the best known upper bound on the length of certificates is exponential [20], the certificates we found were all remarkably short. We give a linear bound on the lengths of certificates of equivalence for trees.
Let G = (V, E) be an undirected graph with vertex set V = V(G) and edge set E = E(G). In general we will use n and m for the sizes of V and E respectively. The set of unordered pairs of elements of V is denoted V^(2). The chromatic number of a graph G, denoted χ(G), is the minimum number of colours required to colour G so that no adjacent vertices are given the same colour. We refer the reader to [6] for more information regarding common graph theory definitions.

We denote the disjoint union of two graphs G and H by G ∪ H. If G and H share exactly one vertex, then the combined graph is denoted by G ∪₁ H.

Let u and v be vertices in G. If uv is an edge of G, then G\uv is the graph obtained by deleting the edge uv from G. We call this process edge deletion. If uv is not an edge of G, then G + uv is the graph obtained by adding the edge uv to G. We call this process edge addition. For any pair of vertices u and v, the graph G/uv is the graph obtained by identifying vertices u and v in G and discarding any multiple edges or loops obtained in the identification. If u and v are not adjacent in G, we call this process vertex identification. If u and v are adjacent in G, we call it edge contraction.

If two disjoint graphs H1 and H2 both contain a clique of size at least r, then the graph G formed by identifying an r-clique in H1 with an r-clique in H2 is an r-gluing. The 0-gluing operation is just the disjoint union of the graphs. A graph that can be obtained by an r-gluing of two graphs is said to be a clique-separable graph.
The following two relations can be used to calculate the chromatic polynomial recursively. For most graphs this will take exponential time. For an edge uv of G,

    P(G; λ) = P(G\uv; λ) − P(G/uv; λ),                    (1)

and for a nonadjacent pair of vertices u, v of G,

    P(G; λ) = P(G + uv; λ) + P(G/uv; λ).                  (2)

If G is obtained by an r-gluing of graphs H1 and H2, then

    P(G; λ) = P(H1; λ) P(H2; λ) / P(K_r; λ).              (3)

Note that P(K0; λ) = 1. This result only helps evaluate P(G; λ) when G is clique-separable, which does restrict it, but its divide-and-conquer nature means that under such circumstances it can offer a significant reduction in the complexity of calculating a graph's chromatic polynomial. It also gives a partial, initial link between factorisation of the chromatic polynomial (an algebraic property of the polynomial) and the structure of the corresponding graph. These considerations led us, in previous work [20,19], to identify other situations where chromatic polynomials factorise in a similar way.
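To make the recursion concrete, the following is a minimal Python sketch of our own (it is not the certsearch software discussed later) that evaluates P(G; λ) symbolically by repeated application of relation (1); as noted above, this takes exponential time in general.

    from sympy import symbols, expand

    lam = symbols("lam")

    def chromatic_poly(vertices, edges):
        # vertices: a set of labels; edges: a set of frozenset pairs {u, v}
        if not edges:
            return lam ** len(vertices)      # edgeless graph: lam^n colourings
        e = next(iter(edges))
        u, v = tuple(e)
        deleted = edges - {e}                # G \ uv
        # G/uv: relabel v as u, discarding loops and multiple edges
        contracted = set()
        for f in deleted:
            g = frozenset(u if w == v else w for w in f)
            if len(g) == 2:
                contracted.add(g)
        return expand(chromatic_poly(vertices, deleted)
                      - chromatic_poly(vertices - {v}, contracted))

    # P(K3; lam) = lam(lam-1)(lam-2)
    k3 = ({0, 1, 2}, {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
    print(chromatic_poly(*k3))               # lam**3 - 3*lam**2 + 2*lam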
The chromatic polynomial P(G; λ) is said to have a chromatic factorisation if there exist graphs H1 and H2 such that P(G; λ) = P(H1; λ) P(H2; λ) / P(K_r; λ), where χ(H_i) ≥ r ≥ 0 and H_i is not isomorphic to K_r for i = 1, 2 [20]. A graph G is said to have a chromatic factorisation if P(G; λ) has a chromatic factorisation. It is clear from (3) that any clique-separable graph has a chromatic factorisation. Similarly, any graph that is chromatically equivalent to a clique-separable graph has a chromatic factorisation. A strongly non-clique-separable graph is a graph that is not chromatically equivalent to any clique-separable graph. Morgan and Farr [20] showed that there exist chromatic factorisations for some strongly non-clique-separable graphs. They introduced the notion of a certificate to explain these factorisations and other properties of chromatic polynomials.
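As a small check of such a factorisation (our own example, not one from [20]): the diamond, K4 minus an edge, is the 2-gluing of two triangles, so (3) gives P(G; λ) = P(K3; λ)² / P(K2; λ), which the following sketch verifies against a brute-force colouring count.

    from itertools import product
    from sympy import symbols, expand, cancel

    lam = symbols("lam")
    # Diamond graph: two triangles glued along the edge {0, 1}
    edges = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3)]
    p_k3 = lam * (lam - 1) * (lam - 2)
    p_k2 = lam * (lam - 1)
    p = expand(cancel(p_k3 ** 2 / p_k2))     # = lam(lam-1)(lam-2)^2

    def colourings(q):
        return sum(all(c[u] != c[v] for u, v in edges)
                   for c in product(range(q), repeat=4))

    for q in range(1, 8):
        assert colourings(q) == p.subs(lam, q)
    print(p)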
Certificates
Certificates to verify instances of chromatic equivalence and chromatic factorisation were first introduced by Morgan and Farr [20] in 2009. A certificate of this type is a sequence of transformations based on identities for the chromatic polynomial and algebraic properties. Each of the individual transformations in a certificate is called a certificate step.

Each new expression in a certificate is obtained by applying one of the certificate steps CS1-CS8 of [20] to the previous expression. In brief, steps CS1-CS4 apply the relations (1) and (2), in either direction; CS5 involves identification of a nonadjacent pair of vertices; and CS6 and CS7 apply the clique-gluing relation (3), in either direction, so that, for example, P(G; λ) −→ P(G1; λ) P(G2; λ) / P(K_r; λ), where G is isomorphic to the graph obtained by an r-gluing of G1 and G2. The remaining step is:
CS8 Applying field operations to an expression a finite number of times to produce a different expression.
Each application of a certificate step links a single expression to the next one in the certificate. In a certificate that starts with a graph G, each expression can be evaluated to a polynomial which is equal to the chromatic polynomial of G. The certificate can be verified by checking, for each pair of consecutive graph expressions in the certificate along with the nominated certificate step linking them, that the nominated certificate step correctly transforms the first expression of the pair to the second. Importantly, the actual chromatic polynomial of G is not calculated or required at any point in the process of verifying a certificate for G.
Certificates of equivalence
A certificate of equivalence for chromatically equivalent graphs G and G′ consists of:

• a sequence of expressions E_0, E_1, ..., E_l, where E_0 is the graph G and E_l is the graph G′;
• for each i ∈ {1, ..., l}, a specification of a certificate step from {CS1, ..., CS8}, along with which graphs in E_{i−1} and E_i it is applied to, such that the specified step applied to these graphs does indeed transform E_{i−1} into E_i.

We say the certificate is a certificate from G to G′. The length of the certificate is the number of certificate steps, l, applied to transform G into G′ (as distinct from the number of expressions, l + 1, in the certificate). An algebraic certificate of equivalence is obtained from a certificate of equivalence by expanding each algebraic step (CS8) into a sequence of expressions, each of which is obtained from the previous one by a single application of a field axiom.

The algebraic length of a certificate step is defined to be 1 for all steps except the algebraic step (CS8), and the algebraic length of an application of CS8 is defined to be the number of applications of field axioms needed to carry it out. So the algebraic step x −→ x + y − y is a valid application of (CS8), and has algebraic length 2, since it has two applications of field axioms: x −→ x + 0 and 0 −→ y − y. In an algebraic certificate of equivalence, it would be expanded into x −→ x + 0 −→ x + y − y. We will see another example in the next subsection. The algebraic length of a certificate is the sum of the algebraic lengths of all its certificate steps, which is one less than the total number of expressions in the algebraic certificate. Clearly the length of a certificate is at most its algebraic length.

We mostly work just with certificates of equivalence and their (non-algebraic) lengths. But it is sometimes important to do the more detailed accounting required for algebraic lengths.
Certificates of factorisation
A certificate of factorisation for P(G; λ) = P(H1; λ) P(H2; λ) / P(K_r; λ) is defined as for a certificate of equivalence, except that the final expression E_l is the expression H1 · H2 / K_r rather than a single graph. Figure 2 gives a certificate of factorisation. The certificate in Figure 2 has length 7. Its algebraic length is 9, since the sole algebraic step CS8 involves three applications of field axioms: 1 −→ x/x twice and xy + xz −→ x(y + z) once.
Certificate length
Certificates have been a powerful tool in proving results on chromatic factorisations [20,19,21] and chromatic equivalence [18]. Although the best known upper bound on certificate length is exponential, in practice certificates seem to be very short. When they exist, short certificates enable results on chromatic equivalence and chromatic factorisation to be easily verified while bypassing the cost of computing the chromatic polynomial. As certificates are useful and there was no existing software to find certificates in general, we hope that our program to find certificates of equivalence may be a valuable tool in the study of the chromatic polynomial, particularly for chromatic equivalence.

Figure 2: A certificate of factorisation for a strongly non-clique-separable graph from [17].
Certificate length and complexity
The lengths of certificates have potential implications for the computational complexity of determining when two graphs are chromatically equivalent, of determining when a graph is chromatically unique, and of determining when a graph has a chromatic factorisation.
Consider the following problems. CHROMATIC EQUIVALENCE: given two graphs G and H, is P(G; λ) = P(H; λ)? CHROMATIC UNIQUENESS: given a graph G, is G chromatically unique? CHROMATIC FACTORISATION: given a graph G, does P(G; λ) have a chromatic factorisation?

CHROMATIC EQUIVALENCE can be solved quickly once the chromatic polynomials are known. It follows that it belongs to P^{#P}, the class of sets recognisable in polynomial time with the aid of a #P-oracle. Computation of chromatic polynomials does not solve CHROMATIC UNIQUENESS, but can be used to certify chromatic non-uniqueness, with the help of a guessed graph H not isomorphic to the input graph G: use a #P-oracle to compute the chromatic polynomials of G and H, check that they are equal, and use the #P-oracle again to check that H ≇ G (possible since GRAPH ISOMORPHISM ∈ NP ⊆ P^{#P}). It follows that CHROMATIC UNIQUENESS belongs to co-NP^{#P}.
In similar vein, if we guess a factorisation (by specifying H1, H2 and r) for an input to CHROMATIC FACTORISATION, then the guess is easy to check if the chromatic polynomials of G, H1 and H2 are known. This means that CHROMATIC FACTORISATION belongs to NP^{#P}.

These complexity classes, the relativisations of P, NP and co-NP with respect to a #P oracle, are very large, in the sense that they contain all the power of #P and hence, by Toda's Theorem, the entire Polynomial Hierarchy. It is natural to ask whether CHROMATIC EQUIVALENCE, CHROMATIC UNIQUENESS and CHROMATIC FACTORISATION belong to complexity classes within the Polynomial Hierarchy, and especially whether they belong to NP.
Certificates of chromatic equivalence, or chromatic factorisation, may provide a tool for attacking this question, because of the following.
Theorem 1 (a) If every pair of chromatically equivalent graphs has a certificate of equivalence of algebraic length bounded by a polynomial in n, then CHROMATIC EQUIVALENCE is in NP. (b) If every pair of chromatically equivalent graphs has a certificate of equivalence of algebraic length bounded by a polynomial in n, then CHROMATIC UNIQUENESS is in the class co-NP^{GI} of problems whose complements can be solved nondeterministically in polynomial time with the aid of an oracle for the GRAPH ISOMORPHISM problem. (c) If every graph with a chromatic factorisation has a certificate of factorisation of algebraic length bounded by a polynomial in n, then CHROMATIC FACTORISATION is in NP.
Proof:
(a) Every instance of chromatic equivalence can be explained by a certificate of equivalence [20]. If there always exists such a certificate that is polynomially bounded in algebraic length, then it can be used as part of the guess, in a nondeterministic polynomial-time algorithm for chromatic equivalence. In order to enable efficient verification of the guess, it needs more than just a sequence of expressions. The information required includes statements of which certificate steps are used, nominations of which graphs in the expressions are "active" in each certificate step, some mappings between vertex sets of various pairs of graphs, and nominations of vertex pairs to be joined or unjoined or identified. We now give details.
Suppose every pair of chromatically equivalent graphs has an algebraic certificate of equivalence of algebraic length ≤ cn^k, where c and k are constants. Given two chromatically equivalent graphs G and H, let C be such an algebraic certificate of equivalence for them. Let the sequence of expressions in C be E_0, E_1, ..., E_l. For each i ∈ {0, ..., l − 1}, let s_i be the number of the certificate step used to transform E_i to E_{i+1}, where 1 ≤ s_i ≤ 8 and s_i indicates that certificate step CSs_i is used. If an algebraic step is used (s_i = 8), then we also need to specify which specific field axiom application is currently being used for this particular step in the algebraic certificate; we denote this by A_i. This means specifying the axiom together with the direction in which it is used. (For example, the additive inverse axiom could be used either as 0 −→ Γ − Γ or as Γ − Γ −→ 0, where Γ is a graph. These are two separate field axiom applications.) For each i such that step i is not part of an algebraic step (i.e., s_i ≠ 8), we give the following information, which will enable the step to be verified. We specify which graphs in E_i and E_{i+1} are to play the roles of the graphs on each side of the arrow "−→" in CSs_i. Suppose CSs_i has a_i graphs on the left of its arrow and b_i graphs on the right. Let L_1^(i), ..., L_{a_i}^(i) be the actual graphs in E_i that are used as the graphs on the left side of the arrow in CSs_i, and let R_1^(i), ..., R_{b_i}^(i) be the actual graphs in E_{i+1} that are used as the graphs on the right side of the arrow in CSs_i. The root graph of step i is defined to be L_1^(i). The intention is simply that the vertex set of every graph appearing in CSs_i is a subset of the vertex set of the root graph, possibly with relabelling.
For each i ∈ {0, ..., l − 1}, we need two functions ρ_1^(i) and ρ_2^(i) that specify the correspondences between the vertices in the graphs used in step i. For each j = 1, 2, the function ρ_j^(i) maps the vertex set of the root graph for step i to the vertex set of the j-th of the two or three non-root graphs used in CSs_i for step i. These functions must respect adjacency in precisely the right way; they are not isomorphisms but, informally speaking, they are as close to isomorphisms as they can be or need to be under the circumstances. For example, consider CS1 (if s_i = 1), with root graph L_1^(i) and edge e. There is a function ρ_1^(i) from the vertex set of L_1^(i) to that of L_1^(i)\e that preserves adjacency except for the endpoints of e, and a function ρ_2^(i) from the vertex set of L_1^(i) to that of L_1^(i)/e that preserves adjacency except that the endpoints of e in L_1^(i) are mapped to the same vertex in L_1^(i)/e (and this vertex is not adjacent to itself in L_1^(i)/e, since loops are discarded in the version of contraction used when working with chromatic polynomials). We omit the details of the precise adjacency-respecting requirements for the other certificate steps; it is routine to work them out, based on the operations used.
If 1 ≤ s_i ≤ 5, let u^(i)v^(i) be the vertex pair used in CSs_i. These two vertices are to be nominated in the vertex set of the root graph L_1^(i). They are adjacent if s_i ∈ {1, 4} and nonadjacent if s_i ∈ {2, 3, 5}. Once nominated there, the functions ρ_1^(i) and ρ_2^(i) enable the corresponding vertices in the other graphs used in CSs_i to be worked out.

If s_i ∈ {6, 7}, let U_i be the vertex set of the separating clique and let r_i := |U_i| be its size. This set is to be nominated as a subset of the vertex set of the root graph L_1^(i). Once nominated there, the functions ρ_1^(i) and ρ_2^(i) enable the corresponding subsets of vertices in the other graphs used in CSs_i to be worked out.
This completes the description of the information required for nonalgebraic steps.
For each i such that step i is part of an algebraic step (s_i = 8), involving application of a field axiom A_i, the information we give is slightly different. The field axiom applications may be represented as a list of rules of the form: left-side −→ right-side. So we still need to nominate the graphs L_1^(i), ..., L_{a_i}^(i) in E_i and R_1^(i), ..., R_{b_i}^(i) in E_{i+1} that are used in this field axiom application A_i. But it is no longer the case that {a_i, b_i} is always either {1, 2} or {1, 3}. For example, if we are applying the distributive law, {a_i, b_i} may be {3, 4}. We sometimes require an isomorphism between two graphs, which we denote by ρ_1^(i), in order to enable verification that they are isomorphic and can be cancelled (in an application of either the additive inverse or multiplicative inverse axiom). But such mappings are mostly not needed. Furthermore, there is no need to specify vertex pairs or subsets. Applications of field axioms do not change any graphs, although they may introduce or remove graphs.
When giving all this information, some data representation decisions must arise, although (within reason) these decisions make no difference to whether or not the certificates can be verified in polynomial time. For example, each occurrence of a graph in an expression could be represented anew, by its own data structure spelling out all its vertices and edges, regardless of how many times graphs isomorphic to it have appeared previously (either earlier in the same expression, or in a previous expression in this certificate). Then the certificate would also have to specify many isomorphisms so that graphs which were the same could indeed be verified to be so. Alternatively, we could give a detailed representation of a graph only the first time we use it, and thereafter just give an appropriate reference back to that graph. This is simpler and more economical, and we assume it is done this way, but our argument is easily adapted to the former approach.
We are now in a position to show that, under the given hypothesis, CHROMATIC EQUIVALENCE is in NP.
Given two chromatically equivalent graphs G and H, we use as our guess, or certificate (using this term now in its broader complexity-theoretic sense, which inspired but is not equivalent to our specific usage in "certificate of equivalence" etc.), the following information:

    the expressions E_0, E_1, ..., E_l and, for each i ∈ {0, ..., l − 1}: s_i, A_i, the graphs L_1^(i), ..., L_{a_i}^(i) and R_1^(i), ..., R_{b_i}^(i), the functions ρ_1^(i) and ρ_2^(i), the vertex pair u^(i)v^(i), and the vertex set U_i.        (4)

There will always be some missing items in this list, and some special symbol can be used to represent them. For nonalgebraic steps (s_i ≤ 7), A_i is absent. The vertex pair u^(i)v^(i) is only needed if s_i ≤ 5, and U_i is only needed if s_i ∈ {6, 7}. For algebraic steps (s_i = 8), the function ρ_2^(i), vertex pairs u^(i)v^(i) and vertex subsets U_i are not required, and the function ρ_1^(i) may not be required. For each i, the verification that E_i −→ E_{i+1} requires us to verify that all the rules we have laid down in the construction of (4) are satisfied. This can be done in polynomial time. It includes: checking that the lists of the L_j^(i) and R_j^(i) are consistent with the nominated certificate step CSs_i (and field axiom A_i, where applicable); checking that the portions of the expressions that are not designated in the L_j^(i) and R_j^(i) for use by CSs_i are just copied across; and checking the appropriate adjacency-respecting properties of the maps ρ_1^(i) and ρ_2^(i), taking into account the u^(i)v^(i) and U_i. The fact that this all takes polynomial time depends on the facts that the number of graphs in each expression E_i is bounded by a linear function of the certificate length l, and that the numbers of vertices of the graphs in the expressions may be restricted to some linear bound (using steps CS1-CS7; some applications of field axioms in CS8 could introduce much larger graphs in cancelling pairs, but for these to have any effect they must interact, via one of CS1-CS7, with a graph that ultimately derives from G or H using nonalgebraic steps, which constrains their size).

We must do this verification for each i, but this requires at most polynomially many iterations by our initial assumption on certificate length. So the entire verification procedure takes polynomial time. Therefore, if the lengths of certificates of equivalence are polynomially bounded, then CHROMATIC EQUIVALENCE is in NP.
We do not spell out the proofs of (b) and (c) in such detail, as they are similar in essence. We just outline them and comment on the key points of difference.
(b) Suppose again that every pair of chromatically equivalent graphs has an algebraic certificate of equivalence of algebraic length ≤ cn^k, where c and k are constants. Under this hypothesis, we prove that the class of graphs that are not chromatically unique belongs to NP^{GI}.

Let G be a graph that is not chromatically unique. Let H be a graph that is chromatically equivalent to G but not isomorphic to it. Our guess is now H together with an algebraic certificate of chromatic equivalence for G and H of algebraic length ≤ cn^k, with all the associated information described in part (a). In other words, the guess is H together with the information in (4). To verify this guess, we first use the GRAPH ISOMORPHISM oracle to verify that G ≇ H, then we verify the information in (4) exactly as we did in part (a). The verification takes polynomial time, because of the oracle use and the reasoning given in (a). This completes the proof of (b).
(c) Suppose now that every graph with a chromatic factorisation has an algebraic certificate of factorisation of algebraic length ≤ cn^k, where c and k are constants. Let G be a graph with chromatic factorisation P(G; λ) = P(H1; λ) P(H2; λ) / P(K_r; λ) using graphs H1 and H2 and clique K_r where r ≥ 0.

Given a graph G which has a chromatic factorisation, our guess consists of an algebraic certificate of factorisation of algebraic length ≤ cn^k, together with all the required associated information. This information is as in (4), except that the final expression E_l is not a single graph but rather the expression H1 · H2 / K_r. The verification is as in (a) above.
Certificate length and the number of chromatic polynomials
The numbers #CP(n) of different chromatic polynomials of connected graphs on n vertices are known for n ≤ 10. The ultimate trend of #CP(n) is not clear from this data. We know that it can be no more than the number of different connected unlabelled graphs on n vertices, which is asymptotic to 2^{n(n−1)/2}/n!, where 2^{n(n−1)/2} is the number of labelled graphs on n vertices (since, asymptotically, almost all labelled graphs are connected and have identity automorphism group). Does there exist b < 1/2 such that #CP(n) ≤ 2^{bn²}?

Although b = 0.2 would suffice for n ≤ 10, the numbers n^{−2} log_2 #CP(n) grow at an increasing rate over the range 6 ≤ n ≤ 10, suggesting strongly that the true value of any such b is significantly greater. This growth cannot continue forever, due to the upper bound of 1/2 mentioned above. Does it flatten out strictly below this upper bound? It is impossible to tell; there is too little data for any tentative extrapolation, let alone a persuasive one. By contrast, for stability polynomials, the answer to the analogous question appears to be affirmative [16]. Stability polynomials are essentially chromatic polynomials for graphic 2-polymatroids, which are cousins of graphic matroids. Such a b < 1/2 exists if and only if Bollobás, Pebody and Riordan's second conjecture, that asymptotically almost all graphs are chromatically unique [2], is false. That conjecture still seems to be wide open. The authors at the time wrote that they "do not have much evidence" for it, although "the simplest approach to disproving [it] fails" [2, p. 344]. There has been little progress since, at least for general graphs.

As we have just seen, the data gives no reason for taking a position either way on the conjecture, and the analogous conjecture for graphic 2-polymatroids actually seems likely to be false. We argue that, in exploring connections between the conjecture and certificate length, both possibilities (true/false) for the conjecture deserve consideration. There seems to be little optimism in the community that it will be resolved in the near future.
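For small n, #CP(n) can be reproduced by brute force; the sketch below (ours, and deliberately naive, so it is feasible only for n ≤ 5 or so and may take minutes) enumerates all labelled graphs on n vertices, keeps the connected ones, and collects their expanded chromatic polynomials in a set. Distinct polynomials over labelled graphs coincide with distinct polynomials over unlabelled graphs, so no isomorphism testing is needed.

    from itertools import combinations
    from sympy import symbols, expand

    lam = symbols("lam")

    def chrom(vs, es):                        # deletion-contraction, as in (1)
        if not es:
            return lam ** len(vs)
        e = next(iter(es)); u, v = tuple(e)
        rest = es - {e}
        merged = {frozenset(u if w == v else w for w in f) for f in rest}
        merged = {f for f in merged if len(f) == 2}
        return expand(chrom(vs, rest) - chrom(vs - {v}, merged))

    def connected(vs, es):
        seen, stack = {min(vs)}, [min(vs)]
        while stack:
            x = stack.pop()
            for f in es:
                if x in f:
                    (y,) = f - {x}
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
        return seen == vs

    def count_cp(n):
        pairs = [frozenset(p) for p in combinations(range(n), 2)]
        polys = set()                         # expanded forms are canonical,
        for r in range(len(pairs) + 1):       # so set membership detects equality
            for sub in combinations(pairs, r):
                es = set(sub)
                if connected(set(range(n)), es):
                    polys.add(chrom(set(range(n)), es))
        return len(polys)

    for n in range(2, 6):
        print(n, count_cp(n))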
This question about b has implications for lengths of certificates of chromatic equivalence.
Theorem 2 If #CP(n) ≤ 2^{bn²} for some fixed b < 1/2, then for sufficiently large n, there exists a pair of chromatically equivalent graphs for which every certificate of equivalence has algebraic length Ω(n²/log n).
Proof: Assume #CP(n) ≤ 2^{bn²} for some fixed b < 1/2. The number of connected unlabelled graphs on n vertices is at least 2^{(1/2−ε)n²}, for any fixed ε > 0 and sufficiently large n; fix such an ε with ε < 1/2 − b. So the average size of a chromatic equivalence class is ≥ 2^{(1/2−ε)n²}/2^{bn²} = 2^{(1/2−ε−b)n²}. Therefore there exists a chromatic equivalence class consisting of at least this many graphs. Let G be one of these graphs.
Suppose that, for sufficiently large n, every pair of chromatically equivalent graphs on n vertices has a certificate of chromatic equivalence of algebraic length ≤ L.
Each expression in the algebraic certificate must have ≤ 2L terms, since the greatest possible increase in expression size, due to application of a single certificate step CS1-CS7 or a single application of a field axiom as part of CS8, is two. (The length might be unchanged, or decrease, instead.) It also has ≤ 2L instances of arithmetic operations from the standard set {+, −, ×, /}, for similar reasons. For each expression in the certificate, we apply one of CS1-CS7 or we apply a field axiom (and each field axiom might be applied in one of two directions). The total number of options here is constant, but we must then choose where in the expression to apply them. This involves choosing one or two graphs in the expression (for a non-algebraic certificate step, or for some field axiom applications) or one of the instances of an arithmetic operation (for other field axiom applications), so the number of choices of these is ≤ 4L² + 2L. The upshot of this is that, for each expression, there are ≤ c_0 L² expressions that may be obtained from it in a single non-algebraic certificate step or application of a field axiom, where c_0 is a constant. Since there are ≤ 2L expressions in the certificate, the total number of certificates is ≤ L^{c_1 L}, for some constant c_1. This implies that the size of the chromatic equivalence class of G has this same upper bound. So we have 2^{(1/2−ε−b)n²} ≤ L^{c_1 L}, which on taking logarithms gives L log L = Ω(n²) and hence L = Ω(n²/log n), as required.

Our focus in this section on connected graphs is justified by the following remark, which tells us that disconnected graphs give us nothing really new.
Proposition 3 If G1 and G2 are disconnected and chromatically equivalent, then there exist two connected chromatically equivalent graphs G1⁻ and G2⁻ whose chromatic polynomial is the same as that of G1 and G2 except for a factor λ^l.
Proof: Suppose two disconnected graphs G1 and G2 have the same chromatic polynomial. They must have the same number of components, since this number is given by the multiplicity of zero as a chromatic root. Call this number k. Suppose we do the following to each graph: mark one vertex in each nonempty component, identify all these marked vertices (so combining all the components into a single component), and add new isolated vertices if necessary to ensure that the total number of isolated vertices is k − 1. Let G1′ and G2′ be the graphs formed from G1 and G2 by this construction, and let G1⁻ and G2⁻ be the connected graphs formed from G1′ and G2′ by deleting all isolated vertices.
It is routine to show that the above construction G_i → G_i′ leaves the chromatic polynomial unchanged, using the fact that P(H1 ∪ H2; λ) = P((H1 ∪₁ H2) ∪ K1; λ) for any disjoint graphs H1 and H2 (see, e.g., [8, p. 56]). We therefore have P(G_i; λ) = P(G_i′; λ) = λ^{k−1} P(G_i⁻; λ) for i = 1, 2, so that P(G1⁻; λ) = P(G2⁻; λ), with the factor λ^l as claimed (l = k − 1).
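A quick sanity check of this construction on a small example (our own): take G1 = K3 ∪ K2, so k = 2; marking vertex 0 in the triangle and vertex 3 in the edge and identifying them yields a connected graph G1⁻ with P(G1; λ) = λ · P(G1⁻; λ), which the sketch below verifies by brute force.

    from itertools import product

    def colourings(n, edges, q):
        return sum(all(c[u] != c[v] for u, v in edges)
                   for c in product(range(q), repeat=n))

    g = (5, [(0, 1), (1, 2), (0, 2), (3, 4)])        # K3 on {0,1,2}, K2 on {3,4}
    # identify the marked vertices 0 and 3 of the two components
    g_minus = (4, [(0, 1), (1, 2), (0, 2), (0, 3)])  # triangle with a pendant edge
    for q in range(1, 8):
        assert colourings(*g, q) == q * colourings(*g_minus, q)   # factor lam^(k-1)
    print("P(G1; lam) = lam * P(G1^-; lam) checked for lam = 1..7")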
Certificate length and trees
All trees of a given order are chromatically equivalent. In investigating the relationship between chromatic equivalence and certificate length, it is natural to take a look at trees.

Theorem 4 Let n ≥ 3. Any two trees on n vertices have a certificate of chromatic equivalence, using only certificate steps CS1 and CS2, of length at most 2(n − 3).

Proof: We use induction on n.
If n = 3, then there is only one tree on n vertices (up to isomorphism), which has a trivial certificate of chromatic equivalence with itself, consisting just of itself, which has length 0.

Now suppose n > 3. Suppose T1 and T2 are any two trees on n vertices. Now, any tree can be obtained from some tree with one fewer vertex by adding a leaf at an appropriate vertex. So, for i = 1, 2, we suppose T_i is obtained by adding, to a tree T_i⁻ on n − 1 vertices, a leaf edge v_iw_i at some vertex v_i of T_i⁻, with w_i being a new vertex of degree 1 not in T_i⁻. Since T1⁻ and T2⁻ have < n vertices, the inductive hypothesis applies, and there is a certificate of chromatic equivalence between them which only uses CS1 and CS2 and has length at most 2((n − 1) − 3) = 2n − 8. Modify this certificate as follows. Every graph in it, starting with T1⁻ at the beginning, has a copy of v1 (possibly identified with other vertices as well). Attach a new leaf v1w1 at every such copy of v1, throughout the certificate. This gives a new certificate of equivalence, since all the certificate steps remain valid after addition of the leaves. This new certificate starts with T1 and demonstrates its equivalence to T2⁻ + v1w1, which is obtained by adding the leaf v1w1 to T2⁻. We append two more certificate steps, certifying the chromatic equivalence of T2⁻ + v1w1 and T2: first CS1 applied to the leaf edge v1w1, giving (T2⁻ ∪ K1) − T2⁻, and then CS2, reassembling this as T2, since T2 \ v2w2 ≅ T2⁻ ∪ K1 and T2/v2w2 ≅ T2⁻. Altogether, this gives a certificate of chromatic equivalence for T1 and T2 of total length ≤ 2(n − 3).
This upper bound is attained by the certificate of equivalence between the path P_n and the star S_n, each on n vertices.
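To illustrate with the smallest interesting case (a worked example of our own, following the proof above): for n = 4 the bound is 2(4 − 3) = 2, and a length-2 certificate from the path P4 = a-b-c-d to the star S4, with centre x and leaves y, z, w, is

    P4 −→ (P4 \ cd) − (P4 / cd)      [CS1 on the leaf edge cd]
        = (P3 ∪ K1) − P3             [recognising isomorphic graphs; not a step]
        = (S4 \ xw) − (S4 / xw)
        −→ S4                        [CS2 on the leaf edge xw]

Here P4 \ cd is the path a-b-c together with the isolated vertex d, and P4 / cd is a path on three vertices; on the star side, S4 \ xw is the 3-vertex star (itself a path) plus the isolated vertex w, and S4 / xw is the 3-vertex star.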
For computation of certificate lengths for trees of order ≤ 7, see §6.1.2.
Software information
This section provides some information about the certsearch software produced in this project. For more detailed information about the finer points of the program implementation, we refer the reader to the source files available at http://users.monash.edu/~kmorgan/Zoe/.
Building the software
The certsearch software was written in the C programming language and was compiled with gcc. A makefile is included with the source code at http://users.monash.edu/~kmorgan/Zoe/.

The program certsearch uses nauty version 2.4, developed by Brendan McKay [14,15] and available at http://cs.anu.edu.au/~bdm/nauty. In the software, nauty is used for the graph isomorphism checks performed during the search. The program also uses a function from some work by Kerri Morgan [17], which is used as an interface to nauty. This function was modified during this research to make it compatible with the graph data structures used by certsearch. The source code files for nauty and the modified code from Morgan are included along with the other source files required to build the certsearch program.

Also provided are the n_polys files, which contain lists of all of the chromatic equivalence classes for all non-chromatically unique graphs of orders 4, 5, 6 and 7. These files were provided by Kerri Morgan. The graphs* files are also included. They give the adjacency matrices of all graphs of orders 4, 5, 6 and 7. These files are provided by Brendan McKay and are made available at http://cs.anu.edu.au/~bdm/data/graphs.html. Both the n_polys and the graphs* files are required by the automated exhaustive search functions.
Using the software
When running certsearch the user is presented with a number of options. Option 2 runs the batch experiments for all of the pairs of graphs of order 4 ≤ n ≤ 7 and was used to find all of the computational results in this paper. The certificates found during this search option are written out to the order_*_certificates files in the graphs directory.
Interpreting output certificates
This section contains information about interpreting the data output to file by certsearch. Please note that in the certificates in the order_*_certificates files and Appendix B, the graphs G and G0 are the same graph.
In the software, order n graphs are always defined over the set of vertices {0, ..., n − 1}. The only graph operations that the software performs are edge deletion, edge addition and vertex identification. Edge contraction is implemented by removing the edge and then performing a vertex identification. Vertex identification alters the order of the resulting graph, and thus the labelling of the vertex set and the edge set. It is important to understand how this change is implemented in order to interpret the output of the program correctly. When identifying vertices v_i and v_j, i < j, we relabel the vertices using the map φ given by φ(v_k, j) = k for k < j, φ(v_j, j) = i, and φ(v_k, j) = k − 1 for k > j.

The graph obtained by identifying vertices v_i and v_j, i < j, in G has vertex set {0, 1, ..., n − 2} and edge set {φ(v_k, j)φ(v_l, j) : v_k v_l ∈ E(G), φ(v_k, j) ≠ φ(v_l, j)}. With this information, together with the adjacency matrices in the graphs* files, it is possible to interpret the certificates in Appendix B. Note that the certificates listed in the order_*_certificates files, which are available at http://users.monash.edu/~kmorgan/Zoe/, are preceded by the edge lists of the two graphs for which the certificate shows chromatic equivalence. The first of the two graphs listed is G in the certificate. The second is the graph found in the final expression of the certificate.
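The following small sketch (our own, assuming the relabelling map φ is as reconstructed above, and not the actual C implementation) implements this vertex identification:

    def phi(k, i, j):
        # label after identifying v_i and v_j (i < j): j maps to i,
        # labels above j shift down by one, all others are unchanged
        if k == j:
            return i
        return k - 1 if k > j else k

    def identify(n, edges, i, j):
        # edges: set of frozenset pairs on {0, ..., n-1}; requires i < j
        new_edges = set()
        for e in edges:
            u, v = tuple(e)
            f = frozenset({phi(u, i, j), phi(v, i, j)})
            if len(f) == 2:          # drop loops; the set drops multiple edges
                new_edges.add(f)
        return n - 1, new_edges

    # identifying v1 and v3 in the 4-cycle 0-1-2-3 creates doubled edges,
    # whose simplification is the path 0-1-2
    c4 = {frozenset(p) for p in [(0, 1), (1, 2), (2, 3), (3, 0)]}
    print(identify(4, c4, 1, 3))     # (3, {frozenset({0, 1}), frozenset({1, 2})})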
Experiments
The experiments were carried out as follows. For each chromatic equivalence class, a list of all unordered pairs of these graphs was created. Each of these pairs was given as an input to the program, which used a bounded depth-first search algorithm. Each node in the search tree represents an expression E = Σ_{i=0}^{l} sign(i) G_i, l ≥ 0, where the G_i are graphs and sign(i) ∈ {±1}. At each node we branch on all possible steps. First we branch on steps of type (CS1) and (CS3), that is, deletion/contraction on edges in G_i or addition/identification for non-adjacent pairs of vertices in G_i, and then we branch on steps of type (CS2) and (CS4), that is, where the inverse of either an addition/identification or deletion/contraction operation can be applied to some pair of graphs in E. In order to reduce the search time, a second input to the program gave an upper bound M on the length of certificate to be found. If no certificate of length at most M was found, the bound was increased so a certificate could be found. If a certificate of length M is found during the search, we record the certificate and decrement M, backtrack to the previous level of the search tree and continue the search for a shorter certificate. This algorithm found a shortest certificate for each pair and wrote this certificate out to the file of results for the corresponding graph order. This procedure was performed for graph orders 4, 5, 6, and 7.

In order to reduce the search space, our program only uses certificate steps (CS1)-(CS4). This was a natural choice as these steps are based on the fundamental operations used to compute the chromatic polynomial (see (1) and (2)). The program finds the shortest certificates that use only these types of certificate step. The lengths of these certificates give an upper bound on the shortest lengths of all certificates of equivalence for graphs of these orders.

A list of the chromatic equivalence classes for graphs of order 4 ≤ n ≤ 7 was provided by Kerri Morgan. These lists contain lists of the graphs, indexed by certain integers and arranged by equivalence class. The indices correspond to graph data provided by Brendan McKay, which is available at http://cs.anu.edu.au/~bdm/data/graphs.html. The program also uses nauty version 2.4, available at http://cs.anu.edu.au/~bdm/nauty, also developed by McKay [14,15], to perform isomorphism checking during the running of the search algorithm. There were a total of 3821 pairs of chromatically equivalent graphs from 157 equivalence classes.
All of the experimental runs using certsearch were completed on a computer with the following specifications.
Results
One of the main reasons for conducting the experiments was to find information about the length of shortest certificates of chromatic equivalence. Table 1 lists the certificate length data from the experiments. For each graph order, it lists the number of shortest certificates found of each length. Although the experiments considered only graphs of order at most 7, the certificate lengths that they found are, relative to the corresponding graph order, very short. All but seven of the certificates found have length bounded by the order of the graphs. The remaining seven certificates have length 8 and were for graphs of order 7. All certificate lengths are ≤ 2n − 6. While these experiments only consider graphs of very small order, it is encouraging that so far the shortest certificates produced have been very short indeed, especially since the best known upper bound on the length of certificates is < 2^{n²/2}, which is exponential in the order of the pair of graphs. Figure 3 is an example of a certificate found using certsearch. All certificates found during the experiments for orders 4 ≤ n ≤ 6 can be found in Appendix B. These certificates, as well as those for the graphs of order 7, are available at http://users.monash.edu/~kmorgan/Zoe/.
Schemas
A schema is a template for a certificate [20]. It represents a class of certificates that all share certain common subsequences of steps. A certificate which follows the pattern of certificate steps given in a schema is said to belong to the schema. Appendix A lists the schemas to which the certificates of equivalence found for graphs of order 4 ≤ n ≤ 6 belong. These schemas were obtained by analysing the certificate data produced from the experiments.

Table 2 details the lengths of these schemas and the number of shortest certificates found by our program belonging to them. The schemas given in Appendix A are not all of the possible schemas for certificates up to length 6; they are only those to which at least one certificate in the results belongs.
The program finds just one of potentially many shortest certificates for each input pair of graphs. The set of schemas to which the resulting certificates belong is, in part, an artefact of how the graphs are labelled, as the labelling of edges affects the order of edge selection by the program. The order in which possible certificate steps are attempted will also affect which of the shortest certificates is found for a given graph. Consequently, the schemas to which the certificates from our results belong are not necessarily the only ones for which certificates of the same length for each pair could be produced, although they are as short as any. There may exist other certificates of the same length for a given pair of graphs that belong to some other schema; either one of the others listed in Appendix A, or another schema altogether. Nevertheless, we are still able to draw some important conclusions from the information we do have. Since all the shortest certificates that were found belong to a small set of only 15 schemas, and there certainly exist other possible schemas of these lengths, we can say that the entire set of possible schemas may not need to be considered when searching for shortest certificates.

Table 2: The distribution of encountered shortest certificates amongst the schemas for graphs of order ≤ 6.
The vast majority of the certificates found belong to Schemas 1 and 2. This is not unexpected, as these two schemas describe the only two sequences of certificate steps (when restricted to certificate steps of type (CS1)-(CS4)) that can produce a certificate of length 2. Schemas 14 and 15 both describe length-6 certificates, and the remaining schemas describe certificates with length 4.
The edge difference of graphs G and G′ is the smallest d ∈ N such that there exist edge sets A (deleted from G) and B (added to the result) with (G \ A) + B ≅ G′, where d = |A| + |B|; since chromatically equivalent graphs have the same number of edges, |A| = |B| and d is even. In each of the schemas for G ∼ G′, the final expression gives a graph, isomorphic to G′, obtained by deleting and adding some edges. For example, the final expression in Schema 3 is the graph (G + e\f + g\h). The edge difference of pairs of graphs with certificates that belong to this schema is 4. Schemas 1 to 12 and 15 give certificates with length equal to the edge difference of the pair of graphs.

However, in Schemas 13 and 14, the edge difference is two less than the respective certificate lengths. Our program only uses certificate steps (CS1)-(CS4). It is possible that the use of all the types of certificate steps listed in Section 3 may produce certificates for these pairs of graphs that have a length less than or equal to their edge difference.
Conclusions and future work
The certificates found using the certsearch software tool are all very short. They also belong to only a small number of schemas. We give an upper bound of 2(n − 3) on the lengths of certificates of equivalence for trees. This class of graphs includes the star and path graphs, which were subsequently shown, for orders 4 ≤ n ≤ 7, to have the longest certificates amongst all graphs of the same order in the experimental results.

In general, the certificates that have been found so far are significantly shorter than the upper bounds on their length known at this time, so it is possible that further research could uncover tighter upper bounds. Although the chromatic polynomial has been investigated in considerable depth, there has been little research into its algebraic theory. Chromatic equivalence has been the topic of much research, but knowledge about the characterisation of chromatically equivalent graphs in general is far from complete. The certificates of equivalence that have been found so far provide some tantalising hints as to how they may behave generally, but there remain a great number of things about them that are unknown. Consequently, there is a wealth of potential directions for further research into certificates for properties of the chromatic polynomial. Some of these avenues are outlined below.
The schemas found in this research could be used to reduce the time taken to find certificates.By first searching for certificates between pairs of graphs using the schemas found in this research, it may be possible to find certificates for larger orders of graph.Attempting to find certificates that belong to the more common schemas may improve the time taken to find certificates.
Our program finds a shortest certificate for a given pair of graphs. However, there may be other certificates for such a pair that have the same length. A search for all of the certificates of shortest length for a pair of chromatically equivalent graphs could be implemented. This may give a wider range of possible schemas which could be used in our search for short certificates. The search algorithm could be expanded to include the complete set of certificate steps studied by Morgan and Farr [20]. It is quite possible that shorter certificates of equivalence could be found for some of the pairs considered in this project. It is also possible that such a method would find shorter certificates in general.

Certificates of factorisation use the same certificate steps as certificates of equivalence. Extending the search capabilities of our algorithms to include searches for certificates of factorisation would therefore be a natural next step.
A Schemas
All of the certificates found for pairs of chromatically equivalent graphs of order 4 ≤ n ≤ 6 belong to one of the following schemas.
B Certificates
The following are the certificates found for pairs of chromatically equivalent graphs of order 4 ≤ n ≤ 6. A somewhat more verbose version of these certificates, along with all of the certificates for the graphs of order 7, can be found in the files labelled order_*_certificates available at http://users.monash.edu/~kmorgan/Zoe/. The numbers given to denote which graphs each certificate corresponds to are those listed in the graphs* files, also found at http://users.monash.edu/~kmorgan/Zoe/. These files give the adjacency matrices of the graphs.
B.1 Order 4
Graph Pair Certificate
B.2 Order 5
Graph Pair Certificate
Figure 3: A certificate of equivalence for two graphs of order 6, belonging to Schema 14.
Table 1: The lengths of shortest certificates found for chromatically equivalent pairs of graphs of order ≤ 7.
Table 3 gives the lengths of the shortest certificates of equivalence found for pairs of trees of order n, for the range 4 ≤ n ≤ 7. The bound on certificate length suggested by the table aligns with Theorem 4.
Table 3: The lengths of shortest certificates found for chromatically equivalent trees of order ≤ 7. | 2019-05-06T11:20:14.955Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "7b4c6538d48766ad79d90e6e03f5c41e07c021d9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7155/jgaa.00490",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "21dceb7613ec48fd59cd64c82f4680ba95769dec",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
17027534 | pes2o/s2orc | v3-fos-license | Utilization of the ART approach in a group of public oral health operators in South Africa: a 5-year longitudinal study
Background A significant increase in the proportion of restorations to the number of tooth extractions was reported after the introduction of ART in an academic mobile dental service in South Africa. The changes were ascribed to its less threatening procedure. Based on these findings, ART was subsequently introduced into the public oral health service of Ekurhuleni district in the South African province of Gauteng. This article reports on the 5-year restorative treatment pattern of operators in the Ekurhuleni district, who adopted the ART approach into their daily dental practice. Methods Of the 21 trained operators, 11 had placed more than 10% of restorations using ART at year 1 and were evaluated after 5 years. Data, including number of restored and extracted teeth and type of restoration, were drawn from clinical records 4 months before, and up to 5 years after training. The restoration/extraction ratio (REX score) and the proportion of ART restorations to the total number of restorations were calculated. The paired sample t-test and linear regression analysis were applied. Results The mean percentage of ART restorations after 1 year was 24.0% (SE 7.2) and significantly increased annually to 42.7% (SE 9.2) after 5 years in permanent dentitions. In primary dentitions the mean percentage of ART restorations after 1 year was 80.6% (SE 4.9) and 72.6% (SE 8.8) after 5 years. The mean REX score before ART training was 0.08 (SE 0.03) and 0.07 (SE 0.04) for permanent and primary teeth, respectively and 0.11 (SE 0.03) and 0.17 (SE 0.05) after 5 years. Conclusion Five years after training, ART had been used consistently in this selected group of operators as the predominant restorative treatment used for primary teeth and showed a significant annual increase in permanent teeth. However, this change had not resulted in an increase in the REX score in both dentitions.
Background
The South African Department of Health conducted a national oral health survey in 1988/9. It covered only urban areas in the country's nine provinces. The authors observed, among other findings, a need for restorative treatment in the ratio of 2 restorations to 1 extraction [1]. Ten years later a report showed that restorative care in the public oral health services was provided in a ratio of only 1 restoration to 9 extractions [2].
Using the 2001 census data, the dental operator to person ratio in South Africa was found to have been in the order of 1 to 95 727 [3]. Each dental operator in the public oral health services rendered, on average, 4 400 oral treatment procedures per year [2]. These services were provided in 490 full-time and 322 part-time operating dental surgeries [2] and have been described as palliative, demand-driven, and lacking a structured budget and functional concepts [2]. It seemed, therefore, very unlikely that the continuing use of the current traditional rotary-driven restorative treatment regime would lead to attainment of the Department's goal of reducing premature tooth loss within the population, in the foreseeable future [1].
An appropriate alternative to the traditional restorative treatment approach is Atraumatic Restorative Treatment (ART). ART has been developed for managing dental caries and relies on the use of hand instruments for removal of carious tissues and filling of the cleaned cavity and adjacent fissures with a high-viscosity glass ionomer cement [4]. Research has shown high mean survival rates for single-surface ART restorations using high-viscosity glass ionomer cement in both primary and permanent teeth [5]. Single-surface ART restorations in permanent teeth have also been reported to survive longer than comparable restorations produced through the traditional approach using amalgam, after 6.3 years [6].
Because of its independence from electricity and expensive dental equipment, the World Health Organisation (WHO) endorsed the ART approach as appropriate for public oral healthcare services in developing countries such as South Africa [7]. In 1996, ART was introduced into an academic mobile dental service (MDS) in South Africa, and a significant increase in the ratio of restorations to tooth extractions (the REX score) was reported after one year [8]. The changes in the REX score were ascribed to the less threatening procedures of ART. Since then, studies have reported that ART causes less pain [9,10] than traditional procedures do and has been associated with less dental anxiety amongst patients, because it does not involve drilling and injections [11,12]. As an association between reduced patient dental anxiety and reduced operator stress exists [13-16], health authorities of Gauteng province in South Africa assumed that dental operators would choose ART instead of the traditional restorative treatment if they had received training in the use of ART. Therefore, in 2001 all 21 public health dental operators of Ekurhuleni, one of the five districts in Gauteng province, attended a training course in ART. Dental operators in other districts constituted the control group.
Although tooth extraction remained the main type of treatment provided in both groups one year after training, ART had been used in 67% of the restorations placed in the primary, and 11% of the restorations in the permanent dentition of outpatients in the study group. However, unlike in the MDS programme [8], the REX score did not significantly increase in either dentition in the study group [17]. Furthermore, only 13 of the 21 trained operators were found to have applied ART frequently, thus having integrated the approach into their daily dental treatment routine [17]. As no information is available on the long-term effect of the use of ART in public health services by operators who applied ART after completing the training course, a follow-up investigation was carried out. This study aims to report on the 5-year restorative treatment pattern of the 13 operators in a South African public oral health service, who adopted the ART approach into their daily dental practice.
Intervention
Permission to carry out the present study was obtained from the Ethics Committee for Research on Human Subjects (Medical) of the University of the Witwatersrand, Johannesburg, South Africa, under protocol number M00/07/13. Operators were trained in ART according to recommended course standards [18] by a staff member (SM) of the Division of Public Oral Health, University of the Witwatersrand, Johannesburg, in August 2001. The training was conducted during a 3-day workshop. Lectures at the Dental School on the first day were followed, on days 2 and 3, by clinical training on selected patients at a primary healthcare clinic in an informal settlement south of Johannesburg. Lectures contained information on the advantages and disadvantages of ART, its clinical indication, successes and failures of ART restorations and sealants, selection of materials and instruments, hand-mixing of glass ionomer, clinical procedures and management of failed restorations. Operators received copies of the lectures and the ART manual [19]. Contrary to recommendation, no preclinical training in the use of ART was given on extracted teeth. Clinical training consisted of demonstration of the use of ART by the trainer, followed by supervised ART treatment of carious lesions by operators. Each workshop was attended by groups of 4-6 participants operating in pairs: one carried out the treatment while the other provided chair-side assistance. The roles were alternated for the treatment of successive patients. Each operator restored between 3 and 10 cavities in the 6- to 15-year-old children selected.
The operators received no coaching or support from the health and university authorities after the training.
Evaluation
Information concerning the number of restored and extracted teeth and type of restoration per dentition was collected in 2002 from dental clinic records covering the 4 months preceding the ART training (April to July 2001) and 12 months after training (August 2001 -July 2002) and, in 2006, for the period from August 2002 to July 2006. The dental operators did the recording. The number of ART and conventional restorations and tooth extractions for both primary and permanent teeth per operator were calculated, by hand from the clinic record books, by the principal investigator and a fieldworker. The dental records formed the basis for calculating the ratio of number of restorations to number of extractions (REX score) and the proportion of ART restorations to the total number of restorations (%ART).
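To make the two outcome measures concrete, a minimal sketch follows; this is illustrative Python rather than the study's tooling, and the counts and function names are invented for the example.

```python
# Minimal sketch of the two outcome measures defined above; all values are
# hypothetical and the function names are ours, not the study's.

def rex_score(n_restorations: int, n_extractions: int) -> float:
    """REX score: ratio of restorations placed to teeth extracted."""
    return n_restorations / n_extractions

def percent_art(n_art: int, n_conventional: int) -> float:
    """%ART: ART restorations as a percentage of all restorations."""
    return 100.0 * n_art / (n_art + n_conventional)

# Example: an operator who placed 12 ART and 28 conventional restorations
# and extracted 320 teeth in one year.
print(rex_score(12 + 28, 320))  # 0.125
print(percent_art(12, 28))      # 30.0
```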
Statistical analysis
All data were entered into the computer and checked for accuracy before the statistician of the College of Dental Sciences of the Radboud University Nijmegen, the Netherlands, conducted the statistical analysis using SPSS statistical software. This study follows a retrospective longitudinal design, with the operator as the unit of investigation. A linear regression model was used to estimate the time effect for each dentist, with both the proportion of ART restorations and the REX score as dependent variables and year as the independent variable. From the time effects thus found per dentist, the mean and standard error (SE) of the overall time effect (both for the proportion of ART restorations and the REX score) were calculated. Statistical significance was set at α = 0.05.
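As a rough illustration of the regression step described above (the authors used SPSS; this Python sketch with invented data is ours), the time effect per operator is the slope of a straight line fitted to that operator's yearly values, and the overall effect is the mean and standard error of those slopes:

```python
# Hedged sketch of the per-operator time-effect analysis; data are invented.
import numpy as np
from scipy.stats import linregress, sem, ttest_1samp

# %ART in permanent teeth for years 1-5, for three hypothetical operators.
pct_art_by_operator = {
    "op1": [20, 25, 30, 38, 41],
    "op2": [28, 27, 35, 40, 44],
    "op3": [24, 30, 33, 37, 43],
}
years = np.arange(1, 6)

# One linear time trend per operator; the slope is that operator's time effect.
slopes = [linregress(years, y).slope for y in pct_art_by_operator.values()]
print(f"mean time effect = {np.mean(slopes):.2f} per year, SE = {sem(slopes):.2f}")

# Test whether the mean slope differs from zero at alpha = 0.05.
print(ttest_1samp(slopes, 0.0))
```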
Results
Of the initial 21 dental operators, 8 operators did not use ART after training [17]. Two further operators had left the services during the period from 2002 to 2006. The remaining 11 operators adopted ART into their daily dental practice and were followed longitudinally. These consisted of 7 females and 4 males, 8 of whom were dentists and 3, dental therapists. The mean age was 41 years (SD = 9.5). In 2006, operators had graduated on average 18 (SD = 7.4) years previously and worked in their current posts on average for 14 (SD = 3.6) years.
The percentage of ART restorations and standard error of the total number of permanent and primary restorations placed over the 5-year period are shown in Figure 1. The mean percentage of ART restorations in permanent dentition after 1 year was 24.0% (SE 7.2) and increased to 42.7% (SE 9.2) after 5 years. This increase was statistically significant (p = 0.02). The percentage of ART restorations in primary dentition after 1 year was 80.6% (SE 4.9) and 72.6% (SE 8.8) after 5 years.
The mean REX scores and standard error from before the introduction of ART training to 5 years after training in primary and permanent dentition are shown in Figures 2 and 3, respectively. The mean REX score before ART was introduced was 0.08 (SE 0.03) and 0.07 (SE 0.04) for permanent and primary teeth, respectively. Five years after ART training, the mean REX score was 0.11 (SE 0.03) for permanent and 0.17 (SE 0.05) for primary dentitions. No time effect was observed for the mean REX scores in permanent (p = 0.59) and primary (p = 0.24) dentition from before, to 5 years after, the ART training.
Figure 1: Percentages of ART restorations (%ART) and Standard Error (SE) for the primary and permanent dentitions by year of investigation.
Figure 2: Mean REX scores and Standard Error (SE) for primary dentitions before ART training (Year = 0) and 5 years after ART training.
Discussion
This investigation reports on the use of ART by a selected number of dental operators in a provincial public oral health service, 5 years after they were trained. The preceding study, carried out one year after the training, was aimed at assessing the effect of the ART training and had, therefore, included a control group [17].
As the investigation is a selective follow-up to an earlier published study [17], it has inherited some study design shortcomings. These include a potential recall bias related to the fact that treatment data were sometimes recorded by staff at the end of the day and not immediately after completion of the treatment. Recording in this way is, however, common practice in South Africa. A further shortcoming is that evaluator blinding was impossible. It would have required the employment of an outside evaluator totally ignorant about the ART training, for a period of 5 years. Such a requirement is very difficult to meet, considering the publications on ART, and an evaluator was, therefore, not available.
Only 11 of the original 21 operators adopted ART into their daily dental practice. The reasons may be related to barriers to ART adoption, which were investigated and reported elsewhere: lack of a sustained supply of materials for placing ART restorations; lack of adequate operator time due to high patient load/workload; lack of patient cooperation due to dental anxiety; lack of leadership and guidance by healthcare management; negative attitudes of patients towards receiving restorative care; insufficient chair-side assistance; and negative operator attitude towards using ART [20].
The results of this study showed no statistically significant difference in the proportion of ART restorations in relation to the total number of restorations placed in primary dentitions over the five-year study period, but it did in permanent dentitions. Dental operators had maintained their relatively high level of ART utilisation in primary teeth from year 1 to year 5 and increased it in permanent teeth. It appears, therefore, that in this selected group a single ART training course had resulted in a sustained shift in the restoration pattern, particularly in primary teeth: from predominantly conventional restorative treatment using rotary instruments to ART. This indicates that these operators preferred ART as the mode for treating children.
Despite the increase in ART restorations in both dentitions, no statistically significant change in the mean REX score could be observed from the period preceding ART training to 5 years following it. This may indicate that, although dental operators may prefer to use ART in many cases, this preference was not strong enough to motivate them to use ART at a higher frequency than conventional restorative treatment. It may also indicate an increase in tooth extractions.
As epidemiological data in South Africa have shown a need for twice as many tooth restorations as extractions [1], expressed as a mean REX score of 2.0, it is obvious that the introduction of ART has not resulted in achieving this national goal. The REX scores of 0.11 and 0.17 for permanent and primary teeth, respectively, in the present study indicate that many cavities were not restored.
Reasons for this situation may be related to the barriers to ART adoption in South African public oral health services [20]. A critical shortage of dental operator posts in the South African public oral health service has also been reported [2,21]. This, combined with an increasing number of patients seeking care at public dental clinics, creates a constant high patient load/workload [2,21] and thus limits the time needed for the operator to address patients' restorative needs.
As in many African countries, demand by patients for tooth extractions in South Africa's public oral health clinics is higher than for tooth restorations [22]. Patients, particularly those from a low socio-economic background, seek care only when they have severe toothache, which has to be treated by extraction of the decayed tooth. Even if a cavitated lesion can be treated with a restoration, few patients honour the appointment made. The situation was studied in Tanzania, where poor communication between the dental practitioner and dental outpatients was the major barrier to people receiving restorative care. The outpatients did not know that a tooth could be restored [23] and the dental practitioner did not inform them that such a treatment was possible [24]. Even if good communication lines between the patients and dental practitioners are established, it does not imply that the percentage of teeth restored will increase. In South Africa, as in other African countries, patients often have to travel long distances to a dental clinic and, on arrival, have to queue for long hours until they are attended to. A recall visit is then not an option, and a painful tooth that could be saved through a restoration is extracted. This happens even though free oral health services are offered in public oral health clinics. Patients have to pay for transport to the clinics, which further results in very low recall compliance amongst patients in public oral health services.

Figure 3: Mean REX scores and Standard Error (SE) for permanent dentitions before ART training (Year = 0) and 5 years after ART training.
Conclusion
Five years after training, ART had been used consistently by the selected group of dental operators investigated: it was not considered a novelty to be used for only a short time. The ART approach was the predominant restorative treatment used in primary teeth and showed a significant annual increase in permanent teeth. However, this sustained use had not resulted in a statistically significant increase in the REX score in primary and permanent dentitions.
"year": 2009,
"sha1": "5baabef4b9ab9766048a056f5b9332c0e6603263",
"oa_license": "CCBY",
"oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/1472-6831-9-10",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e953f21c6c5edf797f4dd56ade4b85a7ed66516",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Spatial Cognition in Children With Physical Disability: What Is the Impact of Restricted Independent Exploration?
Given the developmental inter-relationship between motor ability and spatial skills, we investigated the impact of physical disability (PD) on spatial cognition. Fifty-three children with special educational needs including PD were divided into those who were wheelchair users (n = 34) and those with independent locomotion ability (n = 19). This division additionally enabled us to determine the impact of limited independent physical exploration (i.e., required wheelchair use) on spatial competence. We compared the spatial performance of children in these two PD groups to that of typically developing (TD) children who spanned the range of non-verbal ability of the PD groups. Participants completed three spatial tasks; a mental rotation task, a spatial programming task and a desktop virtual reality (VR) navigation task. Levels of impairment of the PD groups were broadly commensurate with their overall level of non-verbal ability. The exception to this was the performance of the PD wheelchair group on the mental rotation task, which was below that expected for their level of non-verbal ability. Group differences in approach to the spatial programming task were evident in that both PD groups showed a different error pattern from the TD group. These findings suggested that for children with both learning difficulties and PD, the unique developmental impact on spatial ability of having physical disabilities, over and above the impact of any learning difficulties, is minimal.
INTRODUCTION
Spatial cognition involves perceiving the location, dimension and properties of objects and their relationships to one another; it is core to everyday living, e.g., reading maps, packing a suitcase. There is a known relationship between motor competence and spatial cognition. For example, in typical infants, the emergence of independent walking predicts the development of spatial understanding about the layout of their environment (Clearfield, 2004) and locomotor experience in infancy enhances spatial cognition (Yan et al., 1998). This is supported by longitudinal evidence that the age at which walking emerges is predictive of spatial cognition at 32 months (Oudgenoeg-Paz et al., 2015). Beyond infancy, an association has been shown between motor ability and mental rotation performance in 5-to 6-year-olds (Jansen and Heil, 2010), and between motor ability and spatial navigation performance in 5-to 11-year-olds (Farran et al., 2019).
Further evidence for the relationship between the motor and spatial domains comes from individuals with physical disability (PD), including those with Cerebral Palsy. Physical Disability is a disturbance of movement and is used as an umbrella term that includes various subtypes and causal pathways. A diagnosis of Cerebral Palsy is given when the disorder of movement results from an early acquired non-progressive brain lesion (Rosenbaum et al., 2006); individuals with Cerebral Palsy also present with varied neural presentation and cognitive impairments (Ego et al., 2015; Stadskleiv et al., 2017). Stadskleiv et al. (2017) report that the majority of individuals with Cerebral Palsy in their study presented with white matter lesions. Their measure of MRI presentation was not associated with motor outcome, but was associated with level of cognitive ability.
Studies that have specifically investigated spatial cognition in children with PD have shown that this group demonstrate impaired spatial knowledge of their environment (Stanton et al., 2002; Wiedenbauer and Jansen-Osmann, 2006), and that individuals with PD present with impaired visuo-spatial perception (Stiers et al., 2002; Critten et al., 2018). In children with Cerebral Palsy, Belmonti et al. (2015) report impaired spatial memory on a table-top task and a large-scale spatial memory task. They also report an association between spatial memory and the extent of neural impairment for right-hemisphere lesions, but not for left-hemisphere lesions. They explain this with respect to evidence for right lateralization of visuospatial functions (for example, the right inferior parietal lobe; Schintu et al., 2014) and perception of self-motion (right parietal-temporal areas; Dieterich et al., 2003).
The aim of the current study was to better understand the relationship between motor and spatial ability domains by further investigating the impact of physical disability on spatial cognition while also contributing to the limited literature describing the impact of independent physical exploration on children's sense of spatial competence. With reference to physical exploration, Oudgenoeg-Paz et al. (2015) reported that scores on a self-locomotion physical exploration measure among their typically developing young participants (20 months of age) were predictive of small scale spatial cognition at 32 months [assessed with the Block Design subtest of the Wechsler Intelligence Scales for Children - Fourth Edition (WISC-IV); Wechsler, 2003]. Furthermore, investigators have shown that a child's experience of physical exploration in their local environment is related to the development of strategies required for successful navigation of space (Cornell et al., 2001). Since physical exploration is likely to be restricted in those with PD, due to their poor motor co-ordination, muscular weakness, limited sensations such as paralysis, difficulties with proprioception (perception of the body) and/or poor balance (Sit et al., 2007), comparing spatial cognition skills in children with PD and children without PD can provide potential insight into the role of physical exploration opportunity as a causal factor in the development of spatial cognition.
In this study we focus on two groups of children with special educational needs including PD: children with PD who are wheelchair users; and those with PD who have independent locomotion. These groups differ with respect to independent exploration because restrictions on exploration are likely to be increased for wheelchair users, especially in the early years. This is because, for wheelchair users, some activities and places are inaccessible and, although there are wheelchair users who are able to self-propel, many wheelchair users are often guided along routes by helpers who may repeat the same routes. This limits the individual's active control over their exploration. Active control was investigated by Foreman et al. (1994), who demonstrated poorer performance in a radial search task in 6-year-olds who were trained passively compared to 6-year-olds who experienced active training. For both passive and active free-choice conditions, they included a walking and a sitting (being pushed in a push chair) condition. They determined that the free-choice element, i.e., self-initiated exploration, was more important than the type of locomotion, thus emphasizing that for wheelchair users, restrictions to their autonomy of movement can negatively impact spatial cognition.
Whilst the above review demonstrates an incomplete understanding of the relationship between motor competence and spatial reasoning, there is a consistent pattern of past findings showing an association between them. To our knowledge, our study provides the first investigation of the relationship between motor impairments and small- and large-scale spatial cognition in a large group of children with PD. We included three assessments of spatial cognition. First, we used a mental rotation task, a relatively pure measure of small-scale spatial ability with no physical manipulation requirements in which participants match a rotated image to one of two mirror-imaged upright images. Uttal et al. (2013) and Newcombe (2018) refer to mental rotation as requiring intrinsic spatial coding, i.e., the within-object spatial relations that constitute the structure of the object. This spatial task activates the posterior parietal cortex (Zacks, 2008). It also taps into processes that are common to motor activity (Wohlschläger and Wohlschläger, 1998), and activates the precentral sulcus, a neural area associated with motor activity (Zacks, 2008). Particularly relevant to the current study, this brain activation from a mental rotation task is atypical in individuals with impaired motor ability (e.g., Biotteau et al., 2016; Kashuk et al., 2017). We predicted impaired mental rotation abilities in our participants with PD, relative to those with typical development and suspected that this deficit would be more evident among children with PD who were wheelchair users (for whom exploration might have been relatively limited) than among children with PD who were able to walk independently, as exploration was found to be associated with small scale spatial performance (Oudgenoeg-Paz et al., 2015).
We also included two route learning tasks; in contrast to the mental rotation task, these tasks can be classified as extrinsic spatial tasks (Uttal et al., 2013; Newcombe, 2018), i.e., requiring coding of the spatial relations between objects. The spatial programming task was a 2D route learning problem presented via a freely available Bee-Bot App. Bee-Bots are programmable robots and the Bee-Bot app was presented to children on an iPad. Participants were shown a map-like viewer-independent/allocentric perspective and asked to program the route that the Bee-Bot should take in order to arrive at a flower. This form of presentation allows the participant to view the set of spatial relationships within the environment simultaneously, without actually navigating through the space; it provides a static view of the environment (see Uttal et al., 2006). The use of maps has been related to the development of allocentric spatial coding strategies (Uttal et al., 2006). Furthermore, the development of the ability to use allocentric coding has been associated with self-locomotion (Yan et al., 1998). The second route learning task was presented using desktop virtual reality (VR) and thus represented a high level of physical realism. In contrast to the viewer-independent perspective presented in the spatial programming task, in this task participants viewed the environment from a viewer-centered/egocentric perspective. Participants were shown a route from A to B and asked to learn it. This perspective represents the prototypical manner in which we experience new environments; as we navigate, the relationship between ourselves and space is constantly changing, and landmarks are viewed sequentially. Desktop VR is ideally suited to this investigation because it neutralizes the demands of real-world locomotion, allowing a pure measure of spatial cognitive aspects of navigation.
The above two route learning tasks differ in their egocentric vs. allocentric representation of the environment, and the use of a map only in the spatial programming task. Landmark knowledge and route knowledge, as measured in both tasks, activate the parahippocampal gyrus (Wegman and Janzen, 2011) and the caudate nucleus respectively (Doeller et al., 2008). Allocentric coding and the development of configural knowledge, i.e., knowledge of the spatial relations between places within an environment, activate the hippocampus, as part of the same interacting network (Doeller et al., 2008). Thus it is likely that the spatial programming task additionally activates the hippocampus. This is, of course, speculative without direct neural evidence.
We predicted poorer performance in the children with PD for both route learning tasks compared to a typically developing group. For the spatial programming task, this was based on the association between early locomotor experience and the development of allocentric coding (Clearfield, 2004). For the VR route learning task, this was based on previous reports of impaired spatial knowledge of large-scale environments in individuals with PD (Stanton et al., 2002; Wiedenbauer and Jansen-Osmann, 2006). Given the association between physical exploration and the development of both allocentric and egocentric spatial knowledge (Cornell et al., 2001; Oudgenoeg-Paz et al., 2015), as well as the impact of passive vs. active route learning on performance (Foreman et al., 1994), we predicted a further differentiation between children with PD who used a wheelchair vs. those who could walk independently, with the poorest performance predicted for the PD participants who used a wheelchair. This was based on the assumption that wheelchair users had relatively limited opportunity for independent exploration compared to non-wheelchair users. Due to the heterogeneity of neural damage in individuals with PD, we did not make predictions based on the neural activation of each spatial task.
We also included a memory element to the VR route learning task, in which participants were asked to recall landmarks along the route. Whilst this had a spatial element, it could be solved using visual recognition and so we did not predict a deficit in the PD participants on this measure.
Participants
For the mental rotation and spatial programming tasks, 51 typically developing children were recruited from mainstream schools in the United Kingdom (see Table 1). For the VR route learning task, in addition to the fifty-one TD children who completed the full battery of tasks, data were also included from TD children who had completed this task as part of a different study (Farran et al., 2019), bringing the total number of TD children to N = 122 for this task. The TD children ranged from 5 to 11 years, chosen to span the mental age range of the PD participants (which was lower than their chronological age, on account of their learning difficulties). This allows us to compare the performance of the PD group to what would be expected for their level of non-verbal ability, thus taking into account their learning difficulties.
Fifty-three participants with PD (all with statements of special educational needs) were recruited from two special schools in the United Kingdom. All children with PD who were invited to take part met the criteria of being able to communicate verbally (some children supported this by signing or gesturing) and being able to use the keys on a computer keyboard (some children used a large-keys keyboard), and all had normal or corrected-to-normal vision. One of the authors, who was also a teacher of the children with PD, completed the Movement Assessment Battery for Children 2 checklist (MABC2; Henderson et al., 2007) for each participant. The MABC2 checklist is a thirty-item checklist in which the respondent rates the child's motor competence on a 4-point scale (0, 1, 2, or 3). The questions refer to motor skills such as self-care skills, classroom skills, recreational skills, and ball skills. A total motor score is provided, which is the sum of the thirty scores, with a higher score indicative of poorer motor performance. The MABC2 checklist correlates significantly with performance on the MABC2 test (r = 0.38; p < 0.001; Schoemaker et al., 2012) and has high construct validity (Cronbach's α: 0.94; Schoemaker et al., 2012). All participants completed the Matrices subtest of the British Ability Scale 3 (BAS3; Elliot and Smith, 2011) as a measure of non-verbal ability and the British Picture Vocabulary Scale III as a measure of verbal ability (Dunn and Dunn, 2009). The children with PD were divided into two groups: (1) wheelchair users (used wheelchairs every day and for most of the day) and part-time wheelchair users (used wheelchairs for part of the day or the week); and (2) non-wheelchair users (although some of this group may have used wheelchairs at an earlier age) (see Table 1). A large proportion of the children with PD had received a diagnosis of Cerebral Palsy: N = 33/34 (97%) in the PD wheelchair group, and N = 6/18 (33%) in the PD no wheelchair group. Individuals with Cerebral Palsy have known deficits in visuo-spatial perception (e.g., Ego et al., 2015; Critten et al., 2018, 2019). The extent to which these deficits are independent of their motor impairment is not possible to ascertain. However, given that Cerebral Palsy is a lifelong disorder caused by cortical damage before, during or soon after birth, and the known developmental association between motor and spatial domains, it is highly likely that early disordered motor development in these participants has an impact on the development of spatial cognition (see Stanton et al., 2002), similar to that of an individual with a lifelong motor deficit without a diagnosis of cerebral palsy.
Table 1 notes: (1) BAS3 Matrices ability scores are derived from the first item that was assessed and are equivalent to raw scores. (2) One participant in the TD group did not complete the mental rotation task. In order for the range of BAS ability scores to be similar across the groups, the three participants with the lowest BAS matrices scores in the PD groups were excluded from the sample for mental rotation and Bee-Bot analyses.
Ethical approval was obtained from the University Ethics Committee. Parental written consent and the children's verbal consent were obtained prior to testing. Children were tested individually in quiet areas or rooms in 20-30 min sessions. For each task, participants were given no help during the tasks beyond the standardized instructions. As this was part of a larger battery of tasks, children took part in approximately six sessions. The additional TD children who received the VR navigation task, the BAS3 matrices and BPVS were presented with these tasks under the same conditions (the same 17 inch laptop was used for the VR navigation task, task administration was identical, and testing took place in a quiet area of the school within a 30-min testing session, as part of a larger battery of tasks).
Mental Rotation Task
This task, from Broadbent et al. (2014b), was presented on a 17 inch laptop computer. Participants viewed two mirror-imaged monkeys on the top half of the screen and the test monkey on the bottom half of the screen (Figure 1) and were asked to choose which of the two monkeys on the top half of the screen matched the monkey on the bottom half of the screen. They responded by pressing one of two keys on the keyboard. A large-keys keyboard was available for children who found the laptop keys difficult to access, and two participants chose to answer by pointing, and their choices were inputted for them. There were 6 practice trials followed by 32 experimental trials. In the practice trials, the test monkey was rotated 0° (four trials), 45° (one trial), or 90° (one trial). The practice block was repeated if a participant made any errors on these trials. No feedback was given for experimental trials, but motivational language was used at the end of the task, such as "Well done." In the experimental trials, the test monkey was rotated at 0°, 45°, 90°, 135°, or 180°. Accuracy was recorded.
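As a hypothetical illustration of how the recorded accuracy can be summarised (not taken from the study's materials), proportion correct is computed separately at each angle of rotation:

```python
# Sketch of scoring the mental rotation task by angle; trial data invented.
from collections import defaultdict

# Each trial is (rotation angle in degrees, whether the answer was correct).
trials = [(0, True), (45, True), (90, False), (135, True), (180, False),
          (0, True), (45, False), (90, True), (135, False), (180, False)]

by_angle = defaultdict(list)
for angle, correct in trials:
    by_angle[angle].append(correct)

for angle in sorted(by_angle):
    answers = by_angle[angle]
    print(f"{angle:>3} deg: {sum(answers) / len(answers):.2f} proportion correct")
```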
Spatial Programming Task
The Bee-Bot app was presented on an iPad. There are twelve route planning games on the app, starting with a very simple route for the Bee-Bot to reach a flower (Figure 2). The routes gain in complexity and some routes have more than one algorithm to complete them. The first two routes were used as practice routes, and Routes 3-9 were used as experimental routes (seven routes). Participants were told that they would need to program the Bee-Bot to move it from the start along the route to the flower using the arrow keys in the corner of the screen. Participants were asked to program all moves before they started the Bee-Bot on the route by pressing the GO key. The experimental trials commenced once participants had passed the two practice trials.
Figure 2: Bee-Bot app showing Routes 3 and 9. Images published with permission from TTS group (https://www.tts-group.co.uk/).

Participants were told that if they made an error, they would be allowed to have another go. If participants perceived that they had made an error, motivational language was used (e.g., "Good effort") and they were encouraged to try again. There were a maximum of five trials for each route, and if the child did not complete a route correctly within the five trials, then the task finished. The task was scored as the number of routes attempted by the children (route accuracy: max = 7). We also recorded the number and type of errors made by participants. A correct programming algorithm included two types of commands: forward displacement of the Bee-Bot and left or right 90-degree rotation of the Bee-Bot. Error scores were coded as a proportion of errors for that command type within the route, e.g., if there were two rotation commands in a route, an error of one would give a proportion of 0.5. The mean proportion error score across the number of routes attempted, for each error type, was used as the dependent variable.
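The scoring rule just described can be sketched as follows; the data structure, counts, and function names are assumptions for illustration, not the study's code:

```python
# Sketch of the error scoring rule: errors of each command type are divided
# by the number of commands of that type in the route, then averaged over
# the routes a child attempted. All values are hypothetical.
def route_error_proportions(route):
    """route: dict of command counts and error counts per command type."""
    return {
        "forward": route["forward_errors"] / route["n_forward"],
        "turn": route["turn_errors"] / route["n_turns"],
    }

def mean_error_scores(routes_attempted):
    props = [route_error_proportions(r) for r in routes_attempted]
    n = len(props)
    return {
        "forward": sum(p["forward"] for p in props) / n,
        "turn": sum(p["turn"] for p in props) / n,
    }

# A route with two turn commands, one programmed wrongly, gives a turn error
# proportion of 0.5, matching the example in the text.
routes = [
    {"n_forward": 4, "forward_errors": 0, "n_turns": 2, "turn_errors": 1},
    {"n_forward": 6, "forward_errors": 1, "n_turns": 3, "turn_errors": 0},
]
print(mean_error_scores(routes))  # {'forward': ~0.083, 'turn': 0.25}
```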
VR Navigation Task
The VR navigation task was from Farran et al. (2012). Virtual environments (VEs) were created using Vizard and presented on a 17 inch laptop computer. The VEs displayed brick-wall mazes which could be navigated using the arrow keys on the keyboard. Preceding the experimental maze, the participants watched the experimenter navigate a simple corridor that included two turns. Then they practiced navigating along the corridor. If participants had difficulty controlling their navigation, they were given another attempt.
The experimental VE displayed a brick-wall maze with 6 junctions, each leading to two paths, one correct and one incorrect. The 6 correct choices constituted two left, two right, and two straight-ahead choices. A map of the maze layout is shown in Figure 3. Each incorrect path choice ended in a cul-de-sac and looked like a T-junction when viewed from the preceding junction. Sixteen unique landmarks featured throughout the maze, appearing equally on the left and right of the paths. Eight of the landmarks were near to junctions ('junction landmarks'). Eight of the landmarks were not near to junctions ('path landmarks'). Landmarks were selected from a range of categories (e.g., animals, tools, furniture) for their high verbal frequency (Morrison et al., 1997) and for being easy to recognize. A gray duck was shown at the end of the maze. On approaching the duck, the game ended.
Route learning task
Participants were instructed to learn a single six-junction route through a maze. The experimenter showed the participant the correct route through the maze by using the arrow keys on the keyboard to navigate and told the participant to watch, because it would be their turn to navigate next. After the experimenter's demonstration, the participant attempted to walk the correct route from start to finish using the arrow keys. A large-keys keyboard was available for those children who found the laptop keys difficult to access. If the participant selected an incorrect path, they reached a cul-de-sac and could self-correct by turning around. If a participant was going backwards to the start of the maze, they were directed back to the junction where they made the error. On reaching the gray duck (i.e., on completing the route) the trial terminated. Motivational language was used throughout to maintain participant concentration.
Each walk through the maze from start to finish of the route was labeled a learning trial. The criterion for having learnt the route was the successful completion of two consecutive learning trials from start to finish without error. If participants did not meet this criterion after ten learning trials, the task was stopped. The cumulative number of errors across learning trials was recorded; this was used as the dependent variable. An error was defined as a deliberate incursion down an incorrect path; if the participant corrected his/her course before reaching half-way down an incorrect path section, no error was counted.
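A sketch of the learning-phase logic as we read it from the description above (trial loop, stopping rule, and cumulative error count); the function and its inputs are hypothetical:

```python
# Sketch of the learning criterion: the task ends after two consecutive
# error-free trials, or after ten trials; cumulative errors across learning
# trials are the dependent variable. Inputs are invented.
def run_learning_phase(errors_per_trial, max_trials=10):
    """errors_per_trial: error counts for successive walks through the maze."""
    consecutive_clean = 0
    cumulative_errors = 0
    for trial, n_errors in enumerate(errors_per_trial, start=1):
        cumulative_errors += n_errors
        consecutive_clean = consecutive_clean + 1 if n_errors == 0 else 0
        if consecutive_clean == 2:   # criterion met: route learnt
            return {"trials": trial, "errors": cumulative_errors, "learnt": True}
        if trial == max_trials:      # criterion not met: task stopped
            break
    return {"trials": min(len(errors_per_trial), max_trials),
            "errors": cumulative_errors, "learnt": False}

# Example: errors on four successive walks through the maze.
print(run_learning_phase([3, 1, 0, 0]))
# {'trials': 4, 'errors': 4, 'learnt': True}
```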
Landmark recall task
After the participant had learnt the six-junction route to criterion, they completed a landmark recall task. Participants were shown the same maze but with all landmark objects shown as red balls. The experimenter navigated, stopping at each junction to point out the red ball(s). Participants were asked to recall what object the ball had been when they were navigating the route. On providing an answer, the participant was shown a visual image of the correct answer on another computer screen as feedback, i.e., the landmark in its correct location. This feedback was given to eliminate any dependency between their answers (e.g., if the participants answered incorrectly at one location, without feedback they might not have used that landmark label again, or their incorrect answer might have negatively influenced their subsequent performance if they had recalled the landmarks in sequence). This was conducted for all 12 landmarks that were visible from the correct path. Eight of these landmarks were on the correct path; there were also four landmarks that could be viewed straight ahead before a correct turn to the left or right was executed.
To ensure that the verbal labels used by the participants in the landmark recall task could be coded accurately (e.g., a participant might use the word "light" for "streetlamp"), after the landmark recall task, participants were shown images of each of the 16 landmarks and were asked to name them. This information was then used to retrospectively facilitate the scoring of the landmark recall task.
Overview of Analyses
Where suitable, the data is analyzed using developmental trajectory analysis (Thomas et al., 2009). Developmental trajectory analysis does not require the individual matching of the participants and goes beyond determining differences in group means, to ascertain whether the trajectory of performance across the range of mental ages of each group differs at the onset of the trajectory (the youngest mental ages measured) or the rate of development. For developmental trajectory analysis to be meaningful, it is important that a measure of mental age (in this study, BAS3 matrices ability score) correlates with the task dependent variables. This was the case for the mental rotation and spatial programming tasks, but not the VR navigation task. For the VR navigation task, comparison was by group means instead.
Developmental trajectory analyses were ANCOVAs with Group as the between-participant factor and BAS3 matrices ability scores as the covariate. We chose BAS3 matrices ability score (equivalent to raw score) as our measure of mental age because it is a measure of non-verbal ability and thus represents ability within the same domain as the tasks of interest. BAS3 matrices ability score was rescaled so that the X-axis crossed the Y-axis at the lowest BAS matrices score (a score of 58) of the participants. That is, we subtracted 58 from all BAS matrices scores for these analyses. This does not change the analyses but is easier to interpret because the starting point for the trajectories is at zero. The ANCOVA model included interaction terms between the BAS3 matrices covariate and Group. This was used to indicate whether spatial ability developed at a different rate for each group, with respect to non-verbal ability.
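A minimal sketch of such a trajectory model follows, fitted as a linear model with a Group by covariate interaction on simulated data; the statsmodels formulation and all variable names are our assumptions rather than the authors' analysis code:

```python
# Sketch of a developmental trajectory analysis: an ANCOVA expressed as a
# linear model with Group, a rescaled covariate, and their interaction.
# Data are simulated; variable names are ours.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "group": rng.choice(["TD", "PD_wheelchair", "PD_no_wheelchair"], size=n),
    "bas3": rng.integers(58, 120, size=n),
})
df["bas3_rescaled"] = df["bas3"] - 58  # trajectories now start at zero
df["accuracy"] = 0.5 + 0.004 * df["bas3_rescaled"] + rng.normal(0, 0.05, size=n)

# 'group' tests intercept (onset) differences between trajectories;
# 'group:bas3_rescaled' tests whether the rate of development differs.
model = smf.ols("accuracy ~ group * bas3_rescaled", data=df).fit()
print(model.summary())
```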
The mental rotation variables were broadly normal (Kolmogorov-Smirnov, p > 0.05). Spatial programming and VR navigation variables were largely not normally distributed (Kolmogorov-Smirnov, p < 0.05). Because ANOVA is robust to violations of assumptions of normality, parametric analyses were applied (Blanca et al., 2017) with one exception, maze error. For this variable, responses were skewed toward zero, and thus non-parametric analyses were conducted. For associational analyses, parametric and non-parametric analyses were applied for normal and non-normal distributions respectively.
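The decision rule described above can be illustrated as follows; the normality screen and the fallback to a non-parametric correlation are shown on simulated data, and none of this is the authors' code:

```python
# Sketch of choosing parametric vs. non-parametric association tests based
# on a Kolmogorov-Smirnov normality check, as described above. Data invented.
import numpy as np
from scipy.stats import kstest, pearsonr, spearmanr, zscore

rng = np.random.default_rng(1)
motor = rng.normal(50, 10, 40)
spatial = rng.exponential(2.0, 40)   # skewed toward zero, like the maze errors

def looks_normal(x, alpha=0.05):
    # KS test of z-scored data against a standard normal distribution.
    return kstest(zscore(x), "norm").pvalue > alpha

if looks_normal(motor) and looks_normal(spatial):
    r, p = pearsonr(motor, spatial)
    label = "Pearson"
else:
    r, p = spearmanr(motor, spatial)
    label = "Spearman"
print(f"{label} r = {r:.2f}, p = {p:.3f}")
```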
Mental Rotation
Developmental trajectory analysis was conducted on the proportion of correct answers with degrees of rotation (0°, 45°, 90°, 135°, 180°) as a within-participant factor and Group as a between-participant factor. This revealed the anticipated main effect of rotation (decrease in accuracy with increasing degrees of rotation), reported as a linear contrast, F(1,94) = 18.94, p < 0.001, ηp² = 0.17. The effect was consistent across participant groups, F < 1. There was no group difference in proportion correct at the lowest level of non-verbal ability (i.e., at the intercept of the trajectories), F(2,94) = 2.529, p = 0.085, ηp² = 0.051. However, because this effect was marginal we had reason to explore it. This revealed a lower proportion correct at the lowest level of non-verbal ability in the PD wheelchair group compared to the TD group only (p = 0.036; other comparisons: p > 0.05; see Figure 4). There was no interaction between non-verbal mental age and group, which is indicative of similar rates of development across groups, F < 1. BAS3 matrices score (non-verbal mental age) was significantly related to proportion correct, F(1,94) = 32.079, p < 0.001, ηp² = 0.254. All other interactions with BAS3 matrices score were non-significant, p > 0.05 for all.
Spatial Programming Route Accuracy
Developmental trajectory analysis on the number of routes attempted (route accuracy, maximum = 7) with Group as a between-participant factor demonstrated no group difference at the lowest level of non-verbal ability, F < 1, and similar rates of development, F(2,95) = 1.379, p = 0.357, ηp² = 0.296, across the groups. BAS3 matrices score was significantly related to Bee-Bot route performance, F(1,95) = 39.875, p < 0.001, ηp² = 0.296 (Figure 5).
Spatial Programming Errors
Developmental trajectory analysis on proportion error scores, with a within-participant factor of Error Type (forward errors, turn errors) and Group as a between-participant factor, demonstrated a group difference at the lowest level of non-verbal ability, F(1,95) = 7.525, p = 0.001, ηp² = 0.14, and an interaction between non-verbal ability and group, which is indicative of different rates of development, F(2,95) = 3.20, p = 0.045, ηp² = 0.063, across the groups. This was accounted for by significantly more errors at the intercept in the TD group compared to both of the PD groups (TD vs. PD wheelchair, p = 0.002; TD vs. PD no wheelchair, p = 0.002; PD no wheelchair vs. PD wheelchair, p = 0.417), and a steeper improvement with development in the TD group compared to the PD wheelchair group (TD vs. PD wheelchair, p = 0.041; TD vs. PD no wheelchair, p = 0.076; PD no wheelchair vs. PD wheelchair, p = 0.853). The slopes of the trajectories for each error type did not differ, F(1,95) = 1.09, p = 0.30, ηp² = 0.011, and this pattern was consistent across groups, F(2,95) = 2.00, p = 0.14, ηp² = 0.040. BAS3 matrices demonstrated a significant association with spatial programming errors, F(1,95) = 22.39, p < 0.001, ηp² = 0.19. Figure 6 illustrates developmental trajectories collapsed across error type.
Navigation
A larger TD group was employed for this task, which enabled comparison with TD groups in different age ranges (Table 2). The PD groups had a similar level of BAS3 matrices ability score to the TD 5-7-year-olds (PD wheelchair vs. TD 5-7: p = 0.410; PD no wheelchair vs. TD 5-7: p = 0.945) and a lower level of BAS3 matrices ability score than the TD 8-9-year-olds and the TD 10-11-year-olds (p < 0.05 for all).
Landmark Recall
ANOVA of the number of junction and path landmarks that were correctly recalled was carried out, with a between-participant factor of Group and a within-participant factor of Landmark Type (path, junction). This demonstrated no difference in the number of landmarks recalled across groups, F(4,166) = 2.093, p = 0.084, ηp² = 0.048 (Tukey pairwise comparisons were non-significant for this marginal effect: p > 0.05 for all). There was a main effect of landmark type due to stronger recall of junction than path landmarks, F(1,166) = 159.463, p < 0.001, ηp² = 0.490, which did not interact with group, F < 1 (Figure 7).
Associations Between Motor Ability and Spatial Competence
We were also interested in how performance on the M-ABC checklist correlated with each of our spatial dependent variables. M-ABC checklist data are available for the two PD groups only, and so the correlation matrix below does not include the TD group. As shown in Table 4, there were no significant correlations between motor score and spatial competence.
DISCUSSION
The current study had two aims. The first aim was to investigate the relationship between motor ability and spatial competence by working with participants for whom motor ability is impaired. The second aim was to investigate whether this relationship differed for those who were wheelchair users and potentially limited in opportunities for independent exploration, compared to those who could walk independently. All participants with PD had a statement of special educational needs (e.g., moderate learning difficulties, epilepsy). This was evident in their level of non-verbal ability, which was commensurate with that of TD 5-to 7-year-olds.
We predicted that the PD groups would show impaired spatial ability on all three tasks. We also predicted a differentiation in performance between the two PD groups for all three spatial tasks, with the PD wheelchair group finding the tasks harder than the PD no wheelchair group, on account of differences in their opportunities for independent exploration. We found that level of impairment in the PD groups across tasks was broadly akin to their level of non-verbal ability (note that the PD groups had poor non-verbal ability). This demonstrates that spatial ability is poor (i.e., it is not age-appropriate), but that in the context of the learning difficulties of these individuals, it does not represent a specific area of weakness. The one exception to this was performance of the PD wheelchair group on the mental rotation task, where performance was lower than expected for their level of non-verbal ability. Mental rotation taps into intrinsic spatial skills, whilst the two spatial route tasks tap into extrinsic spatial skills (Uttal et al., 2013; Newcombe, 2018). Precisely why performance on the mental rotation task and/or intrinsic spatial skills would show a specific impairment relative to the two spatial route tasks and/or extrinsic spatial skills is difficult to determine. The difference could relate to the neural activation of motor areas of the brain in the mental rotation task specifically (Zacks, 2008). However, this is a tentative explanation given the known heterogeneity in neural deficit in individuals with physical disability and learning difficulties (e.g., Stadskleiv et al., 2017).
The overall pattern of performance observed could also reflect differences in the sensitivity and specificity of the route learning tasks. The VR navigation task relied on landmark knowledge and route knowledge and thus did not draw on the more sophisticated configural knowledge. Navigational tasks that rely on configural knowledge, an ability which develops in typically developing children between the ages of 5 and 10 years (Bullens et al., 2010; Broadbent et al., 2014a), might have been more sensitive to group differences. Furthermore, neither of the route learning tasks are pure measures of spatial ability. Route knowledge tasks also draw on executive function skills (Purser et al., 2012) and we discuss below that, for the spatial programming task, the working memory and attention demands of the task might explain the pattern of errors of the two PD groups. Limitations in working memory and attention, on account of learning difficulties, could thus overshadow any differences between the two PD groups in spatial competence. The pattern of spatial performance in the PD wheelchair group is discussed further within the context of each task below.
We also predicted that performance on the landmark recall task would not be an area of deficit for the PD groups. This was the case. In fact, there were no group differences on this task, demonstrating that the mechanisms tapped into on this task (object memory) were not impacted by either physical disability or the participants' learning difficulties. Note, though, that the lack of evidence of progression in the three TD groups could also suggest that this measure was not sensitive to developmental differences. Performance on each task is discussed in turn below.
Performance on the mental rotation task demonstrated a linear decrease in accuracy with increasing degrees of rotation for all groups. This pattern was expected for the TD group (e.g., Farran et al., 2001). The presence of this typical pattern for both of the PD groups suggests that the PD groups were capable of performing mental rotation and approached the task in a typical manner. Despite this, the PD groups performed at a lower level than expected for their chronological age (mean: 13 years), and at a level commensurate with their non-verbal mental age. Performance in the PD wheelchair group was lower than that of the TD group from the lowest level of non-verbal ability and remained consistently low throughout the range of non-verbal abilities, as indicated by the similar rate of development to the TD group. In other words, across the range of non-verbal abilities that we examined, the PD wheelchair group was consistently and to the same degree poorer than the TD group on the mental rotation task, suggesting delayed but parallel development. In contrast, for the PD no wheelchair group, performance was on a par with the developmental trajectory of the TD group and therefore as expected for their level of non-verbal ability. Thus, any deficit in mental rotation ability in this group appears to be attributable to having learning difficulties (indexed here by non-verbal ability), rather than motor impairments. Note, these group comparisons were explored based on a marginal effect and so should be considered cautiously.
The PD wheelchair group are likely to have limited experience of exploration and limited experience of actively moving through their environment. This could have a developmental cascading impact on the development of their ability to perform mental rotation. This is supported by Oudgenoeg-Paz et al. (2015) who demonstrated that exploration in TD toddlers was longitudinally predictive of their performance on a block construction task (a task which involves mental rotation; Farran et al., 2001). It is also noteworthy from the MABC-checklist scores that the PD wheelchair group had more severe motor impairment than the PD no wheelchair group. This was the case across all subsections of the checklist (Table 5), including sections A1 and A2 which included fine motor items. It is possible that this broad difference in motor competence between the PD groups, rather than or in addition to their experience of independent exploration, can explain why mental rotation was impaired in the PD wheelchair group relative to their non-verbal ability. Whilst this is not statistically supported by the correlational analyses which indicated no significant associations between motor ability and spatial competence, the relationship does show a medium effect size for this group (Cohen, 1988) and the lack of significance could reflect a lack of power for these analyses. In support of a broad motor-spatial relationship, Soska et al. (2010) report a relationship between the fine motor skills required for visual-manual exploration and small-scale spatial abilities in 4.5- to 7.5-month-old infants. Further support is offered from evidence that mental rotation draws on mechanisms that are common to motor activity at neural and behavioral levels (Parsons et al., 1995; Zacks, 2008), supporting a direct impact of motor impairment on performance on this task for the PD wheelchair group. Further research with a larger participant group is required to determine the motor-spatial association in this context.

For both of the PD groups, performance on the navigation task was lower than the level of 10- to 11-year-old TD children, despite the age range of the PD groups spanning from 5 to 18 years. This level of navigation ability is broadly in line with the level of non-verbal ability of the two PD groups, which was similar to that of the TD 5- to 7-year-old group. The association between motor ability and performance on the VR navigation task showed a medium (albeit non-significant) effect size for the PD wheelchair group. Whilst this could be taken to suggest some impact of their motor impairment on navigation performance, the lack of group difference in navigation performance between the two PD groups suggests that the physical disabilities of the PD groups were not the limiting factor, but rather it was their learning difficulties. At first blush, this appears to contrast with previous reports of impaired navigation in people with physical disabilities (Stanton et al., 2002; Wiedenbauer and Jansen-Osmann, 2006). However, on a closer look, it simply reflects differences in the matching procedures across the studies. Stanton et al. (2002) did not measure IQ (all participants had cognitive performance in the 'normal' range) and matched participants by Chronological Age. Thus, their PD group performed at a lower level on a navigation task than expected for their chronological age, which is largely consistent with the current study. Furthermore, Stanton et al.
(2002) also used a developmentally more sophisticated measure of navigation, which might have differentiated the groups more than the current measure of navigation. Given that a large proportion of their sample had a diagnosis which implicates poor visuospatial cognition (Cerebral Palsy or Spina Bifida), without cognitive data it is difficult to disentangle the extent to which this contributed to their navigation performance. Wiedenbauer and Jansen-Osmann (2006) report data from children with Spina Bifida and TD controls. Their groups were matched on Chronological Age and Verbal IQ, and thus the Spina Bifida group had lower non-verbal IQ than the TD control group. As such, the deficit in navigation that they report is relative to Chronological Age and not their (lower) non-verbal ability; our data are also broadly consistent with this pattern of findings, as we observed a deficit relative to Chronological Age. One might argue that by comparing spatial performance in our sample to their level of non-verbal mental age, we risk matching away any group differences. Whilst this is a risk, it is the most appropriate way to account for the cognitive learning difficulties of our PD samples. Furthermore, the use of developmental trajectories and error analyses in this study has enabled us to capture additional information in relation to development, individual differences and task approach.
The pattern of performance on the navigation task demonstrated that all groups had stronger recall of landmarks at junctions than landmarks on other parts of the path sections. This is in line with our predictions and suggests that all children were using a landmark strategy when learning the route, i.e., they understood that landmarks at junctions were relatively more useful for route learning than other landmarks. This strategy is consistent with the literature on the typical development of route learning (e.g., Farran et al., 2012), and appears to be robust to atypical development as it has been observed in several atypical groups including Williams syndrome and children with Attention Deficit Hyperactivity Disorder (ADHD) (Farran et al., 2019). Consequently, despite having both physical disabilities and learning difficulties, these participants appeared to be able to encode landmarks effectively, and use them as a tool when navigating.
The pattern of performance of the PD groups on the spatial programming task differed from that of the TD group. For both PD groups, the number of routes attempted was in line with that expected for their level of non-verbal mental age and showed a typical rate of development. This was, however, coupled with group differences in the error patterns, which suggests that the PD groups were approaching the task in a different manner to the TD group. Developmentally, at the lowest level of non-verbal ability, the TD group had higher proportion error scores than both PD groups, even though they were more successful in progressing through the routes. There are a number of reasons for this finding. A high proportion of errors could indicate a difficulty in perspective taking. For example, if the Bee-Bot is facing right and it needs to move upwards on the iPad, the participant must determine that this requires a 90° left turn, i.e., they need to view the turn from the perspective of the Bee-Bot and not themselves (a minimal simulation of this turn logic is sketched after this paragraph). Given that perspective taking is a relatively late spatial skill to develop (Frick et al., 2014), this might have impacted the TD group more than the PD groups, who had more years of experience and perhaps more exposure to allocentric representations of space. The relatively late development of perspective taking (Frick et al., 2014) and of processing allocentric representations (Bullens et al., 2010; Broadbent et al., 2014a) could explain why the TD group exhibited a high number of errors at the lowest level of non-verbal ability. This contrasts with the other spatial skills measured in this study, such as mental rotation and route knowledge, which are available from at least five years of age in typical development (e.g., Lingwood et al., 2015). Furthermore, due to the threshold procedure employed, the TD group were exposed to a broader range of routes, and so encountered relatively more of the difficult routes (which necessarily included more changes in perspective) than the PD groups. If perspective taking and/or allocentric coding were more problematic for those with lower non-verbal ability, this would be compounded by exposure to a larger range of routes, as observed in the TD group. The group difference at the trajectory intercept was coupled with a steeper rate of development for the TD group relative to the PD groups, which meant that the TD group caught up with the PD groups as non-verbal ability increased. This difference in the rate of development between the TD and PD groups might reflect differences in the performance limitations of each group. If the TD group are initially failing due to poor perspective taking and/or poor allocentric knowledge, their rate of development might be related to the development of these spatial skills. The PD groups might have an initial advantage in these spatial skills due to their higher chronological age and level of experience with map-like representations. However, other factors might limit their progression, such as juggling the spatial demands with more domain-general demands such as working memory and attention, skills which might be limited in these groups due to their general learning difficulties. This might have led participants to make mistakes such as miscounting the number of paving slabs, losing where they are on the route when planning their algorithm, or forgetting the function of the buttons (e.g., understanding that the turn function programs the Bee-Bot to turn within its own square rather than moving forward one square when it turns).
These kinds of limitations could remain confounding across the whole range of non-verbal abilities, hence the shallower rate of development in these groups. These kinds of limitations might also explain why there was no difference in performance between the two PD groups. These tentative suggestions require further research which takes into account the involvement of working memory and attention processes in this task.
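To make the perspective-taking demand concrete, the sketch below simulates the Bee-Bot command semantics described above: a turn rotates the robot within its own square, so whether moving 'up' on the screen requires a left or a right turn depends on the robot's current heading. This is an illustrative reconstruction of the task mechanics, not code from the study.

# Headings as (dx, dy) grid vectors; 'up' on the screen is (0, 1).
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # up, right, down, left

class BeeBot:
    def __init__(self):
        self.x, self.y, self.h = 0, 0, 1  # start facing right

    def turn_left(self):                  # rotate in place; position unchanged
        self.h = (self.h - 1) % 4

    def turn_right(self):
        self.h = (self.h + 1) % 4

    def forward(self):                    # move one square along the heading
        dx, dy = HEADINGS[self.h]
        self.x, self.y = self.x + dx, self.y + dy

bot = BeeBot()
bot.turn_left()   # facing right, a 90-degree left turn is needed to face up
bot.forward()
print(bot.x, bot.y)                       # -> 0 1: the robot moved 'up'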
Whilst our findings are consistent with the conclusion that physical disability per se does not necessarily have a broad impact on spatial competence, it is difficult to disentangle the bidirectional developmental influence of physical disabilities and learning difficulties when both are present from birth, as in our sample. A large proportion of our sample had a diagnosis of Cerebral Palsy, which is known to present with deficits in visuospatial perception alongside motor difficulties (although note evidence for heterogeneity in visuospatial perception in Cerebral Palsy; Critten et al., 2019). We cannot rule out that any atypicalities observed in the current sample are driven by limitations in visuospatial perception that are associated with Cerebral Palsy. However, all of our PD participants had a lifelong disorder and given the known interacting developmental trajectories of the spatial and motor domains (e.g., Yan et al., 1998;Clearfield, 2004;Jansen and Heil, 2010;Oudgenoeg-Paz et al., 2015;Farran et al., 2019), further research is required to determine any differentiated impact of a diagnosis of Cerebral Palsy, in individuals with PD and a learning disability, on spatial competence. We predict that a lifelong physical disability in any individual could impact the spatial domain.
To summarize, we have shown across three different spatial tasks that children with PD and learning disabilities perform below an age-appropriate level but, for the most part, at the level expected for their non-verbal mental age. Mental rotation was one exception to this finding; it was a skill that was particularly problematic for the children who relied on a wheelchair. We also observed unusual error patterns in both PD groups on the spatial programming task. Whilst it appears that having a physical disability did not always impact the development of spatial cognition over and above any general learning difficulties in our groups, there were indications of some minor, but potentially significant, impacts of having a physical disability on spatial cognition. This highlights the importance of enabling active exploration for individuals with PD, particularly for those who are wheelchair users; evidence supports the importance of learning spatial layouts using free-choice and active exploration, over and above whether children locomote or use a wheelchair (Foreman et al., 1994).
DATA AVAILABILITY STATEMENT
The dataset presented in this study can be found in the following online repository: https://osf.io/75skq/.
ETHICS STATEMENT
This study was reviewed and approved by UCL Institute of Education. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
EF, VC, and DM conceived of the initial study design. VC collected the data from the Physical Disability participants. EC collected the data from Typically Developing participants. EF analyzed the data and wrote the manuscript. All the authors read, contributed to and approved the final manuscript.

FUNDING

EC's contribution was funded by an Economic and Social Research Council (ESRC) 1 + 3 Ph.D. studentship. | 2021-08-04T00:04:57.837Z | 2020-04-22T00:00:00.000 | {
"year": 2021,
"sha1": "469c72b12985bf890881822a11e600cca7827742",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2021.669034/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd53507358821c4bb66a26097d3b1b04d85a6252",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
122548117 | pes2o/s2orc | v3-fos-license | A 19-miRNA Support Vector Machine classifier and a 6-miRNA risk score system designed for ovarian cancer patients
Ovarian cancer (OC) is the most common gynecologic malignancy with high incidence and mortality. The present study aimed to develop approaches for determining the recurrence type and to identify potential miRNA markers for OC prognosis. The miRNA expression profile of OC (the training set, including 390 samples with recurrence information) was downloaded from The Cancer Genome Atlas database. The validation sets GSE25204 and GSE27290 were obtained from the Gene Expression Omnibus database. Prescreening of clinical factors was conducted using the survival package, and the differentially expressed miRNAs (DE-miRNAs) were identified using the limma package. Using the Caret package, the optimal miRNA set was selected to build a Support Vector Machine (SVM) classifier. The miRNAs and clinical factors independently related to prognosis were analyzed using the survival package, and the risk score system was constructed. Finally, the miRNA-target regulatory network was built with Cytoscape software, and enrichment analysis was performed. There were 46 DE-miRNAs between the recurrent and non-recurrent samples. After the optimal 19-miRNA set was selected for constructing the SVM classifier, 6 DE-miRNAs (miR-193b, miR-211, miR-218, miR-505, miR-508 and miR-514) independently related to prognosis were further extracted to build the risk score system. The neoplasm cancer status was independently correlated with the prognosis and was subjected to stratified analysis. Additionally, the target genes in the regulatory network were enriched in the regulation of actin cytoskeleton and the TGF-β signaling pathway. The 6-miRNA signature may serve as a potential biomarker for OC prognosis, particularly for recurrence.
Introduction
Ovarian cancer (OC) is the most lethal gynecologic malignancy, with a 5-year overall survival (OS) of ~47% that has remained almost unchanged over the past 20 years (1). In 2015, 1.2 million women suffered from OC, and the disease led to 161,100 deaths worldwide (2). The symptoms of OC are inconspicuous and non-specific; thus, most cases are diagnosed at a late stage (3). Therefore, early diagnosis and treatment of OC are critical for improving the outcomes of the disease, and prognosis mainly depends on the disease degree, tumor subtypes and medical conditions (2,4). Understanding the underlying mechanisms of OC could facilitate the development of advanced treatment approaches.
MicroRNAs (miRNAs) play important roles in OC pathogenesis and progression. By comparing transcriptome data from different tissues with genome-scale biomolecular networks, miR-124-3p was identified as a potential biomarker for OC (5). miR-27a is considered an oncogene which inhibits forkhead box O1 (FOXO1) in OC (6), while miR-34a serves as a suppressor by downregulating histone deacetylase 1 (HDAC1) (7). miR-409-3p was found to enhance the cisplatin sensitivity of OC cells by inhibiting autophagy controlled by Fip200 (8).
In addition, a growing number of studies have shown that the dysregulation of miRNAs is associated with the prognosis of OC. The miR-200 family members have been identified as prognostic indicators for the disease stage, tumor histology and survival of OC (9). For example, miR-200b-429 may be a promising marker for OC survival, and low expression of miR-200 indicates a poor prognosis and plays a regulatory role in the tumor (10). An upregulated serum miR-221 expression level is correlated with tumor stage and grade of epithelial ovarian cancer (EOC) and serves as an independent factor for poor prognosis in EOC (11). The serum levels of miR-141 and miR-200c can distinguish OC patients from healthy controls, and they may be utilized as markers for predicting the prognosis of OC (12). A high expression level of miR-203 has been reported as a candidate marker that predicts the progression and adverse outcome of patients with EOC (13,14). Serum miR-21 expression was found to be increased in EOC patients, and it may function as a novel marker for the diagnosis and prognosis of EOC (15). The expression of miR-150 is higher in primary serous OC than in omental metastases, and its lower expression is associated with shorter progression-free survival in metastatic tissues (16). Nevertheless, the miRNAs related to the recurrence of OC have not been fully revealed.
Thus, exploring the correlation between miRNAs and the development and recurrence of OC is critical for improving the curative effects and prognosis of OC patients. Based on the miRNA expression profile of OC in the public database, the miRNAs correlated with the recurrence of OC were screened, and then a classifier was constructed to recognize the recurrence of OC. Combined with the prognostic information of the samples, a risk score system was constructed based on the expression levels of significant miRNAs. The present study may provide a theoretical basis for the prognostic prediction and targeted therapy of OC, particularly recurrent OC.
Materials and methods
Data source and prescreening of clinical factors. The miRNA expression profile of OC (the training set) was downloaded from The Cancer Genome Atlas (TCGA, https://portal.gdc.cancer.gov/) database (September 10, 2018), based on the Illumina HiSeq 2000 RNA Sequencing platform. The level-3 data in the '0a07b199-d93d-4202-a63a-b38e39dc5ca4.mirbase21.mirnas.quantification.txt' file were downloaded and used. Then we used the encoding information to obtain the sample information. There were 415 OC samples with available clinical information in the training set, of which 390 had information regarding recurrence: 170 were non-recurrent OC samples and 220 were recurrent. The human reference genome hg38/GRCh38 was used to annotate the expression information.
Meanwhile, other relevant datasets were searched in the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) database with the keywords 'ovarian cancer' and 'Homo sapiens'. The inclusion criteria were: i) the samples in the dataset had recurrence information; ii) the samples had prognostic information; iii) the total number of samples was not less than 50; and iv) the dataset was an miRNA expression profile. Based on these criteria, only two datasets, GSE25204 (17) and GSE27290 (18), were selected and used as validation datasets. The GSE25204 dataset was based on the Illumina Human v2 MicroRNA expression beadchip platform and included 85 OC samples with recurrence information (validation set 1). The GSE27290 dataset was based on the Agilent-015508 Human miRNA Microarray platform (pre-commercial version 6.0) and contained 58 OC samples with recurrence information (validation set 2).
The clinical information of all the samples in the training set was statistically analyzed. In order to determine the basis for grouping, the univariate and multivariate Cox regression analysis in the R survival package (19) (version 2.41-1, http://bioconductor.org/packages/survivalr/) was used to screen the clinical factors significantly associated with prognosis. P<0.05 was set as the significance threshold.
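The original screening used the R survival package; an equivalent univariate-plus-multivariate Cox screen can be sketched in Python with the lifelines library as below. The file name and column layout (time, event, plus one numerically encoded column per clinical factor) are illustrative assumptions.

import pandas as pd
from lifelines import CoxPHFitter

clin = pd.read_csv("clinical.csv")       # hypothetical clinical table
factors = [c for c in clin.columns if c not in ("time", "event")]

# Univariate screen: fit one Cox model per clinical factor.
kept = []
for f in factors:
    cph = CoxPHFitter().fit(clin[["time", "event", f]],
                            duration_col="time", event_col="event")
    if cph.summary.loc[f, "p"] < 0.05:
        kept.append(f)

# Multivariate model over the factors passing the univariate screen.
mv = CoxPHFitter().fit(clin[["time", "event"] + kept],
                       duration_col="time", event_col="event")
mv.print_summary()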
Data standardization and differential expression analysis.
The miRNA expression profile matrices of the three datasets were stacked, and the matrix for each dataset was scaled according to the expression level. Each sample vector $x$ was scaled to unit length, $x \mapsto x / \|x\|_2$, where $\|x\|_2$ is the 2-norm ($l_2$ norm) of the vector.
Using the sqrt[sum(data^2)] expression in R (20), the square root of the eigenvalues of the matrix $B = AA^T$ was obtained. The purpose of this normalization was to scale the sample values to 1. Using median scaling, the expression level of each miRNA was centered and normalized according to the median and the median absolute deviation (MAD). Specifically, for a vector $x = (x_1, \ldots, x_n)$, the median scale normalization was defined as $\tilde{x}_i = (x_i - \operatorname{median}(x)) / \operatorname{MAD}(x)$. For the training set, the samples were grouped according to the recurrence condition of the samples. Using the R package limma (21) (version 3.34.7, https://bioconductor.org/packages/release/bioc/html/limma.html), the differentially expressed miRNAs (DE-miRNAs) between the recurrent and non-recurrent OC samples were selected. A false discovery rate (FDR) < 0.05 and |log₂ fold change (FC)| > 0.263 were set as the thresholds for significant differences. According to the expression levels of the DE-miRNAs in the training datasets, bidirectional hierarchical clustering of these DE-miRNAs was performed based on the centered Pearson correlation algorithm, using the pheatmap package (22) (version 1.0.8, https://cran.r-project.org/web/packages/pheatmap/index.html) in R.
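A minimal NumPy sketch of the two normalization steps just described, unit ($l_2$) scaling of each sample followed by median/MAD scaling of each miRNA, is given below; the matrix layout (miRNAs in rows, samples in columns) and the random placeholder data are assumptions for illustration.

import numpy as np

expr = np.random.rand(46, 100)           # placeholder: miRNAs x samples

# Step 1: scale every sample (column) to unit l2 norm.
expr_unit = expr / np.linalg.norm(expr, axis=0, keepdims=True)

# Step 2: center and scale every miRNA (row) by its median and MAD.
med = np.median(expr_unit, axis=1, keepdims=True)
mad = np.median(np.abs(expr_unit - med), axis=1, keepdims=True)
expr_norm = (expr_unit - med) / mad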
Construction of Support Vector Machine (SVM) classifier.
In the training set, the Cox regression analysis in the R survival package (19) was used to select the DE-miRNAs significantly related to prognosis, with the threshold of log-rank P<0.05. The DE-miRNAs significantly related to recurrent prognosis were further selected to perform the follow-up analyses.
Recursive feature elimination (RFE) is an integrated machine learning method, which treats the selection of a feature subset as an optimization problem (23). Using the RFE algorithm in the R package Caret (24) (version 6.0-76, https://cran.r-project.org/web/packages/caret), the optimal miRNA set was filtered from the training dataset. In the 100-fold cross validation, the miRNA set with the highest accuracy was selected as the signature set.
SVM is a supervised machine learning classification algorithm, which discriminates sample types by estimating the probability that a sample belongs to a certain category (25). For the training set, the SVM classifier was constructed based on the optimal miRNA set using the SVM method (kernel: sigmoid; 100-fold cross validation) in the R package e1071 (26) (version 1.6-8, https://cran.r-project.org/web/packages/e1071).
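The RFE-plus-sigmoid-SVM pipeline has a close analogue in scikit-learn, sketched below. Two substitutions are worth flagging: a linear SVM supplies the feature rankings for RFE (a sigmoid-kernel SVM exposes no per-feature weights), and the fold count and placeholder data are illustrative, not the study's settings.

import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(390, 24))           # placeholder: samples x candidate miRNAs
y = rng.integers(0, 2, size=390)         # 1 = recurrent, 0 = non-recurrent

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# RFE with cross-validation keeps the subset size maximizing accuracy
# (19 miRNAs in the original study).
selector = RFECV(LinearSVC(dual=False, max_iter=10000), step=1,
                 cv=cv, scoring="accuracy").fit(X, y)
X_opt = selector.transform(X)

# Sigmoid-kernel SVM classifier on the selected miRNA set.
acc = cross_val_score(SVC(kernel="sigmoid"), X_opt, y, cv=cv,
                      scoring="accuracy")
print(selector.n_features_, acc.mean())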
The performance of the SVM classifier was separately evaluated in the training set and the validation sets using 4 evaluation indicators [concordance index, C-index; Brier score; log-rank P-value of Cox proportional hazard (PH) regression; and area under the receiver operating characteristic (ROC) curve, AUC]. The C-index and Brier score were calculated using the R package survcomp (27) (version 1.30.0, http://www.bioconductor.org/packages/release/bioc/html/survcomp.html). Using the R package survival (19), the Kaplan-Meier (KM) curves for the two groups classified by the SVM classifier were generated, and the log-rank P-value of the difference between the two groups was calculated. Furthermore, the indicators of the ROC curves (sensitivity, Sen; specificity, Spe; positive predictive value, PPV; negative predictive value, NPV) were calculated using the R package pROC (28) (version 1.12.1, https://cran.r-project.org/web/packages/pROC/index.html).
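The four evaluation indicators can be reproduced in Python as sketched below, using lifelines for the concordance index and log-rank test and scikit-learn for the Brier score and ROC-derived indicators; all input arrays here are random placeholders.

import numpy as np
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index
from sklearn.metrics import brier_score_loss, confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)
n = 100
time = rng.exponential(24, n); event = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)           # actual recurrence labels
p_rec = rng.uniform(0, 1, n)             # predicted recurrence probability
y_pred = (p_rec > 0.5).astype(int)

# Higher predicted risk should mean shorter survival, hence the minus sign.
print("C-index:", concordance_index(time, -p_rec, event))
print("Brier:", brier_score_loss(y_true, p_rec))
print("AUC:", roc_auc_score(y_true, p_rec))

# Log-rank P-value between the two predicted classes.
m = y_pred == 1
print("log-rank p:", logrank_test(time[m], time[~m], event[m], event[~m]).p_value)

# Sen, Spe, PPV and NPV from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp/(tp+fn), tn/(tn+fp), tp/(tp+fp), tn/(tn+fn))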
Construction of the risk score system. Based on the multivariate Cox regression analysis in the R survival package (19), the prognosis-associated miRNAs were further analyzed to identify the DE-miRNAs independently related to prognosis. The log-rank P<0.05 was set as the threshold.
Based on the regression coefficients of the independent prognostic miRNAs, the risk score system was constructed, and the risk score of each sample was obtained according to the following formula: $\text{Risk score} = \sum \text{Coef}_{\text{DE-miRNA}} \times \text{Exp}_{\text{DE-miRNA}}$, where $\text{Coef}_{\text{DE-miRNA}}$ represents the regression coefficient and $\text{Exp}_{\text{DE-miRNA}}$ indicates the expression level of the corresponding miRNA.
For the training set, the samples were divided into high-and low-risk groups with the median of risk scores as the cut-off point. Using the KM curve analysis in the R survival package (19), the correlation between the risk score system and prognosis was evaluated. Meanwhile, the risk score system was confirmed in the validation sets.
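The risk-score construction, median split and KM comparison could be sketched as follows; the coefficients, expression matrix and survival data below are random placeholders standing in for the fitted multivariate Cox output.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
mirnas = ["miR-193b", "miR-211", "miR-218", "miR-505", "miR-508", "miR-514"]
coef = dict(zip(mirnas, rng.normal(size=6)))   # placeholder Cox coefficients
expr = rng.normal(size=(390, 6))               # placeholder expression levels
time = rng.exponential(24, 390)
event = rng.integers(0, 2, 390)

# Risk score = sum over miRNAs of coefficient x expression level.
risk = expr @ np.array([coef[m] for m in mirnas])
high = risk > np.median(risk)                  # median split

kmf = KaplanMeierFitter()
kmf.fit(time[high], event[high], label="high risk").plot_survival_function()
kmf.fit(time[~high], event[~high], label="low risk").plot_survival_function()
print(logrank_test(time[high], time[~high], event[high], event[~high]).p_value)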
Stratified analysis of clinical factors.
Using the univariate and multivariate Cox regression analysis in the R survival package (19), the clinical factors independently correlated with prognosis in the training set were screened out. Combined with the high-and low-risk samples determined by the risk score system, stratified analysis was further carried out.
miRNA-target regulatory network analysis and enrichment analysis. The risk scores of the mRNA-sequencing samples matched with the miRNA-sequencing samples were calculated using the risk score system. Based on the risk scores, the samples in the training set were divided into high- and low-risk groups. Using the R package limma (21), the differentially expressed genes (DEGs) between the two groups were selected, with the thresholds of FDR < 0.05 and |log₂FC| > 0.263. Based on the starBase database (29) (version 3.0, http://starbase.sysu.edu.cn/), the miRNA-mRNA regulatory interactions present in at least one of the five databases (TargetScan, PicTar, RNA22, PITA and miRanda) were selected. Then, the correlations between the expression levels of the miRNAs and their target DEGs in the matched samples were calculated, and the interactions with significant negative correlations were selected. Subsequently, the miRNA-target regulatory network was visualized using the Cytoscape software (30) (version 3.6.1, http://www.cytoscape.org/). Using the Database for Annotation, Visualization and Integrated Discovery (DAVID) tool (31) (version 6.8, https://david.ncifcrf.gov/), the functional and pathway enrichment analyses were carried out, with P < 0.05 as the screening criterion.
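The negative-correlation filter for candidate miRNA-target pairs can be sketched as below; Pearson correlation is assumed (the text specifies only 'significant negative correlations'), and the pair list and expression vectors are placeholders.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
pairs = [("miR-193b", "GENE1"), ("miR-211", "GENE2")]   # hypothetical pairs
mirna_expr = {m: rng.normal(size=50) for m, _ in pairs}
gene_expr = {g: rng.normal(size=50) for _, g in pairs}

kept = []
for m, g in pairs:
    r, p = pearsonr(mirna_expr[m], gene_expr[g])
    if r < 0 and p < 0.05:      # retain significantly negative correlations
        kept.append((m, g, r))
print(kept)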
Results
Prescreening of clinical factors and differential expression analysis. The clinical information of the 415 OC samples in the training set was subjected to statistical analysis, and then the clinical factors significantly associated with prognosis were screened. The age, tumor recurrence, and neoplasm cancer status were found to be the clinical factors significantly related to prognosis (Table I and Fig. 1). To identify the recurrence prognosis-associated miRNAs, the samples in this study were grouped based on the recurrence information.

Table I. Clinical information of all the tumor samples in the training set and the prescreening (univariable and multivariable Cox) of the clinical factors significantly associated with prognosis.
For the training set, a total of 46 DE-miRNAs (18 upregulated and 28 downregulated) were identified between the recurrent and non-recurrent OC samples (Fig. 2A). The clustering heatmap was drawn based on the expression levels of the DE-miRNAs, which indicated that the samples were clearly divided into two types (Fig. 2B).
Construction of the SVM classifier. Using the Cox regression analysis, 24 prognosis-associated miRNAs were selected in the training set. Using the RFE algorithm, the optimal miRNA set involving 19 miRNAs (including miR-135b, miR-139, miR-151, miR-187, miR-193b, miR-210, miR-211, miR-218, miR-219, miR-30b, miR-30d, miR-365, miR-505, miR-506, miR-508, miR-509, miR-513c, miR-514 and miR-760) was selected (Fig. 3). Based on the optimal 19-miRNA set, the SVM classifier was constructed. Then, the performance of the SVM classifier in the training set and the validation sets was assessed using the 4 evaluation indicators described above. The results showed that the C-index values were >0.80 and the Brier score values were <0.1 in both the training and validation sets (Table II). As shown in the confusion table diagrams depicting the sample classification based on the SVM classifier, the 19-miRNA set could distinguish well the recurrent samples from the non-recurrent ones (Fig. 4). The ROC curves showed that the AUC values of the training set and the validation sets were >0.9 (Fig. 4 and Table II). The KM curves suggested that the predictive results of the SVM classifier were significantly related to prognosis (P<0.05; Fig. 4). These results indicated that the 19-miRNA-based classifier could accurately determine the recurrence type of the OC samples.

Figure 3. Accuracy curve for screening the optimal miRNA set. The horizontal axis represents the number of miRNA variables, and the vertical axis represents the cross-validation accuracy. The marked content is the number of miRNAs corresponding to the optimal miRNA set.

Table II. Evaluation indicators for the Support Vector Machine (SVM) classifier in the training set and the validation sets.

Construction of risk score system. Combining the optimal 19-miRNA set with the recurrence prognosis information of the samples, 6 independent prognosis-related DE-miRNAs (miR-193b, miR-211, miR-218, miR-505, miR-508 and miR-514) were identified (Table III). Combined with the regression coefficients of the 6 independent prognostic miRNAs, the risk score system for OC was constructed; the risk score of each sample was calculated from these coefficients and the corresponding expression levels according to the formula given in Materials and methods. With the median of the risk scores as the cut-off point, the samples were classified into high- and low-risk groups. For the training and the validation sets, the KM curves showed that the high- and low-risk groups determined by the risk score system were significantly associated with the actual recurrence prognosis information (Fig. 5).
Stratified analysis of the clinical factors.
In the training set, although the age, tumor recurrence, and neoplasm cancer status were all identified as prognosis-associated clinical factors, only the neoplasm cancer status was considered an independent prognostic factor related to recurrence, based on the multivariate Cox regression analysis (Table IV and Fig. 6A).
To analyze the correlation between the neoplasm cancer status and recurrence prognosis separately in the high- and low-risk groups, stratified analysis was further performed for the neoplasm cancer status (Fig. 6B and C).
miRNA-target regulatory network analysis and enrichment analysis. In total, we identified 615 DEGs (400 upregulated and 215 downregulated) between the high- and low-risk groups. Based on the starBase database, the target genes were predicted for the 6 independent prognostic miRNAs. The overlapping genes between the target genes and the DEGs were obtained after comparison, and 601 miRNA-mRNA regulatory interactions were selected. Then, 218 interactions with significant negative correlations were retained for constructing the miRNA-target regulatory network (involving miR-193b, miR-211, miR-505, miR-508 and miR-514) (Fig. 7). In addition, the target genes in the regulatory network were enriched in 25 functional terms (such as blood vessel development and vasculature development) and 6 pathways (such as regulation of actin cytoskeleton and the TGF-β signaling pathway) (Table V).
Discussion
In the present study, we identified 46 DE-miRNAs between the recurrent and non-recurrent ovarian cancer (OC) samples. Nineteen prognosis-associated miRNAs were used to construct an SVM classifier, among which 6 were deregulated and independently related to prognosis. A risk score system based on the 6 miRNAs had a high accuracy for risk prediction in both the training and validation sets. The neoplasm cancer status was a clinical factor independently correlated with recurrence. miR-193b serves as a tumor suppressor in many cancer types, and its role in OC has recently been investigated. The epigenetic silencing of miR-193a-3p could promote OC progression by targeting the growth factor receptor-bound protein-7 (GRB7) (32). miR-193b-3p has an antitumor effect in OC cells by inhibiting p21-activated kinase 3 (33). Downregulation of miR-193b could induce OC metastasis (34). These results indicate that miR-193b may be a tumor suppressor in OC. Moreover, low expression of miR-193b is associated with a poor prognosis of OC patients (35). In the present study, miR-193b was one of the 6 miRNA signatures that could predict recurrence of OC, suggesting that its expression might also be linked to recurrence. Currently, only a few studies have reported correlations between miR-218 and OC. It was reported that miR-218 prevents proliferation and invasion in OC by downregulating its target gene, runt-related transcription factor 2 (RUNX2) (36). In colon adenocarcinoma, the long noncoding RNA MNX1-AS1 could promote progression; it acts as a competing endogenous RNA (ceRNA) of miR-218-5p and upregulates SEC61A1, the downstream target gene of miR-218-5p (37). MNX1-AS1 was also found to facilitate the progression of OC (37). However, it is unclear whether MNX1-AS1 also has this competing relationship with miR-218-5p in OC. In the present study, miR-218 was another important miRNA identified as related to the recurrence of OC, indicating it may be a novel predictive factor for OC recurrence.

Table IV. Cox regression analysis for screening the clinical factor independently correlated with prognosis in the training set.
miR-30a, miR-30e and miR-505 exhibit significantly lower expression in ovarian clear cell carcinoma (OCC) compared with that in elderly advanced ovarian papillary serous carcinoma (OPSC) patients, and the activating transcription factor 3 (ATF3) is the primary gene co-targeted by them (40). Overexpression of the tumor suppressors miR-130b-3p, miR-509-3p, miR-509-5p, miR-508-3p and miR-508-5p has been associated with improved survival of OC patients, and these miRNAs may alter the physical properties of OC cells via regulation of the actin cytoskeleton (41). Moreover, by downregulating gene expression levels in the MAPK1/ERK signaling pathway, miR-508 acts as an inhibitor of cell proliferation, migration and invasion in OC cells (42). Reduced miR-514 is correlated with adverse prognosis of OC patients, and miR-514 can inhibit cell proliferation and lower cisplatin chemosensitivity in OC by regulating the ATP binding cassette subfamily (43). These findings indicate that the three miRNAs, miR-505, miR-508 and miR-514, may function as suppressors in OC development and their low expression could be associated with poor prognosis. Our results demonstrated that miR-505, miR-508 and miR-514 were three miRNA signatures in recurrent OC, suggesting that they may be predictive indicators for OC recurrence.
The target genes in the miRNA-target regulatory network (involving miR-193b, miR-211, miR-505, miR-508 and miR-514) were significantly enriched in the regulation of the actin cytoskeleton and the TGF-β signaling pathway. The TGF-β signaling pathway functions in various cellular processes correlated with tumorigenesis, and genetic variants in the pathway are related to OC risk and may help to identify high-risk individuals (44). TGF-β signaling can be suppressed by the accumulation of epigenetic modifications, which contributes to the oncogenesis of OC (45). The dynamic remodeling of the actin cytoskeleton is important for multiple cellular activities, and dysfunction of cytoskeletal proteins can lead to many diseases in humans (46). Therefore, miR-193b, miR-211, miR-505, miR-508 and miR-514 may influence the prognosis of OC via the regulation of the actin cytoskeleton and the TGF-β signaling pathway.
Although we performed comprehensive bioinformatic analyses using the miRNA expression profile of OC and confirmed the classification accuracy with the validation datasets, several limitations remain. First, the sample size with available recurrence information was small. Second, this study lacks experiments to validate the expression of these predictive miRNAs and their interacting target genes. Third, the accuracy of the SVM classifier and the clinical value of the 6-miRNA risk score system should be further tested in OC patients. Therefore, further experiments should be prepared and conducted to support our findings.
In conclusion, the SVM classifier may be accurate in determining the recurrence status of OC patients. Moreover, the 6-miRNA risk score system may be effective in predicting the outcome of OC patients. Furthermore, miR-193b, miR-211, miR-505, miR-508 and miR-514 may affect the prognosis of OC via regulation of the actin cytoskeleton and the TGF-β signaling pathway. | 2019-04-20T13:03:26.793Z | 2019-04-10T00:00:00.000 | {
"year": 2019,
"sha1": "59f941bfdd5ee954d08f9af793063a5f715ee914",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/or.2019.7108/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "10e7eccfa7406168cb99c9e01e5aba34ef342693",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
54213268 | pes2o/s2orc | v3-fos-license | Two-dimensional water waves in the presence of a freely floating body: conditions for the absence of trapped modes
The coupled motion is investigated for a mechanical system consisting of water and a body freely floating in it. Water occupies either a half-space or a layer of constant depth into which an infinitely long surface-piercing cylinder is immersed, thus allowing us to study two-dimensional modes. Under the assumption that the motion is of small amplitude near equilibrium, a linear setting is applicable and for the time-harmonic oscillations it reduces to a spectral problem with the frequency of oscillations as the spectral parameter. It is essential that one of the problem's relations is linear with respect to the parameter, whereas two others are quadratic with respect to it. Within this framework, it is shown that the total energy of the water motion is finite and the equipartition of energy holds for the whole system. On this basis, it is proved that no wave modes can be trapped provided their frequencies exceed a bound depending on cylinder's properties, whereas its geometry is subject to some restrictions and, in some cases, certain restrictions are imposed on the type of mode.
Introduction
This paper continues the rigorous study (initiated in [4]) of the coupled time-harmonic motion of the mechanical system which consists of water and a rigid body freely floating in it. The former is bounded from above by a free surface, whereas the latter is assumed to be an infinitely long cylinder, which allows us to investigate two-dimensional modes orthogonal to its generators. The body is surface-piercing and no external forces act on it (for example, due to constraints on its motion). The water domain is either infinitely deep or has a constant finite depth; the surface tension is neglected on the free surface of water, whose motion is irrotational. The motion of the whole system is supposed to be of small amplitude near equilibrium, which allows us to use a linear model.
In the framework of the linear theory of water waves, the time-dependent problem describing the coupled motion of water and a freely floating surface-piercing rigid body was developed by [1]. However, his formulation was rather cumbersome, and so during the second half of the 20th century the main efforts were devoted to various problems involving fixed bodies instead of freely floating ones (see the summarising monograph by [5]). The cornerstone was laid by [2] himself who proved the first result guaranteeing the absence of trapped modes at all frequencies provided an immersed obstacle has a fixed position and is subject to a geometric restriction now usually referred to as John's condition. In the two-dimensional case, it includes the following two requirements: (i) there is only one surface-piercing cylinder in the set of cylinders forming the obstacle; (ii) the whole obstacle is confined within the strip between two vertical lines through the points, where the surface-piercing contour intersects the free surface of water, the part of bottom (when the depth is finite) is horizontal outside of this strip. [11] demonstrated that if condition (i) holds, then condition (ii) can be replaced by a weaker one. Namely, if the depth is infinite, then the whole obstacle must be confined to the angular domain between the lines inclined at π/4 to the vertical and going through the two points, where the surface-piercing contour intersects the free surface. If the depth is finite, then it is required that the whole obstacle is confined to a smaller angular domain between the lines going through the same two points, but inclined at a certain angle to the vertical that is a little bit less than π/4. The results of [11] and [2] are illustrated in [5]; see pp. 125, 126 and 137, respectively.
In [3], another geometric condition alternative to (ii) was found which together with (i) guarantees the absence of trapped modes at all frequencies for fixed bodies. This condition does not impose any restriction on the angle between the surfacepiercing contour and the free surface (arbitrarily small angles are admissible), but this is achieved at the expense that the wetted contour is subject to a certain pointwise restriction (it must be transversal to curves (20) in a certain definite fashion).
On the other hand, condition (i) is essential for the absence of trapped modes. This became clear when [8] constructed an example of such a mode for which purpose she applied the so-called semi-inverse method (see, for example, [7] for its brief description). Her example involves two fixed surface-piercing cylinders each of which satisfies the modified condition (ii) of [11], but they are separated by a nonzero spacing. Another example of a mode trapped by two fixed surface-piercing cylinders was found by [10]. Subsequently, [4] proved that the latter cylinders can be considered as two immersed parts of a single body which freely floats in trapped waves, but remains motionless.
During the past decade, the problem of the coupled time-harmonic motion of water and a freely floating rigid body has attracted much attention. Along with the just mentioned paper [4], rigorous results were obtained in [7], where a brief review of related papers is given. However, the substantial part of work concerns the study of trapped modes and the corresponding trapping bodies and only the paper [6] has been focussed on conditions eliminating trapped modes in the case when a surface-piercing or totally submerged body is present (for a surface-piercing body the original proof of [2] was essentially simplified). In the present paper, our aim is to fill in this gap at least partially.
In the present note, we find conditions on the frequency so that they guarantee that no modes (or some specific modes) are trapped by a freely floating body provided its geometry satisfies the assumptions used in [11] and [3] for establishing the absence of modes trapped by the same body being fixed.
Statement of the problem
Let the Cartesian coordinate system $(x, y)$ in a plane orthogonal to the generators of a freely floating infinitely long cylinder be chosen so that the $y$-axis is directed upwards, whereas the mean free surface of water intersects this plane along the $x$-axis, and so the cross-section $W$ of the water domain is a subset of $\mathbb{R}^2_- = \{x \in \mathbb{R},\ y < 0\}$. Let $B$ denote the bounded two-dimensional domain whose closure is the cross-section of a floating cylinder in its equilibrium position. Let both the immersed part $\hat{B} = B \cap \mathbb{R}^2_-$ and the above-water part $B \setminus \mathbb{R}^2_-$ be nonempty domains, and let $D = B \cap \partial\mathbb{R}^2_-$ be a nonempty interval of the $x$-axis, say $\{x \in (-a, a),\ y = 0\}$ (see figure 1). We suppose that $W$ is either $\mathbb{R}^2_- \setminus \hat{B}$ when water has infinite depth (see figure 1) or $\{x \in \mathbb{R},\ -h < y < 0\} \setminus \hat{B}$, where $h > b_0 = \sup_{(x,y) \in \hat{B}} |y|$, when water has constant finite depth. We suppose that $W$ is a Lipschitz domain, and so the unit normal $n$ pointing to the exterior of $W$ is defined almost everywhere on $\partial W$. Finally, by $S = \partial\hat{B} \cap \mathbb{R}^2_-$ and $F = \partial\mathbb{R}^2_- \setminus D$ we denote the wetted contour and the free surface at rest, respectively; if water has finite depth, then $H = \{x \in \mathbb{R},\ y = -h\}$ is the bottom's cross-section.
For describing the small-amplitude coupled motion of the system, it is standard to apply the linear setting, in which case the following first-order unknowns are used: the velocity potential $\Phi(x, y; t)$ and the vector-column $q(t)$ describing the motion of the body, whose three components are as follows: • $q_1$ and $q_2$ are the displacements of the centre of mass in the horizontal and vertical directions, respectively, from its rest position $(x^{(0)}, y^{(0)})$; • $q_3$ is the angle of rotation about the axis that goes through the centre of mass orthogonally to the $(x, y)$-plane (the angle is measured from the $x$- to the $y$-axis).
We omit the relations governing the time-dependent behaviour (see details in [4]) and turn directly to the time-harmonic oscillations of the system, for which purpose we use the ansatz $\Phi(x, y; t) = \operatorname{Re}\{e^{-i\omega t}\varphi(x, y)\}$, $q(t) = \operatorname{Re}\{e^{-i\omega t}\chi\}$, where $\omega > 0$ is the radian frequency, $\varphi \in H^1_{\mathrm{loc}}(W)$ is a complex-valued function and $\chi \in \mathbb{C}^3$. To be specific, we first assume that $W$ is infinitely deep, in which case the problem for $(\varphi, \chi)$ consists of relations (2)-(7). Here $\nabla = (\partial_x, \partial_y)$ is the spatial gradient, and $g > 0$ is the acceleration due to gravity acting in the direction opposite to the $y$-axis; $N = (N_1, N_2, N_3)^T$ (the operation $T$ transforms a vector-row into a vector-column and vice versa), where $(N_1, N_2)^T = n$, $N_3 = (x - x^{(0)},\ y - y^{(0)}) \times n$, and $\times$ stands for the vector product. In the equations of the body's motion (7), the $3 \times 3$ matrices $E$ and $K$ are given by (8). The positive elements of the mass/inertia matrix $E$ are expressed through $\rho(x, y) \geq 0$, the density distribution within the body, and $\rho_0 > 0$, the constant density of water. In the right-hand side of relation (7), we have forces and their moments: the first term is due to the hydrodynamic pressure, whereas the second one is related to the buoyancy (see, for example, [1]); the nonzero elements of the matrix $K$ are defined accordingly, and the matrix $K$ is symmetric. In relations (3), (4) and (7), $\omega$ is a spectral parameter which is sought together with the eigenvector $(\varphi, \chi)$. Since $W$ is a Lipschitz domain and $\varphi \in H^1_{\mathrm{loc}}(W)$, relations (2)-(4) are, as usual, understood in the sense of the integral identity (9), which must hold for an arbitrary smooth $\psi$ having a compact support in $W$. Finally, relations (5) and (6) specify the behaviour of $\varphi$ at infinity. The first of these means that the velocity field decays with depth, whereas the second one yields that the potential given by formula (1) describes outgoing waves. This radiation condition is the same as in the water-wave problem for a fixed obstacle (see, for example, [2]). The relations listed above must be augmented by the following conditions concerning the equilibrium position: • the mass of the displaced liquid is equal to that of the body: $I_M = \int_{\hat{B}} dx\,dy$ (Archimedes' law); • the centre of buoyancy lies on the same vertical line as the centre of mass: $\int_{\hat{B}} (x - x^{(0)})\, dx\,dy = 0$; • the matrix $K$ is positive semi-definite; moreover, the $2 \times 2$ matrix $K'$ that stands in the lower right corner of $K$ is positive definite (see [1]).
The last of these requirements yields the stability of the body's equilibrium position, which follows from the results formulated, for example, by [1], § 2.4. The stability is understood in the classical sense that an instantaneous, infinitesimal disturbance causes the position changes which remain infinitesimal, except for purely horizontal drift, for all subsequent times.
In conclusion of this section, we note that relations (5) and (6) must be amended in the case when $W$ has finite depth. Namely, the no-flow condition replaces (5), whereas $\nu$ must be changed to $k_0$ in (6), where $k_0$ is the unique positive root of $k_0 \tanh(k_0 h) = \nu$ and $\nu = \omega^2/g$.
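For a numerical illustration of the finite-depth relation: since $k \tanh(kh)$ increases monotonically from zero and behaves like $k$ for large $k$, the root $k_0$ is found by simple bracketing. The depth and frequency below are arbitrary placeholders, and this computation is not part of the paper.

from math import tanh
from scipy.optimize import brentq

g, h, omega = 9.81, 10.0, 1.0           # illustrative depth and frequency
nu = omega**2 / g

# One sign change of k*tanh(k*h) - nu between 0+ and nu + 1 brackets the root.
k0 = brentq(lambda k: k * tanh(k * h) - nu, 1e-9, nu + 1.0)
print(k0)                                # deep-water limit would give k0 -> nu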
3 Equipartition of energy, trapped modes and conditions guaranteeing their absence
Equipartition of energy
It is known (see, for example, [5, §2.2.1]) that a potential satisfying relations (2), (3), (5) and (6) has an asymptotic representation at infinity of the same type as Green's function; for infinitely deep $W$, this representation is (11), and the equality (12) holds. Assuming that $(\varphi, \chi)$ is a solution of problem (2)-(7), we rearrange the last formula using the coupling conditions (4) and (7). First, transposing the complex conjugate of equation (7) and combining the result with condition (4), we find that the inner product of both sides with $\chi$ can be written in the form (13). Second, substituting this equality into (12), we obtain (14). In the same way as in [7], this yields the following assertion about the kinetic and potential energy of the water motion.
Proposition 1. Let $(\varphi, \chi)$ be a solution of problem (2)-(7); then both quantities in (15) are finite. Moreover, the equality (16) holds. Here the kinetic energy of the water/body system stands in the left-hand side, whereas the potential energy of this coupled motion stands in the right-hand side. Thus the last formula generalises the energy equipartition equality valid when a fixed body is immersed in water. Indeed, $\chi = 0$ for such a body, and (16) turns into the well-known equality (see, for example, formula (4.99) in [5]).

Proposition 1 shows that if $(\varphi, \chi)$ is a solution of problem (2)-(7) with complex-valued components, then its real and imaginary parts separately satisfy this problem. This allows us to consider $(\varphi, \chi)$ as an element of the real product space $H^1(W) \times \mathbb{R}^3$ in what follows (the sum of the two quantities (15) defines an equivalent norm in $H^1(W)$).

Definition 1. Let the subsidiary conditions concerning the equilibrium position (see §2) hold for the freely floating body $B$. A non-trivial real solution $(\varphi, \chi) \in H^1(W) \times \mathbb{R}^3$ of problem (9) and (7) is called a mode trapped by this body, whereas the corresponding value of $\omega$ is referred to as a trapping frequency.
In order to determine when $(\varphi, \chi) \in H^1(W) \times \mathbb{R}^3$ is not trapped by $B$, we write (16) in the rearranged form (17). It is clear that its left-hand side is non-negative provided $\omega^2$ is sufficiently large, and so we arrive at the following.
Proposition 2. Let $E$ and $K$ be given by (8) and let $\omega^2$ be greater than or equal to the largest $\lambda$ satisfying $\det(\lambda E - gK) = 0$. If the domain $W$ is such that inequality (18) holds for every non-trivial $\varphi \in H^1(W)$, then $\omega$ is not a trapping frequency.
Note that if $W$ has finite depth, then $\nu$ must be changed to $k_0$ in relation (11), where the behaviour of the remainder must also be replaced accordingly, and in relations (12) and (14) $\nu$ must also be changed to $k_0$. On the other hand, formula (13) remains valid in the same form as above, and so Proposition 2 is true in this case as well.
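The frequency bound in Proposition 2 is straightforward to compute: the largest $\lambda$ with $\det(\lambda E - gK) = 0$ is the largest eigenvalue of the generalized symmetric problem $gK v = \lambda E v$. The matrices below are placeholders (the true $E$ and $K$ depend on the body's mass distribution and waterline geometry), so the sketch is purely illustrative.

import numpy as np
from scipy.linalg import eigh

g = 9.81
E = np.diag([2.0, 2.0, 0.5])             # placeholder mass/inertia matrix
K = np.array([[0.0, 0.0, 0.0],           # placeholder buoyancy matrix:
              [0.0, 1.5, 0.2],           # positive semi-definite, with a
              [0.0, 0.2, 0.8]])          # positive definite lower-right block

# det(lambda*E - g*K) = 0  <=>  g*K v = lambda*E v.
lams = eigh(g * K, E, eigvals_only=True)
print(np.sqrt(lams.max()))               # smallest guaranteed non-trapping omega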
Examples of water domains for which inequality (18) holds
We begin with the case when $W$ has infinite depth. By $\ell_d$ and $\ell_{-d}$ we denote the rays emanating at the angle $\pi/4$ to the vertical from the points $(d, 0)$ and $(-d, 0)$ and going to the right and to the left, respectively. Let the whole rays $\ell_d$ and $\ell_{-d}$ belong to $W$ for all $d > a$. Thus, $\hat{B}$ is confined within the angular domain between the lines inclined at $\pi/4$ to the vertical and going through the points $(a, 0)$ and $(-a, 0)$ to the right and left, respectively. Under this assumption, [11] proved (see also [5]) an inequality valid for every $\varphi$ satisfying relations (2) and (3); here $W_c$ denotes the subset of $W$ covered by the rays $\{\ell_d : |d| > a\}$. According to this inequality, if $\varphi$ is non-trivial, then (18) holds. Therefore, Proposition 2 is applicable, thus giving a criterion for which values of $\omega$ are non-trapping frequencies for the freely floating $B$ whose immersed part $\hat{B}$ is confined as described above.
In order to obtain inequality (18) in the case when $W$ has finite depth, $\ell_d$ and $\ell_{-d}$ must be replaced by similar segments connecting $F$ and $H$ and inclined at a certain angle to the vertical that is a little less than $\pi/4$. Numerical computations of [11] show that the same result as for deep water holds when $\hat{B}$ is confined between segments inclined at $44\tfrac{1}{3}^{\circ}$.
Another criterion eliminating some particular trapped modes
In this section, we turn to the case when B does not satisfy the conditions of § 3.2.
To be specific, we suppose that $W$ is bounded from below by the rigid bottom $H$. Moreover, we assume that $B$ is symmetric about the $y$-axis (see figure 1); this implies that $N_1 = n_x$ ($N_2 = n_y$) attains the opposite (the same, respectively) values at every pair of points on $B$ which are symmetric about the $y$-axis. Let also $\rho(x, y)$ be an even function of $x$, and so $x^{(0)} = 0$ (the centre of mass lies on the $y$-axis); this implies that $N_3 = x n_y - n_x (y - y^{(0)})$ has the same behaviour as $N_1$. The last restriction on $B$ or, more precisely, on $\hat{B}$ is expressed in terms of the curves (20) parametrised by $\sigma \in (-\pi, 0)$. On curves of these two families we define directions as shown in figure 1. It is clear that all curves (20) that intersect $H$ transversally enter into $W$. Let this property also hold on $S$; that is, all transversal intersections of curves (20) with $S$ are points of entry into $W$ (see figure 1). In what follows, a body satisfying the listed conditions is referred to as belonging to the class $\mathcal{B}$ provided the conditions considered in §3.2 are not fulfilled for it. The following assertion generalises the criterion of [3] guaranteeing the absence of trapped modes for fixed surface-piercing bodies immersed in deep water and satisfying the above transversality condition with the family of curves (20). As in Proposition 2, the values of $\omega$ that are not trapping frequencies must be sufficiently large, but what is new is that some restrictions must also be imposed on the type of mode.
Proposition 3. Let $W$ have finite depth and let $B$ be a freely floating body belonging to the class $\mathcal{B}$.
If $\omega^2$ is strictly greater than the largest $\lambda$ such that $\det(\lambda E - gK) = 0$, with $E$ and $K$ given by (8), then $\omega$ is not a trapping frequency for modes of the form: (a) $\varphi$ is an even function of $x$ and $\chi = (d_1, 0, d_3)^T$; (b) $\varphi$ is an odd function of $x$ and $\chi = (0, d_2, 0)^T$.
The inverse of the mapping (21), $\zeta(z)$, with coordinates $(u, v)$, has the following properties: the points $a$ and $-a$ on the $x$-axis go to infinity in the $\zeta$-plane, whereas $z = \infty$ goes to $\zeta = 0$; thus $F$ is mapped onto the whole $u$-axis.
Denoting by $\widetilde{W}$ the image of $W$, we see that apart from the $u$-axis the boundary $\partial\widetilde{W}$ includes the images of $S$ and $H$, say $\widetilde{S}$ and $\widetilde{H}$ respectively. According to the properties of (21), if $B$ belongs to the class $\mathcal{B}$, then $\widetilde{S}$ is symmetric about the $v$-axis, lies within the strip $\{-\infty < u < +\infty,\ -\pi < v < -\alpha\}$ and asymptotes the line $v = -\alpha$ as $u \to \pm\infty$; here $\alpha \in (0, \pi)$ is the angle between $S$ and $F$ at $(\pm a, 0)$. Moreover, the right half of $\widetilde{S}$ is the graph of a decreasing function of $u \in (0, +\infty)$; its maximum value $v_b$ is the root of $\cos v - (a/b_0) \sin v = 1$. Finally, $\widetilde{H}$ is a closed curve with the following properties: it is symmetric about the $v$-axis, is tangent to the $u$-axis at the origin, and is the graph of a concave function of $v \in (v_h, 0)$, where $v_h \in (-\pi, 0)$ is the root of an analogous transcendental equation involving $h$. Let $\phi(u, v) = \varphi(x(u, v), y(u, v))$; then relations (2)-(4) yield (22), where $n_\zeta$ is the unit normal to $\widetilde{S} \cup \widetilde{H}$ exterior with respect to $\widetilde{W}$ and $N_\zeta = N_{z(\zeta)}$. Moreover, condition (10) implies that $\nabla\phi \cdot n_\zeta = 0$ on $\widetilde{H}$ (condition (23)), whereas condition (7) takes the transformed form (24). Furthermore, conditions (15) give (25), whereas equality (17) turns into (26). Further considerations are based on the identity (27) (see [5], Subsection 2.2.2), whose left-hand side vanishes due to the Laplace equation for $\phi$. Let us integrate this identity over $W' = \widetilde{W} \cap \{|u| < b\}$, where $b$ is sufficiently large (in particular, $\widetilde{H} \subset \{|u| < b\}$). Using the divergence theorem, we obtain (28), where $S' = \widetilde{S} \cap \{|u| < b\}$, $u = (u, 0)$, $\pm$ denotes the summation of two terms corresponding to the upper and lower signs, respectively, and $C_\pm = W' \cap \{u = \pm b\}$. All integrals on the right arise from the first term on the right in (27), and one more integral of the same type vanishes in view of the boundary condition (23) on $\widetilde{H}$.
Let us consider each integral standing on the right in (28). Using the free-surface boundary condition, the first term can be rewritten by integration by parts. It follows from (15) that $\varphi(x, y)$ tends to constants as $(x, y) \to (\pm a, 0)$, and so $\phi(u, v)$ has the same property as $u \to \pm\infty$. Therefore, the integrated term in the last equality tends to zero as $b \to \infty$, whereas the integral on the right converges in view of (25). The second integral on the right in (28) is an integral over $S'$. Since $B$ belongs to the class $\mathcal{B}$, we have that $\phi$ and $\varphi$ are simultaneously even or odd functions of $u$ and $x$, respectively. Therefore, either of the assumptions (a) and (b) implies that this integral vanishes, because the integrand attains opposite values at points of $S'$ that are symmetric about the $v$-axis. Finally, (25) implies that there exists a sequence $\{b_k\}_{k=1}^{\infty}$ tending to positive infinity and such that the last sum in (28) tends to zero as $b_k \to \infty$. Passing to the limit as $k \to \infty$, we see that the transformed equation (28) with $b = b_k$ gives the following integral identity, provided either of the assumptions (a) and (b) holds: $\int \frac{u \sinh u}{(\cosh u - 1)^2}\, \phi^2(u, 0)\, du = 0$.
Subtracting this from (26) multiplied by two, we arrive at a final identity. If $\omega^2$ is strictly greater than the largest $\lambda$ such that $\det(\lambda E - gK) = 0$, then this identity cannot hold unless $\omega$ is not a trapping frequency for modes of the form (a) and (b). Indeed, its right-hand side is negative for such a value of $\omega$ and a non-trivial $\chi$, whereas its left-hand side is non-negative because $B$ belongs to the class $\mathcal{B}$ and the fraction in the last integral is non-negative. The obtained contradiction proves the proposition.
Conjecture
Given the proof of a theorem guaranteeing the uniqueness of a solution to the linearised problem about time-harmonic water waves in the presence of a fixed obstacle, this proof admits amendments transforming it into the proof of an analogous theorem for the same obstacle floating freely, with additional restrictions on the non-trapping frequencies (they must be sufficiently large) and, in some cases, on the body's geometry and on the type of non-trapping modes. | 2015-03-07T17:41:51.000Z | 2015-03-07T00:00:00.000 | {
"year": 2015,
"sha1": "b1aae1f28baf0a60099ed3820e06632923b6c4dc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b1aae1f28baf0a60099ed3820e06632923b6c4dc",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
20545190 | pes2o/s2orc | v3-fos-license | Reversal of drug-resistance by noscapine chemo-sensitization in docetaxel resistant triple negative breast cancer
Multidrug resistance (MDR) is a major impediment to cancer treatment. Here, for the first time, we investigated the chemo-sensitizing effect of Noscapine (Nos) at low concentrations in conjunction with docetaxel (DTX) to overcome drug resistance of triple negative breast cancer (TNBC). In vitro experiments showed that Nos significantly inhibited the proliferation of wild type (p < 0.01) and drug-resistant (p < 0.05) TNBC cells. Nos followed by DTX treatment notably increased the cell viability (~1.3 fold; p < 0.05) in 3D models compared to conventional 2D systems. In vivo, oral administration of Nos (100 mg/kg) followed by intravenous DTX (5 mg/kg) liposome treatment revealed regression of xenograft tumors in both wild type (p < 0.001) and drug-resistant (p < 0.05) xenografts. In wild type xenografts, the Nos plus DTX group showed 5.49- and 3.25-fold reductions in tumor volume compared to the Nos and DTX alone groups, respectively. In drug-resistant xenografts, tumor volume was significantly (p < 0.05) decreased 2.33- and 1.41-fold in xenografts treated with Nos plus DTX compared to Nos and DTX alone, respectively, and the combination downregulated the expression of anti-apoptotic factors and multidrug resistance proteins. Collectively, the chemo-sensitizing effect of the Nos-followed-by-DTX regimen provides a promising chemotherapeutic strategy and supports its significant role in the treatment of drug-resistant TNBC.
Nos has also been reported to act through inactivation of NF-kB and anti-angiogenic pathways while stimulating apoptosis, enhancing the anticancer activity of doxorubicin in a synergistic manner against TNBC tumors 18 . Thus, even though Nos cannot be used as a standalone agent in TNBC treatment, its chemo-sensitizing effect can be critically important for enhancing the tumor-specific toxicity of anticancer drugs. To our knowledge, no report is available to date on low-dose oral Nos therapy as a chemo-sensitizing agent for taxanes against TNBC.
Despite these advances, most of these strategies used alone cannot control and maintain the reversal of the MDR phenomenon, owing to the poor tumor-targeting properties of these agents in free form 19,20 . To address this dilemma, nanoparticle-based drug delivery systems have attracted increasing attention for enhanced MDR reversal in cancer therapy 21,22 , as they can efficiently deliver therapeutic agents to tumor tissue through the enhanced permeability and retention (EPR) effect 23,24 . PEGylated liposomes are efficient drug carriers that can evade rapid clearance by the reticuloendothelial system of the body 25,26 . Many liposomal drugs are already approved for clinical use, such as AmBisome, Doxil (Ben Venue Laboratories, Inc., Bedford, OH), DaunoXome, Marqibo and Myocet (GP-Pharm, Barcelona, Spain), while others are under clinical trial. The Nos chemo-sensitizing effect can be critically important for enhancing the tumor-specific toxicity of DTX liposomes and will help in reducing the dose of DTX and its dose-dependent side effects. Docetaxel-loaded PEGylated liposomes (DTXL) were previously prepared and characterized by our group in non-small cell lung tumor bearing mice 27 .
Poor availability of anticancer drugs and nanocarriers in solid tumors is one of the major limitations on their therapeutic outcome [27][28][29] . In such a scenario, stromal disruption could be important for harnessing the potential of anticancer therapy. In our previous reports, respiratory and oral delivery of telmisartan showed significant anticancer and antifibrotic effects in orthotopic and metastatic lung tumor models 29,30 . Kach et al. (2014) demonstrated the anti-fibrotic activity of Nos through cAMP/PKA signaling activation mediated by prostaglandin E2 receptors in pulmonary fibroblasts 31 .
In our laboratory, we demonstrated that low-dose Nos acts as a chemo-sensitizer, and that Nos followed by DTX treatment efficiently inhibits the growth of TNBC cells, suggesting that this sequence may produce superior anticancer effects. In our recently published study, we showed that Nos treatment leads to the activation of early stress markers such as phospho-p38 and phospho-JNK (family of MAP kinases) in a time- and dose-dependent manner, and thus may sensitize TNBC cells to DTX to induce apoptosis significantly. In the same study, we reported that Nos could act as an anti-fibrotic agent and enhance the tumor penetration of coumarin-6-loaded PEGylated liposomes in triple negative breast cancer xenografts 32 .
In this study, we proposed to treat both wild type and drug-resistant breast tumors with Nos by the oral route prior to administering the nanoparticles to solid breast tumors. We hypothesized that prior treatment with Nos would convert poorly penetrable fibrous tumors into loose interstitial networks that nanoparticles can easily penetrate, allowing superior intratumoral distribution of the nanotherapeutics, leading to superior anticancer effects as well as overcoming drug resistance.
Results
Noscapine increases the sensitivity of drug-resistant TNBC cells to DTX. Our previous studies indicated cancer cell growth inhibitory properties of Nos after pre-sensitization, as well as enhancement of the anticancer activity of DTX 32 . In the present study, we examined the pre-sensitization effect of Nos at low concentrations in drug-resistant TNBC cells and compared it with wild type tumor cells. First, the wild type cells were exposed to Nos for 24 h followed by DTX, and their cytotoxicity was determined. To avoid extensive cell death during treatment, doses of 8 μM Nos and/or 0.8 μM DTX were chosen for a 24 h treatment period. Treatment with Nos alone did not show cytotoxicity toward wild type cells, but, as shown in Fig. 1, treatment with Nos at low concentrations followed by DTX markedly (p < 0.01) increased the cytotoxicity toward wild-type TNBC cells. Cells treated with DTX alone showed 40.0% cell killing, whereas the Nos pre-treatment followed by DTX group showed 74.0% cell killing.

Figure 1. Noscapine pre-sensitization enhances cytotoxicity of wild type and drug-resistant MDA-MB-231 cells upon subsequent docetaxel treatment. Triple negative breast cancer cells were pre-sensitized with noscapine for 24 h followed by docetaxel treatment for 24 h, and percentage cell killing was measured by the crystal violet assay. Each value represents the average of independent experiments with triplicate determinations. **Indicates a significant (p < 0.01) difference compared with control. Data presented are means ± standard deviation (SD).
Further, we investigated the pre-sensitization effect of Nos on drug-resistant TNBC cells. Treatment of drug-resistant TNBC cells with Nos at low concentration followed by DTX resulted in a significant increase in cytotoxicity compared to cells treated with DTX alone (p < 0.05, Fig. 1). In DTX-resistant TNBC cells, DTX alone produced 23.41% cell killing, whereas Nos plus DTX produced 52.03% cell killing relative to control. There was no significant difference in cytotoxicity between cells treated with Nos alone at low concentration and control cells. We found that Nos chemo-sensitization followed by DTX treatment significantly (p < 0.01) increased the cytotoxicity of DTX in resistant MDA-MB-231 cells.
Noscapine chemo-sensitization suppresses three-dimensional growth of drug-resistant TNBCs. To determine the efficiency of the Nos pre-sensitization effect, TNBC cells were grown in three-dimensional (3D) cultures, because this system mimics the in vivo environment. The cell viability of both wild type and drug-resistant TNBC cells in the 3D alginate scaffold matrix is shown in Table 1. We previously optimized the 3D alginate scaffold using TNBC cells 33 . The 3D TNBC cultures were exposed to Nos alone, DTX alone, or Nos plus DTX, and the viabilities of both untreated and treated cultures were determined. Treatment with the Nos plus DTX combination led to disintegration of 3D spheres of drug-resistant MDA-MB-231 TNBC cells compared with their respective controls (Fig. 2A). The number of mammospheres of drug-resistant TNBC cells in each treatment group (control, Nos alone, DTX alone, Nos pre-sensitization followed by DTX) was quantified microscopically (Fig. 2B). The number of mammospheres decreased significantly (p < 0.01) in the Nos plus DTX group compared to control. In terms of mammosphere number, there was no significant difference between the control and Nos treatment groups, again confirming that Nos alone did not affect cell viability. The cell viabilities of both wild type and drug-resistant TNBC cells were determined by an alamarBlue-based assay, as shown in Fig. 2C. A marked reduction (p < 0.01) in cell viability was observed for wild-type and drug-resistant TNBC cells treated with Nos pre-sensitization followed by DTX.
We next found a significant (p < 0.05) difference in cell viability between the culture systems: compared to conventional 2D systems, an approximately 1.3-fold increase in cell viability was observed in the 3D models (Tables 1A and 1B). These results also indicate that drug-resistant MDA-MB-231 breast cancer cells exhibited higher cell viability than their wild type MDA-MB-231 counterparts.
Oral administration of noscapine in combination with intravenous docetaxel causes inhibition of wild type and drug-resistant xenografted TNBC tumors. We next investigated the effects of Nos chemo-sensitization followed by DTX treatment in nude mice bearing DTX-resistant MDA-MB-231 orthotopic xenograft tumors. The single-agent and Nos-sensitization combination drug schedules were designed to reflect a clinically relevant approach, with DTX (5 mg/kg body weight) administered intravenously twice a week and Nos (100 mg/kg body weight) given by oral gavage on a daily basis. At the end of the treatment, vehicle-treated control mice (PBS) showed unrestricted tumor growth (Fig. 3A,B), whereas single-agent drug regimens decreased tumor growth and progression compared to the control (PBS only).
First, the in vivo antitumor efficacy of Nos, DTXL, or their combination was investigated in nude mice bearing wild type MDA-MB-231 TNBC orthotopic xenograft tumors, as described in the methods and in our previously published studies 32 . As shown in Fig. 3, the treatment groups showed significant (p < 0.001) tumor growth inhibition compared to the Nos-only and control groups. Although oral administration of Nos followed by DTXL treatment resulted in reduced breast tumor volume, a significantly greater reduction in tumor volume was noted in the Nos plus DTXL group compared with the Nos alone or DTXL alone groups. The Nos followed by DTXL group showed 5.49- and 3.25-fold reductions in tumor volume compared to the Nos and DTXL groups, respectively (Fig. 3A). In particular, the tumor volume and tumor size in the DTXL with Nos sensitization group showed a pronounced reduction compared with all other treatment groups (Fig. 3A), indicating a statistically improved antitumor effect of DTXL after Nos sensitization. In terms of body weight, the average body weight of the treatment groups (24 ± 1.9 g for the DTX group and 24 ± 1.6 g for the Nos plus DTX group) was higher than that of the control group (22.5 ± 1.2 g), indicating that the treatments had no apparent toxicity affecting body weight. Moreover, we observed that animals in the control group were weaker than those in the treatment groups.

Table 1. Comparative cell viability of triple negative breast cancer cells in 2D versus 3D: viability of wild type and drug-resistant MDA-MB-231 cells treated with noscapine, docetaxel, or noscapine pre-sensitization followed by docetaxel in the 2D (Table 1A) and 3D (Table 1B) alginate scaffold systems, determined by alamarBlue.

Lower body weight and
compromised movement could be attributed to a solid-tumor-induced cachexia condition in the control animals. Our in vivo studies revealed that DTXL with Nos sensitization exerts superior anticancer effects in wild type TNBC in vivo models. We then extended our study to investigate the therapeutic efficacy of Nos pre-sensitization in overcoming DTXL resistance, conducting in vivo studies in drug-resistant TNBC xenografts (Fig. 3B). Tumor growth inhibition differed in drug-resistant MDA-MB-231 xenografts compared to wild type MDA-MB-231 xenografts. Tumor growth with Nos pre-treatment followed by DTXL was significantly (p < 0.05) reduced compared to DTXL alone. In drug-resistant xenografts, tumor volume was significantly (p < 0.05) decreased 2.33- and 1.41-fold in xenografts treated with Nos followed by DTX liposomes compared to Nos and DTXL alone, respectively. Although the reduction in tumor volume and tumor size was smaller than in wild type TNBC xenografts, these observations suggest that Nos pre-treatment efficiently overcomes DTX resistance in breast tumor xenografts. Administration of Nos as a chemo-sensitizer in conjunction with DTXL did not affect the body weight of the treated mice, indicating the safety of the DTX liposomes and the Nos plus DTXL combination. These results suggest that Nos treatment had no apparent cytotoxic side effects and that the combined approach might be considered a potentially suitable strategy for treating TNBC.

Noscapine chemo-sensitization overcomes drug resistance by inhibiting the expression of multidrug resistance proteins.
To further verify the Nos pre-sensitization effect can overcome DTX resistance in DTX resistant xenograft tumors, we further analyzed the multidrug resistance proteins. Consistent with these findings, our western blot analyses of drug resistant TNBC tumor lysates in Fig. 4C and D show that Nos followed by DTX treatment inhibited the resistance marker MDR 1 (ABCB1) in drug-resistant TNBC cells. The expression of MDR 1was significantly (p < 0.05) down regulated (1.1 fold) in Nos pre-treatment followed by DTX treated cells when compared to control and DTX only treated cells (Fig. 4C and D). It is of note here that another resistance related protein MRP1 expression was also found to be higher in control lysates. On treatment with DTX after Nos chemo-sensitization, MRP 1 was significantly (p < 0.01) down regulated (1.13 and 1.9 fold) in DTX alone and DTX after Nos chemo-sensitization, respectively compared to untreated control xenografts (Fig. 4B). In consistent with the wild type TNBC in vivo data, ant-apoptotic protein bcl-2 (p < 0.001) and MMP-2 expression was also down-regulated significantly (p < 0.01) in combination group than other treatment groups. Thus, these findings highlight the potential of Nos pre-sensitization followed by DTX treatment and could be a clinically important combination to overcome the DTX resistance of breast cancer. Whether and to the extent such robust inhibition of drug resistant proteins by Nos pre-treatment followed by DTX treatment contributes to its superior TNBC growth inhibitory effects remain to be clarified. treated with oral administration of Nos followed by intravenous injection of DTX group. MRP 1 expression was markedly decreased as compared to animals which were treated with DTX alone. High amount of MRP 1 expression was found in control mice. The data shown in Fig. 5 collectively demonstrates that Nos pre-sensitization followed by DTX treatment overcome its drug-resistance.
Discussion
Triple negative breast cancer (TNBC) shows more aggressive disease progression, with limited treatment options due to the lack of a standard chemotherapy [34][35][36] . Moreover, multidrug resistance (MDR) in cancer cells has remained a significant obstacle to efficient chemotherapy 37,38 . Given the multidrug-resistant nature of tumors, nanocarriers such as liposomal formulations may deliver their drug payloads more efficiently to cancer cells via the enhanced permeability and retention effect. Combining liposomal formulations with natural compounds can improve the therapeutic efficacy of cytotoxic agents and possibly reverse MDR. To test this hypothesis, we investigated the therapeutic potential of Nos as a sensitizer at low concentrations in conjunction with DTX to overcome drug-resistant TNBCs. To our knowledge, this study is the first attempt to identify a novel combination of Nos pre-sensitization at low concentrations in conjunction with DTX formulations, demonstrating inhibition of the growth of wild type and drug-resistant TNBC cells in vitro as well as in vivo.
DTX inhibited the growth of wild type as well as drug-resistant TNBC cells, as shown in Fig. 1. In our previous study, we used low doses of 4 µM Nos and 0.4 µM DTX for in vitro cytotoxicity, whereas in the present study we used 8 µM Nos and 0.8 µM DTX. Resistant cell lines usually require higher drug concentrations than wild-type cells; at the previously used concentrations we could not reach the IC50 (half maximal inhibitory concentration), so higher concentrations of Nos and DTX were used to reach the IC50 in DTX-resistant TNBC cells.
It is important to note that killing of drug-resistant cells treated with DTX was significantly greater in Nos pre-sensitized cells than in cells without Nos pre-treatment, suggesting that this microtubule-interfering class of compounds can be used to overcome the drug resistance of TNBC. This is further supported by our mammosphere studies, in which Nos sensitization plus DTX was effective in disrupting mammospheres of wild type as well as drug-resistant TNBC cells. In mammalian tissues, cells connect not only to each other but also to support structures called the extracellular matrix (ECM). Cells grow within an organized three-dimensional (3D) matrix, and their behavior depends upon interactions with immediate neighbors and the ECM 39 . We utilized the AlgiMatrix 3D platform to culture both wild type and drug-resistant TNBC cells because 3D cell culture models create a realistic microenvironment and mimic in vivo systems, which helps in understanding cell-cell interactions 40,41 . In the current study, Nos at low concentrations in conjunction with DTX was more effective in disintegrating mammospheres than either agent alone. Our laboratory has previously demonstrated that 3D cell culture scaffolds (AlgiMatrix TM ) serve as a valid platform for the development of more physiologically relevant culture systems for cancer biology 33,42 . Collectively, our current in vitro 2D and 3D studies demonstrate that this combination has a unique ability to target resistant cells and suppress the growth of drug-resistant TNBC cells. Zhou et al. showed that noscapine binds to tubulin at a different site than paclitaxel and causes mitotic arrest in paclitaxel-resistant ovarian carcinoma cells 14 . It has been demonstrated that 3D multicellular morphologies show greater resistance to chemotherapy than monolayers in three different endometrial cancer cell lines, Ishikawa, RL95-2, and KLE 43 : doxorubicin had less effect on proliferation and induced less apoptosis in 3D multicellular structures of high grade cancer cells (RL95-2 and KLE) than in cell monolayers. These observations have important implications for the in vitro study of anticancer treatments.
The liposomal formulation of DTX was developed to improve the solubility of DTX and to provide long circulation and sustained release. Nanosized liposomes are reported to accumulate selectively in solid tumors due to the EPR effect. However, deeper penetration into tumor tissue is severely restricted by the collagen-rich tumor stroma and other components of the tumor ECM. Oral administration of Nos at low concentrations disrupts the extracellular matrix network through its anti-fibrotic activity; therefore, DTX liposomes were more permeable into TNBC tumors 32 . Nos has been reported to have in vitro anticancer activity against a wide variety of cancers, and its administration does not have toxic side effects on other organs in vivo 15 . Our recently published in vitro data suggest that treatment of wild type TNBC cells with Nos at low concentrations stimulated activation of the stress-activated kinases p38 and JNK1/2 in a time- and dose-dependent manner 32 . Hence, we infer from this study that lower concentrations of Nos act as a chemo-sensitizer and that subsequent treatment with DTX may produce superior anticancer effects, warranting further investigation of its potential clinical applications. Extending our previous work, we took the present study to the molecular level in in vivo wild-type and drug-resistant xenografts to investigate the antitumor efficacy of Nos chemo-sensitization in overcoming drug resistance. Decreased expression of bcl-2, cyclin D1 and MMP-2 with the combination treatment in wild type TNBC tumors is in agreement with our previously published studies, which demonstrated that Nos also downregulates the expression of various cell cycle regulators and survival proteins 14,15,18,32 . Increased expression of the pro-apoptotic factor caspase 3 in wild type xenograft breast tumors correlates with the work of Shen et al. (2015), who showed that Nos increases the anticancer activity of cisplatin in SKOV3/DDP ovarian cancer cells by modulating the cell cycle and activating apoptotic pathways 44 .
ATP-binding cassette (ABC) transporters such as MDR1, MRP1 and BCRP (a family of multidrug resistance proteins) play a crucial role in mediating drug resistance in cancer cells [45][46][47] . Our current studies further revealed that, in drug-resistant xenograft breast tumors, the combination treatment decreased the expression of key resistance regulators such as MDR1 and MRP 1 and of the anti-apoptotic protein bcl-2 (Fig. 4). These results indicate that Nos increases the sensitivity of xenografts to DTX, leading to increased apoptosis and decreased resistance of TNBC tumors; high expression of these markers correlates with a more aggressive tumor phenotype and disease progression [48][49][50] . Decreased expression of bcl-2 in drug-resistant tumors suggests that it may be involved in the resistance mechanism of these TNBC cell lines. Our results showed that Nos significantly suppressed the invasive ability of MDA-MB-231 cells in xenografts in parallel with down-regulation of MMP-2. Nos-mediated disruption of the tumor ECM enhances the tumor penetration and tumor bioavailability of DTX liposomes, corroborating previous studies 51,52 . MRP 1-positive immunohistochemical staining was detected, and the intensity of MRP 1 staining was lower in the combination group than in the other treatment groups, in agreement with previous studies 53-55 . Su et al. (2011) also found that noscapine sensitizes cisplatin-resistant ovarian cancer cells by inhibiting hypoxia-inducible factor-1 alpha (HIF-1α) 56 . Although these results need further evaluation, the present findings support the notion that Nos may offer a novel therapeutic strategy for drug-resistant TNBCs (Fig. 6).
In conclusion, to our knowledge this study is the first to show that Nos pre-sensitization at low concentrations in conjunction with DTX inhibits the growth of wild type and drug-resistant TNBC cells in vitro as well as in vivo (Fig. 6). Oral administration of Nos enhanced the anticancer activity of DTX in drug-resistant TNBC by enhancing the tumor bioavailability of DTX liposomes and chemo-sensitizing the tumor to DTX. This study provides a basis for improving the efficacy of chemotherapy against drug-resistant TNBCs and sheds light on new directions for the development of novel clinical therapeutics.
Cytotoxicity of docetaxel resistant cells after Noscapine chemo-sensitization.
In vitro inhibition of cell growth by Nos was assessed in DTX-resistant TNBC cells by the crystal violet cytotoxicity assay. Wild type and DTX-resistant MDA-MB-231 TNBC cells were plated in 96-well microtiter plates at a density of 1 × 10^4 cells/well, allowed to incubate overnight, and treated with various dilutions of Nos (10 to 160 µM) made in cell growth medium from a Nos stock solution in DMSO. To study the interaction between Nos and DTX, the treatment strategy comprised cells treated with (i) control, (ii) Nos only (8 µM) for 24 h, (iii) DTX only (0.8 µM), and (iv) Nos 8 µM for 24 h followed by DTX 0.8 µM for 24 h. In groups (ii) and (iv), Nos was discarded after 24 h and replaced with fresh medium or DTX, respectively, and viability was assessed by the crystal violet assay 32 . Absorbance was measured with a microtiter plate reader (Spectramax 190, Molecular Devices, USA) at 540 nm.
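A minimal sketch of the percent-cell-killing calculation from crystal violet absorbance readings; the A540 values are hypothetical, and the sketch assumes absorbance is proportional to the number of adherent viable cells:

```python
def percent_cell_killing(a540_treated: float, a540_control: float) -> float:
    """Percent cell killing relative to the untreated control (crystal violet, A540)."""
    return (1.0 - a540_treated / a540_control) * 100.0

# Hypothetical A540 readings chosen to reproduce the reported wild-type results.
a540_control = 1.200
a540_dtx_alone = 0.720       # -> 40.0% killing, as reported for DTX alone
a540_nos_then_dtx = 0.312    # -> 74.0% killing, as reported for Nos followed by DTX
for label, a in [("DTX alone", a540_dtx_alone), ("Nos -> DTX", a540_nos_then_dtx)]:
    print(f"{label}: {percent_cell_killing(a, a540_control):.1f}% killing")
```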
Three-dimensional mammosphere assays in an alginate scaffold 3D breast tumor model. DTX-resistant cells were first pre-treated with Nos for 24 h and then treated with DTX. DTX (0.8 µM) was used to treat 3D alginate scaffolds seeded with 0.15 million cells on days 7, 9, and 11 post tumor cell seeding, based on our previously published study 33 . Similarly, in 96-well plates seeded with 15,000 cells per well, spheroids were treated with DTX (0.8 µM) on days 7, 9, and 11 post cell seeding. The alamarBlue ® assay was performed to determine the number of cells at the end point. Results were compared with 2D culture systems.
AlamarBlue ® Assay. On day 14 of culture, cell viability and metabolic activity were measured using the alamarBlue ® assay, which is based on the conversion of a non-fluorescent dye to the red fluorescent dye resorufin upon chemical reduction of the growth medium resulting from cell growth. Briefly, alamarBlue ® dye was added at 10% of the medium volume in each well. After one hour of incubation, plates were read for fluorescence intensity at 530 nm excitation and 590 nm emission.

Figure 6. Illustration of how a combination of noscapine pre-sensitization at low concentrations followed by docetaxel inhibits growth of wild type and drug-resistant TNBC cells. Oral administration of noscapine enhanced the anticancer activity of docetaxel in drug-resistant triple negative breast cancer by augmenting the tumor bioavailability of docetaxel liposomes and chemo-sensitizing the tumor to docetaxel. Administration of noscapine plus docetaxel liposomes down-regulated the expression of the anti-apoptotic factors bcl-2 and cyclin D1 and of multidrug resistance proteins such as MDR 1 and MRP 1, as well as matrix metalloproteinase 2 (MMP-2), and augmented levels of pro-apoptotic caspase-3 expression. These findings provide a promising strategy to overcome multidrug resistance through noscapine pre-sensitization followed by delivery of the docetaxel anticancer agent to target tumor cells more effectively.
Establishment of TNBC cell-derived xenografts in immunocompromised mice. The experiments
involving generation of DTX-resistant TNBC cell-derived subcutaneous xenografts were performed according to our previously published methods and protocols 23,32 , approved by the Institutional Laboratory Animal Care & Use Committee at Florida A&M University, and all methods were performed in accordance with the relevant guidelines and regulations. Female, 5-week-old Balb/c nude mice were purchased from Charles River Laboratories (Horsham, PA). The orthotopic TNBC xenograft studies were carried out in female Balb/c nude mice. Following suitable acclimation of the animals, 1.5 × 10^6 wild-type or 1.0 × 10^6 drug-resistant MDA-MB-231 TNBC cells were re-suspended in 100 µl of phosphate buffer solution (PBS) and implanted in the mammary fat pads using a 27-gauge needle. Tumors were allowed to grow unperturbed for 10-14 days. When tumors became palpable, the mice were randomly assigned to treatment or control groups of six animals each. Mice were treated with PBS only (control), Nos (100 mg/kg), DTXL (5 mg/kg), or Nos plus DTXL. Nos was administered by oral gavage every alternate day for 2 weeks, while DTXL was given intravenously twice weekly by tail vein injection. For the present study, we monitored the animals every alternate day after the last dose of DTX for up to 4 weeks. The study was terminated when more than 50% of the control animals were unable to move around due to large tumors. All animals were euthanized using carbon dioxide. Body weight and tumor volume were measured to assess the therapeutic efficacy of Nos and DTX. Tumor volumes were calculated by the modified ellipsoidal formula: tumor volume = 1/2 × (length × width²). Representative tumor samples were stored at −80 °C for subsequent analysis. | 2018-04-03T01:17:36.242Z | 2017-11-20T00:00:00.000 | {
"year": 2017,
"sha1": "7e8f61bd0bb1f91d36d3d5414b6e5c0425c08edf",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-15531-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a55cd928ac5392679042c434babee39b76877758",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56229035 | pes2o/s2orc | v3-fos-license | Preparation and Properties of Clay-Reinforced Epoxy Nanocomposites
The clay-reinforced epoxy nanocomposite was prepared by an in situ polymerization method. The effect of clay addition on the mechanical properties of epoxy/clay nanocomposites was studied through tensile, flexural, impact strength, and fracture toughness tests. The morphology and tribological behavior of the epoxy/clay nanocomposites were determined by X-ray diffraction (XRD) and wear testing, respectively. The wear test was performed to determine the specific abrasion of the nanocomposites. In addition, the water absorption characteristics of the nanocomposites were also investigated in this study. XRD analysis indicated that an exfoliated structure was obtained in the epoxy nanocomposites with 3 wt% of clay, whereas an intercalated structure was observed at 6 wt% of clay. It was found that the addition of clay up to 3 wt% increased the tensile strength, flexural strength, impact strength, and fracture toughness. In contrast, clay contents above 3 wt% produced the reverse effect. It can be concluded that the best mechanical properties, wear resistance, and water resistance were obtained for the epoxy nanocomposites containing 3 wt% of clay.
Introduction
Polymer/clay nanocomposites (PCNs) are a new class of composite materials in which clay, as a layered silicate, is dispersed at the nanoscale in a polymer matrix [1]. Recently, PCNs have attracted significant academic and industrial interest. This interest stems from the fact that nanosized-layer-filled polymers can exhibit dramatic improvements in mechanical and thermal properties at low clay contents because of the strong synergistic effects between the polymer and the silicate platelets on both the molecular and nanometric scales. The potential property enhancements of PCNs have led to increased application in various fields such as the automobile industry (exterior and interior body parts and fuel tanks), packaging industry (bottles, containers, and plastic films), electronics industry (packaging material and exterior parts of electronic devices), coating industry (paints, wire enamel coatings, etc.), and aerospace industry (airplane body parts and exterior surface coatings).
Epoxy resins have been widely used as impregnating materials, adhesives, or matrices for composites because of their good electrical insulation, good chemical resistance, low shrinkage during cure, good thermal characteristics, and ease of processing. However, the major problem with epoxy resins for engineering applications is their low stiffness and strength compared with metals. One effective method for offsetting these deficiencies of pure epoxy is the incorporation of reinforcing fillers. Montmorillonite (MMT) clay has been well documented as one of the best reinforcement materials for polymer nanocomposites because of its high aspect ratio, low cost, and the fact that it consists of layered silicates whose individual nanosized layers can be dispersed by polymer chains [2].
The dispersion of clay particles in a polymer matrix results in the formation of three types of composite materials [3]. The first type is the conventional phase-separated composite, in which the polymer and the inorganic host remain immiscible, resulting in poor mechanical properties of the composite material. The second type is the intercalated polymer-clay nanocomposite, formed by the insertion of one or more molecular chains of polymer into the interlayer or gallery space. The last type is the exfoliated or delaminated polymer-clay nanocomposite, formed when the clay nanolayers are individually dispersed in the continuous polymer matrix. Exfoliated polymer-clay nanocomposites are especially desirable for improved properties because of the large aspect ratio and homogeneous dispersion of clay and the huge interfacial area (and consequently strong interaction) between polymer chains and clay nanolayers.
The most important factors for success in preparing epoxy/clay nanocomposites are the types of epoxy resin and curing agent/hardener used. Epoxy/clay nanocomposites based on diglycidyl ether of bisphenol A (DGEBA) resin have been synthesized using a wide range of curing agents, including triethylenetetramine (TETA) [4], diaminodiphenylmethane (DDM) [5], diaminodiphenylsulfone (DDS) [6], and diethyltoluenediamine (DETDA) [7]. However, the synthesis of epoxy/clay nanocomposites based on DGEBA resin with polyaminoamide as the curing agent has not yet been investigated.
In this work, epoxy/clay nanocomposites using DGEBA epoxy resin and polyaminoamide as the hardener were prepared by an in situ polymerization method. The effect of clay addition to the neat epoxy on the tensile strength, flexural strength, impact strength, fracture toughness, and specific abrasion was investigated. The dispersion of clay particles in the neat epoxy matrix was determined using X-ray diffraction (XRD), whereas the fracture surface was studied using scanning electron microscopy (SEM).
Materials.
The epoxy resin used as the matrix was DER 331, a bisphenol A diglycidyl ether based resin (DGEBA) supplied by Dow Chemical. This epoxy resin offers an epoxide equivalent weight of 182-192, a viscosity of 11000-14000 mPa·s, and a density of 1.16 g/cm³ at 25 °C. Polyaminoamide, purchased from PT. Justus Kimiaraya, Semarang, Indonesia, was used as the curing agent/hardener. The montmorillonite (MMT) clay (Nanomer 1.28E) was an organosilicate modified with quaternary trimethylstearylammonium ions, having an approximate aspect ratio of 75-120, purchased from Nanocor Co., USA.
Preparation of Epoxy/Clay Nanocomposites

Firstly, the clay was dried in an oven at 80 °C for 8 hours to remove water. The epoxy resin and clay were mixed, with the clay content varied at 0, 2, 3, 4, 5, and 6 wt%, at 75 °C for 2 hours using a mechanical stirrer. This mixture was then degassed in a vacuum oven for 15 minutes. The polyaminoamide curing agent was added to the epoxy/clay mixture, mixed at 75 °C for 5 minutes, and degassed in a vacuum oven for 3 minutes. The mixture of epoxy, clay, and curing agent was poured into the steel mold and then degassed in a vacuum oven for 10 minutes. All samples were cured in an oven at 80 °C for 2 hours, followed by postcuring at 150 °C for 2 hours.
Characterization and Mechanical Properties
2.3.1. XRD Analysis. X-ray diffraction (XRD) measurements were made directly on the clay powder. In the case of the epoxy/clay nanocomposites, the measurements were carried out on bars. All these experiments were performed in reflection mode using an X-ray diffractometer at a scan rate of 0.3°/min over a 2θ range of 2-10°, operated at 30 kV and 20 mA.
Water Absorption Test. Specimens (tensile bars) were dried at 80 °C in an oven until a constant weight was attained prior to immersion in deionized water at room temperature. Weight gains were recorded by periodically removing the specimens from the water bath and weighing them on a balance. The percentage gain at any time t as a result of water absorption, M_t, was determined by M_t (%) = [(W_t − W_0)/W_0] × 100, where W_0 and W_t denote, respectively, the weight of dry samples (the initial weight of samples prior to immersion in deionized water) and the weight of samples after exposure to deionized water.
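A minimal sketch of this water-uptake calculation; the specimen masses are hypothetical values chosen only to illustrate the formula:

```python
def water_uptake_percent(w_dry_g: float, w_wet_g: float) -> float:
    """Percentage mass gain: M_t = (W_t - W_0) / W_0 * 100."""
    return (w_wet_g - w_dry_g) / w_dry_g * 100.0

w0 = 10.000  # dry specimen mass in grams (hypothetical)
readings_g = {1: 10.050, 20: 10.120, 131: 10.148}  # immersion day -> wet mass (hypothetical)
for day, wt in sorted(readings_g.items()):
    print(f"day {day:3d}: M_t = {water_uptake_percent(w0, wt):.2f}%")
```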
Mechanical Properties
(1) Tensile, Flexural, Impact, and Specific Abrasion Tests. Tensile strength was measured according to ASTM D638M using a universal testing machine (Servopulser, Shimadzu) at a crosshead speed of 10 mm/min. Flexural strength was evaluated by three-point bending on a universal testing machine (Torsee's) according to ASTM D790 at a crosshead speed of 10 mm/min. Impact tests were carried out on notched specimens to determine the impact strength using a pendulum hammer impact tester according to ASTM D256-02. Specific abrasion was measured by a wear test using a universal wear machine (Riken-Ogosis).
(2) Fracture Toughness Test. A single-edge-notch three-point-bending (SEN-3PB) test was conducted to obtain the critical stress intensity factor (K_IC) of the epoxy/clay nanocomposites according to the ASTM D5045-96 standard using a universal testing machine (Torsee's). Rectangular specimens (thickness 6.35 mm, width 12.70 mm, span length 50 mm, and overall length 56 mm) were cut using a vertical band saw, as shown in Figure 1. The notches were made first by forming rectangular saw-cut slots ~1 mm wide in the midsection of the specimens and then sharpening them with a fresh razor blade. The total notch length of the SEN-3PB specimens was 5.5 mm. The fracture toughness test was performed on a universal testing machine (Servopulser, Shimadzu EFH-EB20-40L) at a crosshead speed of 10 mm/min. The load-displacement curves were recorded, and the maximum loads upon fracture were used to determine the K_IC value, which is defined by [8] K_IC = f·P·S/(B·W^(3/2)), where f is the shape factor, P is the maximum load, S is the length of the span, B is the specimen thickness, W is the specimen width, and a is the total notch length (produced by saw and fresh razor blade). For this specific specimen geometry, the shape factor f can be determined as a function of x = a/W by the standard single-edge-notch bending expression (3): f(x) = 3√x [1.99 − x(1 − x)(2.15 − 3.93x + 2.7x²)] / [2(1 + 2x)(1 − x)^(3/2)].
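A minimal sketch of the K_IC evaluation from SEN-3PB data; it assumes the standard ASTM D5045 single-edge-notch bending geometry factor reconstructed above (the source equations are garbled), and the 150 N peak load is a hypothetical example value:

```python
import math

def shape_factor(a_over_w: float) -> float:
    """Standard SEN-B geometry factor f(x), x = a/W, paired with K = f*P*S/(B*W**1.5)."""
    x = a_over_w
    num = 3.0 * math.sqrt(x) * (1.99 - x * (1.0 - x) * (2.15 - 3.93 * x + 2.7 * x ** 2))
    den = 2.0 * (1.0 + 2.0 * x) * (1.0 - x) ** 1.5
    return num / den

def k_ic_mpa_sqrt_m(p_max_n: float, span_mm: float, b_mm: float, w_mm: float, a_mm: float) -> float:
    """Critical stress intensity factor in MPa*sqrt(m) from a SEN-3PB peak load."""
    s, b, w, a = (v / 1000.0 for v in (span_mm, b_mm, w_mm, a_mm))  # mm -> m
    k_pa = shape_factor(a / w) * p_max_n * s / (b * w ** 1.5)       # Pa*sqrt(m)
    return k_pa / 1.0e6

# Specimen geometry from the text; the peak load P = 150 N is hypothetical.
print(f"K_IC = {k_ic_mpa_sqrt_m(150.0, 50.0, 6.35, 12.70, 5.5):.2f} MPa*sqrt(m)")
```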
Scanning Electron Microscopy (SEM).
The fracture surfaces of SEN-3PB specimens of the epoxy/clay nanocomposites were investigated using SEM (JEOL) at an acceleration voltage of 12 kV. The fracture surface was sputter-coated with a thin gold-palladium layer in a vacuum chamber for conductivity before examination.
Results and Discussion
3.1. XRD Analysis. It is well known that the structure of polymer/clay nanocomposites is typically established using X-ray diffraction (XRD) analysis and transmission electron microscopy (TEM) observation. Owing to its ease of use and availability, XRD is the most commonly used tool to probe nanocomposite structure. By monitoring the position, shape, and intensity of the basal reflections from the distributed silicate layers, the nanocomposite structure (intercalated or exfoliated) may be identified. In an exfoliated nanocomposite, the extensive layer separation associated with the delamination of the original silicate layers in the polymer matrix results in the eventual disappearance of any coherent X-ray diffraction from the distributed silicate layers. On the other hand, for intercalated nanocomposites, the finite layer expansion associated with polymer intercalation results in the appearance of a new basal reflection corresponding to the larger gallery height. Figure 2 presents the XRD patterns of the clay, the neat epoxy, and the epoxy nanocomposites containing 3 and 6 wt% of clay. For the clay, the sharp peak at 2θ = 3.54° (d-spacing = 2.49 nm) is assigned to the (001) basal plane, which corresponds to the interlayer spacing of the clay. The absence of a sharp peak in the epoxy nanocomposite with 3 wt% of clay suggests the formation of an exfoliated structure after in situ polymerization [7]; that is, an exfoliated nanocomposite was obtained for the system with 3 wt% of clay. In terms of the intensity of the XRD pattern below 2θ = 4°, the epoxy nanocomposite with 6 wt% of clay showed higher intensity than the system with 3 wt% of clay, indicating that an intercalated structure or agglomeration of clay may have formed in the epoxy nanocomposite with 6 wt% of clay.
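The reported d-spacing follows from Bragg's law; a minimal sketch of the calculation (the Cu Kα wavelength is an assumption, since the anode material is not stated in the text):

```python
import math

CU_K_ALPHA_NM = 0.15406  # assumed Cu K-alpha wavelength in nm

def d_spacing_nm(two_theta_deg: float, wavelength_nm: float = CU_K_ALPHA_NM) -> float:
    """First-order Bragg d-spacing: d = lambda / (2 sin(theta))."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta_rad))

# The (001) basal reflection of the clay at 2-theta = 3.54 degrees:
print(f"d(001) = {d_spacing_nm(3.54):.2f} nm")  # ~2.49 nm, matching the reported value
```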
Water Absorption.
Figure 3 shows the percentage of water absorbed by the neat epoxy and its nanocomposites with different clay contents as a function of immersion time. Similar behavior was observed for all samples: the water uptake increased rapidly during the initial stage (about 1-20 days) and then leveled off. After about 131 days, the water uptake approached a maximum value, known as the saturation point. From Figure 3, it can be seen that the epoxy nanocomposite containing 3 wt% of clay exhibited the lowest saturated amount of absorbed water, 1.48%, indicating the best barrier properties. The exfoliated structure, as discussed in the XRD results, is believed to be responsible for the lowest saturated water absorption in the epoxy nanocomposite with 3 wt% of clay. In the exfoliated structure, the silicate layers of clay dispersed at the nanometer scale in the polymer matrix can create a tortuous pathway for water molecules diffusing into the composites [9]. Vlasveld et al. [10] reported that the speed of moisture absorption in polyamide 6/clay nanocomposites was reduced with increasing amounts of exfoliated silicate, due to barrier properties: because of the high aspect ratio and large surface area of the exfoliated silicate layers, the silicate layers acted as efficient barriers against transport through the material. Several studies have shown that the maximum water absorption of a polymer system decreases in the presence of nanofiller [3,11]. Becker et al. [11] reported a reduction in maximum water uptake for different types of epoxy systems reinforced with layered silicate. On the other hand, the highest saturated amount of absorbed water was obtained for the nanocomposite with 6 wt% of clay. This may be attributed to the intercalated silicate layers of clay, which reduce the tortuosity effect in the nanocomposite with 6 wt% of clay. The diffusivity of exfoliated nanocomposites has also been reported to be lower than that of intercalated nanocomposites in a polyamide 6/clay system.
Mechanical Properties.
Figure 4 shows the effect of clay addition on the tensile strength of epoxy nanocomposites.
It can be seen that the presence of clay up to 3 wt% increased the tensile strength of the neat epoxy. However, beyond 3 wt% of clay, further addition resulted in a drastic reduction in tensile strength. In this study, the optimum clay loading was found to be 3 wt%. It is interesting to note that the presence of 3 wt% of clay significantly improved the tensile strength, by 41%. The increase in tensile strength may be attributed to the formation of an exfoliated structure in the epoxy nanocomposites with 2 and 3 wt% of clay, consistent with the XRD results discussed above. In the exfoliated structure, individual nanometer-sized silicate layers are dispersed uniformly in the polymer matrix with a high aspect ratio. The high aspect ratio of the nanoclay may also increase the tensile strength by increasing the nanofiller contact surface area with the polymer matrix. The large number of reinforcing nanoclay platelets present in the polymer matrix acts as efficient stress transfer agents in the nanocomposites, inducing plastic deformation in the base polymer and finally increasing the tensile strength [12]. Furthermore, the intercalated structure or agglomerated clay particles occurring in the epoxy nanocomposites with high clay content (above 3 wt%) are believed to be responsible for the decrease in tensile strength. The intercalated structure leads to a low aspect ratio of the clay platelets and a low contact surface area, resulting in weak adhesion between the polymer matrix and the clay, which subsequently lowers the tensile strength. In addition, this behavior is probably attributable to filler-filler interactions, which result in agglomerates, induce local stress concentrations, and finally reduce the tensile strength of the nanocomposites. Similar results were found by several previous researchers. Zhang et al. [13] reported that, for epoxy/clay nanocomposites with DGEBA resin and a tetrahydro acid anhydride curing agent, the tensile strength increased with the addition of up to 3 wt% of clay but decreased above 3 wt%; the presence of 3 wt% of clay resulted in an improvement in tensile strength of 20.1%. Wang et al. [6] found that the tensile strength increased by 25% with the addition of 2 wt% of clay but dropped with further increases in clay content for epoxy/clay nanocomposites using DGEBA resin and a DDS curing agent.
The effect of clay addition on the flexural strength of the epoxy nanocomposites is presented in Figure 5. Behavior similar to that of the tensile strength was observed: the addition of clay up to 3 wt% increased the flexural strength, whereas further addition decreased it. The exfoliated structure observed for the nanocomposites containing up to 3 wt% of clay is believed to be responsible for the observed trend. In the exfoliated nanocomposites, the interfacial bonding between the nanoclay filler and the epoxy matrix is improved, increasing the surface area of matrix/nanoclay interaction. As a result, stress is transferred well from the matrix to the nanoclay, resulting in improved flexural strength [14]. The decrease in flexural strength above 3 wt% of clay content was probably due to the presence of agglomerated clay particles, which possibly acted as stress concentration sites and caused a decrease in the flexural strength of the nanocomposites [15,16]. In a similar study, Kaynak et al. [17] investigated the flexural strength of nanoclay (Na-montmorillonite) based epoxy nanocomposites.
Their results showed an improvement in flexural strength and fracture toughness, with maximum values at 0.5% nanoclay loading, due to the exfoliated structure of the clay. Chow et al. [15] studied the effect of organoclay addition on the flexural strength of injection-molded polyamide 6/polypropylene nanocomposites. Their results showed that the flexural strength increased with organoclay loading up to 4 phr but decreased above 4 phr. The increase in flexural strength up to 4 phr organoclay loading was attributed to the exfoliated structure, while the intercalated or agglomerated silicate layers of clay were believed to be responsible for the decrease above 4 phr. The impact strength of the epoxy nanocomposites as a function of clay content is presented in Figure 6. It is clear that the impact strength of the nanocomposites increased with clay content up to 3 wt%; beyond 3 wt%, the impact strength decreased drastically with clay content. The maximum improvement in impact strength was obtained at 3 wt% of clay content, with an increment of 95%. This indicates that the clay acts as an effective toughening agent, especially at 3 wt%. The increase in impact strength may be related to the exfoliated silicate layers of clay in the neat epoxy matrix, as shown by the XRD results. In the exfoliated structure, individual nanometer-sized silicate layers are dispersed uniformly in the neat epoxy matrix with a high aspect ratio. The nanoclays may have a good toughening effect, acting as efficient crack stoppers and forming a tortuous crack propagation path, resulting in higher impact strength [18]. Furthermore, the intercalated or agglomerated silicate layers of clay were believed to be responsible for the reduction in impact strength of the epoxy nanocomposites containing more than 3 wt% of clay. Zhang et al. [13] reported that the impact strength improved with the presence of clay up to 3 wt%, and beyond 3 wt% of clay content the impact strength was drastically reduced in epoxy/clay nanocomposites.
The effect of clay addition on the fracture toughness of the epoxy nanocomposites is shown in Figure 7. The fracture toughness of the neat epoxy was improved by the presence of clay up to 3 wt%. However, the presence of more than 3 wt% of clay made the epoxy nanocomposites more brittle. A trend similar to that of the impact strength (cf. Figure 6) was observed. The optimum toughness of the epoxy nanocomposites was achieved at a clay content of 3 wt%, with an improvement of 19% compared to the neat epoxy. The exfoliated structure is believed to be responsible for the improvement in fracture toughness: the individually dispersed, nanosized silicate layers may be able to resist crack propagation and thus increase the fracture toughness. At high clay contents (>3 wt%), the agglomerated clay particles may act as initial cracks and then reduce the fracture toughness. Liu et al. [19] investigated nanocomposites using an epoxy resin of the tetraglycidyl-4,4-diaminodiphenylmethane (TGDDM) type cured with DDS. They reported that the fracture toughness, based on measurements of the critical stress intensity factor (K_IC) and the critical energy release rate (G_IC), was dramatically increased by the addition of up to 4.5 wt% of clay due to better clay dispersion; however, the fracture toughness decreased with further increases in clay content. Wang et al. [6] studied the fracture toughness of nanocomposites based on DGEBA epoxy resin and a DDS curing agent. They found that the K_IC and G_IC values were improved by 77% and 190%, respectively, at 2 wt% of clay content; however, the fracture toughness was reduced with further addition of clay.
Figure 8 shows the specific abrasion of the epoxy nanocomposites as a function of clay loading. It was found that the addition of clay up to 3 wt% drastically reduced the specific abrasion of the neat epoxy, whereas further addition of clay increased the specific abrasion. The structural characteristics of the nanocomposites may account for this behavior: again, the exfoliated structure in the epoxy nanocomposites with 2 and 3 wt% of clay content reduced the specific abrasion, and the best abrasion characteristic was achieved for the epoxy nanocomposite with 3 wt% of clay. The increase in specific abrasion for the nanocomposites with more than 3 wt% of clay content was due to the agglomerated clay particles acting as stress concentration sites. SEM observation of the fracture surfaces showed that the neat epoxy exhibited a relatively smooth fracture surface, indicating very fast and straight crack propagation [14]. This is a typical fractographic feature of brittle fracture behavior, accounting for the low fracture toughness of the neat epoxy. However, it is evident that the presence of clay in the epoxy nanocomposite with 3 wt% of clay increased the roughness of the fracture surfaces (Figure 9(b)). An increase in fracture surface roughness is an indicator of a crack deflection mechanism, which increases the absorbed fracture energy by increasing the crack length during deformation [14]. This is consistent with the higher K_IC value for the nanocomposite with 3 wt% of clay compared with the neat epoxy.
In addition, the higher K_IC value may also be attributed to the stress disturbance caused by the clay particles. These clay particles act as obstacles, causing the crack to take a more tortuous path and producing a meandering crack trajectory. Liu et al. [7] reported that more than one toughening mechanism usually operates in the epoxy/clay system, including shear yielding of the matrix, crack deflection, microvoiding, and debonding between clay and epoxy.
Conclusion
Epoxy/clay nanocomposites were successfully prepared using an in situ polymerization method. The optimum tensile strength, flexural strength, impact strength, fracture toughness, and specific abrasion were obtained for the epoxy nanocomposites containing 3 wt% of clay. The tensile strength, flexural strength, impact strength, and fracture toughness were increased by 41, 20, 95, and 19%, respectively. This result was attributed to the formation of an exfoliated structure in the nanocomposite with 3 wt% of clay, as indicated by the XRD pattern.
Figure 1: SEN-3PB specimen geometry used for the fracture toughness test.
Figure 2: XRD patterns of neat epoxy and its nanocomposite with 3 and 6 wt% of clay.
Figure 3: Water uptake of neat epoxy and its nanocomposites with different clay content as a function of immersion time.
Figure 4: Effect of clay content on tensile strength of epoxy/clay nanocomposites.
Figure 5: Effect of clay content on flexural strength of epoxy/clay nanocomposites.
Figure 6: Effect of clay content on impact strength of epoxy/clay nanocomposites.
Figure 7: Effect of clay content on fracture toughness of epoxy/clay nanocomposites. | 2018-12-15T14:43:47.268Z | 2013-10-02T00:00:00.000 | {
"year": 2013,
"sha1": "ce6da0daa745b29679c77c72725f9439bb3d4abd",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijps/2013/690675.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ce6da0daa745b29679c77c72725f9439bb3d4abd",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
12410477 | pes2o/s2orc | v3-fos-license | Oral health and quality of life: an epidemiological survey of adolescents from settlement in Pontal do Paranapanema/SP, Brazil
This study aimed to verify the oral health, treatment needs, dental service accessibility, and impact of oral health on quality of life (QL) of subjects from a settlement in Pontal do Paranapanema/SP, Brazil. In this epidemiological survey, 180 adolescents aged 10 to 19 years, enrolled in the school that serves this settlement population, underwent oral examination to verify the caries index (DMFT: decayed, missing and filled teeth) and periodontal condition (CPI), and were interviewed using the World Health Organization Quality of Life (WHOQOL-Bref) and Oral Impacts on Daily Performance (OIDP) instruments to evaluate QL, and the Global School-Based Student Health Survey (GSHS) regarding dental service accessibility. The average DMFT was 5.49 (± 3.33). Overall, 37.2% of participants showed periodontal problems, mainly CPI = 1 (77.7%). Treatment needs were mainly restorations. The GSHS showed that the last dental consultation had occurred more than 1 year previously for 58.3% of participants, mostly at a public health center (78.9%). The average WHOQOL-Bref score was 87.59 (± 15.23). Social relationships were related to dental caries and health service type. The average OIDP was 6.49 (± 9.15). The prevalence of caries was high, and the observed periodontal problems were reversible. The social relationships of adolescents from the settlement were influenced by caries and type of health services.
Introduction
Recent research has examined adolescents' conceptualizations of and satisfaction with quality of life, as well as their well-being 1,2 . Well-being and health-related quality of life are measured using the physical, social relationships, psychological, and environmental domains 2 .
Oral health is an important indicator of adolescents' quality of life across all domains, but especially in the social relationships domain, owing to halitosis and concern with appearance 1,3 . Dental caries and periodontal disease are the oral health conditions that cause the most pain and embarrassment among adolescents. Rural populations have shown extremely poor oral health, additionally characterized by early tooth loss, despite public health policies and technological advances in dentistry 4 . The psychological and social impacts of oral health problems can compromise adolescents' quality of life 5 .
In Brazil, oral health is worse in rural than in urban populations, posing a major burden on the public health system 6 . In the state of São Paulo, the decayed, missing, and filled teeth (DMFT) index was found to increase three-fold between the ages of 11 and 19 years (2 vs. 6), reflecting the worsening of oral health during adolescence 4 . However, national surveys have documented a decline in the DMFT index from 6.1 in 2003 to 4.2 in 2010 among 15-19-year-olds 7,8 .
The high caries index and prevalence of periodontal disease suggest that inequality of access to dental services affects rural adolescents, who can be considered a socially excluded population owing to residence in remote areas 9 . Despite the presence of health centers in rural areas, the difficulty of transport to these centers, the lack of capable professionals, and suboptimal center infrastructure can pose barriers that negatively impact adolescents' health 9 ; a similar situation holds in rural settlements because of their geographic setting.
Rural settlements in Brazil have been created through land reform, a process involving the national government, landless people, and farmers, and marked by many violent conflicts. In anticipation of land reform, families invade land considered to be unproductive and live there without sanitary services. In this manner, settlements are created and populations organize to seek the right of access to public health services. However, they are located in remote areas and must overcome obstacles to health care access 10.
In this context, the aims of this study were to determine the oral health conditions, access to dental services, self-perceived quality of life, and impact of oral health on quality of life among Brazilian adolescents from a rural settlement.
Study design and participants
All 349 adolescents aged 10-19 years who lived in the rural settlement in Caiuá, São Paulo State, Brazil, were invited to take part in this cross-sectional epidemiological study. The São Paulo Lake Project School is the only educational institution serving this community and is itself located in the rural settlement. Considering that all of the adolescents were participants in the Family Aid national financial incentive program, which requires regular school attendance, this school was considered the best setting to find all adolescents and to conduct this research.
After approval by the Committee for Ethics in Human Research of Univ. Estadual Paulista - UNESP, School of Dentistry - Araçatuba, the study purpose and procedure were explained to adolescents' parents. Parents provided written informed consent to adolescents' participation and the study was conducted in accordance with all ethical guidelines. Adolescents who agreed to take part in the research, answered all questions, and permitted oral examination were included in the study.
Data collection was performed through oral examination and interviews.
Oral examination
Oral examinations were conducted to calculate DMFT caries index (decayed, missing and filled teeth) and a community periodontal index (CPI) for each adolescent participant.The CPI was used to classify periodontal conditions in sextants of the maxilla and mandible as healthy (0) or to indicate the presence of gingival bleeding (1), dental calculus (2), periodontal pocket of 4-5-mm depth (3), and periodontal pocket of ≥ 6 mm depth (4).
A pilot study was conducted before this research with adolescents living in another settlement, in Presidente Venceslau-SP, aiming to test the questionnaires and to train the researcher, who achieved a kappa of 0.88 for dental caries.
The oral examinations were conducted by this trained researcher, who was properly attired, using a mirror and probe to verify the caries index and periodontal condition. These examinations were performed following the World Health Organization's (WHO's) recommendations for epidemiological surveys.
Measures
The same researcher conducted the interviews of students during class periods at the school, in a reserved location to avoid embarrassment and coercion. Interviews were scheduled in advance, and when a student was absent, up to three consecutive attempts were made. Students absent at all attempts were excluded from the sample.
The validated Portuguese versions of the World Health Organization Quality of Life (Brief) questionnaire (WHOQOL-Bref) 11, Oral Impacts on Daily Performance (OIDP) index 12, and the World Health Organization's Global School-Based Student Health Survey (GSHS) 13 questionnaires were used to guide interviews. Although these instruments can be self-administered, the researchers administered them in an interview setting to ensure participants' comprehension and reduce the possibility of false answers, thereby controlling for common biases.
The WHOQOL-Bref is a 26-item instrument developed by the WHO to evaluate quality of life in the physical, psychological, social relationships, and environmental domains, with two additional questions about self-perceived quality of life and health. Responses to all items are structured by a five-point Likert scale ranging from 1 (never/nothing/very dissatisfied) to 5 (always/completely/very satisfied). Higher scores indicate better quality of life, except for items related to pain and discomfort, negative feelings, and medication dependence, which have inverted scales. Domain scores are calculated by summing responses to all items in that domain 11.
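As a rough illustration of the scoring rule just described (sum of 1-5 Likert responses per domain, with the pain/discomfort, negative-feelings, and medication-dependence items reversed), a minimal Python sketch follows; the function and variable names are illustrative assumptions, and the exact item-to-domain mapping is left to the caller because it is not spelled out in the text.

    def whoqol_domain_score(responses, domain_items, inverted_items=()):
        # responses: dict mapping item number -> Likert score in 1..5
        # inverted_items: items whose scale is reversed (score -> 6 - score)
        total = 0
        for item in domain_items:
            score = responses[item]
            total += (6 - score) if item in inverted_items else score
        return total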
The OIDP index measures the impact of oral health on quality of life. It was developed originally for children and was later adapted for use in older populations, including adolescents. The instrument has two components: the first section solicits information about adolescents' self-reported oral problems in the past 3 months using a list of 17 problems and an open "other" item, and the second component is used to evaluate the effects of the problems reported in the first section on eight daily activities of adolescents (eating, speaking, cleaning the mouth, sleeping, maintaining an emotional repertoire, smiling, doing homework, and socializing).
The impacts of oral problems on quality of life are measured according to severity (none, few, moderate, severe) and frequency (always, sometimes, never) using a Likert scale ranging from 0 (none/never) to 3. Severity scores are multiplied by frequency scores to obtain an index for each daily activity (range, 0-9). The total OIDP score (range, 0-72) is obtained by summing the totals for the eight daily activities; the overall OIDP index is calculated by multiplying this score by 100 and dividing by 72 to create a scale ranging from 0 to 100. Higher scores indicate a greater impact of oral health on quality of life 12.
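Because the OIDP computation above is fully mechanical, it can be mirrored directly in code; the sketch below assumes severity and frequency have already been coded 0-3 for each of the eight daily activities.

    def oidp_index(severity, frequency):
        # severity, frequency: length-8 lists of 0..3 scores per daily activity
        assert len(severity) == 8 and len(frequency) == 8
        per_activity = [s * f for s, f in zip(severity, frequency)]  # each 0..9
        total = sum(per_activity)                                    # 0..72
        return 100.0 * total / 72.0                                  # 0..100 scale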
The WHO developed the GSHS in 2003 and improved it in 2009 13 to evaluate health risk behaviors among adolescents according to demographic profile, with the aim of collecting data to support the planning and implementation of new programs and policies focused on adolescent populations, mainly in school environments. In this study, 27 GSHS items on oral hygiene habits and dental service accessibility were used. The GSHS yields no score but describes the behaviors of the study population.
Statistical analysis
Descriptive analysis of the DMFT and CPI indices and of all questions of the GSHS, WHOQOL-Bref and OIDP instruments was first conducted to characterize the study sample. The normality of the data distributions for the WHOQOL-Bref and OIDP was examined using the Shapiro-Wilk test. Bivariate analyses were conducted using the chi-squared, Fisher's, and Mann-Whitney tests, with calculation of 95% confidence intervals, in which the outcomes were variables related to quality of life. The independent variables were all questions on dental service accessibility and the DMFT and CPI indices. The same tests were applied in another bivariate analysis to test the influence of dental service accessibility on oral health condition (DMFT and CPI). A multivariate logistic regression model was constructed using variables showing statistical significance at 95% confidence intervals in the bivariate analyses. All analyses were performed using BioEstat 5.3 and SPSS 20.0 software.
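A minimal sketch of the normality check and nonparametric group comparison described above, written with SciPy; the variable names and the two-sided alternative are assumptions for illustration, not the authors' exact analysis script.

    from scipy import stats

    def compare_ql_scores(group_a, group_b):
        # Shapiro-Wilk normality test for each group of quality-of-life scores
        _, p_norm_a = stats.shapiro(group_a)
        _, p_norm_b = stats.shapiro(group_b)
        # Mann-Whitney U test for the (typically non-normal) score distributions
        u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
        return {"shapiro_p": (p_norm_a, p_norm_b), "U": u, "p_value": p}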
Results
Of 349 eligible participants, 32.9% moved, 7.5% withdrew from the study, 1.1% died, 4.0% refused study participation, and 2.9% did not answer all interview questions or did not allow oral examination.The final study population thus comprised 180 (51.6%) adolescents.
The mean DMFT index was 5.49 ± 3.33; 6.7% of adolescents had DMFT indices of 0. Dental caries was associated with age (p < 0.001) according to the Mann-Whitney test, but the regression model showed no linear progression (p = 0.67).
Table 1 shows adolescents' access to dental services according to caries experience and periodontal condition.
The type of health service at the last consultation with a dentist was related to the need for restoration of one tooth surface (p = 0.03). The cause for the last consultation was associated with the same treatment need according to the chi-squared test (p = 0.01).
Table 2 shows WHOQOL-Bref domain scores according to quality of life measures.
WHOQOL-Bref variables are distributed symmetrically, but OIDP indices are characterized by an asymmetrical distribution with many outliers. The Shapiro-Wilk test confirmed the asymmetrical distribution of OIDP scores in relation to the presence (p < 0.001) and absence (p = 0.04) of caries. Table 3 shows descriptive statistics for WHOQOL-Bref total and domain scores and the OIDP index; Mann-Whitney tests revealed that dental caries had a major impact on social relationships.
WHOQOL-BREF scores (average = 85.4) were higher among adolescents who sought dental treatment in the past 12 months than among those who consulted a dentist more than 1 year ago (average = 80.1).
WHOQOL-BREF social domain scores were higher among adolescents who obtained private dental services (average = 61.3) than among those who received public services (average = 54.5), and higher among those who classified the last consultation as "good" (average = 57.0) than among those who classified it as "bad" (average = 53.4). The health service type and quality of the last consultation thus influenced adolescents' social life (Table 4).
WHOQOL-BREF environmental domain scores were higher among adolescents who had received education about oral disease prevention (average = 77.4) than among those who reported receiving no health education (average = 70.16; Table 4).
Discussion
This research revealed a high caries index and treatment needs concentrated on restorative and endodontic procedures, suggesting a lack of dental care, a problem originating either in the health system or in patients' neglect of their oral health.
Preventive and educational measures can avoid the worsening of oral health, but their absence can result in the need for endodontic procedures, provisional restorations, and drug prescription in urgent dental clinics to address severe dental problems and avoid extraction 14. First molars are most commonly affected because they are the first posterior teeth to erupt, even while deciduous teeth are still present 15. It is important to highlight that the predominance of the decayed component of the caries index and of gingival bleeding in the present study suggests failures in treatment and preventive dental services for this excluded population.
The neglect of oral hygiene, consumption of a cariogenic diet, and non-regularity of consultation with a dentist make the first molars more susceptible to extraction 15. Considering that the settlement is located in a rural area, it is important to emphasize that rural populations whose adolescents present high caries indices tend to face tooth loss in adulthood 4.
In the present research the prevalence of caries was higher among adolescents who sought dental treatment because of pain than among those who presented for routine consultation. Preventive care can avoid dental caries, but it is rarely a part of public health services, which are the main source of dental care for rural and remote populations to maintain oral health 15; this reflects the situation of the studied population, who live in a rural settlement.
It is important to observe that the need for restoration of one tooth surface was greater among adolescents who attended public services than among those who consulted dentists in the private sector, and that adolescents who sought treatment because of pain perceived a greater need for restoration of one tooth surface than did those who made preventive routine dental visits.
These findings showed that adolescents attending public services tended not to receive preventive dental treatment, and that pain was the main reason for seeking care. In rural and remote communities, dental caries is a common event in children's lives in their mothers' view, showing the lack of information regarding preventive measures 16.
Adolescents perceive that a large number of decayed and untreated teeth, severe gingivitis, and the presence of dental calculus are reasons to need dental treatment 1. Unfortunately, adolescents seek dental treatment because of dental problems, instead of seeking regular and preventive consultation 17,18. The expected outcome of this attitude, which is part of rural culture, in addition to the difficulty of access to dental services and shortage of capable professionals 9, is poor oral health, with negative effects on learning and academic development 19. The prevalence of conditions causing toothache among adolescents demonstrates the need for public health policies that consider the difficulties faced by these socially excluded populations and aim to reduce caries prevalence and promote oral health maintenance through preventive, educational, and interventional measures focused on quality of life 20.
Fear may be one reason for rare treatment seeking and may also explain the high caries index, because it can prevent individuals from obtaining treatment. Adolescents' conceptualization and valuation of oral health must also be considered, such as the social representation of oral problems among them 16; those who do not value oral health tend to have the worst hygiene habits, lowest frequency of consultations, and, thus, worst oral health 15,21. Tooth loss and large and numerous carious lesions are common in patients who experience fear or high levels of anxiety during dental consultation 19.
Health education should begin in childhood because conceptualization of the meaning of health and the importance of maintaining systemic health and preventive habits persists into adulthood. Adolescents who lack health information, particularly those in rural communities, are more likely to experience severe tooth loss in adulthood with negative effects on quality of life 4,21.
Poor oral health affects individuals' social life, chewing, and tooth cleaning 21. However, despite these effects on many aspects of quality of life, individuals' satisfaction with their dental appearance worsens from adolescence to adulthood, suggesting worsening oral health conditions and lack of care 22. In the transition to adulthood, individuals acquire harmful habits such as tobacco use, alcohol consumption, and impoverishment of oral hygiene, making the oral condition more vulnerable to dental problems 22. OIDP indices were low in the present study, demonstrating that oral health impacted adolescents' quality of life only mildly. The relation found between the presence of caries and social relationships in the present study indicated this mild impact of oral health on quality of life.
The negative effects of oral health on quality of life are due mainly to gingivitis and dental calculus, which prevent adolescents from smiling because of embarrassment about their appearance and halitosis. In addition, severe untreated caries cause pain, suffering, and discomfort, preventing individuals from relaxing, making them more sentimental, and making concentration on their studies difficult; in other words, these oral health problems clearly damage adolescents' social life 1. The oral problems measured by the OIDP index cause difficulties with smiling, sleeping, eating, and tooth cleaning due to pain and discomfort 14.
Another relevant problem caused by the worsening of caries, lack of preventive care, and rarity of dental consultation among adolescents is absenteeism from school, which increases the impact of oral health on quality of life 1. This situation is worse among adolescents who live in rural communities, given the difficulty of accessing schools and the severe pain resulting from poor oral health; in addition to the effects of poor valuation of oral health, these factors combine to make school absenteeism normal in this population 1, as with the studied adolescents from the settlement. Teenagers with severe caries and toothache due to large numbers of decayed and untreated teeth tend to be absent from school more than those with low caries indices 1.
The present analysis showed no significant association between quality of life indices and variables measuring the need for dental treatment and periodontal condition, perhaps due to the small amount of discomfort caused by gingival bleeding. Tooth brushing at least twice a day and daily use of dental floss are directly related to good oral hygiene and periodontal condition 23. Parents' education level is not associated with their children's periodontitis, and this disease is usually not a reason for dental consultation 23.
Quality of life was higher among adolescents who made frequent visits to the dentist, suggesting that routine dental consultations can improve quality of life.
Infrequent dental visits and lengthy intervals before individuals seek treatment are directly related to the pronounced impacts of oral health on quality of life, high caries index, and dissatisfaction with oral health 17. Parents who have taken their children to the dentist since childhood tend to encourage their children to continue this habit in adolescence, promoting better valuation of oral health and preventive actions 24.
The studied adolescents who attended public health services presented worse quality of life than those who sought private services, demonstrating that the type of care offered to the population is important for maintaining their quality of life.
Rural communities tend to use public health services more than urban ones, likely due to the shortage of private services in rural and remote areas 17, as did the study population, who sought dental treatment in public services. Generally, a poor environment domain is related to low social and economic classes 25, but in the study population all participants belonged to the same class, which suggests that the observed differences arose from differences in dental consultations between public and private services.
Professionals find working in rural areas very difficult because of the limited social life, professional isolation, workload, dissatisfaction with health center infrastructure, difficulty of attending educational courses, and lack of information about other job opportunities 26. Thus, health care providers working in remote communities tend to demonstrate professional unpreparedness, lack of affinity with this kind of work, and lack of professional commitment to promote health, all of which contribute further to social exclusion.
The difficulty of accessing dental services, whether public or private, is not the only factor contributing to poor dental and periodontal conditions in adolescents. Education level, receipt of information about oral health, frequency and quality of tooth brushing, regularity of dental visits, and sugar ingestion are also worse in rural communities, especially those far from urban capitals 17. For these reasons, dental caries is more prevalent among adolescents from rural and remote areas than among urban adolescents 27, with decayed and untreated teeth suggesting lack of care and harmful habits that directly affect quality of life, principally in the psychological, social, and physical domains 17.
The negative influence of the environment on oral health education found for the studied adolescents from the settlement suggests failings in the preventive work of public health services.
Satisfaction with one's environment, educational level, and positive perception of parents' health seem to be associated with better oral health conditions and less psychological impact of poor oral health among adolescents. Conversely, the damage caused by worsening dental problems negatively affects quality of life and limits dental function, resulting in poor self-perceived oral health and worse self-esteem 28,29. Self-esteem is related to self-efficacy in oral care, which is better in boys than in girls; in addition, this self-efficacy seems to be worse in adolescents who live with a single parent than in those who live with both parents 30.
The impact of oral health, especially regarding decayed and missing teeth, on the psychological domain can be sufficiently large to contribute to depression 31. This situation has gained attention from researchers because oral health is directly linked to health-related quality of life. A high caries index reflects the urgent need for immediate care to avoid worsening of the condition and irreparable loss 30, especially in socially excluded populations, which typically have the worst oral health conditions and show the greatest impacts on quality of life 2.
There are still few works investigating the oral health and quality of life of populations from rural settlements, especially adolescents. New studies focused on them would clarify peculiarities such as their lifestyle and habits, helping health professionals to understand them and supporting new public health policies that would ease dental service accessibility and, consequently, contribute to a better quality of life for these adolescents.
It was concluded that the caries index was high among adolescents from the rural settlement in Pontal do Paranapanema (SP). The most evident periodontal change was gingival bleeding, which is a reversible condition. Dental treatment needs included restoration, instruction in oral hygiene, and prophylaxis. Adolescents from the settlement continue to face inequity in access to dental services; for the majority of the study population, the last dental consultation occurred more than 12 months previously, in the public sector, due to toothache. Although participants reported good quality of life and a low impact of oral health, the social relationships of these socially excluded adolescents were influenced by the presence of dental caries and the type and quality of the last consultation.
Collaborations
MM Leão contributed to design of study, acquisition of data, analysis and interpretation of data, and in drafting the manuscript.CAS Garbin contributed to design and acquisition of data, mainly concerning ethical aspects; was involved in drafting the manuscript, revising it critically for important intellectual content and contributed to analysis and interpretation of data.SAS Moimaz contributed to design and acquisition of data, mainly concerning epidemiological aspects; contributed to analysis and interpretation of data and gave the final approval of the version to be published.TAS Rovida contributed to conception and design of research, in drafting the manuscript, analysis and interpretation of data, and gave final approval of version to be published.All authors read and approved the final manuscript.
Table 1 .
Dental service accessibility according to caries experience and periodontal condition among adolescents from settlement.
Table 2 .
Scores according to the quality of life measurement scales.
Table 3 .
WHOQOL-BREF domain scores and OIDP index according to DMFT index.
Table 4 .
Associations of quality of life with dental service accessibility. | 2017-07-14T02:24:15.229Z | 2015-11-01T00:00:00.000 | {
"year": 2015,
"sha1": "c32f60d6b6a31b900f04da7bcb2d4a885e6a916c",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/csc/a/M4HLGqHF66X474T4b6npn7z/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c32f60d6b6a31b900f04da7bcb2d4a885e6a916c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256697635 | pes2o/s2orc | v3-fos-license | Joint Acoustic Echo Cancellation and Speech Dereverberation Using Kalman filters
This paper proposes a joint acoustic echo cancellation (AEC) and speech dereverberation (DR) algorithm in the short-time Fourier transform domain. The reverberant microphone signals are described using an auto-regressive (AR) model. The AR coefficients and the loudspeaker-to-microphone acoustic transfer functions (ATFs) are considered time-varying and are modeled simultaneously using a first-order Markov process. This leads to a solution where these parameters can be optimally estimated using Kalman filters. It is shown that the proposed algorithm outperforms vanilla solutions that solve AEC and DR sequentially and one state-of-the-art joint DRAEC algorithm based on semi-blind source separation, in terms of both speech quality and echo reduction performance.
INTRODUCTION
Acoustic echo cancellation (AEC) and speech dereverberation (DR) play important roles in speech processing applications like real-time communications and automatic speech recognition. AEC aims to remove echoes introduced by the playback signals. DR aims to reduce reverberation and restore direct path sounds, optionally retaining early reflections. Adaptive filters are widely used in the two tasks, such as the normalized least mean square (NLMS) filter [1], the recursive least squares (RLS) filter, and the weighted prediction error (WPE) algorithm [2]. Kalman filters have also been separately applied in AEC [3,4,5,6] and DR [7,8,9,10].
From the viewpoint of adaptive filtering, AEC and DR share many similarities in their solving process. Both tasks are related to estimation of room impulse responses (RIRs), one relating a source position to the microphone position and the other relating a loudspeaker position to the microphone position. Joint algorithms that deal with AEC and DR at the same time have shown performance gains over separate partial algorithms [11,12,13,14]. Takeda et al. [11] achieve blind dereverberation and echo cancellation by applying a frequency domain independent component analysis (ICA) model. It is assumed that the direct sound frame is independent of late reverberation and playback signals, which is approximately true under multiple input/output inverse filtering theorem (MINT) conditions. Togami and Kawaguchi [13] combine acoustic echo reduction, speech dereverberation and noise reduction by assuming a time-varying local Gaussian model of the microphone input signal. The linear filters are optimized under a unified likelihood function that directly reflects the eventual speech enhancement performance. Cohen et al. [15] propose a Kalman-EM method based on a moving average model for speech dereverberation, and the Kalman filter is adopted to estimate the clean signal after echo cancellation in the E-step.
The reverberant signals are often described by an autoregressive (AR) model in the short time Fourier transform (STFT) domain. Our previous works [16,17] reformulate AEC and DR, respectively, from the semi-blind source separation perspective. An RLS based joint DRAEC algorithm is further derived in [18]. Considering that RLS can be seen as a special case of the Kalman filter [19], we further propose in this paper a joint DRAEC algorithm using Kalman filters. The AR model coefficients and the loudspeaker-to-microphone acoustic transfer functions (ATFs) are considered time-varying and are modeled simultaneously using a first-order Markov process. By minimizing a unified mean squared error loss function, a novel joint DRAEC algorithm is derived. The joint algorithm not only outperforms its RLS counterpart but also outperforms cascaded alternatives that use Kalman based AEC and Kalman based DR sequentially.
SIGNAL MODEL
We consider a multi-channel convolutive mixture in the short-time Fourier transform (STFT) domain. A sensor array of M microphones captures signals from source S(t, f) and signal X(t, f) played by a loudspeaker, with t and f the time index and the frequency index, respectively. The mth microphone signal in the fth band is given by

Y_m(t,f) = \sum_l A_{m,l}(f) S(t-l,f) + \sum_l B_{m,l}(f) X(t-l,f) + V_m(t,f),   (1)

where A_{m,l} denotes the source-to-microphone transfer function, B_{m,l} denotes the loudspeaker-to-microphone transfer function, and V_m denotes the background noise. The signal can be approximated by an auto-regressive model [13] as

Y_m(t,f) = S_m(t,f) + \sum_{l=0}^{L_X-1} B_{m,l}(f) X(t-l,f) + \sum_{n=1}^{M} \sum_{l=0}^{L_Y-1} C_{m,n,l}(f) Y_n(t-\Delta-l,f) + \tilde{V}(t,f),   (2)

where S_m is the direct path sound and early reflections, \Delta marks the boundary between early reflections and late reverberation, C_{m,n,l} denotes the multichannel auto-regressive coefficients, and \tilde{V} contains the modeling error. The first term on the right-hand side of (2) is defined as the target to be recovered:

S_m(t,f) = Y_m(t,f) - w^H(f) y(t,f),   (3)

where

w(f) = [B_{m,0}(f), ..., B_{m,L_X-1}(f), C_{m,1,0}(f), ..., C_{m,M,L_Y-1}(f)]^T   (4)

is a unified vector consisting of the AEC related filters of length L_X and the DR related filters of length M L_Y, and

y(t,f) = [X(t,f), ..., X(t-L_X+1,f), Y_1(t-\Delta,f), ..., Y_M(t-\Delta-L_Y+1,f)]^T   (5)

is a concatenation of the playback signals and the time-delayed microphone observations. (·)^* denotes complex conjugate, (·)^H denotes Hermitian transpose and (·)^T denotes transpose. An optimal estimate of the filter coefficients (4) can be obtained by minimizing the squared error loss function

J(t,f) = E{ |Y(t,f) - \hat{w}^H(f) y(t,f)|^2 },   (6)

where \hat{(·)} denotes variable estimate. We assume the following algorithm is performed for each microphone independently, and the subscript m is omitted for brevity.
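To make the stacking in (4)-(5) concrete, here is a minimal NumPy sketch of how the unified regressor could be assembled for one frequency bin and how the target in (3) would be computed; the array shapes and helper names are illustrative assumptions, not part of the paper.

    import numpy as np

    def stack_regressor(x_taps, y_delayed):
        # x_taps: playback taps [X(t), ..., X(t-Lx+1)], complex, shape (Lx,)
        # y_delayed: delayed microphone frames {Y_n(t-Delta-l)}, shape (M, Ly)
        return np.concatenate([x_taps, y_delayed.ravel()])

    def target_estimate(Y_m, w, y):
        # S_m(t,f) = Y_m(t,f) - w^H y(t,f); np.vdot conjugates its first argument
        return Y_m - np.vdot(w, y)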
THE PROPOSED ALGORITHM
Both the source-to-microphone and loudspeaker-to-microphone transfer functions are time-varying in real acoustic scenarios, therefore the filter states {B_l, C_{n,l}} are also considered to be time-varying. This is described by a first-order Markov process

w(t) = A(t) w(t-1) + u(t),   (7)

where A(t) is the state transition parameter and u(t) ~ N(0, \Phi_u(t)) describes the process noise that follows a zero-mean complex Gaussian distribution.
Kalman based DRAEC
Given the above formulation, the well-known Kalman filter [20] is applied to estimate w(t). We denote \Phi as the state vector error covariance matrix,

\Phi(t) = E{ (w(t) - \hat{w}(t)) (w(t) - \hat{w}(t))^H }.   (8)

The filter update equations are given by

e(t) = Y(t) - \hat{w}^H(t|t-1) y(t),   (9)
k(t) = \Phi(t|t-1) y(t) [ y^H(t) \Phi(t|t-1) y(t) + φ_S(t) ]^{-1},   (10)
\hat{w}(t) = \hat{w}(t|t-1) + k(t) e^*(t),   (11)
\Phi(t) = [ I - k(t) y^H(t) ] \Phi(t|t-1),   (12)

where k(t) is the Kalman gain, φ_S(t) = E{S(t)S^*(t)} denotes the power spectral density of the desired source and I is a unit matrix of proper size. The next time step predict equations are given by

\hat{w}(t+1|t) = A(t+1) \hat{w}(t),   (13)
\Phi(t+1|t) = A^2(t+1) \Phi(t) + \Phi_u(t+1).   (14)

The target source is eventually recovered by

\hat{S}(t) = Y(t) - \hat{w}^H(t) y(t).   (15)
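A compact per-frequency realization of the update and predict steps might look as follows in NumPy; the conjugation convention and the scalar transition parameter A follow the reconstruction given here and should be treated as assumptions rather than the authors' exact implementation.

    import numpy as np

    def kalman_draec_step(w, P, y, Y, phi_S, phi_u, A=1.0):
        # Predict: w(t|t-1) = A w(t-1), Phi(t|t-1) = A^2 Phi(t-1) + phi_u I
        w_pred = A * w
        P_pred = (A * A) * P + phi_u * np.eye(len(w))
        # Innovation: prior estimate of the target S(t)
        e = Y - np.vdot(w_pred, y)
        # Kalman gain: Phi y / (y^H Phi y + phi_S)
        Py = P_pred @ y
        k = Py / (np.vdot(y, Py).real + phi_S)
        # Update state and error covariance
        w_new = w_pred + k * np.conj(e)
        P_new = P_pred - np.outer(k, np.conj(y)) @ P_pred
        # Posterior target estimate
        S_hat = Y - np.vdot(w_new, y)
        return w_new, P_new, S_hat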
Cascaded solutions
A cascaded solution performs AEC first and DR afterwards: the AEC stage estimates an echo-free signal

\hat{S}_{AEC}(t) = Y(t) - \hat{w}_{AEC}^H x(t),   (16)

and the DR stage then recovers the target from it,

\hat{S}(t) = \hat{S}_{AEC}(t) - \hat{\tilde{w}}_{DR}^H \tilde{s}(t),   (17)

where x(t) is a vector of the playback signals, and \tilde{s}(t) is a vector of the time delayed signals {\hat{S}_m(t-\Delta-l)} after AEC. Note that the AR coefficients \tilde{w}_{DR} are different from w_{DR} as defined in (4), because the involved signals are different. The DR filter is thus susceptible to the performance of the AEC filter. The loss function in this case is given by

\tilde{J}(t) = E{ |\hat{S}_{AEC}(t) - \hat{\tilde{w}}_{DR}^H \tilde{s}(t)|^2 },   (18)

and it can be optimized by performing Kalman filter based AEC and Kalman filter based DR sequentially.
Parameter estimation
The Kalman filter requires suitable estimators for \Phi_u(t) and φ_S(t). Similarly as in [8], we use \Phi_u(t) = φ_u(t) I, assuming the elements in w(t) are uncorrelated and identically distributed. The variance parameter is estimated by the change of filter coefficients over time,

φ_u(t) = (1/L) ||\hat{w}(t-1) - \hat{w}(t-2)||^2 + η,   (19)

where L denotes the filter length and η is a small positive number to retain the tracking ability when the acoustic environment changes. A maximum likelihood estimation of φ_S is given by the instantaneous power |\hat{S}(t)|^2, smoothed recursively as

φ_S(t) = α φ_S(t-1) + (1-α) |\hat{S}(t)|^2,   (20)

where α is a recursive smoothing factor. For initialization, we use w(0) = 0 and \Phi(0) = I.
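The two estimators can be realized in a few lines; the specific forms below (coefficient-change variance for phi_u and recursive smoothing for phi_S) follow the reconstruction above and are a sketch, not a verified reproduction of the authors' code.

    import numpy as np

    def update_phi_u(w_prev1, w_prev2, eta=1e-4):
        # Process-noise variance from the change of filter coefficients over time
        L = len(w_prev1)
        return float(np.sum(np.abs(w_prev1 - w_prev2) ** 2)) / L + eta

    def update_phi_S(phi_S_prev, S_hat, alpha=0.8):
        # Recursively smoothed estimate of the desired-source PSD
        return alpha * phi_S_prev + (1.0 - alpha) * abs(S_hat) ** 2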
EXPERIMENTS
The experiments are conducted in echoic, echoic & reverberant, and echoic & reverberant & noisy environments. Reverberation-only conditions are not considered because the source models shown in Fig. 1 would degenerate to the same one.
The sampling frequency is 16 kHz. We use a frame size of 32 ms, 50% overlap between frames and a STFT size of 1024 points. The AEC filter length is set to L_X = 5 and the DR filter length is L_Y = 5 with delay \Delta = 2. We use A = 1, η = 1e-4 and α = 0.8 in the Kalman filters. The experimental setup mainly follows that in [18]. The proposed algorithm is compared with its RLS based variants, the implementations of which are open sourced 1.
Echo
We first consider the task of single talk echo cancellation. The echo signals are recorded using a smart speaker with M = 2 microphones and one loudspeaker. In Fig. 2, echo return loss enhancement (ERLE), defined as the ratio of the input signal power to the output signal power, is investigated. Echo path change is simulated by concatenating two different test files. Kalman-DRAEC achieves the highest steady state performance of 31.15 dB ERLE, surpassing the second-best Kalman-AEC-DR by 2.30 dB and all the RLS variants. The joint DRAEC algorithms outperform the cascaded alternatives, DR-AEC and AEC-DR. One reason is that the microphone signal contains an echoic copy of the playback sounds, and filtering on microphone signals helps echo reduction, especially when nonlinear echo exists.
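ERLE as defined here (ratio of input signal power to output signal power) reduces to a few lines of code; the epsilon guard is an added numerical-safety assumption.

    import numpy as np

    def erle_db(mic_signal, processed_signal, eps=1e-12):
        # 10 log10(P_in / P_out); higher values mean more echo reduction
        p_in = np.mean(np.abs(mic_signal) ** 2)
        p_out = np.mean(np.abs(processed_signal) ** 2)
        return 10.0 * np.log10((p_in + eps) / (p_out + eps))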
Double talk utterances are simulated by adding clean speech to the echo signals at signal-to-echo ratios (SER) of 0 dB, -10 dB and -20 dB. The perceptual evaluation of speech quality (PESQ) scores are evaluated and reported in the first category in Table 1. Kalman-DRAEC achieves the overall highest scores at SER = -10 dB and SER = -20 dB, and scores comparable to Kalman-AEC-DR (1.54 vs 1.58) at SER = 0 dB.
Echo & Reverb
Taking reverberation into account, clean speech is first convolved with room impulse responses before being added to the echo signals. The impulse responses are generated in random-sized rooms using the image method [21]. The direct sound with early reverberation (50 ms) is used as reference. Double talk PESQ scores are reported in the right parts of Table 1. The trend is the same as in the echoic environments. Kalman filter based algorithms perform overall better than the RLS based variants. Performing DR before AEC is not recommended because the source model as in Fig. 2(b) mixes up the source signal and the echo signal. High reverberation (0.6 s) and a high echo level (SER = -20 dB) are challenging for all the algorithms.
Echo & Reverb & Noise
The proposed algorithm is also evaluated in complex scenarios where interference and noise coexist. Interfering signals are added at a signal-to-interference ratio (SIR) of 0 dB. Signal-to-distortion ratio (SDR) [22] and a non-intrusive metric, namely the microphone signal to interference-plus-echo ratio (SIER), are reported in Table 2 and Table 3, respectively. SDR measures the overall quality of the processed speech. SIER measures the non-target reduction performance. Kalman-DRAEC is advantageous in SIER and scores highest in all the test cases, nevertheless at a cost of more speech distortion, as shown in the SDR scores. Based on the previous results, the advantage of DRAEC mainly comes from introducing more echo reduction.
CONCLUSION
This paper introduces a joint DRAEC algorithm using Kalman filters that performs echo cancellation and speech dereverberation at the same time. The joint algorithm is derived from a unified mean squared error loss function and outperforms cascaded DR-AEC and AEC-DR alternatives. The Kalman based algorithms also outperform their RLS based variants because the acoustic environment change is explicitly modeled by a first-order Markov model. | 2023-02-10T06:42:35.275Z | 2023-02-09T00:00:00.000 | {
"year": 2023,
"sha1": "eff7dddd6b86306203bf0659a83626b2dd354cdb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eff7dddd6b86306203bf0659a83626b2dd354cdb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
202148560 | pes2o/s2orc | v3-fos-license | SURVEY ON SOLID WASTE MANAGEMENT IN PUBLIC HOSPITALS AND PRIVATE HOSPITALS IN THE CITY OF TANGIER.
In Morocco, there is little information available concerning the production, composition, handling, and disposal of hospital waste. The purpose of this study is to investigate hospital waste management in 4 public hospitals, 6 private for-profit hospitals and a private non-profit organization in the city of Tangier. The results showed that the per-bed generation rate of hospital waste in Tangier is about 2.12 kg per bed per day in the public hospitals and up to 3.58 kg on average in private hospitals. Solid waste of public hospitals consists of 74.9% household and similar waste and 25.1% infectious waste. In private hospitals, household and similar waste is 69.8% and infectious waste is 30.2% of the total waste produced. This work shows that, for most health facilities studied, general waste has been mixed with infectious waste.
The environmental risks caused by poor management of waste are considered to be one of the biggest problems in developing countries. As the volume and complexity of health care waste increase, the risk of transmission of the disease during the transport and disposal is also growing.
Today, it is recognized that specific categories of Medical Waste (MW) are potentially dangerous. Of all waste generated in health centers, 75 to 90% is non-hazardous general waste, while 10 to 25% is infectious and hazardous and can pose many health risks.
Although the proportion of infectious and hazardous waste is relatively low, poor management of this waste can cause contamination of a large volume of general waste [5].
Several studies have shown that the highest risk of transmission of disease in workers in contact with hazardous waste is associated with AIDS, hepatitis B and C and tuberculosis [3].
In developing countries, the importance given to HW is low [14], and in many countries, the transport and disposal of hazardous waste and MW remain associated with HSW [6].
In most Moroccan cities, Household and Similar Waste (HSW) and Health Care Waste (HCW) are collected together for disposal in landfills, waste dumps or poorly designed incinerators, creating operational problems.
Medical Waste Management (MWM) planning is essential to avoid adverse effects on human health and the environment. For a successful implementation of the MW management plan, the availability of sufficient and up-to-date information on the quantity and composition of waste produced is crucial [3].
Over the years, several waste management systems, procedures, and methods have been reported for proper handling and disposal of HW, including landfilling [8], incineration [15], autoclaving and recycling [10].
In general, there is not just one solution to HW management problems. In most cases, several approaches are combined because each practice has its strengths and weaknesses [12].
In addition, WHO has put in place various guidelines for safe, effective and environmentally sound methods for the handling and disposal of HW.
The objective of this study is to evaluate solid waste management practices in Tangier by studying eleven health care facilities distributed between public and private hospitals. This will determine the quantity, composition, and per-person generation of health care waste, in order to achieve better waste management and reduce the risks of hazardous waste.
Study area
The city of Tangier is located in the extreme north-west of Morocco. Its area is estimated at 863 km² and its population is 1,065,601 inhabitants according to the 2014 census. It is considered the capital of the Tangier-Tetouan-Al-Hoceima region. This field of study includes 11 health care institutions (4 public and 7 private hospitals).
Methodology:-
Before starting our investigation into HWM in the city's health institutions, we first examined existing studies of the city. We found a lack of accurate data, apart from general national estimates. In an attempt to identify the reality of MWM, we had to look more closely at this issue and developed a specific protocol.
Knowledge of the internal management mode of each health facility later allowed us to make a diagnosis of the issue. Thanks to these data, the problem can be viewed in a general way and different solutions can be proposed. The study method has two steps: 1. Investigation by survey 2. Collection of data from health establishment records
Data collection
Sampling
In our study, we considered that the main sources of MW are public and private hospitals, which are the largest producers of this kind of waste.
Depending on the availability of facilities, the sample was restricted to 11 sources: 4 public hospitals, 6 private for-profit hospitals and 1 private non-profit organization.
In order to gather the maximum data, we needed an overview of local HWM, so we chose to proceed by participatory surveys. We therefore put in place one survey for qualitative data and another for quantitative data.
Investigation by survey
The survey we developed deals with different aspects of HWM: waste production, storage, disposal, treatment, staffing, and training. We also incorporated open-ended questions to get personal opinions on the current situation.
This survey was addressed to managers of health facilities and/or hygiene technicians where these exist. In order to optimize the response rate, we chose to go to each of the concerned institutions, conduct a semi-directed interview and fill in the survey form.
Had we sent the survey to managers and let them answer it alone, the answers would probably have been less specific and less faithful to reality.
All the interviews we conducted were accompanied by a visit to the premises of the concerned establishment. We were able to compare the given answers with reality and discuss them with the nursing staff. The analysis of these answers gives an idea of the quality and quantity aspects of MWM.
Results and discussion:-
Hospital waste management
The answers obtained at the end of the questionnaire survey were treated by themes, distinguishing the situation of the different types of establishments. The percentages shown in the explanations represent the number of responses obtained.
Quantities produced
The amount of HW produced by the various health facilities in the city of Tangier is illustrated in Figure 2. We find that this quantity varies from one institution to another. By analyzing the histogram of weighings, it is found that the quantity of HW produced varies with the bed capacity of the establishment: the larger the capacity, the higher the quantity of waste produced.
However, from Figure 3, we find that the amount of HSW varies between 2.18 and 0.07 kg/bed/day with an average of 0.89 kg/bed/day, while the amount of HCW varies between 1.29 and 0.03 kg/bed/day with an average of 0.37 kg/bed/day. We also observe three orders of dominance: 1. The quantity of HSW is much higher than that of HCW (case of the Val Fleuri Hospital and the Duc de Tovar Hospital) 2. The quantity of HSW is relatively higher than that of HCW (case of the majority of health establishments) 3. The quantity of HSW is similar to that of HCW (case of AL KORTOBI hospital). This distribution could be due to the sorting process in these establishments. Weekly weighing of hazardous waste is carried out in 81.25% of the studied establishments by the staff of the company ATHISA-Morocco, which is specialized in the collection and treatment of HW. The solutions to be considered for the treatment of MW must, therefore, take this production into account.
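The per-bed generation rates quoted above (kg/bed/day) reduce to a simple ratio of total weighed mass to bed-days; the sketch below treats the inputs as hypothetical records from a weighing campaign.

    def generation_rate(total_kg, beds, days):
        # kg of waste per bed per day over a weighing campaign
        return total_kg / (beds * days)

    # e.g., a hypothetical 500 kg of HSW from an 80-bed hospital over 7 days:
    # generation_rate(500, 80, 7) -> about 0.89 kg/bed/day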
One of our visits to one of the public hospitals of Tangier coincided with the day of waste collection by the ATHISA-Morocco staff. We took advantage of this opportunity to attend the weighing, which was not very precise. The capacity of the scale did not exceed 50 kg, whereas the containers weighed much more, especially those intended for the conservation of placentas in freezers until the day of collection. Faced with such a problem, the weighing personnel simply estimate the amount of waste.
Waste assimilated to household waste is collected daily by the services of the municipality or by companies to which this service has been delegated, and is put directly in the landfill without any sorting.
Waste sorting
The sorting of hazardous and non-hazardous waste is the first step towards improving MWM. This task makes it possible to envisage suitable treatment thereafter. During the visits to the various services of the establishments studied, MW was mixed with household waste, with the exception of two or three containers in each establishment. As for sharp objects, they are put directly into tamper-proof yellow containers reserved for these objects after recapping the needle bevels.
The interviews with professionals show that non-respect of sorting is sometimes due to a lack of materials and equipment (absence of red bags, occasional stock-outs of black bags), compounded by some staff's ignorance of the procedure and behavior that is difficult to change. However, this sorting is necessary for any clean waste management project.
Sorting performing establishments
Regarding the sorting between HCW and HSW, we find a significant difference between public and private institutions.
In fact, 84% of private establishments do not sort, mixing indifferently hazardous and non-hazardous waste in bins which are then sent to the public dump through municipal collection. Only the sorting of Sharp Waste (SW) is done, in almost 70% of cases. However, although collected separately, this SW is not subject to any specific treatment. It should be noted that sharp waste can be immobilized or encapsulated in collection containers at the level of medical services: once filled to capacity, the containers are filled with plaster and then transported to the landfill, according to the testimony of a nurse from a private hospital. Regarding the public sector, all institutions have implemented sorting.
When we look at the quality of sorting done, we see a profound inefficiency. Some officials justify these deficiencies by the immobility of mentalities. However, by observing different services and treatment rooms, we realized that the problem comes from the nursing staff. Indeed, all waste produced in the room is thrown into the red bag regardless of their harmfulness. There is a strong presence of packaging, paper or leftover food in the middle of MW.
Sorting by nursing staff:
When the sorting is practiced in an establishment, it is generally the nursingstaffs who execute it. It is also the case that the maintenance staff takes in charge of waste sorting but this is a very rare case (9% of establishments visited). The sorting of waste at the source is more efficient than waste sorting in another step. In the very rare cases where a control of the sort exists, it is the nurse's majors who assure it at the level of each service. These controls are never strict and do not result in penalties. They are only intended to identify black spots in sorting.
In terms of SW, the materials used are special yellow containers instead of cans or recovery bottles that have been seen in one of the private hospitals. However, SW must theoretically be fully collected in special, single-use containers, which leaves us to consider for the future large needs in special containers. This type of equipment is lacking in the private sector.Some private institutions have shown us stocks of this type of waste dating back several years.
On the other hand, ATHISA provides this material to public institutions. The quantities are sufficient and are not reused.
Storage and disposal of waste
The issues of storage and disposal of waste are extremely linked. In fact, storage times are a direct consequence of the frequency of collecting waste. For this reason, these elements will be treated together.
Staff in charge of waste storage
All health facilities have designated an employee who is responsible for waste storage. In 87.5% of the care establishments, maintenance staff takes care of it.
ATHISA agents institute 3% of cases, while the remaining 9.5% are cleaners who evacuate waste. Finally, they provide this function in all private hospitals.
Storage equipment
All the sanitary facilities visited, public and private ones, have a special room for storing waste. This room makes it possible to store the HCW and the HSW separately.
As for the private establishments that refused to welcome us, and according to the testimony of the neighborhood, they empty their waste directly into the HSW containers shared with the entire population of the neighborhood where they are located, despite the risks that may cause.
For the intermediate storage, all the studied establishments often use the corridors and the toilets, situation presenting a potential risk for people frequenting these places.
Evacuation frequency
All institutions surveyed, public and private ones evacuate their daily household and assimilated as well as MW already treated in situ. It is the municipal collection vehicle that takes care of this collection. In the case of external treatment of MW, the evacuation is done by ATHISA agents every week.
627
Treatment of waste from care activities All the establishments studied treat their risk waste in either situ or elsewhere.
Integral waste treatment "in situ"
Only one public institution has a sterilizer mill that processes about 400 kg of HSW every week.
The sterilizer crusher can treat all the waste produced by health establishments with the exception of blood products and placentas. Indeed, the coagulation of this waste with the action of heat causes a dysfunction of the blades of the mill and slows down, or even prevents the action thereof. It is therefore understood that establishments that will treat their HCW in situ will have to perform a very strict sorting of blood products so as not to hinder the action of the mill. As a result, they are sent elsewhere for specialized treatment.
Treatment of Health-care waste activities outside the institution All establishments studied, which do not have a sterilizer grinder, have their HCW treated outside. However, there are solutions for such treatment. Thus, these establishments eliminate all their waste in TETOUAN city by ATHISA, a company specialized in the treating MW. The technology used by ATHISA is the autoclaving which is based on sterilizing waste by destroying the microorganisms for a period of time by acting on a rise in temperature and pressure. These methods are those of the highest value of installation (3600 T/year).
Personnel and training
Internal management of waste After our investigation, only the public hospitals are equipped with a waste Manager, while we note his absence in all private hospitals under investigation.
However, it appears that, apart from hygiene technicians of Mohammed V hospital and CNSS polyclinic, none of the employees in question is trained specifically in waste management. Yet it is this employee responsible for the sector property from all the waste. So it must be able to set up and control the sorting, identification of equipment needed, to ensure the good knowledge of the waste management of the entire nursing staff and anything else conducive to proper management of waste.
In addition, this person, as well as any other employee having prolonged contact with the HCW, must have a reinforced medical follow-up. At present, this element is largely neglected. Indeed, while the follow-up of the nursing staff is generally satisfactory, the investigated staff is very largely insufficient or completely non-existent. It is, therefore, a question of establishing for this staff a frequent, regular and compulsory medical check-up.
Training Nursing staff in waste sorting
The results of the survey show that training the staff in waste sorting is substantially equivalent in both the public and private sectors. Thus, all the institutions declare having trained their staff the sorting protocol. However, by about interviewing the various agents and caregivers, we found that some of them did not receive any training, as is the case with the Med VI hospital officer whose main job is security.
628
We find that only 3% of the surveyed institutions provided ongoing training to their employees. In addition, it appears that a strong training need is felt by all interviewed actors who wish to set up a regular training program in both public and private sectors.
A particular attention should be paid to this training issue, because of its essential role in the proper hospital waste management. As part of the implementation of a well HCW management in Tangier city, it will be necessary to develop a comprehensive training program and ensure the staff good practices.
General questions Deficiencies in waste management
In order to identify the actual needs of health actors regarding MW, the following question was asked: "In your opinion, what are the gaps in the current MWM process?".Therefore, several recurring problems can be highlighted as shown in the figure below. The main gap is currently observed, it concerns sorting risking and domestic waste as well as its establishment. Many private institutions are not performing waste sorting. Moreover, when sorting is performed, the quality is poor. No suitable treatment can work properly when these two types of waste are mixed. A substantive effort must, therefore, be made at all the city, in order to put in all places and establishments an efficient sorting.
Conclusion and recommendations:-
The diagnosis and inventory of waste management of health activities in Tangier city enabled us, through the analysis of available information in documents and sources, to put the finger on the need to set up a more rigorous management plan. We contested that: 1. There are no very reliable studies on the amount of HCW produced by public and private health care facilities in the city, which leads to a risk of overestimating quantities and oversizing treatment capacities. | 2019-09-10T09:10:04.107Z | 2019-07-31T00:00:00.000 | {
"year": 2019,
"sha1": "12815780f6fec1755e545d4a6f693331565971f4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21474/ijar01/9402",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bce134eaa464d2710ef592b0d98266c89124aac5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Business"
]
} |
72543390 | pes2o/s2orc | v3-fos-license | Antifeedant, insecticidal and growth inhibitory activities of selected plant oils on black cutworm, Agrotis ipsilon (Hufnagel) (Lepidoptera: Noctuidae)
The black cutworm, Agrotis ipsilon (Hufnagel) (Lepidoptera: Noctuidae), is a worldwide cosmopolitan pest of over 30 important crops, such as beans, broccoli, cabbage, carrot, spinach, egg plant, lettuce, potato, tomato, turnip, as well as many other plants [1]. However, larvae will feed above ground until about the fourth instar. Larvae can consume over 400 sq cm of foliage during their development, but over 80% occurs during the terminal instar, and about 10% in the instar immediately preceding the last. Thus, little foliage loss occurs during the early stages of development. Once the fourth instar is attained, larvae can do considerable damage by severing young plants, and a larva may cut several plants in a single night [2,3]. A. ipsilon has developed resistance in recent years to some of the conventional insecticides. Several attempts to combat the pest species on different crops using synthetic chemical pesticides culminated in problems like insecticide resistance, pest resurgence, outbreaks of secondary pests and environmental pollution. Keeping in view the economic importance of the insect pest and the hill crops of the Nilgiris, laboratory studies were carried out to assess the effectiveness of the plant oils in controlling the black cutworm on cruciferous vegetable crops. Plant extracts including essential oils have great potential for pest management, which we review in light of recent literature. This work will complement previous reports on the biological and antimicrobial activities of essential oils as well as plant allelochemicals and their applications [4,5]. So, a diversified use of essential oils through the development of their use in the pest management sector could be of both economic and ecological benefit.
Introduction
The black cutworm, Agrotis ipsilon (Hufnagel) (Lepidoptera: Noctuidae), is a worldwide cosmopolitan pest of over 30 important crops, such as beans, broccoli, cabbage, carrot, spinach, egg plant, lettuce, potato, tomato, turnip, as well as many other plants [1]. However, larvae will feed above ground until about the fourth instar. Larvae can consume over 400 sq cm of foliage during their development, but over 80% occurs during the terminal instar, and about 10% in the instar immediately preceding the last. Thus, little foliage loss occurs during the early stages of development. Once the fourth instar is attained, larvae can do considerable damage by severing young plants, and a larva may cut several plants in a single night [2,3].
A. ipsilon has developed resistance in recent years to some of the conventional insecticides. Several attempts to combat the pest species on different crops using synthetic chemical pesticides culminated in problems like insecticide resistance, pest resurgence, outbreaks of secondary pests and environmental pollution. Keeping in view the economic importance of the insect pest and the hill crops of the Nilgiris, laboratory studies were carried out to assess the effectiveness of the plant oils in controlling the black cutworm on cruciferous vegetable crops. Plant extracts including essential oils have great potential for pest management, which we review in light of recent literature. This work will complement previous reports on the biological and antimicrobial activities of essential oils as well as plant allelochemicals and their applications [4,5]. So, a diversified use of essential oils through the development of their use in the pest management sector could be of both economic and ecological benefit.
Keywords: eucalyptus oil; gaultheria oil; antifeedant; Agrotis ipsilon; deformities.
Objective: To evaluate the antifeedant, insecticidal and insect growth inhibitory activities of eucalyptus oil (Eucalyptus globulus) and gaultheria oil (Gaultheria procumbens L.) against the black cutworm, Agrotis ipsilon. Methods: Antifeedant, insecticidal and growth inhibitory activities of eucalyptus oil and gaultheria oil were tested against the black cutworm, A. ipsilon. Results: Significant antifeedant activity was found in eucalyptus oil (96.24%), whereas the highest insecticidal activity was noticed in gaultheria oil (86.92%). Percentages of deformities were highest in gaultheria oil treated larvae, and the percentage of adult emergence was also reduced by gaultheria oil. Conclusions: These plant oils have the potential to serve as an alternative, eco-friendly control of the insect pest.
Materials and Methods
The plant oils were sourced from the Nilgiris region, Tamil Nadu, India, and the collected oils were used for bioassays against larvae of Agrotis ipsilon.
Rearing of Black cutworm, Agrotis ipsilon
The larvae were collected from a cabbage field at Kodappumund, Udhagamandalam, Tamil Nadu, India.
Larvae were reared under laboratory conditions at the Department of Zoology and Wildlife Biology, Government Arts College, Udhagamandalam, Tamil Nadu, India. These laboratory-reared larvae were used for the bioassays, and cultures were maintained throughout the study period.
Antifeedant Activity
Antifeedant activity of the plant oils was studied using the leaf disc no-choice method [6]. A stock concentration of each plant oil (2%) was prepared by mixing with dechlorinated water. Fresh potato leaf discs of 3 cm diameter were punched using a cork borer and dipped in 0.25, 0.5, 1.0 and 2.0% concentrations of the plant oils individually. Leaf discs treated with water served as the control. After air-drying, each leaf disc was placed in a Petri dish (1.5 cm × 9 cm) containing wet filter paper to prevent early drying of the disc, and a single 2-h pre-starved fourth-instar larva of A. ipsilon was introduced. Five replicates were maintained for each concentration. Progressive consumption of leaf area by the larva after 24 h of feeding was recorded for control and treated leaf discs using a graph sheet. Leaf area consumed in the plant oil treatments was corrected against the control. The percentage antifeedant index was calculated using the formula of Ben Jannet et al. [7]:
Antifeedant index (%) = [(C − T) / (C + T)] × 100, where C and T represent the amount of leaf area eaten by the larva on control and treated discs, respectively.
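As an illustrative aid only, the index is easy to script; the leaf areas in the Python sketch below are hypothetical placeholder values, not data from this study, and the function simply encodes the formula given above.

# Python sketch of the antifeedant index calculation (hypothetical inputs).
def antifeedant_index(control_eaten_cm2, treated_eaten_cm2):
    # AI (%) = ((C - T) / (C + T)) * 100
    c, t = control_eaten_cm2, treated_eaten_cm2
    return (c - t) / (c + t) * 100.0

print(round(antifeedant_index(5.22, 0.10), 2))  # 96.24 with these invented areas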
Insecticidal Activity
Fresh potato leaves were treated with the different concentrations of plant oils mentioned under antifeedant activity. Petioles of the potato leaves were tied with a wet cotton plug (to prevent early drying) and placed in a round plastic trough (29 cm × 8 cm). For each concentration, ten pre-starved (2 h) fourth-instar larvae of A. ipsilon were introduced individually and covered with muslin cloth. Five replicates were maintained for all concentrations, and the number of dead larvae was recorded after 24 h and up to pupation. The percentage of larval mortality was calculated and corrected using Abbott's formula [8].
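For reference, Abbott's correction [8] is the standard adjustment of observed treatment mortality for any mortality occurring in the untreated control; in the usual notation, corrected mortality (%) = [(T − C) / (100 − C)] × 100, where T and C are the percentage mortalities observed in the treated and control groups, respectively.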
Growth Regulation Activity of Plant Oils
Growth regulation activities of the plant oils were studied at four different concentrations against fourth-instar larvae of A. ipsilon. Ten larvae were introduced into a Petri plate containing potato leaves treated with the different concentrations of plant oils. Water-treated leaves served as the control. After 24 h of feeding, the larvae were transferred to untreated leaves for observation of the developmental periods. Five replicates were maintained for each concentration. During the developmental period, deformed larvae, pupae and adults, as well as successfully emerged adults, were recorded. In addition, weight gain by the treated and control larvae was recorded.
Results
Gaultheria and eucalyptus oils were tested for their bioactivities at different concentrations against Agrotis ipsilon. The bioactivity data were subjected to one-way analysis of variance (ANOVA), and significant differences between means were separated using the least significant difference (LSD) test. Antifeedant activity of the plant oils was studied at different concentrations, and the results are presented in Table 1. Antifeedant activity was assessed on the basis of the antifeedant index; a higher antifeedant index indicates a decreased rate of feeding. In the present study the antifeedant activity varied significantly with concentration. The data clearly revealed that the maximum antifeedant activity was recorded for eucalyptus oil (96.24%) at 2% concentration, followed by gaultheria oil (87.21%), compared to the control. One-way ANOVA followed by the LSD test showed statistical significance (P < 0.05) compared to the control. Insecticidal activity of the plant oils was studied at different concentrations, and the results are also presented in Table 1. Insecticidal activity was calculated on the basis of larval mortality after treatment; high larval mortality indicates potent insecticidal activity. In the present study the insecticidal activity likewise varied significantly with concentration. The data clearly revealed that the maximum, statistically significant insecticidal activity was recorded for gaultheria oil (86.92%) at 2% concentration, whereas for eucalyptus oil 60.24% was observed. One-way ANOVA followed by the LSD test showed statistical significance (P < 0.05) compared to the control.
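As a minimal sketch of this style of analysis in Python (the per-replicate values below are invented placeholders, not the study's data; the follow-up LSD step is omitted):

# One-way ANOVA across treatment groups with SciPy.
from scipy.stats import f_oneway

control = [2, 0, 4, 2, 2]               # hypothetical mortality (%) per replicate
eucalyptus_2pct = [58, 62, 60, 64, 57]  # hypothetical values for illustration
gaultheria_2pct = [85, 88, 86, 90, 86]  # hypothetical values for illustration

F, p = f_oneway(control, eucalyptus_2pct, gaultheria_2pct)
print(f"F = {F:.1f}, p = {p:.2g}")      # p < 0.05 would justify pairwise LSD comparisons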
Weight gained by the larvae was studied at the different concentrations of plant oils and in the control. The weight gained by larvae from the fourth instar to the final instar was calculated. A statistically significant decrease in weight gain was observed in gaultheria oil-treated larvae (110.12 mg), followed by eucalyptus oil-treated larvae (133.52 mg), at 2% concentration (Table 2). The percentage of deformities due to treatment with the plant oils at different concentrations is presented in Table 3. Of the two plant oils tested at four concentrations, the maximum larval (30%), pupal (23%) and adult (21%) deformities were recorded for gaultheria oil at 2% concentration, whereas the minimum deformities were recorded for eucalyptus oil (25.1% overall at 2% concentration). The percentage of successful adult emergence was lowest (26.6%) for gaultheria oil and highest (35.5%) for eucalyptus oil at 2% concentration. The plant oils were subjected to preliminary phytochemical analysis for confirmation of the major groups of compounds; both oils tested positive for alkaloids and triterpenoids.
Discussion
In nature many plants have a high content of unpalatable substances, such as phenols, alkaloids, flavonoids, terpenes, quinones and coumarins, which play a defensive role against insect pests. These substances possess a wide range of biological activities, including antifeedant, oviposition-deterrent, insecticidal, ovicidal and insect growth regulator (IGR) activity. Identifying sources with useful biological activity is only the starting point in the long process of developing a botanical pest management product [9].
An antifeedant is defined as a chemical that inhibits feeding without killing the insect directly; the insect remains near the treated foliage and dies through starvation. The most potent insect antifeedants are quinoline and indole alkaloids, sesquiterpene lactones, diterpenoids and triterpenoids [10]. The present study showed that eucalyptus oil was promising in reducing the feeding rate of fourth-instar larvae of A. ipsilon. The rate of feeding varied significantly depending on the concentration of the plant oils. This indicates that the active principles present in the plant oils inhibit larval feeding behaviour, make the food unpalatable, or act directly on the chemosensilla of the larva, resulting in feeding deterrence.
Antifeedant effects of different plant essential oils have previously been reported for various insect species. Krishnappa et al. [11] tested Tagetes patula essential oil against fourth-instar larvae of S. litura for antifeedant activity using a leaf disc bioassay; among the compounds tested, terpinolene was the most effective feeding deterrent against Spodoptera litura under laboratory conditions, with mean leaf area fed upon assessed at 100 ppm/cm² and 500 ppm/cm². Elumalai et al. [12] tested certain medicinal plant essential oils against fourth-instar larvae of S. litura for antifeedant activity by leaf disc bioassay. All essential oils showed moderate antifeedant activity; however, the oils of Cuminum cyminum, Mentha piperita, Rosmarinus officinalis, Thymus vulgaris and Coriandrum sativum exhibited complete (100%) antifeedant activity at 6 mg/cm². Recently, Duraipandiyan et al. [13] reported antifeedant and larvicidal activities of rhein, isolated from Cassia fistula flowers, against the lepidopteran pests S. litura and H. armigera. Significant antifeedant activity was observed against H. armigera (76.13%) at 1000 ppm. Rhein exhibited larvicidal activity against H. armigera (67.5%) and S. litura (36.25%), with LC50 values of 606.50 ppm for H. armigera and 1192.55 ppm for S. litura, and surviving larvae produced malformed adults. In addition, Jeyasankar et al. [14] reported that an ethyl acetate extract of Solanum pseudocapsicum showed high antifeedant activity against A. ipsilon. Further, Gokulakrishnan et al. [15,16] tested 20 plant essential oils for antifeedant activity against three important lepidopteran species, S. litura, H. armigera and Achaea janata. Among the oils tested, the most significant antifeedant activity at 1000 ppm was observed for S. officinalis against S. litura (85.56%), M. spicata against H. armigera (82.85%) and M. spicata against A. janata (90.55%).
In the present study the preliminary phytochemical analysis revealed the presence of alkaloids and terpenoids in the plant oils. These chemicals may inhibit the feeding of A. ipsilon, and these results are supported by earlier work on various insect pests [17,18]. Screening plant oils for deleterious effects on insects is one of the approaches used in the search for novel botanical insecticides. Secondary plant compounds act as insecticides by poisoning per se or by producing toxic molecules after ingestion; such compounds may also deter or repel an insect from feeding. In the present study gaultheria oil exhibited significant insecticidal activity at 2% concentration. It is possible that the insecticidal principles present in the oil arrest various metabolic activities of the larvae during development, so that the larvae ultimately fail to moult and die. Similarly, the larvicidal effect of basil essential oil tested against A. ipsilon was stronger than that of its active component (eugenol), and the effect was more pronounced at the higher tested concentration. With basil oil at 3%, only 35% of the larvae reached the pupal stage (a 67.16% reduction relative to the control) and 13% of the pupae were deformed, while eugenol caused 40% larval mortality. The reduction in adult emergence at 3% and 2% basil oil reached 76.84% and 54.74%, respectively, and adult deformities reached 11% and 7% at 3% and 2% basil oil, respectively [19].
The insect growth regulation properties of plant essential oils are particularly interesting, since insect growth regulators act on the juvenile hormone system. The hormone ecdysone plays a major role in the shedding of the old skin, a phenomenon called ecdysis or moulting. When active plant compounds enter the body of the larva, the activity of ecdysone is suppressed and the larva fails to moult, remaining in the larval stage and ultimately dying [20]. In the present study the maximum percentages of deformed larvae, pupae and adults were noted in gaultheria oil-treated larvae. The morphological deformities at the larval, pupal and adult stages are due to toxic effects of the oils on growth and development processes; since morphogenetic hormones regulate these processes, it can be suggested that these plant oils interfere with the insects' hormones. These results are consistent with recent reports on various lepidopteran species [21-23], in which high larval mortality indicated potent insecticidal properties in ethyl acetate extracts of Syzygium lineare against S. litura. A new crystalline compound, 2,5-diacetoxy-2-benzyl-4,4,6,6-tetramethyl-1,3-cyclohexanedione, was isolated from the leaves of S. lineare and was responsible for significant insecticidal activity against larvae of S. litura, performing better than the positive control azadirachtin. This insecticidal compound was also responsible for growth inhibition in S. litura, inducing larval, pupal and adult deformities even at low concentrations.
In conclusion, gaultheria oil showed the stronger insecticidal and growth-inhibitory activities against A. ipsilon. Hence, gaultheria oil may be suggested for controlling the insect pest A. ipsilon as a potential alternative to chemical pesticides.
Conflict of interests
We declare that we have no conflict of interests. | 2019-03-10T13:02:25.648Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "ef3da3b8499f7bc3cb4a57538d948b349b2edc37",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/s2222-1808(12)60179-0",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f7cafd38035e61396fba6359ea93e0fe2befdb99",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
237907343 | pes2o/s2orc | v3-fos-license | THE EFFECT OF EXPERIENTIAL ENGAGEMENT WITH VIRTUAL LEARNING ON UNDERGRADUATE STUDENT SUCCESS
The purpose of this study is to better understand the effect of undergraduate engineering student engagement with an experiential learning opportunity on academic success in a virtual format. Students in a second-year Civil Engineering Materials course that was virtual due to the COVID-19 pandemic were given the option to shift a portion of the final exam weight onto an experiential project. The project consisted of the construction and loading of a small bridge, introducing an experiential component to the virtual course. As a reflective question on the final exam, students were asked to record a brief video testimony related to their motivations and any perceived benefits for participating or not participating in the project. Of the students who participated in the bridge project, 58% were characterized as having thorough or above average knowledge and understanding of the graduate attribute indicators, relative to 33% of students who did not participate. Engagement with the bridge project through experiential learning therefore aligned with strengthened understanding of the graduate attributes, within the restrictions of the remote environment. In planning for future online courses, this study shows a method of engaging students with an experiential activity virtually, its positive effect on academic achievement, and other associated benefits.
INTRODUCTION AND BACKGROUND
In engineering programs, experiential activities (such as laboratories) have traditionally played a large role in student learning and engagement. Experiential learning has been characterized as learning from an experience or learning by doing something. The Experiential Learning Cycle was described by Kolb (1984) as being a cycle of concrete experience (having an experience), reflective observation (reviewing the experience), abstract conceptualization (learning from the experience) and active experimentation (trying what you have learned) [1].
This experimentation in turn leads to another set of concrete experiences. Experiential education can take place in different forms, for example field-based experiential learning that includes internships and co-op placements [2]. Experiential learning can also be classroom based and can be used to relate theory to real-life situations. In engineering education, experiential education in the classroom can take the form of laboratory experiments, applied projects, simulations, competitions, or other experiences. The COVID-19 pandemic, however, forced many Canadian engineering institutions to deliver their courses entirely remotely. Virtual course delivery creates challenges in incorporating traditional hands-on and experiential learning opportunities for engineering students.
With regard to further effects of the COVID-19 pandemic on students, university students have reported an increase in stress and anxiety [3]. A study by Son et al. (2020) surveyed 195 students in higher education, and 71% of respondents reported that the pandemic had increased their levels of stress and anxiety overall. More specifically, Son et al. (2020) noted that 89% of respondents indicated they had difficulty concentrating, and 82% were concerned about their academic performance during the pandemic [4]. Chakraborty et al. (2020) also surveyed graduate university students about their opinions of online education during the COVID-19 pandemic and found that 74% of students agreed that excessive screen time was causing stress and affecting their sleep patterns. Even in years unaffected by the COVID-19 pandemic, there is a need for those who interact with university students to consider student mental health, but the circumstances surrounding the pandemic have made it even more important to consider the mental wellbeing of university students.
It is clear that two factors of the undergraduate engineering student experience that were largely impacted by the COVID-19 pandemic are the availability of experiential learning opportunities and the challenges surrounding mental health. Students have generally been found to have positive perceptions and to see the value of experiential learning [5], while conversely a majority of students have reported that they do not learn as well in the online environment [6]. As excessive screen time is one factor negatively contributing to student mental health during the COVID-19 pandemic [6], there is a need to explore how experiential activities (away from technology) can be implemented during a remote learning semester. Experiential activities based away from technology and computer screens have the potential to significantly affect student achievement of the course learning outcomes, while minimizing factors that may harm student mental health.
The research herein aims to better understand the effect of undergraduate engineering student engagement with an experiential learning opportunity on academic success in a virtual format. The purpose of this research is to further define the benefits and limitations of one such experiential learning opportunity using a case study.
METHODS
The case study discussed herein considers students in a second-year Civil Engineering course in Canada who were given the option to complete a project requiring them to create a small bridge out of limited materials and load the bridge to failure. Students who participated were allowed to shift a portion of the weight of the final exam onto this project. On the final exam, students were asked to reflect on their experiences in participating or not participating in the project by characterizing their motivations for participating (or not) as well as describing any perceived benefits (benefits being left broad for the student to interpret). The rationale for allowing students the option to participate in the bridge project was 1) to improve student understanding of material properties and other technical aspects of the course through experiential activity, and 2) to potentially relieve stress by reducing the weight of the final exam as well as giving students a break from their computers, technology and traditional study activities.
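To make the weight-shift mechanism concrete, the following sketch uses invented numbers; the actual exam and project weights used in the course are not stated here.

# Hypothetical illustration of shifting exam weight onto the project.
def exam_project_contribution(exam_pct, project_pct, exam_weight=0.40, shift=0.10):
    # Contribution of the exam/project component to the final mark,
    # with 'shift' of the weight moved from the exam onto the project.
    return exam_pct * (exam_weight - shift) + project_pct * shift

print(exam_project_contribution(70, 90))  # 70*0.30 + 90*0.10 = 30.0 marks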
As previously noted, the study herein was meant to better understand the effect of undergraduate engineering student engagement on academic success in a virtual learning format. The overarching method of this investigation therefore consists of examining the correlation of engagement with a case study experiential activity with student achievement of the graduate attributes. Benefits and limitations of such an experiential activity in a virtual format are further identified using student self-reflection testimonies.
The course comprised 95 students. At a high level, the course covers topics related to material science, material resilience and sustainability, as well as properties of common structural and transportation building materials. Prior to the COVID-19 pandemic, the course was taught in person with three hours of lecture per week, one hour of tutorials, and two hours of weekly laboratory sessions. The laboratory sessions presented an experiential component to the course and were centred around the mechanical properties of the materials being studied in class. For the Fall 2020 version of the course, the course was entirely virtual. Students had two hours of asynchronous and one hour of synchronous lecture per week, as well as one hour of synchronous tutorial and two hours of synchronous laboratories. The virtual laboratories were designed to achieve the same learning outcomes as the in-person laboratories and are discussed in greater detail in planned studies by the authors.
The design of the virtual Fall 2020 version of the course was influenced by data that had been collected by other researchers regarding successes and challenges during the Winter and Spring/Summer 2020 semesters earlier in the year. Students at seven American private and public institutions were surveyed (n=3006), with participating institutions receiving the collected data specific to their institution and relative to the cumulative data [7]. Survey results can be obtained by contacting participating institutions; however, for this study the data were provided to the course instructor as part of their international teaching activities for a specialized lecture series. The institution whose data was predominantly examined is largely composed of engineering students. Takeaways from this previously collected data included that students reported they did not learn as well as they would have in a physical classroom, and that many students felt they were not able to balance their social/emotional needs with their coursework [7]. These studies emphasized the importance of considering student social/emotional needs in the course design and incorporating activities that would be included in a physical classroom version of the course.
In terms of requirements for the optional bridge project, students were asked to create a bridge using only popsicle sticks, dental floss, toothpicks, and white glue. Specifications were outlined with respect to the required clear span, span height, and span opening. Weight ranges of the final bridge were also specified. Students were further asked to create a 5-10-minute recorded presentation explaining the design and engineering behind their bridge, and then to demonstrate the loading of the bridge to failure, reporting the final failure load. The bridge project was assessed on the originality and aesthetics of the design, the quality and spirit of the presentation, the loading demonstration, and the justification for predicted modes of failure. Most of the criteria were meant to enhance course learning outcomes; however, the 'spirit of the presentation' criterion was meant to encourage students to have fun with the project as well as showcase their interests and personalities. Student submission of the bridge project consisted of their recorded presentation.
The project was designed to be cost efficient for students. The cost of 1000 popsicle sticks, 100 m of dental floss, and 1 L of white glue was estimated to be around $20CAD, with these materials being readily available in stores and online. While the students purchased the materials themselves, the costs associated with the course were reduced overall from previous years as the students were not required to obtain personal protective equipment for the laboratory sessions (where minimum cost would have been approximately $100CAD).
The optional bridge project was inspired by the Troitsky Bridge Building Competition, an undergraduate student competition where teams from different universities create a bridge using popsicle sticks, dental floss and white glue within a time limit, and load the bridges to failure (the interested reader can look into the full requirements and evaluation procedures [8]). The origins of the Troitsky Bridge Building Competition stem from Civil Engineering coursework at Sir George Williams University (that would later become Concordia University). Students in a bridge design class were asked by their professor (Dr. Michael Troitsky) to design and build a bridge model using wood and glue. Over time, the project turned into a competition for Concordia students and eventually opened to institutions across Canada [8]. The implementation of the bridge project back into a single Civil Engineering course returns an adaptation of the project to its origins.
The reasoning for giving the students the option to participate in the bridge project was mainly to reinforce concepts taught in the course and to reduce stress. By giving students the option to shift weight from the final exam to the bridge project, the intention was to reduce stress surrounding the final exam; exam season is traditionally stressful for students even in a normal year, and combined with the effects of the pandemic on student mental health it had the potential to be very overwhelming. Moreover, the project was hands-on and experiential, and was therefore meant to provide something of a stress release by giving students an excuse to take a break from their studies and do something creative and hands-on.
With regard to reinforcing graduate attributes covered by the course, the students needed to rely on their understanding of material properties to strategically create a design that they felt could support the greatest amount of load while keeping the weight as minimal as possible. The use of popsicle sticks in particular draws upon their knowledge of the material properties of timber - a material that has been identified as beneficial for program graduates to have increased knowledge of, as timber design becomes more prevalent in cities around the world [9]. Further, the project was meant to reinforce graduate attribute indicators, as outlined in Table 1, using experiential learning. Table 1 highlights graduate attribute indicators assessed by both the project and by the final exam; one such indicator reads: "Prepare a professional laboratory report including engineering charts, tables, graphs and diagrams to present information effectively."

A question was added to the final exam asking students about their participation or non-participation in the bridge project. The exact wording of the exam question is found in the Appendix. In general, students were asked which factors motivated them to participate or not to participate in the project. Students who participated were further asked if they believed that the project contributed to their understanding of the course material, and if there were any additional benefits to participating in the project. Submissions were to consist of a one-minute video recording of themselves.
Analysis of the video testimonies recorded by the students consisted of watching the videos and identifying the factors mentioned by the students. Where students mentioned more than one motivation or benefit, all factors mentioned were included equally in the analysis. Categories of factors contributing to students' motivations for participating or not participating, as well as perceived benefits, were created by counting the number of students who used similar phrasing in their testimonies. The categories related to mental health included student mentions of reducing stress, enjoying hands-on activities as a way to relax, or doing similar activities as a hobby.
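A tally of coded mentions of this kind takes only a few lines to reproduce; the labels below are invented stand-ins, not the authors' actual codebook.

# Counting how many testimonies mention each coded factor.
from collections import Counter

coded_mentions = [                 # one list of factors per student testimony
    ["hands_on", "mental_health"],
    ["grade_shift"],
    ["hands_on"],
]
tally = Counter(f for mentions in coded_mentions for f in mentions)
print(tally.most_common())         # [('hands_on', 2), ('mental_health', 1), ('grade_shift', 1)]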
The technical portion of the final exam pertained to the student's comprehension of the course content, and included components of their knowledge base, problem analysis, design, and communication skills. This portion of the final exam was assessed relative to the graduate attribute indicators.
In addition to submitting the bridge project as a part of the course, students were also able to submit their bridge presentation as part of a competition hosted by the University's Civil Engineering Students' Association. The project was designed to meet the requirements of the student association's competition, and the competition was also open to students not enrolled in the course. Performance in the student association's competition led to the creation of a team for the official Troitsky Bridge Building Competition for the year.
RESULTS AND DISCUSSION
Overall, 62 students (65%) chose to participate in the bridge project, and 33 students (35%) opted not to participate.
Effect of Engagement in Bridge Project on Student Achievement
The technical component of the final exam was marked in relation to the graduate attribute indicators defined in Table 1. As such, Fig. 1 is intended to convey how successful students who did or did not participate in the bridge project were in achieving the graduate attribute indicators. Students who received an A on this portion of the exam had strong achievement of the graduate attribute indicators, students who received a B had above average achievement, and so forth. Students who did not achieve the graduate attribute indicators received an E. Students who were sufficiently engaged in the course to participate in the bridge project better demonstrated the graduate attribute indicators, as 58% of submitters scored an A or a B compared to 33% of non-submitters.
While the grade received by students as shown in Fig. 1 is somewhat subjective to the person assessing the exam question, steps were taken to reduce bias. These included not marking the exam and the project together and, when marking the exam, not checking the student's existing grade or whether they chose to participate in the bridge project.
Motivations and Benefits of Students that Participated
One of the final exam questions asked students about their motivations for participating or for not participating in the bridge project.
From reviewing the student video responses, generalized categories of motivations could be made. It should be noted that these categories were not presented to students for a selection-style response; rather, the questions were open ended, with many students crediting the same motivations. It should further be noted that students were asked to name at least one motivation, but many students discussed multiple motivations, and as such the number of responses does not align with the number of students who participated in the bridge project. Responses of students who participated in the bridge project are summarized in Fig. 2. The most common motivation was that students wanted to try applying their knowledge in a hands-on application (51% of students who participated commented that this contributed to their motivation). The high level of motivation to participate in a hands-on application could have partially been a product of the entirely virtual semester - if students had been completing in-person laboratories and classes, they may have felt less of a need to do something hands-on. Nevertheless, this motivation reinforces the desire of students for experiential learning opportunities. 24 students, or 40% of those who participated, commented that they chose to participate for reasons relating to mental health - which included that this style of activity is relaxing or fun for them, or that stress surrounding the final exam was reduced. 15 students (23% of those who participated) distinctly indicated that they wanted to participate to shift a portion of the weight of their grade from the exam to the project to improve their overall grade, and not just to reduce stress surrounding the final exam.
Ten students (15% of those who participated) indicated that they wanted to participate simply because they were interested in or curious about bridges, materials, or structures, and wanted to use this project as an opportunity to learn more. Four students (6% of those who participated) participated distinctly to submit their project to the student association's version of the bridge competition, for an opportunity to potentially compete in the national Troitsky Bridge Building Competition. Peer interactions have previously been identified as a source of discouragement and intimidation for engineering students [10], so promoting friendly peer interactions through student competitions and events is valuable in fostering positive experiences in undergraduate engineering programs.
Two students wanted to use the project to connect with other engineers or their peers - this included discussing strategies with peers and connecting with other engineers, including family members, for advice, which was encouraged and allowed in the rules of the project. Finally, one student wanted to participate to take the opportunity to practice their presentation and communication skills.
Furthermore, students were asked to comment on any benefits they experienced from participating in the bridge project that do not directly tie into the content covered by the course material. The results of the student responses are summarized in Fig. 3. It should be noted that the motivations of the students do not directly align with the perceived benefits. It is assumed that many students felt that if they mentioned something as a motivation, they did not need to confirm that it was a benefit (e.g. the same students did not characterize reduced stress surrounding the final exam as both a motivation and a benefit).
25 students (38% of those who participated) noted that they strengthened their project management and design skills. Examples cited by students were thinking about the schedules they needed to keep in order to finish by the deadline, and thinking through their rationale for choosing a particular design over another. 18 students (28% of those who participated) explained that participating in the project benefitted their mental health to some extent, either by reducing the stress of the final exam or by having fun and relaxing during the creation of their bridge. Seven students (11% of those who participated) commented that they have a new appreciation for structures or are thinking about structures differently; examples included contemplating the structural design of bridges they see in their everyday lives. Similarly, six students (9% of those who participated) discussed how the bridge project made them more curious about or interested in engineering topics, or that they felt more motivated to study or practice engineering. Five students (8% of those who participated) indicated that they felt their communication skills improved, or that they were more confident in presenting.
Finally, three students noted that they had the opportunity to connect with others. These students explained that the project gave them an opportunity to work with other engineers - usually friends or family - with whom they would not otherwise have had dedicated time to spend.
Motivations of Students that did not Participate
33 students chose not to participate in the bridge project. Their responses, when asked on the final exam about their motivations for not participating, are summarized in Fig. 4. Students were asked to provide at least one motivation.
Nine students (27% of those who did not submit) indicated that they might have liked to participate but could not make the time or meet the deadline. A further nine students indicated that they strategically chose not to participate to focus their time on studying for the final exam.
Three students (9% of those who did not participate) explained that they were not confident in their ability to make a quality bridge or presentation, and similarly a further two students indicated they did not want to shift a portion of the weight of the final exam to the project - where the underlying motivation is not specified but could possibly be attributed to some of the other motivations specified for not participating (e.g. low confidence).
Two students indicated they could not obtain resources needed to participate in the bridge project, and one student indicated that they had competed in a similar project in the past and did not see the value in creating another bridge.
Further Adaptations
Given the circumstances surrounding the COVID-19 pandemic, the bridge project offered an experiential opportunity for engaged students. Students who were sufficiently engaged with the course to participate in the bridge project reported numerous benefits related to their mental health and demonstrated a stronger understanding of the graduate attributes than students who were less engaged and chose not to participate.
In future iterations of this course - or similar courses - adaptations of the project could be considered. Considering that, overall, students who were engaged also demonstrated a better understanding of the graduate attributes, the rationale of students who did not participate could be analyzed in future iterations to make the project more accessible and encourage participation. This might include introducing the project as early as possible in the semester so that students can allocate time.
Some students also indicated that they could not obtain the resources needed to participate in the project. Course packs could be mailed out to students at the beginning of the semester and include the materials they need, eliminating the need for students to source materials themselves and ensuring each student is able to obtain the required materials. Considering the cost of the materials is estimated to be less than $20CAD per student, the cost of distributing materials to students should be manageable for many schools, especially in years where in-person laboratory sessions cannot be held. Though costs of shipping and administrative duties would need to be considered, traditional laboratory sessions for this particular course have in the past involved the design, creation, and loading of various civil engineering materials, for which the cost of materials alone has exceeded $20CAD per student.
Upon the resumption of in-person classes and laboratories, there is still value in similar projects. All of the benefits and motivations discussed in Figs. 2 and 3 will still hold true should the class be held in an in-person format, though the extent of the motivations and benefits might be altered, and new motivations and benefits may be introduced. A course held in person would have even more possibilities as to how such a project could be carried out; for example, it could be adapted as a group project. An in-person variation of the project could also further some of the benefits previously mentioned; for example, there is the opportunity for greater connection between peers or other engineers in brainstorming ideas and having an activity to do together on the day of testing. The in-person variation could also further strengthen communication skills by providing students a low-stress opportunity to present. These and other possible enhanced benefits of an in-person variation should be addressed by future research.
CONCLUSION
The purpose of this study was to better understand the effect of student engagement on academic success in undergraduate students in a virtual learning format, by considering a case study looking at engagement with an optional experiential project within a Civil Engineering Materials class. Findings from the study showed that students who engaged through participation in the optional project demonstrated a better understanding of the course graduate attributes than students who were not engaged enough to participate. Engaged students who submitted the bridge project were nearly twice as likely to demonstrate an exceptional or above average understanding of the graduate attribute indicators on the final exam (58% of students who submitted the project vs. 33% of students who did not).
Beyond enhancing student understanding of the course material, the bridge project was intended to also benefit mental wellness to some extent. Understanding that the 2020 academic year was especially stressful for students due to the COVID-19 pandemic, the bridge project was meant to reduce stress by giving students the option to shift weight from the final exam onto the project, as well as by giving students an excuse to take a break from their computers and do something hands-on. These benefits were appreciated by students, with 37% of students who participated noting that mental wellness was one of their primary motivations for completing the bridge project, and 28% noting that, upon reflection, improvements to mental wellness were a benefit of participating.
Limitations of this study include that the project was optional for the students. If the project had not been optional, it would have been possible to get the opinions of the entire cohort as to whether they felt this project added value while limiting the stress created. Moreover, this study should be repeated in a non-COVID-19 environment, where the stresses of the pandemic are reduced and classes have resumed in person. This would allow for a comparison of the benefits and successes of the same experiential project in achieving student success in a remote and in an in-person environment.
As a further limitation, it should be noted that while this study indicated there may be correlations between engagement with the experiential activity and academic success (as noted by student achievement of graduate attribute indicators), this study does not directly show that this success is caused by student engagement with the activity. It is possible that, as a trend, students who would have achieved higher levels of academic success were those who chose to engage with the experiential activity. Future study could examine the extent to which engagement with the project truly causes greater academic success, which could consist of repeating the study in years where entire cohorts are asked to participate rather than having the project be optional. Moreover, although students who participated reported mental health benefits, it is also possible that students who participated in the project had better mental health to begin with. This could also be explored through future study.
This case study outlined the effect of engagement with an experiential project on student success in a virtual format. The relative accessibility of the project combined with the benefits makes it worthy of consideration in future courses where it may be able to enhance the course content. The ability to have experiential learning experiences, specifically away from a computer screen, is especially valuable during virtual learning semesters. | 2021-09-01T15:11:21.382Z | 2021-06-23T00:00:00.000 | {
"year": 2021,
"sha1": "1f0ebff33975b9dfd3d650e5e4effc141a8dd20a",
"oa_license": null,
"oa_url": "https://ojs.library.queensu.ca/index.php/PCEEA/article/download/14898/9744",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "b4714e09c371e9ad83af78f4dc2a9bb487820a40",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
10977259 | pes2o/s2orc | v3-fos-license | Clinical Approach to Pulp Canal Obliteration: A Case Series
This article describes four cases with safe and feasible clinical treatment strategies for anterior teeth with pulp canal obliteration (PCO) using cone-beam computed tomography (CBCT), digital radiography (DR), the dental operating microscope (DOM) and ultrasonic (US) tips. Four anterior teeth with PCO were chosen. DR was taken at different angulations and analyzed with different filters. Subsequently, the access cavity was prepared with the aid of the DOM. If the canal was not identified, CBCT was requested. Sagittal and axial slices guided the direction of the ultrasonic tips. After identification of the canal, it was negotiated and instrumented with rotary instruments. All four canals were successfully identified, with no complications. In case 1, the canal was identified using DR, the DOM and US tips. In cases 2, 3 and 4, the canals were identified with DR, the DOM, US tips and CBCT. Complete root canal obliteration identified on radiography did not necessarily mean that pulp tissue was not visible clinically. The clinical evaluation of the access cavity with the aid of the DOM was crucial. If the canal was not identified, CBCT was mandatory in order to show a more detailed view of the precise position of the canals, their directions, degrees of obstruction and dimensions. It served as a guide for the direction of the ultrasonic tips, keeping them within the pulp chamber safely, with a low risk of iatrogenic injury.
Introduction
Calcific metamorphosis, or pulp canal obliteration (PCO), is the pulp response to trauma, characterized by rapid deposition of mineralized tissue in the root canal space. Different factors, such as dental trauma, carious lesions, abfraction, abrasion, pulp capping, occlusal imbalance, orthodontic treatment, harmful oral habits and individual aging, can trigger PCO, which is becoming increasingly common [1,2]. Generally, PCO has no symptoms and may be noted via tooth discoloration or routine examination [3-10]. There is controversy regarding whether endodontic treatment is indicated for teeth with PCO. Some authors recommend treatment only after the appearance of symptoms and when radiography shows apical bone rarefaction. However, others believe that immediate endodontic treatment is indicated because PCO may develop into an infection [5,6,11-14]. Digital radiographs of good quality, with the possibility of enlargement and the use of different contrasts, are important to initiate the identification process. However, complete obliteration seen radiographically is not necessarily indicative of pulp tissue not being visible clinically or histologically [8,15,16]. The combination of the dental operating microscope (DOM) and ultrasonic (US) tips may help in identifying obliterated canals. The DOM offers magnification and lighting, while ultrasonic tips allow working at greater depth within the pulp chamber safely, with a low risk of iatrogenic injury. However, in some situations, despite all of these resources and the skills and expertise of the operator, cone-beam computed tomography (CBCT) is necessary; it provides three-dimensional images without overlapping adjacent structures, which facilitates the identification of the canals, their directions, degrees of obstruction and dimensions [8,16-20]. Due to
Currently, the introduction of new technologies has increased the predictability of these treatments and, consequently, their success rates [16,[20][21][22]. However, there is still little information in the literature to guide clinicians on how to use this new technology safely and effectively. Thus, this study aims to contribute to the knowledge of the clinical approaches used for anterior teeth with obliterated canals (PCO) by describing four clinical cases.
Case Report
Case 1: A 34-year-old woman presented with a history of trauma involving tooth 21 at 10 years of age. She had undergone three years of orthodontic treatment beginning at 17 years of age. She initially presented to a general dentist due to darkening of the crown and the presence of swelling in the apical region. The dentist performed four unsuccessful canal identification attempts, and the patient was then referred to a specialist. Initially, clinical photographs were taken (Figure 1A). Digital radiographs (DR) (XDR® digital system, São Paulo, Brazil) were taken at the ortho, distal and mesial angulations (Figures 1B, 1C, 1D). Through DR, a fine radiolucent line was identified at the center of the root using a periodontic filter (Figures 1E, 1F, 1G). The patient was anesthetized, and a rubber dam was placed on the adjacent tooth. Then, the existing dressing was removed, and the access cavity was identified and irrigated thoroughly with 5.25% sodium hypochlorite (NaOCl) (Lenza Pharma, MG, Brazil) (Figure 1H). With the aid of the DOM (DF Vasconcelos, São Paulo, SP, Brazil), a yellowish area was identified in the center of the tooth (Figure 1I). This area was thoroughly removed with an ultrasonic tip (Helse, São Paulo, Brazil) coupled to an ENAC ultrasound machine (Osada Inc., California, USA) set at low power.
A small orifice was identified (Figure 1J). A C-Pilot file #10 (21.0 mm) (VDW®, Munich, Germany) was introduced with watch-winding movements and minimal vertical pressure until the total root length was reached (Figure 1K). Digital radiographs were taken at different angulations to confirm the correct position, and cleaning and shaping were initiated. Endodontic therapy consisted of rotary instrumentation with ProTaper NEXT® (Dentsply Maillefer), and the canal was filled by vertical compaction (Figure 1L). A one-year follow-up radiograph showed complete healing, and the patient was asymptomatic (Figure 1M). Case 2: A 29-year-old woman was referred for endodontic treatment of the maxillary right central incisor due to a color change (Figure 2A). She reported no history of dental trauma. No clinical symptoms were reported either, and there was no evidence of occlusal trauma or a periodontal pocket. Digital radiographs were taken at multiple angles (Figures 2B to D). The pulp chamber was visible only in the middle and apical thirds. The patient was anesthetized, and a rubber dam was placed at a specified distance from the tooth to be treated. Pulp chamber access was achieved with 3-4 mm of penetration parallel to the long axis of the tooth with the aid of the DOM (DF Vasconcelos, São Paulo, Brazil). At the entrance to the pulp chamber, dentin obstruction was observed at the cemento-enamel junction (CEJ). It was yellowish in color, which was suggestive of the original canal (Figure 2E). The pulp chamber was then flooded with 5.25% NaOCl (Lenza Pharma, MG, Brazil), and pulp tissue oxygenation bubbles were observed. Ultrasonic tips (Helse, São Paulo, Brazil) coupled to an ENAC ultrasound device were used to avoid excessive dentin removal. As the canal was not identified (Figure 2F), the cavity was filled with a small ball of sterile cotton and sealed with Coltosol cement (Coltene Whaledent, New York, USA) and Surefil flowable filling material (Dentsply Caulk, Derbyshire, England). CBCT was requested, and images were obtained in the axial and sagittal views and thoroughly evaluated. In the axial sections, the visible pulp chamber was identified only in the apical third (Figure 2G). In the sagittal sections, it was observed that erosion had begun where the ultrasonic tips had been directed more buccally (Figure 2H). A second procedure was scheduled, and the cavity was re-opened. Using the DOM, small burnouts directed toward the palatal aspect were made with US tips. A small slot was identified (Figure 2I). Chemomechanical preparation and obturation were performed as in the previous case (Figures 2J and 2K). Digital follow-up radiographs taken after 1 year showed complete healing, and no symptoms were present (Figure 2L).
Case 3:
A 25-year-old woman presented with apical swelling and acute persistent pain involving tooth #13. The patient stated that the tooth had been orthodontically tractioned. Radiographic examination at different angulations and with different contrasts showed thickening of the periodontal ligament and obliteration of the canal (Figures 3A to 3D). Initially, the patient was anesthetized, and a rubber dam was placed at a specified distance from the tooth. Access to the pulp chamber was achieved with the aid of the DOM (DF Vasconcelos, SP, Brazil). The canal orifice was not identified (Figures 3E to 3G). CBCT was then requested. After evaluation of the axial and sagittal images (Figures 3H and 3I) showing the real position of the canal, treatment was conducted as in the previous cases (Figures 3J and 3K). A follow-up radiograph taken 1 year after treatment showed complete healing, and no symptoms were reported (Figure 3L).
Case 4:
A 58-year-old man was referred for re-treatment of tooth #21. According to the patient, this tooth had caused acute pain several years earlier but was not endodontically treated because the canal could not be identified; the clinician opted to perform apical curettage instead. Clinically, the tooth had been stable for four years. However, the patient's symptoms returned. The same clinician attempted to locate the canal using a probe file and EDTA, resulting in an apical perforation of the labial root surface. The patient was then referred to an endodontist. Initially, radiographs were taken (Figures 4A to F). Subsequently, CBCT was performed to analyze the findings (Figures 4G and H). The patient was anesthetized, and a rubber dam was placed at a specified distance from the tooth. With the aid of the DOM and the axial and sagittal data gathered via CBCT, the perforation was identified (Figure 4I) and sealed with a mineral trioxide aggregate-based material (MTA Fillapex, Angelus, Londrina, Brazil). The true position of the canal was identified, and cleaning, shaping and obturation were conducted in the same way as in case 1. A postoperative radiograph was taken showing successful treatment (Figure 4J). A follow-up radiograph taken after one year showed complete healing of the lesion (Figure 4K). The patient was asymptomatic.
Discussion
The current literature presents some articles describing the causes of pulp obliteration, but there remains a paucity of protocols for the efficacious and safe treatment of PCO. Studies have shown that the success of endodontic therapy is based on correct debridement, disinfection and obturation of the root canal system (RCS) [23]. Pulp obliterations can prevent access to the entrances of the canals, modify the internal anatomy and divert inserted instruments [16,24,25]. Generally, the process of pulp obliteration confers an unfavorable prognosis because the atypical morphology creates major challenges for treatment that increase the risk of iatrogenic complications [13,26], making the outcome even more uncertain in cases of pulp necrosis [10,24,27-29].
A higher prevalence of obliteration occurs in the central and lateral incisors [9,26]. Several adjunctive tools and techniques, such as ultrasound, chelating agents, magnification with indirect optical fibers, lateral radiographs, EDTA combined with NaOCl and endodontic explorers, are used in the implementation of endodontic therapy [26]. Chelating agents may be useful for locating obliterated canals not identified via other means [8]. However, the use of chelating agents (CA) and probe files is controversial in the literature, which corroborates the findings of our report of four anterior teeth with PCO that were successfully managed using DR, US tips, the DOM and CBCT. CA absorb calcium ions, increasing the porosity of the dentin, which may induce the creation of false canal deviations and perforations, as presented in case 4 (Figures 4G and 4H). Therefore, the use of a probe file and CA is inadvisable in the initial management of the canal.
The introduction of digital radiography (DR) into routine clinical endodontic practice provides speed and agility in obtaining radiographic images, as well as image enlargement. It also allows the use of contrast and filters, which increases the possibility of identifying the canal (Figures 1E to 1G). DR should be performed at different angles, both initially and during treatment, to assess the depth and direction of the ultrasonic tips [30,31].
When searching for hidden canals, it is crucial to note the level of the cemento-enamel junction (CEJ), which is the most consistent landmark denoting the location of the pulp chamber [32]. Usually, a rubber dam is placed with a butterfly clamp in situ [8]; however, this should be done away from the tooth being treated, because the clamp obscures the CEJ, and its removal allows recontamination of the RCS. Routine access in anterior teeth is achieved at the exact center of the palatal surface of the crown, buccolingually and cervically [4,26], at a 45° angle to the long axis of the tooth in order to reach an empty space; this space may not exist in cases of PCO, which can lead to a large number of iatrogenic complications. However, based on the axial and sagittal views obtained via CBCT, the canals are well centered in the tooth ([4,32]; Figures 2H, 3I and 4H), which allows access, and the cavity is prepared near or through the incisal edge [4,7], avoiding the possibility of a perforation of the labial root surface and unnecessary damage. Generally, secondary dentine has a more whitened appearance, while obliterated pulp presents a darker color (Figures 3E to G). The DOM is essential for the removal of calcified areas in the pulp chamber and deep within the canal [33,34]. US tips, in contrast to burs, provide a more conservative approach than conventional treatment [25,35]. US tips do not rotate inside the canal, ensuring greater security and control while maintaining their cutting efficiency [36]. They are useful for refining surgical access and help in breaking up calcifications covering the canal opening, which allows safe access to deeper areas with minimal wear and the identification of dental structures [33].
In some situations, the identification of the canal via DR does not imply that the canal will be located clinically, because its entrance may be blocked (Figure 2E and F). DR provides two-dimensional images of three-dimensional structures [31]. However, this deficit can be reduced with the aid of CBCT, which allows 3D displays without overlapping adjacent structures and shows the locations of the canals, their directions, degree of obstruction, dimensions and other important information [16-20,37]. However, CBCT should not always be requested initially in cases of PCO. As demonstrated in this report, there was no need for CBCT in case 1. However, CBCT was of great value in cases 2, 3 and 4, in which the canal could not be identified by DR, DOM and US tips. Based on the data gathered from axial and sagittal slices, it was possible to identify the correct position of the canal, which showed the usefulness of CBCT: it served to guide the US tips in the correct direction, avoiding iatrogenic injury, minimizing costs and patient exposure to radiation, and resulting in a favorable prognosis.
Guided endodontics seems to be a safe and clinically viable method of locating canals in teeth with PCO [21,22]. However, the manufacture of the guide requires high-tech equipment, such as an intraoral scanner and a 3D printer for the construction of a printed template, which results in a higher cost for the patient. The diameters of the drills are still inadequate for lower teeth and fine roots [22]. In addition, the need to fix and stabilize the printed template that will guide a bur to the calcified root canal can be daunting for clinicians less experienced in surgical procedures [38,39].
Conclusion
It is important to emphasize that the negotiation of small obliterated spaces is a tremendous challenge for clinicians. The use of new technologies, together with sufficient knowledge of pulp anatomy, careful radiographic technique and patience, is the key to success in solving cases of PCO. | 2018-04-03T00:20:54.777Z | 2017-10-10T00:00:00.000 | {
"year": 2017,
"sha1": "66e70bd5290ec81537139b7224125f06dbe4b820",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "66e70bd5290ec81537139b7224125f06dbe4b820",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253723461 | pes2o/s2orc | v3-fos-license | Colouring simplicial complexes via the Lechuga–Murillo’s model
Lechuga and Murillo showed that a non-oriented, simple, connected, finite graph G is k-colourable if and only if a certain pure Sullivan algebra associated to G and k is not elliptic. In this paper, we extend this result to simplicial complexes by means of several notions of colourings of these objects.
Introduction
Graph Theory and Rational Homotopy Theory were first related by Lechuga and Murillo in a celebrated paper [15] (see also [16]) where they show that a non-oriented, simple, connected, finite graph can be k-coloured, k ≥ 2 , if and only if a certain Sullivan algebra associated to the graph is not elliptic. They also provide a link between Rational Homotopy Theory and algorithmic complexity by proving that the problem of graph colourability can be reduced in polynomial time to the problem of determining the ellipticity of a certain Sullivan algebra. Hence, since the former is an NP-complete problem, the latter is an NP-hard problem.
This interplay between Graph Theory and Rational Homotopy Theory has been proven fruitful: recently, Costoya and Viruel were able to use this interaction to solve a question of realisability of groups [4,5], and applications of these results to further problems were subsequently found [2,3].
The aim of this work is to extend the result of Lechuga and Murillo from graphs to (finite) simplicial complexes by considering eleven notions of colourability for these objects, many of which can be found in the literature. We refer to these colourings as ℭ_i-colourings, for i = 1, 2, …, 11 (see Definitions 2.1, 2.4, 2.6, and 3.3), and prove the following two results:

Theorem 1.1 For any k ≥ 2, any i = 1, 2, …, 11, and any connected simplicial complex X, which is assumed to be strongly connected and homogeneous for i = 8, 9, 10, 11, there exists a pure Sullivan algebra M^i_k(X) which is not elliptic if and only if X is ℭ_i-k-colourable.

Theorem 1.2 For i ∈ {1, 7, 8, 9, 10, 11} and k ≥ 3, or for i ∈ {4, 5, 6} and k ≥ 4, determining if a connected simplicial complex is ℭ_i-k-colourable is an NP-hard problem.
We point out that closely related problems have been studied in [6,8,14]. As for the necessary background, we assume that the reader is familiar with basics of algorithmic complexity and Rational Homotopy Theory, for which [12] and [9] are, respectively, excellent references. In particular, concerning algorithmic complexity we will use that the problems of total-k-colourability, k ≥ 4 , edge-k-colourability, k ≥ 3 , and k-colourability, k ≥ 3 , are NP-complete [13,17,18].
Regarding Rational Homotopy Theory, we just recall that a (simply connected) Sullivan algebra, denoted (ΛW, d), is a commutative differential graded algebra which is free as an algebra generated by the (simply connected) graded rational vector space W, and where the differential d is decomposable. A Sullivan algebra is elliptic if both W and H^*(ΛW, d) are finite dimensional, and pure if dW^{even} = 0 and dW^{odd} ⊂ ΛW^{even}.
We now recall the fundamental construction in [15] associated to any k ≥ 2 and any non-oriented, simple, connected, finite graph G = (V, E), where V and E respectively denote the sets of vertices and edges of G. Consider the pure Sullivan algebra S_k(G) = (ΛW_{G,k}, d); a sketch of the definition of W_{G,k} and d is given after the next theorem. For this construction, the following holds:
Theorem 1.3 ([15, Theorem 3]) The graph G is k-colourable if and only if the Sullivan algebra S_k(G) is not elliptic.
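The displayed definition of W_{G,k} and its differential did not survive extraction above. A hedged reconstruction, following the construction usually associated with [15] (the grading of the generators below is our assumption), is:

\[ W_{G,k} = \Big( \bigoplus_{v \in V} \mathbb{Q}\,x_v \Big) \oplus \Big( \bigoplus_{(u,v) \in E} \mathbb{Q}\,y_{uv} \Big), \qquad |x_v| = 2, \quad |y_{uv}| = 2k-3, \]
\[ d x_v = 0, \qquad d y_{uv} = \sum_{i+j=k-1} x_u^{\,i}\, x_v^{\,j}. \]

The heuristic is that the polynomial \(\sum_{i+j=k-1} a^i b^j = (a^k - b^k)/(a - b)\) vanishes exactly when \(a^k = b^k\) with \(a \neq b\), so common zeros of the dy_{uv} correspond to assignments of pairwise-distinct k-th roots of unity to adjacent vertices, i.e. to k-colourings of G.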
To relate this result with algorithmic complexity it is convenient to keep in mind that a graph G = (V, E) is usually encoded by its adjacency matrix A = (a_{ij})_{i,j∈V}, in which a_{ij} = 1 if (i, j) ∈ E and a_{ij} = 0 otherwise. In binary, the codification of this matrix has length log_2 n + n², where n is the number of vertices of G.
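As a concrete illustration of this encoding (a minimal sketch of ours, not taken from the paper), the following builds the adjacency bit-matrix of a small graph and checks that its binary codification has length roughly log_2 n + n²:

import math

def encode_graph(n, edges):
    # Adjacency matrix of a non-oriented, simple graph, flattened to bits,
    # preceded by n written in binary (~ log2 n bits).
    a = [[0] * n for _ in range(n)]
    for i, j in edges:
        a[i][j] = a[j][i] = 1
    header = format(n, 'b')
    body = ''.join(str(a[i][j]) for i in range(n) for j in range(n))
    return header + body

code = encode_graph(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
print(len(code))                              # 19
print(math.floor(math.log2(4)) + 1 + 4 ** 2)  # 19 = ~log2(n) + n^2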
Throughout this paper, every considered simplicial complex X is assumed to be finite. The dimension of a simplex σ ∈ X, denoted dim σ, is its cardinality minus one. The dimension of X, denoted dim X, is the dimension of any of its largest simplices. Given s ≥ 0, we denote the set of simplices of X of dimension s by X_s. In particular, X_0 is the set of vertices of X, which is often denoted by V. The s-skeleton of X is the subsimplicial complex of X spanned by X_s, and we denote it by X^{(s)}. Note that X^{(1)} is trivially identified with a non-oriented, simple graph, and we say that X is connected if X^{(1)} is a connected graph.
Models for colourings of connected simplicial complexes
In the spirit of Theorem 1.3, we will associate to finite, connected simplicial complexes precise pure Sullivan algebras whose ellipticity encodes different notions of colouring of simplicial complexes.
Colourings arising from hypergraphs
Recall that a hypergraph is a pair H = (V, E) formed by a non-empty set of vertices V and a set of hyperedges E, each of them being a non-empty subset of V. Two vertices are adjacent if they belong to a common hyperedge. A hyperedge e is incident to a vertex v if v ∈ e. Two hyperedges e and e′ are adjacent if e ∩ e′ ≠ ∅. The hypergraph H is connected if given any two vertices u, v ∈ V there is a sequence of hyperedges e_1, e_2, …, e_n such that u ∈ e_1, v ∈ e_n and e_i is adjacent to e_{i+1}, for i = 1, 2, …, n − 1.
A vertex k-colouring of H is a map c : V → {1, 2, …, k} such that, for any hyperedge e with more than one vertex, |c(e)| > 1; namely, at least two vertices of e have different colours. Moreover, if for any e ∈ E and any two different vertices u, v ∈ e we have that c(u) ≠ c(v), we say that c is a strong vertex k-colouring. On the other hand, a hyperedge k-colouring of H is a map c : E → {1, 2, …, k} such that c(e) ≠ c(e′) for any pair of different but adjacent hyperedges e and e′. Finally, following [7], a total colouring of H is a map c : V ∪ E → {1, 2, …, k} such that any pair formed by either two adjacent vertices, two adjacent hyperedges or a hyperedge and any of its incident vertices have different images through c.
Trivially, a simplicial complex X can be regarded as a hypergraph H = (V, E) where V = X_0 and E = X. Hence, the above notions of colourability automatically translate to the following definition. Note that a vertex k-colouring of a simplicial complex is always a strong vertex k-colouring.
and c(σ) ≠ c(τ) for any pair of different simplices σ, τ with non-empty intersection.
Note that a total k-colouring yields both a vertex k-colouring and a face k-colouring. We prove:
Proposition 2.2 For any connected simplicial complex X, any k ≥ 2 and any i = 1, 2, 3, there exists a pure Sullivan algebra M^i_k(X) which is not elliptic if and only if X is ℭ_i-k-colourable.
Proof Associated to X consider G_1 = X^{(1)}, the graph given by its 1-skeleton. On the other hand, let G_2 be the graph whose vertex set is the set of simplices of X and whose edges are pairs of distinct simplices with a common face. Finally, let G_3 be the graph whose vertex set is again the set of simplices of X and whose edges are also pairs of distinct simplices with non-empty intersection, together with pairs of vertices giving rise to a 1-simplex. Observe that G_1, G_2 and G_3 are respectively the 2-section graph, intersection graph and total graph of the hypergraph given by X (see [1,7]). It is then clear from Definition 2.1 that a ℭ_i-k-colouring of X is precisely a k-colouring of G_i, i = 1, 2, 3. Furthermore, the graphs G_1, G_2 and G_3 are connected as a consequence of X being connected. To finish, define M^i_k(X) = S_k(G_i) and apply Theorem 1.3. ◻
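A minimal computational sketch of the three graphs used in this proof (our own illustration; it assumes X is given as the list of all its simplices):

from itertools import combinations

def graphs_from_complex(simplices):
    # G1: 2-section graph; G2: intersection graph; G3: total graph of X.
    S = [frozenset(s) for s in simplices]
    V = sorted({v for s in S for v in s})
    one = {s for s in S if len(s) == 2}
    G1 = [(u, v) for u, v in combinations(V, 2) if frozenset((u, v)) in one]
    G2 = [(s, t) for s, t in combinations(S, 2) if s & t]   # common face
    G3 = G2 + [(frozenset({u}), frozenset({v})) for u, v in G1]
    return G1, G2, G3

# All faces of a triangle plus a pendant edge:
X = [{0}, {1}, {2}, {3}, {0, 1}, {1, 2}, {0, 2}, {2, 3}, {0, 1, 2}]
G1, G2, G3 = graphs_from_complex(X)
# By Proposition 2.2, X is C_i-k-colourable iff G_i is k-colourable.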
Colourings of simplicial complexes
The colourings in §2.1 are originally defined for hypergraphs, thus they do not take the additional structure of simplicial complexes into consideration. For that reason, we introduce the following:

Definition 2.3 Let X be a simplicial complex and let 0 ≤ r ≤ dim X. An ascending k-colouring of X in dim r is a colouring of the graph G_r(X) whose vertices are the r-simplices of X, two of them being adjacent when they are faces of a common (r+1)-simplex (1), whereas a descending k-colouring of X in dim r is a colouring of the graph G′_r(X) whose vertices are the r-simplices of X, two of them being adjacent when they share a common (r−1)-face (2); the latter is called the rth exchange graph of X (see [10]). We denote the respective chromatic numbers by χ_r(X) and χ′_r(X).

However, Theorem 1.3 cannot be used to model the colourings in Definition 2.3 using these graphs, as they may not be connected. We treat this issue in Sect. 3.
Instead, in this section we use the ascending and descending colourings to introduce new colourings which we can model in the spirit of Proposition 2.2.
Definition 2.4
Let X be a simplicial complex. We say that a map c : X → {1, 2, …, k} is a complete ascending (resp. complete descending) k-colouring of X (ℭ_4-, resp. ℭ_5-, k-colouring) if its restriction to X_r is an ascending (resp. descending) k-colouring of X in dim r for every r, and simplices of different dimensions receive different colours; and a full k-colouring of X (ℭ_6-k-colouring) if different simplices, one of which is contained in the other, receive different colours.
Let G_1 = (V_1, E_1) and G_2 = (V_2, E_2) be two graphs. Recall that the sum of G_1 and G_2 is the graph G = G_1 + G_2 with vertex set V_1 ⊔ V_2 and edges E_1 ∪ E_2 together with all pairs (v_1, v_2), v_1 ∈ V_1, v_2 ∈ V_2. The sum of any two graphs is connected. Also recall that the union of G_1 and G_2 is the graph G_1 ∪ G_2 with vertex set V_1 ∪ V_2 and edges E_1 ∪ E_2.
Proposition 2.5 For any connected simplicial complex X, any k ≥ 2 and any i = 4, 5, 6, there exists a pure Sullivan algebra M^i_k(X) which is not elliptic if and only if X is ℭ_i-k-colourable.
Proof First, note that a complete ascending (resp. descending) k-colouring of X is an ascending (resp. descending) k-colouring of X in dim r when restricted to X_r. Furthermore, simplices of different dimensions receive different colours. It becomes clear that if we define G_4 (resp. G_5) as the sum of the graphs G_r(X) (resp. G′_r(X)) over all dimensions r, then X admits a complete ascending (resp. descending) k-colouring if and only if the connected graph G_4 (resp. G_5) is k-colourable. Regarding the full k-colouring, let I denote the strict inclusion graph of X, that is, a graph with vertex set X and where (σ, τ) is an edge if and only if either σ ⊂ τ or τ ⊂ σ. Define a graph G_6 containing all the edges of I; then G_6 is connected since I is so. Furthermore, X is full-k-colourable if and only if G_6 is k-colourable. To finish, define M^i_k(X) = S_k(G_i), i = 4, 5, 6, and apply Theorem 1.3. ◻ We model one last colouring in this section. In [8] the authors introduce the following, more relaxed definition of vertex colouring:

Definition 2.6 Let k, s ≥ 1 and let X be a simplicial complex. A (k, s)-colouring of X (ℭ_7-(k, s)-colouring) is a map f : V → {1, 2, …, k} such that, for every σ ∈ X and for all 1 ≤ t ≤ k, |σ ∩ f^{-1}(t)| ≤ s. Let chr_s(X) denote the least integer k such that X is (k, s)-colourable.
A Sullivan algebra whose ellipticity codifies the (k, s)-colourability of a simplicial complex had already been obtained in [6]. However, we can use the work in [14] to provide a different construction of one such algebra, in terms of BCP_s(X), a set of partitions of the vertex set of X, and of G_0(P), a 1-dimensional simplicial complex associated to one such partition P (see [14, Definition 3]). It quickly follows that, when regarding G_0(P) as a graph, chr_1(G_0(P)) equals its ordinary chromatic number. Furthermore, G_0(P) is connected for every P ∈ BCP_s(X). Define M^7_{k,s}(X) = ⊗_{P ∈ BCP_s(X)} S_k(G_0(P)). Let us show that M^7_{k,s}(X) is the desired algebra. Recall that the tensor product of Sullivan algebras is not elliptic if and only if at least one of the factors is not elliptic. Therefore, if M^7_{k,s}(X) is not elliptic, there exists P ∈ BCP_s(X) such that S_k(G_0(P)) is not elliptic. Then, by Theorem 1.3, G_0(P) is k-colourable and, by the results of [14], X is (k, s)-colourable; the converse follows by reversing the argument.
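A direct check of Definition 2.6 (an illustrative sketch of ours, independent of the algebraic model above):

def is_ks_colouring(simplices, f, k, s):
    # Each simplex may contain each colour at most s times (Definition 2.6).
    for simplex in simplices:
        counts = {}
        for v in simplex:
            if not (1 <= f[v] <= k):
                return False
            counts[f[v]] = counts.get(f[v], 0) + 1
        if any(c > s for c in counts.values()):
            return False
    return True

X = [{0, 1, 2}]                                      # a single 2-simplex
print(is_ks_colouring(X, {0: 1, 1: 1, 2: 2}, 2, 2))  # True: colour 1 used twice
print(is_ks_colouring(X, {0: 1, 1: 1, 2: 2}, 2, 1))  # False: a (2,1)-colouring needs 3 colours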
Models for colourings of strongly connected homogeneous simplicial complexes
As mentioned in Sect. 2.2, the colourings in Definition 2.3 cannot be immediately modelled since the graphs that encode them, G_r(X) [see (1)] and G′_r(X) [see (2)], are not necessarily connected. In this section we further restrict the class of simplicial complexes that we are considering so as to be able to model these colourings.
Recall that a simplicial complex X of dimension dim X = n is strongly connected if for any two n-dimensional simplices σ, τ there exist {σ_0 = σ, σ_1, …, σ_k = τ} ⊂ X_n such that σ_{i−1} ∩ σ_i ∈ X_{n−1}, for i = 1, 2, …, k. Equivalently, X is strongly connected if and only if G′_{dim X}(X) is connected. On the other hand, X is homogeneous if every vertex is contained in an n-dimensional simplex. Then, if X is homogeneous and strongly connected, so is X^{(k)}, for 0 ≤ k ≤ n. Therefore:

Proposition 3.1 For any n-dimensional strongly connected homogeneous simplicial complex X, G_r(X) and G′_s(X) are connected, for 0 ≤ r < n and 0 < s ≤ n.
Proof The connectivity of G′_s(X), for 0 < s ≤ n, is an immediate consequence of the strong connectivity of X^{(s)}. Let us prove the connectivity of G_r(X), 0 ≤ r < n. Take σ, τ ∈ X_r. Since X is homogeneous, we can find σ̄, τ̄ ∈ X_{r+1} such that σ ⊂ σ̄ and τ ⊂ τ̄. Then, since X^{(r+1)} is strongly connected, we can find {τ̄_0 = σ̄, τ̄_1, …, τ̄_k = τ̄} ⊂ X_{r+1} such that σ_i = τ̄_{i−1} ∩ τ̄_i ∈ X_r, i = 1, 2, …, k. It is now immediate to check that σ σ_1 … σ_k τ is a path in G_r(X) joining σ and τ. ◻

An immediate application of Theorem 1.3 yields the following result:

Proposition 3.2 For any n-dimensional strongly connected homogeneous simplicial complex X and for 0 ≤ r < n (resp. for 0 < s ≤ n), there exists a Sullivan algebra M_k(X, r) (resp. M′_k(X, s)) which is not elliptic if and only if X admits an ascending k-colouring in dim r (resp. a descending k-colouring in dim s).
We now introduce the last collection of colourings.
Definition 3.3
We say that a map c : X → {1, 2, …, k} is:
• a maximal ascending k-colouring (ℭ_8-k-colouring) if for every 0 ≤ r ≤ dim X the restriction c|_{X_r} is an ascending k-colouring in dim r for X.
• a maximal descending k-colouring (ℭ_9-k-colouring) if for every 0 ≤ s ≤ dim X the restriction c|_{X_s} is a descending k-colouring in dim s for X.
• a minimal ascending k-colouring (ℭ_10-k-colouring) if there exists 0 ≤ r < dim X such that c|_{X_r} is an ascending k-colouring in dim r for X.
• a minimal descending k-colouring (ℭ_11-k-colouring) if there exists 0 < s ≤ dim X such that c|_{X_s} is a descending k-colouring in dim s for X.
Algorithmic complexity of simplicial complex colourings
If G is a graph, it can be regarded as a simplicial complex X(G) whose 0-simplices and 1-simplices are, respectively, the vertices and edges of G. Such a simplicial complex can be encoded using an adjacency matrix, so its codification has the same length as that of G.
In this section we show that the (edge, total) colourability of a graph G is equivalent to the ℭ i -colourability of X(G) for certain indices i. As a consequence, we immediately deduce Theorem 1.2.
Remark 4.1
It is immediate that the k-colourability of a graph G is equivalent both to the ℭ 1 -k-colourability and the ℭ 7 -(k, 1)-colourability of X(G). Similarly, the total k-colourability of G is equivalent to the ℭ 6 -k-colourability of X(G).
| 2022-11-21T15:12:54.589Z | 2020-06-12T00:00:00.000 | {
"year": 2020,
"sha1": "61796e39fa27b70d237395287eb3a3621095f1af",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00200-020-00440-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "61796e39fa27b70d237395287eb3a3621095f1af",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
54521851 | pes2o/s2orc | v3-fos-license | The Feasibility and Impact of Delivering a Mind-Body Intervention in a Virtual World
Introduction Mind-body medical approaches may ameliorate chronic disease. Stress reduction is particularly helpful, but face-to-face delivery systems cannot reach all those who might benefit. An online, 3-dimensional virtual world may be able to support the rich interpersonal interactions required of this approach. In this pilot study, we explore the feasibility of translating a face-to-face stress reduction program into an online virtual setting and estimate the effect size of the intervention. Methods and Findings Domain experts in virtual world technology joined with mind body practitioners to translate an existing 8-week relaxation response-based resiliency program into an 8-week virtual world-based program in Second Life™ (SL). Twenty-four healthy volunteers with at least one month's experience in SL completed the program. Each subject filled out the Perceived Stress Scale (PSS) and the Symptom Checklist 90-Revised (SCL-90-R) before and after taking part. Participants took part in one of 3 groups of about 10 subjects. The participants found the program to be helpful and enjoyable. Many reported that the virtual environment was an excellent substitute for the preferred face-to-face approach. On quantitative measures, there was a general trend toward decreased perceived stress (15.7 to 15.0), symptoms of depression (57.6 to 57.0) and anxiety (56.8 to 54.8). There was a significant decrease of 2.8 points on the SCL-90-R Global Severity Index (p<0.05). Conclusions This pilot project showed that it is feasible to deliver a typical mind-body medical intervention through a virtual environment and that it is well received. Moreover, the small reduction in psychological distress suggests further research is warranted. Based on the data collected for this project, a randomized trial with fewer than 50 subjects would be appropriately powered if perceived stress is the primary outcome.
Introduction
Walter Cannon first described the concept of ''fight or flight'', also known as the stress response, over 80 years ago [1]. Since then, we have also learned that activation of the stress response can affect the course of disease [2]. A contrasting ''relaxation response'' (RR) has been described by Benson and colleagues as a physiologic state that is counter to that induced by stress [3-5]. The relaxation response is accompanied by reduced oxygen consumption and carbon dioxide production [4,6], diminished sympathetic nervous system tone [7] and increased exhalation of nitric oxide [8]. Functional studies of the brain have shown decreased EEG fast activity in the frontal areas [9] and increased activation of the hippocampus and other parts of the limbic system that may moderate emotion [10], as well as other structural changes on imaging [11-13]. These changes may reflect some of the reasons that regular elicitation of the relaxation response has been reported to alleviate stress-related medical disorders [14-16]. Specific disorders impacted include hypertension and cardiac arrhythmias [17], chronic pain [18], insomnia [19,20], anxiety and depression [21], premenstrual syndrome [22], and infertility [23].
Widespread use of the relaxation response is handicapped by the teaching method. It has traditionally required significant face-to-face interaction with clinician-teachers. These face-to-face teaching sessions, delivered weekly for up to 12 weeks, require time and travel. Those with limited mobility have difficulty taking part. Costs, the clinical model, and concerns about privacy in the group teaching model provide additional hurdles to participation. Fortunately, the Internet has provided a conduit for reaching out to those with limited mobility who may also wish to maintain a certain degree of anonymity even while taking part in a group encounter. While many health interventions have been presented using the Internet [24-27], the exceptional opportunities afforded by online, 3-dimensional or virtual environments have not yet been systematically explored for mind-body interventions.
A virtual environment offers many advantages not available in a typical Internet based application. Individuals create representations of themselves, give that representation unique properties to reflect their emotions or feelings and can interact with others in a rich, shared space. One extremely successful application is that of Second Life TM (SL). The free software allows users to interact in a persistent virtual world represented as a character referred to as an avatar. Stock avatars are predominantly human and come with free ''animations'' that communicate basic body language such as laughing, waving, falling or flinching. While anonymity is not guaranteed, it is difficult to determine the true identity of a user if they chose not to share it.
People have been quick to employ SL for health related purposes. Self-organizing patient collectives were the first to develop, usually with expert patients at their core. However, the last few years have seen an increase in formal organizational involvement [28]. For many, these communities offer a degree of emotional support that cannot be achieved with existing telehealth solutions. Mainstream health care providers and organizations are also exploring this virtual environment as a tool to deliver technology-enabled care [29].
In this pilot study we aimed to determine if it is feasible to teach people to elicit the relaxation response using SL as the clinical teaching medium and to determine if there is an impact on perceived stress and other psychological symptoms. If successful, this approach could make it possible to reach a wider audience of individuals who could benefit from this mind body approach to health.
Methods
The project was designed as a pilot study to find out if it is logistically feasible to teach stress reduction in an online virtual world and if so, to determine the effect size. The data could be used to design a larger, randomized trial of the program. It was an open trial with no blinding or randomization. The target for recruitment was 20-40 healthy volunteers. This number of patients was selected based on the experience of clinicians at the Benson-Henry Institute with face-to-face groups where a small aggregate effect was seen in groups of 15-20 participants. We hypothesized that the virtual program would be less effective and adjusted our target to 20-40 subjects divided into 2-3 groups.
The project was reviewed and approved by the Partners Human Research Committee (PHRC) which serves as the Institutional Review Board of Partners Research Management. During the review of the Project, the PHRC specifically considered (i) the risks and anticipated benefits, if any, to subjects; (ii) the selection of subjects; (iii) the procedures for securing and documenting informed consent; (iv) the safety of subjects; and (v) the privacy of subjects and confidentiality of the data. Informed consent was obtained from all participants in the study, in person by a member of the research team. Each participant was given an opportunity to read the consent form at their leisure and to ask questions prior to signing the form.
Patients were recruited by word of mouth in virtual world communities, through advertisement kiosks in the virtual world, through contacting local SL users groups and through a presentation at the Boston Linden Lab office. A local newspaper article in the Boston Globe about the project also raised awareness and was helpful for recruitment.
The inclusion and exclusion criteria are shown in Table 1. The user interface for SL is somewhat difficult to learn and so only individuals with experience in SL were recruited to minimize the confounding effect of the interface. In addition, individuals had to be willing to attend two face-to-face meetings at the Massachusetts General Hospital. This requirement served two important purposes, to obtain standard, in-person, informed consent and to use surveys presently validated only for face-to-face or telephone administration.
Development of the virtual program
The virtual program was developed by drawing on the expertise of clinicians and team members with experience in SL. A group of 2 mind body trainers (MAB and MTM) and 2 developers (DAL and MS) met bi-weekly, with facilitation by the PI and project manager (HEB), to share their knowledge. This division of expertise permitted the clinical team to focus on the conversion of their practice to SL while allowing the technical team to understand how best to recreate the face-to-face experience in this virtual environment. Although the face-to-face teaching area is somewhat austere, the virtual space was designed to be comfortable and welcoming, but without too many distractions.
The team integrated standard web technology into the environment to simplify the development. By using external links to standard web sites for streaming video or online surveys, the team could focus on the unique technical opportunities offered by this virtual world. Representative pictures of the virtual environment are shown in Figure 1, while a list of principal features is presented in Table 2.
The participants were given some limited ability to modify the virtual environment. We felt that if the groups could make the virtual space more personal, it would promote greater group cohesion. Mature content and software that would interfere with or slow the function of the environment were not allowed.
Procedures
Each subject completed baseline questionnaires including an intake history and pre-training survey as well as psychosocial self-assessment instruments. The latter were chosen because of ongoing experience with the impact of teaching the RR on these measures [30]. They included the SCL-90-R, a general psychological symptom checklist, and the Perceived Stress Scale. After this initial face-to-face meeting, each subject was given the date and time of the initial group meeting in SL. Subjects were assigned to a group in the order that they enrolled. Each of the 3 groups included approximately 10 people. Participants were also given a copy of the consent form, a $20 American Express Check to cover their parking/travel expenses for the face-to-face meeting, and a microphone-headset to facilitate their engagement in the program in SL (value ≈ $100).
Program Details
The groups were facilitated by the same clinician (MAB). The clinician had over 20 years of experience teaching mind body medical interventions at the BHI. She is a nurse practitioner with a Masters in Nursing and specialty training in cognitive applications of positive psychology and contemplative meditation. Each group gathered in the virtual meeting area with the clinician twice per week for 8 weeks. The content of each of the 8 weeks is given in table 3.
Five different techniques for eliciting the RR were taught: 1) breath awareness, 2) mental repetition of a word, sound, phrase, or prayer, 3) mindfulness meditation, 4) guided body scan and Hatha Yoga, and 5) guided imagery. These 5 techniques are commonly used for eliciting the relaxation response and were used in previous studies of patients and healthy volunteers. The availability of more than one technique allows for individuals to explore several approaches and to avoid overuse of a single technique which may become tedious and a barrier to adherence. Instructional audio files of 20-minutes in length on each of these techniques were provided to subjects through SL for practice throughout the study. The other exercises described in Table 3 are felt to enhance resiliency and are commonly paired with eliciting the RR in mind body programs.
The first meeting of each week lasted approximately an hour and was an overview of the principles of mind body interventions, the nature of the relaxation response, and an opportunity to learn and practice several methods to elicit the relaxation response. The second meeting each week lasted 20 minutes and was offered to answer questions and reinforce the teaching presented that week. Subjects were instructed to elicit the RR every day for 20 minutes either in front of the computer with their avatar (representation of themselves) in the virtual teaching area, or in any other quiet setting. A relaxation audio recording was available at specific locations in the virtual world, activated by guiding the avatar to that area at any time. Participants were not allowed to download the audio file for use off line.
Subjects reported on their engagement in the program weekly using an online survey. The survey asked them to report the number of days they practiced for 20 min or more, methods they used for practice, the number of ''mini'' breathing relaxation exercises they used daily, and to report on symptoms they may have experienced.
Statistics
The areas of interest for this pilot study included three scales on the Symptom Checklist 90-Revised (SCL-90-R) and the Perceived Stress Scale (PSS). The scales of the SCL-90-R that were examined included the Global Severity Index (GSI), which measures overall psychological distress, the depression scale, which measures symptoms of depression (DEP), and a scale for anxiety-related symptoms (ANX). Statistical analysis was performed using STATA on these measures, which were obtained before and after taking part in the program. Since this was a pilot program in which a small number of individuals took part, we were unable to stratify by potential covariates such as gender, age and educational level.
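In outline, the paired pre/post comparison can be reproduced as follows (a sketch with made-up scores, not the study data; the paper's own analysis was performed in STATA):

from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    # Paired t statistic on the per-subject differences.
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

pre  = [60, 58, 55, 61, 57, 59]   # hypothetical GSI-like scores before
post = [57, 55, 54, 58, 56, 55]   # ... and after the programme
t, df = paired_t(pre, post)
print(t, df)                      # a negative t indicates a decrease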
Results
A total of 55 individuals were screened to enroll 28 subjects. The only reason for ineligibility was inexperience with SL. A complete data set for analysis was obtained in 24 of the 28 volunteers, because 4 subjects met the objective drop criteria of the study and were not included in the final analysis. (One failed to answer all questions on the pre-program surveys but completed the entire study, two completed the course but incompletely filled out the post-program surveys and one did not attend the final face-to-face meeting.) Since the study was designed as a feasibility pilot, and was not powered for statistical significance, we have excluded their data completely. A description of the 28 participants is given in Table 4.
Impact of the intervention
A total of 24 of the 28 subjects completed both pre- and post-program surveys. Scores on the Perceived Stress Scale and all 3 of the scales of the SCL-90-R were normally distributed and compared pairwise. There was a trend toward improvement in each measure. Change on the Global Severity Index of the SCL-90-R reached statistical significance, as shown in Table 5.
Qualitative Results
The individuals taking part in this study were extremely positive about the experience, although most did not attend all of the sessions in SL. Two of the best-accepted components of the traditional, face-to-face program, guided imagery and single-pointed focus, were reported by a majority of our subjects to be ''the best aspects and best takeaway from the program''. Thus, there is some consistency in the reaction to the program presented in the virtual world and face-to-face.
Many participants reported that were it not for the convenience of remotely logging in for the one hour session, they would not have been able to commit to the program schedule. In addition, some felt that the subject matter was more approachable and digestible due to the anonymity, a feature that several direct quotes from participants demonstrated well.
Discussion
The goals of this pilot study were to determine if it is feasible to convert a face-to-face mind body program into one that can be delivered in a virtual setting, and to estimate the effect size in this setting for the design of a larger clinical trial. Both goals were achieved, and important lessons were learned.
Although somewhat challenging, providing the program in a virtual environment is feasible, and it would not require an unusually large sample to definitively test the efficacy of the program. The perceived stress scale would be a reasonable choice for primary outcome in such a study. It has excellent face validity and is easy to understand. Additionally, there is a wealth of information about the perceived stress scale because ''stress'' is a common complaint in the general population and in clinical practice, and Cohen and his collaborators have recently compared perceived stress in a relatively large sample of individuals [31].
Several important lessons were learned in the course of this study. First, although it is indeed feasible to present this type of program in a virtual world, the user interface is problematic. Recruitment was limited to individuals with prior experience in SL since the interface was known to be a barrier to entry. Even with such inclusion criteria, some of the less experienced users had problems that likely affected their participation. Prescribing this type of technology to individuals at large, with no experience in SL, would only be possible if some type of face-to-face tutorial or training program was also offered.
There are several important limitations to the use of a virtual environment as described in this report. First, the healthy subjects we recruited were relatively young and exceptionally well educated. Since much chronic illness, especially that which limits mobility, is found in the older population, a virtual environment could be very difficult to use without significant technical support. Further, in the absence of remote sensing technology, it is very difficult to know if the participants are taking part in the exercises or successfully eliciting the relaxation response. However, improvements in access to and ease of use of technologies may moderate both these concerns.
Technology changed significantly even in the short span of this study. When we initiated the project, the audio capabilities of SL were markedly unreliable. We decided to use Skype TM for the audio component. However, shortly after our study began, Linden lab improved SL ''voice'' and it is now adequate for this type of program. There is still an advantage to a separate audio technology, discovered as an unanticipated consequence of our design. If one system fails, the other persists and communication is intact.
A final limitation due to the unique profile of our subjects is that of stress level. In comparison to the normative data recently reported by Cohen [31], our sample reported a perception of greater stress than the general population. This may be a function of their high education that could result in more demanding jobs, local stressors or any number of other reasons that we did not address.
Providing care in a virtual world such as SL also presents a number of theoretical, practice-related challenges. Most malpractice is alleged due to negligence, and proving negligence requires four elements: duty of care, breach of duty, injury and proximate cause. Further, one of the tenets of successful provider risk management is clear, caring communication with the patient. In a virtual world such as SL, challenges may arise in the clear establishment of duty, communication of a breach, and in documentation of injury or proximate cause. Likewise, great challenges around communication arise when the vehicle for communication is an avatar and exchange of text messages. Cues that are used to sense the clarity of communication, such as eye contact and body language, are missing. The anonymity associated with the virtual world can cloud all aspects of the provider-patient relationship. The cues that one uses to confirm an individual's identity in a face-to-face meeting are lost. It is easy to masquerade as another individual in the virtual world since most contact occurs via text communication, although our use of voice helps. Working in a virtual world would place an additional burden on a clinician to be clear about availability (since she will be in the virtual world for limited times) in order to avoid allegations of negligence. It must be emphasized that these concerns are indeed theoretical, as no case law has emerged on what is both a nascent and low-volume activity.
There are good reasons to continue to devise and investigate methods for widely distributing stress management programs like the one presented in this study, in spite of the challenges described above. The American Institute of Stress estimates that from 75 to 90% of primary care visits are to some extent related to stress. Face-to-face mind body interventions have been shown to be beneficial for many stress related conditions. However it will be difficult to make these self care strategies available on the scale necessary for effective primary, secondary and tertiary prevention, especially because mind body wellness approaches are not sufficiently reimbursed by 3rd party and governmental payers. With this in mind online web-based programs and cell phone applications will be of key importance in mind body public health and clinical efforts. Furthermore a segment of the public that is difficult to reach with face-to-face and with web-based programming-namely the socially phobic, those with post-traumatic stress disorder, autistic spectrum populations, and those with any illness that limits mobility will benefit from the use of virtual world mind body programming. Future research in this area faces several interesting challenges. The technology is rapidly evolving. SL, in particular, has changed significantly since the completion of this study. The company has reduced some services, eliminated nonprofit/educational pricing and made little effort to improve user security. Additionally, the client-server approach employed by Linden Lab is being supplanted by distributed virtual environments like jibe developed by ReactionGrid. The interface to these virtual worlds is changing rapidly, becoming more intuitive but also more powerful. The rapid pace of these changes means that clinical programs developed using this technology may be obsolete by the time a comparative effectiveness trial can be completed.
The rapidly changing technological landscape also makes the choice of an appropriate comparison group difficult. A reasonable approach would be to compare a virtual intervention for stress reduction to the standard of care, which is a face-to-face intervention. However, since the human-computer interface is so critical to the intervention, an additional comparison of experienced computer users to those with little online experience may also be warranted.
Research into the provision of health interventions in virtual environments should continue in spite of these hurdles. The population is living longer with a variety of chronic diseases that can limit mobility. More and more healthcare will take place at the patient's home and online. Fortunately, even the aging population is becoming increasingly comfortable with online commerce, social networking and personal interactions. | 2018-04-03T01:51:30.251Z | 2012-03-28T00:00:00.000 | {
"year": 2012,
"sha1": "ce8c1b697bb1189efda380bb005a6c0bb233769e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0033843&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a4307a18aaf4939c033713d332cdc0f847cbed4b",
"s2fieldsofstudy": [
"Psychology",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14368835 | pes2o/s2orc | v3-fos-license | Search for star clustering: methodology and application to the Two Micron Galactic Survey
A new approach to the study of the large-scale stellar cluster distribution in the Galaxy based on two-point correlation techniques is presented. The basic formalism for this method is outlined and its applications are then investigated by the use of a simple model of cluster distribution in the Galaxy. This provides an estimate of the potentials of the two-point correlation function for indicating clustering in the measured star positions, which can be related to the presence of star clusters in the observed field. This technique is then applied to several areas of the Two Micron Galactic Survey catalogue, from which information is obtained on the distribution of clusters according to position in the Galaxy, as well as about age, density of stars, etc.
INTRODUCTION
Open star clusters provide valuable information on the evolution of the Galaxy. In this paper two-point correlation techniques are used to analyse the distribution of open clusters in order to gain an insight into the structure and evolution of the Galaxy.
Open-cluster distributions have been widely studied at optical wavelengths as a means of studying Galactic structure and evolution (see, for example, Lyngå 1987b; Janes & Phelps 1994). The Lyngå catalogue of open clusters (Lyngå 1987a) lists about 1200 clusters, which represent nearly all the open clusters accessible in the visible. Knowledge of the positions and ages of these clusters (a method of age determination for clusters is given by Carraro & Chiosi 1994) enables the scale length and scale height of the disc to be derived for both young and old clusters (for a review of old open clusters, see Friel 1995) and theories to be developed on their formation and destruction throughout the history of the Galaxy.
The limitations on these studies are imposed by the maximum distance at which open clusters can be detected. Most of the catalogued open clusters are in the solar neighbourhood, and very few have distances greater than 3 kpc (Payne-Gaposchkin 1979). Hence, information is obtained only for a small region of the Milky Way. The problem is caused by interstellar extinction. An excellent tool for studying star clusters and star formation regions is to observe in the infrared (Wynn-Williams 1977), where the effect of the extinction is far less. To date, however, the infrared has been little used in this field owing to the absence of suitable databases.
The K band is probably the best region of the spectrum for tracing the stellar distribution of the Galaxy. The K radiation is a mass tracer in spiral galaxies because it follows the old stellar population (Rhoads 1995). Furthermore, the K light is dominated by high-mass stars in star formation regions, i.e. in open clusters, so it is especially appropriate in the search for young clusters, which are rich in massive stars.
As explained in more detail by Garzón et al. (1993), the Two Micron Galactic Survey (TMGS) is a K-band survey of various regions along the Galactic equator between −5° < l < 35°, |b| ≤ 15° and 35° < l < 180°, |b| ≤ 5°. The TMGS catalogue has a positional accuracy of about 4 arcsec in right ascension and 7 arcsec in declination. These errors have been estimated after cross-correlating the original TMGS source positions with Guide Star Catalogue (GSC) counterparts. The larger error in declination comes from the orientation of the array with respect to the survey direction. Due to the dead spaces between detectors, the K limiting magnitude for completeness has to be set conservatively within a range from 9 to 9.5 mag, although the detection limit magnitude of the survey is well in excess of 10 mag.
In this paper a new method is presented for automatically determining the level of clustering in catalogues, the TMGS being used as an example. A set of criteria is defined for an automatic search for correlations among stars by means of the two-point correlation function and the two-point angular correlation function. A simple model which assumes a regular distribution of clusters with a constant star density is developed in Section 3. Predictions from this model are then compared with the TMGS and the discrepancies analysed. The causes of clustering are then discussed. The use of the tools described in Section 2 and their application to the TMGS catalogue are dealt with in Section 5, and the results for several regions of our Galaxy are given in Section 6. Finally, a summary of the results is given, and suggestions are made for future developments of the methodology described here.
THE TWO-POINT CORRELATION FUNCTION AND THE TWO-POINT ANGULAR CORRELATION FUNCTION
Occasionally, when the scale length of a system is changed, certain aspects of the system remain invariable, as is the case for the distribution of matter in space. For example, there are mathematical methods for handling the spatial distribution of atoms in solids, gases and (particularly) liquids.
Cosmologists face the same kind of mathematical problem when working with the distribution of galaxies and clusters of galaxies in the context of the large-scale structure of the Universe. They treat the Universe as a fluid whose 'particles' are galaxies. Our aim in this paper is to develop the use of similar mathematical methods on an intermediate scale, i.e. in examining the distribution of the stars that make up our Galaxy. Correlation functions describe how points are distributed (e.g. Peebles 1980; Borgani 1995). Suppose a local density of objects δN/δV = ρ(r) and an average density ⟨ρ⟩ (hereafter, the symbol ⟨···⟩ indicates a local volume average). Note that ρ(r) corresponds to the exact distribution of objects, i.e. it is a Dirac delta function with zero value where there are no objects, and ⟨ρ⟩ is the local average density, i.e. the number of objects per unit volume, and provides no information concerning their distribution.
The two-point correlation function (TPCF) is defined through ⟨ρ(r_1) ρ(r_2)⟩ = ⟨ρ⟩² [1 + ξ(r)], with r = |r_1 − r_2|. The function ξ(r) expresses the excess over the random probability of finding objects at separation r (ξ(r) = 0 means that the probability is totally random; ξ(r) > 0 that the probability is greater than random, i.e. that there is clustering; and ξ(r) < 0 that the probability is less than random, i.e. that there is relative avoidance).
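As an illustration of this definition (our own sketch, with arbitrary points and edge effects ignored), ξ(r) can be estimated by comparing the observed number of pairs at separation r with the Poissonian expectation:

import math, random

def xi_estimate(points, r_lo, r_hi):
    # DD / RR_expected - 1 for points in a unit cube (naive, no edge correction).
    n = len(points)
    dd = sum(1 for i in range(n) for j in range(i + 1, n)
             if r_lo <= math.dist(points[i], points[j]) < r_hi)
    shell = 4.0 / 3.0 * math.pi * (r_hi ** 3 - r_lo ** 3)
    expected = 0.5 * n * (n - 1) * shell      # Poissonian pair count in unit volume
    return dd / expected - 1.0

random.seed(1)
pts = [(random.random(), random.random(), random.random()) for _ in range(500)]
print(xi_estimate(pts, 0.05, 0.10))           # ~0 for a Poissonian distribution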
In the same way, the two-point correlation function can be defined for two dimensions on the surface on to which the distribution is projected (the celestial sphere in the case considered here). This is called the two-point angular correlation function (TPACF) and is defined through ⟨σ(n̂_1) σ(n̂_2)⟩ = ⟨σ⟩² [1 + ω(θ)], where σ is the surface density per unit solid angle and θ the angular separation. Another mathematical technique for deciding whether a distribution is non-Poissonian is area tessellation, as was used by Balázs (1995) to test the grouping tendency of Hα-emission stars in the Orion molecular clouds without giving a quantitative measure of the departure from the Poissonian distribution. See also Pásztor et al. (1993), Pásztor & Tóth (1995) and references therein for other astronomical applications of spatial statistics.
Relationship between the TPCF and the TPACF for stars
When applying the above definitions to stars in the Galaxy, the luminosity function and space density have to be taken into account. By generalizing the result of the Limber (1953) equation for constant density, the relationship between the TPCF (which is non-zero for distances less than Δr) and the TPACF (for small θ) for any density distribution can be written as the integral expression (3), where r is the distance along the line of sight, M the absolute magnitude and φ(M) the luminosity function. The minimum and maximum values of M for a distance r depend on the minimum and maximum values of the apparent magnitude and the extinction along the line of sight.
In this case, it is assumed that the absorption is not patchy, i.e. that it is independent of θ for small angles. This is not exactly true, but it will be shown in Section 2.4 that the effects are negligible.
The subscript 't' stands for 'total', a projection over all distances and magnitudes, and σ_t is the total two-dimensional density for all distances and magnitudes. In the literature, σ_t is also called A(m_min, m_max, l, b) and represents the star counts in the magnitude range (Bahcall 1986).
This expression enables the TPACF to be found once the three-dimensional distribution of the stars is known and forms the basis of this article, in which we create a model distribution of the stars and compare the results obtained with those observationally in order to investigate the distribution of clustering in the structure of our Galaxy.
In general, the TPACF cannot be inverted to give the TPCF due to the multiplicity of possible solutions and to the lack of precise knowledge of certain parameters. However, there are certain cases in which the equation can be inverted and the TPCF obtained from the TPACF (Fall & Tremaine 1977). A trivial example where inversion is possible is that of a Poissonian three-dimensional distribution, which implies a Poissonian projected distribution and vice versa, i.e. ξ = 0, ω = 0 on all scales. Another example is when ⟨ρ⟩(r) is a constant independent of r.
Definition of new variables
In order to simplify the comparison of the level of clustering for different regions of the sky, two new variables will be introduced. θ_max is defined as the first zero of ω_t(θ). In this article (see for example Fig. 7), ω_t is positive up to a separation θ_max. For values greater than θ_max, ω_t is small and oscillates about zero, as there is no correlation among stars separated by large angular distances.
Another definition, corresponding to the integration of ω_t up to the limit θ = θ_max (θ > θ_max would give a null contribution to the integral), is C_2 = (2/θ_max²) ∫_0^{θ_max} ω_t(θ) θ dθ, which means the excess (when C_2 is positive) or deficit (when C_2 is negative) of the relative number of objects with respect to a Poissonian distribution in a circle centred on an arbitrary star on the celestial sphere, within the observed solid angle and with angular radius θ_max. The relative correlation within the angular scale θ_max is therefore measured. We call C_2 the 'accumulation parameter' (N.B. there are also other definitions in the literature of the TPACF integral, e.g. Wiedemann & Atmanspacher 1990).
The variable C2 has a clear meaning associated with projected clustering and is also a useful number to measure. Since it sums several values of ω for different angles, it condenses the information of interest into a single number that can be compared for different samples of stars and give the degree of clustering. This parameter is a mathematical expression of the degree of clustering seen in fields of stars. The idea that we wish to stress here is that all mathematical developments described in this paper are designed to put the intuitive idea of clustering to a reliable test. These calculations are necessary for a quantitative, as opposed to a merely qualitative, description of clustering.
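A sketch of how θ_max and C_2 could be extracted from a tabulated ω_t(θ) (our own illustration; the integral form assumed here is the area-weighted mean of ω_t inside the circle of radius θ_max):

def theta_max_and_c2(thetas, omegas):
    # First zero of omega gives theta_max; C2 = (2/theta_max^2) Int omega(t) t dt.
    tmax = thetas[-1]
    for i in range(1, len(omegas)):
        if omegas[i] <= 0.0 < omegas[i - 1]:
            frac = omegas[i - 1] / (omegas[i - 1] - omegas[i])
            tmax = thetas[i - 1] + frac * (thetas[i] - thetas[i - 1])
            break
    c2, t_prev, w_prev = 0.0, 0.0, omegas[0]
    for t, w in zip(thetas, omegas):
        if t > tmax:
            break
        c2 += 0.5 * (w * t + w_prev * t_prev) * (t - t_prev)   # trapezoid rule
        t_prev, w_prev = t, w
    return tmax, 2.0 * c2 / tmax ** 2

thetas = [0.001 * i for i in range(1, 200)]
omegas = [0.3 * (0.05 - t) / 0.05 for t in thetas]   # toy omega, positive below 0.05
print(theta_max_and_c2(thetas, omegas))              # ~(0.05, 0.1)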
Applying the expression (3) for ω_t in C_2, we get equation (6), where Ξ, an integrated TPCF, is defined by (7).
Further approximations
In order to simplify the above calculations, it will be assumed that the distribution of stars does not depend on their luminosity, i.e. that ξ(y; r, M, M′) = ξ(y; r). This is not completely true, as there is a small dependence of the distribution of stars in a cluster on their masses, and the luminosities are dependent on the masses. A complete calculation taking the luminosity function into consideration would be of great value. However, the relationship between the TPCF and the luminosity function is uncertain, although the effects of this approximation on the detection of clusters are expected to be small. With this approximation, and from (4) and (3), ω_t can be written in terms of N*(r), the number of stars observed per unit solid angle at a distance r. The variable ω_t can also be expressed through the averages ⟨r⟩ and ⟨Ξ⟩, defined so as to match (9). Also, from (6), C_2 reduces to an average of the integrated TPCF; this last equivalence is a way of averaging the function ξ.
Hence, high values of C2 indicate that there must be high projected clustering in the direction of the beam.
Patchiness of extinction
It is clear that extinction can distort the observed counts, the amount of the distortion being a matter of controversy. It is generally accepted that in the optical wavelengths this influence is very severe, particularly in regions near to or in the Galactic plane in the inner Galaxy, where the strong and patchily distributed obscuration makes it difficult to penetrate deep into the Galaxy. The amount of extinction decreases substantially with increasing wavelength. Maihara et al. (1978) quoted a value of 0.17 mag kpc^{-1} as typical for extinction in the Galactic plane in the K band, compared with 1.9 mag kpc^{-1} for the V band (Allen 1973). This has two important consequences. First, the K band is more effective at penetrating the interstellar dust. Secondly, the observed stellar distribution more closely resembles the true distribution. For the second argument to be true it is necessary that the obscuration in the K band should not only be smaller in amount than in the V band, but also that its patchiness should be less important.
This rather uniform distribution of the interstellar extinction in K can be inferred from the TMGS histograms in several cuts across the Galactic plane. Garzón et al. (1993, their Fig. 8) compared the observed stellar distribution in the TMGS and the GSC in the V band. It is noticeable how uniform the K histograms are, particularly when compared with those for the GSC. Except for small portions highly concentrated in the Galactic plane and more marked in the central regions, the shape of the high spatial resolution distribution curves of the TMGS does not exhibit the 'noisy' pattern of the GSC plots, which is certainly due to the presence of strong and patchily distributed extinction. Hammersley et al. (1994) showed similar histograms for different areas which also have similar shapes. Moreover, a good fit to a classical exponential disc can be seen in Fig. 3 of that paper; this would not be the case if the extinction were important and non-uniform.
This conclusion can also be reached from the contour maps of the bulge of the Galaxy of Dwek et al (1995), who showed the residuals of the DIRBE data after disc subtraction and extinction correction. Again, the general shape of the maps proves the basic uniformity of extinction distribution in the near infrared.
We now estimate these effects. From (4), with the change of variable r = 10^{(5+m_max−M_max)/5}, an expression for the local cumulative counts σ_t follows, ignoring the variation of extinction with the distance. If we take the density D as constant, then σ_t ∝ 10^{3m_max/5}. Taking D as constant is sufficient for estimating the order of magnitude of the patchiness due to extinction. In any case, the above proportionality is followed in the observed cumulative counts but with a constant value of between 1 and 2 instead of 3/5 in the exponent. An excess of extinction, Δa(θ), due for example to a cloud at an angular distance |θ − θ_0| with respect to a given point θ_0, will cause a reduction in the apparent flux of a fraction, f, of stars (behind the cloud), thereby creating the same effect as a reduction in the maximum apparent magnitudes of these stars by Δa(θ) mag, or, if Δa(θ) is relatively small, a reduction in m_max by fΔa(θ) mag for all stars. Hence, σ_t(θ) ∼ σ_t(θ_0) 10^{−3(a(θ)−a(θ_0))f/5}.
If it is assumed that the observed flux fluctuations, ΔF, are due mainly to extinction variations, then, with the small-fluctuation approximation, the two are directly related (the factor f appears again here for the same reasons as above). So, from equation (16), using the small-fluctuation approximation, the angular correlation of star density is about 3/2 times the angular correlation of the flux (18). Averaging the DIRBE K flux (Boggess et al. 1992) fluctuations from the maps with 2520″ resolution over |b| ≤ 3° for constant-l strips over the range −35° < l < 35° (where the effects of extinction are most relevant), we get root mean squares whose oscillations are not very high in the plane, their maximum being 2.3 times the average. From equation (18), and taking into account that the mean square fluctuation is ω(0), we obtain the value of ω(0) for regions of 2520″ in size. In the most unfavourable case, where the extinction is highest (multiplied by a factor of 2.3² because the maximum root mean square is 2.3 times greater than the average), ω(0) ∼ 0.08.
We conclude that extinction in K cannot be responsible for correlations ω(0) greater than ∼ 0.08. This is just an estimate, but the order of magnitude should not be very different. As will be shown, the results when applied to the TMGS are above this value (see, for example, Fig. 7), and causes other than patchy extinction must explain this.
A SIMPLE CLUSTERING MODEL
In order to gain an understanding of how the accumulation parameter (C2) varies with Galactic position, a model of the stellar distribution is required. The model adopted in this section is very simple and consists of a group of spherical star clusters separated by distances much larger than the sizes of the clusters and embedded in a Poissonian distribution of field stars with average density ρ_nc. The density of clusters is n_cl. To simplify the problem, a constant star density, ρ_cl, is assumed for each cluster (ρ_nc < ρ_cl); it is also assumed that the clusters all have the same radius, R_cl (i.e. they are filled homogeneous spheres). In fact, the density of clusters, their radii and their internal stellar densities differ in different regions of the Galaxy; however, there is insufficient information to construct a more detailed model, and our main interest here is in how C2 varies qualitatively. According to the definition, the TPCF is the average of the product of two numbers (for a distance y): the first is the probability of finding an object in a given position, and the second is the number of times that the object counts in a shell of radius between r and r + dr exceed the same counts in a Poissonian distribution. The first number, the probability of finding an object in a volume dV centred on r within a total volume V, is ρ(r)dV/(⟨ρ⟩V). With regard to the second number (Fig. 1), there are two cases to be considered: (i) When r is at distance x < R_cl + y from the centre of a cluster, the second number is the sum of the excess (with respect to the average, ⟨ρ⟩) of objects in the part of the shell that is inside the cluster (S_ss(x; y, R_cl)/4πy² × (ρ_cl − ⟨ρ⟩)/⟨ρ⟩) and the deficit of objects in the part of the shell outside the cluster (−(4πy² − S_ss(x; y, R_cl))/4πy² × (⟨ρ⟩ − ρ_nc)/⟨ρ⟩). Here, S_ss(x; y, R_cl) is the area of a spherical surface of radius y inside another sphere of radius R_cl whose centre is at distance x from the first one. (ii) When the distance, x, from any cluster is larger than R_cl + y, the second number is the negative quantity −(⟨ρ⟩ − ρ_nc)/⟨ρ⟩, i.e. the deficit of objects compared with a Poissonian distribution with density ⟨ρ⟩.
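The geometric kernel S_ss(x; y, R_cl) has a closed form from the standard spherical-cap formula; a minimal sketch, with the case handling following directly from the definition above:

```python
import math

def s_ss(x: float, y: float, R: float) -> float:
    """Area of a spherical surface of radius y, centred at distance x
    from the centre of a sphere of radius R, that lies inside that
    sphere (the quantity S_ss(x; y, R_cl) of the text)."""
    if x + y <= R:                 # shell completely inside the cluster
        return 4.0 * math.pi * y * y
    if abs(x - y) >= R:            # shell completely outside
        return 0.0
    # Intersecting case: spherical cap of half-angle alpha,
    # with cos(alpha) given by the law of cosines.
    cos_alpha = (x * x + y * y - R * R) / (2.0 * x * y)
    return 2.0 * math.pi * y * y * (1.0 - cos_alpha)

# Sanity check: a small shell centred on the cluster edge is half inside.
R_cl = 1.0
print(s_ss(R_cl, 0.01, R_cl) / (4.0 * math.pi * 0.01 ** 2))  # ~0.5
```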
When y is sufficiently large there will be cases in which the shell intersects more than one cluster. However, such large values of y are of no interest here, and only those cases in which the magnitude of y is of the same order as that of R cl will be considered.
Thus, the expression for the TPCF is given by (22). The average density ⟨ρ⟩ and, from Appendix A1, the integrals of S_ss then follow. We insert the last five equalities in (22) and, after simplifying, this leads to the expression for ξ(y) given in (29). The reader should bear in mind that the applicable range of y is that of distances smaller than the minimum distance between two clusters. Owing to the properties of the TPCF, the quantity ∫_{all space} dV_y ξ(y) = 4π ∫_0^∞ dy y² ξ(y) should be equal to zero. This is not exactly the case in (29), which is not valid for y as large as the typical distances among clusters. Nevertheless, it can be seen that ∫_{V_y = n_cl^{−1}} dV_y ξ(y) = 0, i.e. the volume in which (29) can be used is roughly n_cl^{−1}. The behaviour of the function can be seen in Fig. 2 for n_cl R_cl³ = 10⁻³ and ρ_cl/ρ_nc = 100. It becomes constant for y > 2R_cl, and the value at which ξ = 0 is y = 1.90R_cl, which is also very close to 2R_cl. In what follows, the term (4/3)πR_cl³ n_cl will be neglected because in practice it is too small (the separation among clusters is much larger than R_cl). The next step is the calculation of the integrated TPCF (Ξ) using (7) and including (29). In this case, Δr is the first value for which ξ(√(y² + (r − r′)²)) = 0, which is Δr = √(4R_cl² − y²). Hence Ξ follows. Fig. 2 shows that the behaviours of the TPCF and the integrated TPCF (ξ and Ξ) are not very different. Now, using (6), the observed value of C2 will be calculated. The theoretical lower limit of the integral should be zero, but in practice values below y_min = rθ_min cannot be observed, owing to the resolution θ_min of our detector (r is the distance of the zone of clusters).
where F(t_0) is a correction factor, close to unity in most cases, whose values are given in Table 1. The observational value of C2 would be an average of this quantity with weight N*²(r), according to (12), and with the characteristic parameters of the clusters depending on r, i.e. n_cl(r) and ((ρ_cl − ρ_nc)/⟨ρ⟩)²(r).
3.1 The contribution of a single shell of clusters
In the case where there are clusters only in the shell between r and r + δr, with δr/r ≪ 1, and where the contribution to C2 comes only from this range of r (when in the other ranges of r the contribution is nil, or when any other shell with clusters has a negligible N*(r)), then θ_max = 2R_cl/r (equation 33), because the TPCF (ξ) in (29) is zero for this value (as (4/3)πR_cl³ n_cl is much smaller than unity). Then, introducing (33) in (31) and feeding the result into (12), we obtain the expression (34) for C2. The factor F reduces the value of C2 when θ_max is only a few times greater than θ_min. Since θ_max = 2R_cl/r, the more distant the clusters the smaller the value obtained for C2. θ_max is normally large enough compared with θ_min for F to be considered as always close to unity. The values that F takes are shown in Table 1. As can be seen, when x (= θ_min/θ_max) is greater than ≈ 1/4, i.e. when the distance of the cluster is greater than ∼ R_cl/(2θ_min), the effect of the factor F begins to be noticeable.
3.2 Clusters distributed throughout the Galaxy
Suppose that there are clusters distributed throughout the Galaxy, i.e. that there are clusters at all distances along the line of sight in any direction. The TPACF (ω_t), which is related to the TPCF (ξ) through (9), is computed numerically. The function ξ(y) is obtained with (29), where it is assumed that the size and internal star density are the same in all clusters, and that the density of clusters, n_cl, is proportional to the mean density of stars: n_cl(r) = C⟨ρ⟩(r). The density of stars (ρ_nc) is inferred from the relationship (24). N*(r) is calculated using (10), where ⟨ρ⟩ and φ(M) are calculated for a model Galaxy with two components: a disc and a bulge. The disc is taken from the Wainscoat et al. (1992) model and the bulge from the model described in López-Corredoira et al. (1997). The extinction law given in Wainscoat et al. (1992) is also used.
The explicit dependence of C2 on l and b is calculated from the ω values and a θ_max that is derived with the approximation described above. θ_max is calculated in this way because the first zero of the function ω cannot be derived directly. In the previous approximations, ω was always positive and negative values were neglected. When these operations are carried out for the values R_cl = 1 pc, ρ_cl = 500 pc⁻³ and C = 10⁻⁵, the results shown in Figures 3 and 4 are obtained.
In Figures 3 and 4 it can be seen how the correlation increases with distance from the Galactic plane (increasing |b|) as well as with distance from the centre of the Galaxy (increasing |l|). An explanation of this can be seen directly in the expression (29), which is proportional to n_cl((ρ_cl − ρ_nc)/⟨ρ⟩)², where ρ_cl is a constant much larger than ρ_nc and n_cl is proportional to ⟨ρ⟩. It can be seen that ξ is greater for smaller values of ⟨ρ⟩, i.e. it is greater away from the Galactic plane and far away from the bulge. Since the angular correlation function is an average of all the correlation functions along the line of sight, this produces the above results.
This is merely a hypothetical example, and no meaning should be given to the actual values obtained, since the parameters are invented. However, the qualitative dependence of ω on l and b is significant. This result is even more general than the particular case of the proportionality n_cl = C⟨ρ⟩. An increase in ω with increasing |l| or |b| results whenever n_cl((ρ_cl − ρ_nc)/⟨ρ⟩)² increases in a typical region along the line of sight. Even a dependence such as n_cl = C⟨ρ⟩^α with α < 2 is acceptable to allow this kind of dependence. Another significant prediction is that in the anticentre region the dependence on l is very smooth and nearly constant.
THE CAUSES OF THE CLUSTERING
The existence of clustering indicates that the stellar components of a cluster share the same time and place of birth, and that all the stars observed in the cluster once belonged to the same originating cloud, which explains why they occupy neighbouring positions in space. The alternative hypothesis of an initial Poissonian distribution of stars which collapsed to form the cluster is not feasible, because the cluster's gravity would have been too weak to restrain the velocities of the stars, which would have "evaporated" from the primitive cluster. The fact is, then, that clusters originate from earlier clusters that were formed from a cloud, or several clouds, in the same region.
It is assumed in the TPACF, equation (3), that only ξ, and not the extinction (through the relationship between apparent and absolute magnitudes), is dependent on θ. As discussed in Section 2.4, the contribution from cloud irregularity is irrelevant.
4.1 Relationship with the evolution time

The relationship of the accumulation parameter with the evolution time comes from the rate of evaporation of stars. When the effects of dust clouds are ignored, all contributions to C2 come from ξ, i.e. from clusters that happen to be in the line of sight. The relationship with the evolution time enters through ρ_cl, since stars escape from the cluster over time. According to Chandrasekhar (1942), the rate of escape of stars is governed by two constants, Q and T_E, that depend on the characteristics of the cluster: Q is the fraction of stars with velocities greater than the escape velocity, and T_E is the average time that these stars take to leave the cluster. Chandrasekhar (1942) calculates a value of Q = 0.0074 for a relaxed cluster. T_E (in Gyr) is the relaxation time for an average cluster with N stars, radius R (in pc) and average stellar mass m (in M⊙). Insofar as C2 is proportional to ((ρ_cl − ρ_nc)/⟨ρ⟩)² through the proportionality with ξ in (29), and assuming n_cl and R_cl to be constant and ρ_cl ≫ ⟨ρ⟩, C2 decays approximately exponentially with time, i.e. C2 ≈ K1 e^{−K2 t}, where K1 and K2 are positive constants. This is the theory for simple cases, but some cases are more complicated. Certain other effects are not negligible, such as dynamical friction (see Chandrasekhar 1943a,b,c) or Galactic rotation and gravitational tides (Wielen & Fuchs 1988), both of which produce different values of these constants. Spitzer (1958) points out that most open clusters should be destroyed by interactions with molecular clouds on time-scales of a few hundred million years, meaning that few very old open clusters have survived to the current epoch. However, N-body simulations (Terlevich 1987) predict that only encounters with the most massive molecular clouds would disrupt a cluster.
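A minimal sketch of this evaporation argument, assuming the escape rate takes the form dN/dt = −QN/T_E (the display equation itself is not reproduced above, so this exponential form is an assumption, consistent with the exponential decrease discussed below):

```python
import math

Q = 0.0074   # fraction of stars beyond escape velocity (Chandrasekhar 1942)

def n_stars(t_gyr: float, n0: float, t_relax_gyr: float) -> float:
    """Stars remaining after t_gyr, for an initial population n0 and
    relaxation time t_relax_gyr, under dN/dt = -Q*N/T_E (assumed form)."""
    return n0 * math.exp(-Q * t_gyr / t_relax_gyr)

def c2(t_gyr: float, c2_0: float, t_relax_gyr: float) -> float:
    """With C2 proportional to rho_cl**2, C2 decays twice as fast."""
    return c2_0 * math.exp(-2.0 * Q * t_gyr / t_relax_gyr)

# e.g. 1000 stars, T_E = 0.1 Gyr: ~929 stars remain after 1 Gyr.
print(n_stars(1.0, 1000, 0.1))
```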
Apart from theoretical considerations, an exponential decrease is indicated by other authors from observational data (for example, Lyngå 1987a). Janes & Phelps (1994) fit a relationship between age and cluster abundance in the solar neighbourhood which follows a sum of two exponentials. If it is assumed that there is a constant rate of creation of clusters, and that the death of a cluster corresponds to a low value of ξ, this would imply that C2 depends on the sum of two decaying exponentials, although with so many effects it is difficult to determine which is the correct dependence. What is clear, however, is that high values of C2 indicate the existence of young clusters.
Measurement of the TPACF
The following discussion concerns the measurement of the TPACF derived from a rectangular field image of angular size a × b in a given direction, containing N stars of known coordinates and magnitudes (between m_min and m_max). One method of determining correlation functions in a distribution of objects, discussed by Rivolo (1986), is to use an estimator in which, for each of the N points, the number of particles M_i(r) lying in a shell of thickness δr around the ith particle is compared with the expectation for a Poissonian distribution, V_i(r) being the volume of the shell. The same applies on the sky, with areas instead of volumes. This expression must be corrected for edge effects, i.e. some stars are lost in the calculation of M_i(θ) when a star i is at a distance less than θ from an edge of the rectangular image. The quantity Σ_{i=1}^{N} M_i(θ) measures the excess probability of finding a star at an angular distance θ from other stars in a ring of thickness Δθ with surface area Ω(θ), and this should be proportional to Ω(θ) for a Poissonian distribution. The excess probability is reflected when there is an excess of stars inside the ring Ω(θ). The loss of stars due to edge effects is caused by part of the ring falling outside the area of the rectangle.
To solve the edge-effect problem it is necessary to calculate how many stars are lost beyond the edges with respect to a non-edge case. Only a fraction F_BE of the pairs of stars separated by an angular distance θ is measured, and each M_i(θ) must therefore be divided by F_BE(θ). The calculation of F_BE for a rectangle is given in Appendix A2. When the corrected counts are applied to (2), the corrected estimator is obtained. With this simple algorithm, once the coordinates of the stars are known, their mutual angular distances and the angular correlation for a rectangular field of stars are obtained.
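A hedged sketch of such an estimator follows. It implements the description above (ring counts M_i(θ) compared with the Poissonian expectation and divided by F_BE), but it is not the paper's exact expressions (2) and (44), which are not reproduced in the text; the normalization details here are assumptions.

```python
import numpy as np

def omega_rectangle(x, y, theta_bins, f_be):
    """Edge-corrected angular correlation for stars in a rectangle.

    x, y       : numpy arrays of star coordinates (same units as theta_bins)
    theta_bins : ring edges
    f_be       : callable giving F_BE(theta) for this rectangle
    """
    n = len(x)
    area = (x.max() - x.min()) * (y.max() - y.min())   # crude a*b estimate
    sigma = n / area                                    # mean surface density
    dist = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    dist = dist[np.triu_indices(n, k=1)]                # unique star pairs
    omega = []
    for lo, hi in zip(theta_bins[:-1], theta_bins[1:]):
        ring_area = np.pi * (hi ** 2 - lo ** 2)
        mid = 0.5 * (lo + hi)
        # Each unordered pair contributes to M_i of both of its stars,
        # hence the factor 2; m_mean is the mean ring count per star.
        m_mean = 2.0 * np.count_nonzero((dist >= lo) & (dist < hi)) / n
        omega.append(m_mean / (f_be(mid) * sigma * ring_area) - 1.0)
    return np.array(omega)

# For a Poissonian field the result is close to zero; with f_be fixed at 1
# the small negative residual is precisely the uncorrected edge effect.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000)
print(omega_rectangle(x, y, np.linspace(0.01, 0.05, 5), lambda t: 1.0))
```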
The error in the TPACF, as for the TPCF, is derived following Betancort-Rijo (1991). In the limit of small Δθ (the interval used for calculating the different values of ω(θ)), the error expression takes a simple form. After this, the different integrals containing ω_t, and their errors, can be calculated with standard numerical algorithms.
Some examples with known clusters in the visible
The TPACF, as a statistical tool for inferring correlation, is applicable to any survey. Some test examples are given below in order to determine how good the method is at finding regions of the sky with clusters, both where there are known to be one or two open clusters and where no clusters have been identified. Some open clusters were randomly selected from the Messier catalogue and are listed in Table 2: six regions with one Messier open cluster, two regions with two Messier open clusters and two regions with none. Stars down to magnitude 12 in V were selected from the GSC. A square field three times larger than the catalogued size of the largest cluster was selected, and ω_t (hereafter called simply ω) was derived from equations (42) and (2). The value of θ_max is derived as the angle at which ω is zero within the error; for larger values of θ, ω is more or less equally positive and negative. This criterion is not precise when the errors are large, but it gives an acceptable estimate. C2 is obtained for each region from equation (5).
Figures 5a-c show the TPACF, and Table 2 lists θ_max and C2. The correlations are positive to within the errors for scales shorter than the size of the clusters. Since the correlations have been calculated with stars down to magnitude 12, many of the stars do not belong to the cluster. Because of this, the correlation is not excessively high, although it is high enough to distinguish it from the cases with no clusters (Fig. 5c), which represent two random regions without clusters, two degrees to the north of the two fields with two clusters each. In the first field without clusters there is a small correlation, but it does not reach the 3.5σ significance level.
Hence, the method does indeed detect clustering where there are known to be clusters, but not where there is believed to be none. One cluster is enough if its effect is not too attenuated by foreground and background stars in the chosen range of magnitudes. Moreover, the prediction of the sizes of the clusters is quite acceptable in comparison with the catalogued sizes (see Table 2 and Figure 6).
Peculiarities of TMGS data in calculating the TPACF
Due to the way in which the TMGS data are obtained, the following considerations must be taken into account when applying the method and when examining the results obtained.
[Figure 6. θ_max, the predicted size of the average cluster according to the simple model used in this paper, versus the catalogued size of the cluster in the region (in the case of two clusters, the average size of both is plotted).]

• The method of assigning declinations to the stars in the TMGS will give extra angular correlations at angles which are multiples of the angular size of the detector (17 arcsec) or multiples of a quarter of its diameter. Declinations are assigned with discrete values, which are separated by multiples of a quarter of the detector size (∼ 4″). However, the correlation for θ greater than 4 or 5 arcsec is not significantly affected by this characteristic. The range of θ to be used in this paper is from 5 to 250 arcsec; moreover, C2 is averaged over a range of θ much wider than 4 arcsec, so this effect is negligible for this particular case. For correlations at angles smaller than 4 arcsec, however, this effect could be important.
• The strips do not cover the whole sky; neither do they completely cover 100% of the area of the rectangles that we will use to calculate the angular correlation function. The TMGS (Garzón et al. 1993) was carried out by means of drift scanning along strips of constant declination, and in some cases there are small gaps between adjacent strips which make the sky coverage within the squares in the region of interest incomplete. To correct for this effect, we must multiply (1 + ω_t(θ)) in expression (44) by the fraction of the rectangle area that is covered, assuming that the uncovered positions in the rectangle are randomly distributed. The fraction of area covered in the squares that we use is high (greater than 80%), so the measure of the error is good enough, since it is only affected by a factor of between √0.80 and unity.
Application to TMGS
As an example, two cases will now be applied to the TMGS.
In this second case the correlation is very weak (C2 is ∼ 40 times smaller than in the previous case), and the difference from zero is only 0.2σ, which, within the errors, implies that there is no correlation or clustering among the stars in this field. Figure 8 shows that ω is almost zero for every value of θ within the error bars. The value of θ_max is meaningless in this case, because ω has such a low value that the error in the search for the first zero of the function is large. When there is little correlation in the field, θ_max will be small, because the algorithm that eliminates the zeros due to fluctuations does not work well when ω is much smaller than its error. Also, in this case σ(C2), the error in C2, will be larger than, or of the same order as, C2. To overcome this problem, data with large σ(C2) will be separated out, and only those with C2 > 3.5 × σ(C2) will be considered as having confirmed correlations.
5.5 Which clusters can be detected with this procedure?
In the following application, the TPACF will be measured for stars down to K = 9.0 mag and for a maximum angle of 250″. From Appendix A2, this requires a minimum strip width of ∼ 500″. Larger clusters would be detected with larger angles; however, 250″ is almost the largest angle that can be used if the same analytical criteria are to be applied to all ten TMGS strips that we will use. A typical open cluster has an average size of 5 pc (Janes & Phelps 1994), so 500″ is sufficient to detect it at ∼ 2 kpc or beyond; therefore, mostly distant clusters will be detected. Moreover, the TMGS is dominated by late K and M giants, so the majority of the stars detected in the magnitude range to be used are significantly further away than 2 kpc. It is estimated that ∼ 10′ is the maximum size of the clusters that will affect C2, although this is difficult to calculate accurately; the contribution of the largest clusters is not nil, but it decreases as the cluster size increases.
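The distance bookkeeping in this paragraph reduces to a single small-angle conversion; a sketch:

```python
RAD_TO_ARCSEC = 206265.0

def subtended_angle_arcsec(size_pc: float, distance_pc: float) -> float:
    """Angle subtended by a structure of linear size size_pc seen
    from a distance of distance_pc (small-angle approximation)."""
    return RAD_TO_ARCSEC * size_pc / distance_pc

print(subtended_angle_arcsec(5.0, 2000.0))   # a 5 pc cluster at 2 kpc: ~516''
print(subtended_angle_arcsec(2.0, 8800.0))   # ~47'': the l = 70 deg case below
```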
The exact calculation of the minimum size is also difficult. The TMGS does not detect the individual sources in globular clusters, since the stars are too close together to be separated, and the whole cluster appears as an extended source. Similarly, if an open cluster is very small there is excessive overcrowding, or confusion, of sources, so that only a few of them contribute, and then very little, to the parameter C2. The separation between the stars in a cluster needs to be more than twice the diameter of the detector for its components to be detected, i.e. a minimum of 30″ to 35″.
CORRELATIONS AS A FUNCTION OF GALACTIC COORDINATES
When the procedure is used in other regions, the behaviour of C2 as a function of l and b can be determined. Calculations of ω(θ) were carried out on several strips of constant declination and various sub-strips (Table 3). The value of l quoted is that at which the strip intersects the Galactic plane (b = 0°). The range of Galactic latitude is |b| < 15° for l < 35° and |b| < 5° for l > 35°.

[Table 3. TMGS regions (with constant declination) used in this paper.]
(i) There is a general dependence of C2 on Galactic latitude in the disc for l < 90° (i.e. strips 1-8), with some exceptions pointed out in (iv). This relationship is especially remarkable in strips 1, 2 and 5. The function C2(b) is approximately parabolic. Figure 10 shows the fits, equations (46)-(48), to the best data (those with C2 > 3.5σ(C2)) of these strips; for l = 31°, C2 = 20 × 10⁻³ b² + 0.036 b + 1.00.
(ii) Outside the bulge, there is a general increase in C2 with l, as seen in equations (46), (47) and (48). When the data with C2 > 3.5σ(C2) are averaged between b = −3° and b = 3°, and also for 3° < |b| < 5°, there is a dependence on l as shown in Table 4.
(iii) In the inner bulge region, with l < 15° and b < 5°, the correlations are almost zero (Fig. 9a). When the relative correlation differences at l ∼ 30° and in the inner bulge are compared with those predicted by the simple model (Figs. 3 and 4), a correlation deficit can be seen for the bulge.
(iv) Three regions with an excess of correlation with respect to both the l and b dependence occur at l = 31°, l = 37° (Fig. 9b) and l = 70° (Fig. 9c), i.e. strips 5, 6 and 8.
(v) The anticentre region gives a correlation similar to that of the intermediate Galactic longitude (50° < l < 100°) region in the plane, or even lower (see Table 4). The value of C2 does not increase with l, or does so very smoothly, outside the intermediate-l region. The simple model prediction is not very accurate but, comparing with the results in Figs. 3 and 4, the predicted C2 is larger at l ∼ 150° than at l ∼ 40°, which is not observed. This significant departure cannot be explained without including an extra component in the model.
Causes of the dependence on Galactic coordinates
(i) The b-dependence: this is predicted by the simple model developed above. As seen in Fig. 4, ω should increase with |b|, because n_cl((ρ_cl − ρ_nc)/⟨ρ⟩)² in expression (29) increases with |b|.
(ii) The l-dependence: again, the l-dependence can be explained by the model.
(iii) The bulge: when C2 values for bulge regions are compared with those for other regions where there is only a disc component, the observational data give a lower relative correlation than is predicted by the model (Fig. 3), i.e. there is less correlation in the bulge than expected. This is consistent with the bulge being older than the disc (see Section 4.1). The bulge is known to have a different population of stars from the disc (Frogel 1988; López-Corredoira et al. 1997), and these are expected to be an older population (Rich 1993), so the lack of correlation would be expected. Moreover, as pointed out earlier, the central region will have patchy extinction, which will tend to increase the correlation. Hence, the C2 values generated by clusters should be even lower than the observed values, indicating even fewer young clusters in this region.
(iv) Excess correlation in some zones in the plane: Feinstein (1995) points out that very young clusters are tracers of the spiral arms. As has been noted, young clusters provide a significant contribution to C2. However, spiral arms are not included in the simple model, which contains only the disc and bulge; hence, where an arm is crossed it is to be expected that the correlation will be higher than predicted. Whereas the distribution of old clusters varies smoothly with Galactic position, young clusters have a far more irregular distribution. Of the four areas which cross the plane between l = 31° and l = 70°, three show a significant correlation excess. The l = 31° region is almost tangential to the Scutum arm and also runs through the Sagittarius arm. The l = 37° region also cuts the Sagittarius arm. The excess correlation at l = 70° can be attributed to the star formation region in the Perseus arm; the far-infrared source G69C in this region has been attributed to star formation regions by Kutner (1987). Strip 7 (l = 58°) is an exception, possibly because this line of sight misses any significant star formation region as it crosses the arm. Using the simple model described previously with one shell of clusters (Section 3.1), but allowing clusters only in the arms, the degree of clustering in the arm can be estimated. With the use of (34), a relationship between the various cluster parameters can be obtained (for example, at l ≈ 70°, b ≈ 0°, C2 is 1.88 ± 0.34). It is assumed that the greater part of the contribution to C2 is due to the arm clusters, and that, as estimated by Cohen et al. (1997, in preparation), ∼ 30% of the stars observed in that region are from the arm (with Δr ≈ 1 kpc in the line of sight), whereas the rest belong to the disc. As θ_max is 46.5″ in this region, the equivalent cluster diameter, from (33), is 2.0 pc (for a distance r = 8.8 kpc; Georgelin & Georgelin 1976). This is smaller than the average value of 5 pc given by Janes & Phelps (1994), although this could be because only the core of the cluster, which contains the brighter and more massive stars, is seen by the TMGS (the typical size of the core of a cluster is 1 or 2 pc; Leonard 1988); alternatively, a genuine difference from the solar neighbourhood could also be the explanation.
Nevertheless, this is only a rough estimate and no further conclusions should be drawn from this number; the order of magnitude should, however, be correct. With these data it is possible to estimate the density of stars within the clusters. Using (34), and making use of (23) with (4/3)πR_cl³ n_cl ≪ 1 and ρ_nc ≪ ρ_cl, gives n_cl ≈ 3 × 10⁴ (((4/3)πR_cl³ n_cl ρ_cl + ρ_nc)/ρ_cl)².
When n_cl is determined from the last equation, a condition on ρ_nc is derived for the second-degree equation to have real solutions, and 5 × 10⁻⁷ pc⁻³ < n_cl < 2 × 10⁻⁶ pc⁻³, which is equivalent to saying that most of the stars within the arm in this line of sight form part of clusters. Indeed, with such a low ρ_nc (because of (51)), (23) and (52) give 10⁵⟨ρ⟩ < ρ_cl < 5 × 10⁵⟨ρ⟩, which, with ⟨ρ⟩ ∼ 1.4 × 10⁻³ pc⁻³ (from the Galaxy model used in Wainscoat et al. 1992), gives 140 pc⁻³ < ρ_cl < 700 pc⁻³, i.e. between 500 and 3000 stars per cluster, a quite reasonable number (Friel 1995 quotes a typical mass of young clusters of a few thousand solar masses); a numerical check of these figures is sketched after this list. The same situation is repeated at l = 37° and at l = 31°. When the line of sight cuts an arm, the correlation is greatly increased. If the arm contribution to the number of stars is low, the correlation will be diluted, although its contribution may not be totally negligible. It also depends on the density of the other components in that direction; the disc, for example, dilutes the correlation. Hammersley et al. (1994) and Garzón et al. (1997) suggest that there should be an excess of bright stars in the region at l = 27°, b = 0°, which might be due to the interaction of a bar with the disc, giving rise to a star formation region. Concerning the deficit of correlation measured for this star formation region, it could be concluded that, if the star formation region were sufficiently large (much greater, say, than the size of the rectangle used for sampling), this would explain the non-detection of clustering in this region.
(v) Deviation from the simple model's l-dependence in the anticentre: towards the anticentre (150° < l < 210°) the correlation is significantly less than that predicted by the simple model. This implies that the number of clusters is small and, in particular, that very young clusters are rare in this direction. This is in agreement with the results from visible observations of clusters (Payne-Gaposchkin 1979). Janes & Phelps (1994) argue that there is a lack of old clusters in the inner disc, since they would be destroyed by molecular clouds (see Section 4.1), but that there will be a relatively large number of young clusters there. However, the ISM density falls off with distance from the Galactic centre, so in the outer disc there will be significantly less star formation and hence far fewer young clusters. The existence of a gradient in open cluster age has been commented on by Lyngå (1980) and Van den Bergh & McClure (1980), and an explanation was attempted by Wielen (1985). As noted previously, young clusters contribute significantly to C2, so the lack of young clusters in the anticentre region would lead to a reduction in the amount of correlation. Within a few degrees of the plane, the arms have a significant influence on the amount of correlation for the longitude range 30° < l < 90°. One possible reason for the apparent deficit in C2 towards the anticentre could therefore be the excess due to the arms in the comparison regions. In order to discount this possibility, a comparison of the in-plane anticentre region can be made with an off-plane region at l = 31°. The model predicts that the ratio C2(l = 31°, 5° < |b| < 15°)/C2(l = 168°, |b| < 5°) should be less than unity; however, the measurements give a value of 4 or 5. This gives further support to the hypothesis that there are fewer young clusters than expected towards the anticentre.
A further possible reason for the lack of correlation in the anticentre is that there could be a significant decrease in the total number of clusters in the anticentre, rather than an increase of age with Galactocentric distance. However, observations in the solar neighbourhood (Lyngå 1980; Van den Bergh & McClure 1980; Janes & Phelps 1994) support the hypothesis of increasing cluster age with Galactocentric distance. Also, clusters in the anticentre have a greater angular size because they are nearer, but this is taken into account in the model and should not cause the deficit of correlation. When the analysis presented here is applied to other large-area infrared observations, it may contribute to our understanding of this dependence of cluster age on Galactocentric distance. The TMGS data clearly show the presence of young clusters in the inner Galaxy and consequently a decrease in C2 in the anticentre direction. However, an accurate quantification is not possible because of arm contamination in most of the regions, for which complete information is unavailable.
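As a numerical check of the figures quoted in item (iv), the stars-per-cluster range follows directly from the derived density limits and a homogeneous sphere of radius ∼ 1 pc (half the 2.0 pc equivalent diameter):

```python
import math

def stars_per_cluster(rho_cl: float, r_cl_pc: float) -> float:
    """Number of stars in a homogeneous spherical cluster of density
    rho_cl (stars pc^-3) and radius r_cl_pc (pc)."""
    return rho_cl * (4.0 / 3.0) * math.pi * r_cl_pc ** 3

# The limits 140 and 700 pc^-3 with R_cl ~ 1 pc reproduce the quoted
# "between 500 and 3000 stars per cluster":
print(stars_per_cluster(140.0, 1.0))   # ~590 stars
print(stars_per_cluster(700.0, 1.0))   # ~2900 stars
```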
CONCLUSIONS
A technique has been developed for searching for clustering in stellar surveys using correlation functions. The mathematical tools are useful for any field of stars and can be applied to any survey, especially those carried out at infrared wavelengths, which permit a study of the distribution of stars throughout almost the entire Galaxy. The DENIS (Epchtein 1997) and 2MASS (Skrutskie et al. 1997) surveys will be ideal for this technique, as their increased numbers of stars will reduce the errors. With a large enough number of stars in the survey, it is even possible to apply the technique to different ranges of apparent magnitude; studying the clustering of stars at different apparent magnitudes is equivalent to studying it in three dimensions (l, b and the average distance r associated with the chosen range of magnitudes). A simple model has been developed. This model could be improved by introducing a density dependence as a function of the distance from the centre of the cluster, perhaps a power-law dependence.
In this paper the method has been applied to the TMGS. It has been shown that a simple model, in which old open clusters trace the whole Galaxy with a density of clusters proportional to the density of stars, agrees quite well with the data. One exception to the general agreement is the set of specific regions in the plane where the higher-than-expected clustering can be attributed to star formation in the spiral arms. A second departure from the simple model is the reduced C2 in the outer disc and in the bulge, due to a lack of young clusters.
In one of the regions with an excess, at l = 70° in the plane, approximate limits for the cluster density and the density of stars inside the clusters are derived. These are, respectively, 5 × 10⁻⁷ pc⁻³ < n_cl < 2 × 10⁻⁶ pc⁻³ and 140 pc⁻³ < ρ_cl < 700 pc⁻³. There is, however, a lower-than-expected correlation at l = 27°, b = 0°. There is believed to be a huge star formation region in this direction, and the lack of correlation could be due to that region being far larger than the sample area.
As has been pointed out by Friel (1995), the oldest open clusters may be viewed from two perspectives with regard to the formation of the Galaxy: a halo collapse, or a continuous accretion and infall of material from the halo on to the Galactic disc. Either perspective is possible. The first must account for the original star formation regions from which the present old clusters in the outer disc came, and for how they travelled there from their place of origin. The second needs to test the infall of matter from the halo as well as the existence of star systems in the halo. Further improvements in these cluster searches, and better statistics, will provide hints concerning these questions on the origin of the Galaxy. A better determination of C2 in the bulge region will tell us about the age of bulge clusters, if these exist. In this article we have observed a relative absence of correlation in the bulge, somewhat below the prediction of our simple model; at best, an improved model could tell us, as in the case of the anticentre, whether the correlation is greater or less than predicted, and enable us to reach further conclusions.
The quantities we are interested in calculating are ∫_0^{R_cl} dx x² S_ss(x; y, R_cl) and ∫_{R_cl}^{R_cl+y} dx x² S_ss(x; y, R_cl). Again, we distinguish several cases: (i) y < R_cl: following (A1) and (A5), the integrals take closed forms (the explicit expressions are those used in Section 3).

We have a rectangular surface of size a × b (x from 0 to a and y from 0 to b). A ring of negligible thickness and of radius θ whose centre is located at (x, y) has part of its length, f(x, y), inside the rectangle, and the rest of it outside. Owing to the loss of the part of the ring falling outside the rectangle, we measure only a fraction F_BE of the star counts separated by an angular distance θ.
Assuming that the distribution of the stars in the rectangle is homogeneous, F_BE(θ) follows by averaging the fraction of the ring inside the rectangle, f(x, y)/(2πθ), over all positions (x, y) in the rectangle.
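F_BE(θ) can also be checked numerically; a minimal Monte Carlo sketch of this average ring fraction (an alternative to the analytic expressions of Appendix A2, which are not reproduced here):

```python
import numpy as np

def f_be_mc(theta: float, a: float, b: float, n_centres: int = 20000,
            n_ring: int = 360, seed: int = 0) -> float:
    """Monte Carlo estimate of F_BE(theta): the average fraction of a
    ring of radius theta, centred at a uniformly random point of an
    a x b rectangle, that falls inside the rectangle."""
    rng = np.random.default_rng(seed)
    cx = rng.uniform(0.0, a, n_centres)
    cy = rng.uniform(0.0, b, n_centres)
    phi = np.linspace(0.0, 2.0 * np.pi, n_ring, endpoint=False)
    px = cx[:, None] + theta * np.cos(phi)[None, :]
    py = cy[:, None] + theta * np.sin(phi)[None, :]
    inside = (px >= 0) & (px <= a) & (py >= 0) & (py <= b)
    return inside.mean()

print(f_be_mc(0.1, 1.0, 1.0))   # small theta: close to 1
print(f_be_mc(0.5, 1.0, 1.0))   # large theta: substantially below 1
```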
"year": 1998,
"sha1": "867b37d39a40269da6e67abb47d8d22dfbe479e5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/astro-ph/9804214v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "699234c1940f7ce229bc0c0168305109e78f641f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
EDUCATIONAL CHALLENGES FACING SWEDISH PHYSICAL EDUCATION TEACHING IN THE 2020s
Abstract: Many countries around the world have experienced neoliberal turns which have strongly affected educational systems. In Sweden, for example, the social democratic welfare state has taken a radical neoliberal turn since the 1990s. A number of school reforms have been carried out, and they are described as the most extensive in a hundred years of public schooling. These changes have also affected the subject of Physical Education and Health (PEH). Growing research interest is one of those changes, and PEH has become one of the most explored areas in Swedish physical culture. This article points to some of the new research, identifying and formulating possible educational challenges facing teachers as we move into the 2020s. Central questions are: What characterizes teaching in PEH? What do students learn in the subject? Is there effective evaluation in Swedish PEH?
INTRODUCTION
Many countries around the world have experienced a neoliberal turn, which has powerfully affected their educational systems (CONNELL, 2013). In Sweden, for example, the social democratic welfare state has been radically transformed in a neoliberal direction, especially in the last three decades. A number of school reforms have been carried out, which have been described as the most extensive in a hundred years of public schooling (NILSSON LINDSTRÖM; BEACH, 2015). The entire system has gone from being one of the most centralised in the western world to one of the least centralised (DAUN, 2004). In the past, the government dictated how the work of schools should be organised and implemented, and especially what the subject matter of the various subjects ought to be. Nowadays, the government dictates school results by stipulating which goals or learning outcomes students should achieve and which abilities they should develop. The system has thus been transformed from a rule-driven operation to a goal-driven one. Accordingly, school reforms have also meant a shift in the educational focus, from what schools should teach to what students should learn. As Carlgren and Marton (2000, p. 92) point out, teachers' work has thereby changed from […] focusing on teaching to focusing on learning, from thinking about methods to thinking about goals and outcomes, from being occupied with what the teacher does to what the student perceives (our translation).
According to this perspective, a greater emphasis should now be placed on what students actually learn and their level of knowledge in relation to the stated goals. Judgements about the distribution of grades are to be made against specified criteria, or 'knowledge requirements', as they are now called. The pedagogical responsibility for facilitating the achievement of these goals is devolved to the teachers. In this way, great confidence is expressed in how teachers teach and what they teach, as long as the students achieve the stated learning outcomes. As teachers are solely responsible for awarding grades, they play an important role in the assessment and grading process.
The change from a rule-driven to a goal- and result-driven school system has also affected the subject of physical education (PE). The name, for example, has changed from PE to physical education and health (PEH), which means that a more pronounced health perspective is now present in the national syllabus for the subject. A greater research interest in PEH has also emerged in the wake of these reforms. Before the year 2000, very few researchers in Sweden were interested in issues related to PE. The pedagogue Claes Annerstedt, whose main interest concerns didactics in PE, was one of the pioneers in this field (ANNERSTEDT, 1991). Since the year 2000, however, PEH has become one of the most explored areas of Swedish physical culture. Around ten years ago, this awakened research interest, alongside the educational reforms that had been carried out, prompted us to review the state of Swedish PE teaching and research and, on that basis, formulate a number of educational challenges for PE teaching (REDELIUS; LARSSON, 2010). One of the most prominent challenges we identified was how to transform PEH from a practical subject for "fun and recreation" to a subject for learning. Relevant questions to ask today are whether PEH has become a subject for learning and what the current challenges for teaching are as we approach a new decade.
Researchers still have an immense interest in PEH, and a number of PhD theses, journal articles, reports and government evaluations have been produced during the last ten years. What is more, yet another curriculum with new syllabuses and knowledge requirements for all subjects has been implemented in Swedish schools (SKOLVERKET, 2011; updated version 2020). The aim of this article is to point to some of the new research and to identify possible educational challenges that teachers face in Swedish PEH today, as we have just moved into the 2020s. Central questions are: What characterises the teaching in PEH? What do students learn in the subject? Is there assessment efficacy in Swedish PEH?
SWEDISH PHYSICAL EDUCATION AND HEALTH
Before trying to answer the above questions, we will offer a short overview of the conditions framing the subject in Swedish primary and secondary schools. PEH is a mandatory subject for Swedish students. Given that co-education has been stipulated in the syllabus since the 1980s, girls and boys have PEH together, and that is how most of the teaching is carried out. As already mentioned, the system is goal-driven and there should be a constructive alignment between what is described in the syllabus as general aims, main content and knowledge requirements (the same as grading criteria/learning outcomes). Some of the general aims for teaching in the subject of PEH are as follows (SKOLVERKET, 2020). Teaching in PEH should aim at students developing all-round movement capacity and an interest in being physically active and spending time outdoors in nature. Through the teaching, students should encounter different kinds of activities. Students should be given the opportunity to develop knowledge about which factors affect their physical capacity and how they can safeguard their health throughout their lives. Students should also be given opportunities to develop healthy lifestyles and knowledge about how physical activity relates to mental and physical well-being. Through the teaching, students should develop the ability to spend time in outdoor settings and nature throughout the year and acquire an understanding of the value of an active outdoor life. The teaching should also contribute to students developing knowledge about the risks and safety factors related to physical activities and how to respond to emergency situations.
According to the national curriculum, teaching in PEH should essentially give students opportunities to develop four specific, yet also overall, abilities: 1) to move without restriction in different physical contexts; 2) to plan, implement and evaluate sports and other physical activities based on different views of health; 3) to carry out and adapt their recreational and outdoor life to different conditions and environments; and 4) to promote safety and prevent risks during physical activities and to manage emergency situations on land and in water (SKOLVERKET, updated version 2020).
RESEARCH ON TEACHING AND LEARNING IN SWEDISH PEH
Above, we outlined the official goals of PEH in Swedish nine-year compulsory schools. In this section, the overarching question is: What does teaching in PEH look like in Swedish schools and what do students learn in the subject, according to the existing scholarly research on the topic? The contemporary Swedish research is quite extensive and it is not our ambition to summarise it here; rather, we would like to point to two main topics that have been given special attention. These topics concern teaching and learning, and assessment and grading, respectively. They affect both overall pedagogical issues highlighted in the general national curriculum and subject-specific issues highlighted in the national syllabus for PEH. The educational issues relate mainly to prevailing social norms and conditions regarding, for example, gender, the body and what is valued in terms of student behaviour. The subject-specific issues are predominantly about which knowledge is valued in the subject, especially movement capability (a concept that is used in deliberate contrast to notions such as 'skill' and 'motor ability') and the PEH practice.
It should be noted that, although the research is inevitably affected by the national context, for example regarding policy documents and cultural traditions of PEH (mainly dance and outdoor activities, so-called friluftsliv), the Swedish research relates closely to international research on physical education. This was evident as Swedish researchers, over the course of a decade after the turn of the millennium, almost completely changed from publishing in Swedish to publishing in international (English-language) journals. Concerning theoretical anchoring, Swedish PEH research balances between a European continental tradition of didactics and an Anglo-American tradition of curriculum studies. Finally, although most of the Swedish PEH research relates to international research on similar topics, some significant themes are missing, for example racism, and talent identification and development. Arguably, a reason for this may be that racism has not been as prominent in the public debate as gender, and that talent identification is a matter for Swedish sport clubs rather than for Swedish schools.
EDUCATION, THE BODY AND GENDER
As was stated above, the 1994 school reform meant significant changes regarding the aims and scope of PEH. To put it simply, the shift can be framed as a transition from recreational purposes and a focus on physical, mental and social training towards educational purposes. This transition also mirrors a more general trend in school governance at the national level: that is, to focus on learning outcomes in all school subjects. Hence, a central question among PEH researchers has been what this shift in the national curriculum has meant for PEH practice. For some time, a simple way of answering this question has been "not much". The transition from training to education led to a number of challenges for PEH teachers and PEH teacher educators that were difficult to address. In 2004, a number of researchers contributed insights into the PEH practice in an edited research report entitled Mellan nytta och nöje (in English, "Between utility and pleasure"; LARSSON; REDELIUS, 2004; for an English synthesis of this research see LARSSON; REDELIUS, 2008; REDELIUS; LARSSON, 2010). The title highlights a tension between what PEH teachers value about the subject and what is actually going on in practice. In summary, the tension revolved around two different conceptualisations of the subject as either "physical activity for health" (utility) or "physical activity for pleasure". Physical activity for health centred on the health-related benefits of physical activity "there and then", whereas physical activity for pleasure centred on providing students with pleasurable experiences of being physically active, so that they would be more likely to choose a physically active lifestyle in the future.
Another important theme in the above research report was the tension between physical activity per se, e.g. for the purpose of overcoming sedentariness and obesity, and physical activity, or movement, for educational purposes. This tension is even more pronounced in Jan-Eric Ekberg's thesis (title in English: "Between physical education and activation"; EKBERG, 2009, 2016), which reveals that although the curriculum documents highlight education in terms of exploration, creativity and the production of knowledge, the PEH practice largely focuses on reproducing predetermined knowledge about established movement activities, especially in some sports, and knowledge relating to fitness (compare KIRK, 2010). These tensions between physical education and activation can also be related to research by Swartling Widerström (2005), Quennerstedt (2006) and Öhman (2008) about views of the body, health and 'the good student'.
In her study about views of the body in PEH, Swartling Widerström (2005) shows that while many teachers favour humanistic perspectives of the body (being a body), the PEH practice seems to foster scientific perspectives of the body (having a body). Similarly, in Quennerstedt's (2006) study, teachers often seem to favour salutogenic perspectives of health (salutogenic from salus, health), even though the PEH practice appears to foster pathogenic perspectives (pathogenic from pathos, suffering). This means that rather than focusing on resources for health, PEH practice tends to be about reducing health hazards and preventing sedentary behaviour. Further, scientific perspectives of the body and pathogenic perspectives of health match the moralistic approach to students that Öhman (2008) discovered in her study about the social construction of the PEH student. A moralistic approach means that 'a good student' is expected to be attentive to health information and respond to it, i.e. to participate in PEH, be active and benevolent.
As yet, no study has pulled these different strands together in a systematic meta-analysis. However, it can be speculated that humanistic and salutogenic perspectives are difficult to promote due to the consequences of a neoliberal governance that emphasises certainty, risk reduction and accountability. This kind of governance instead prioritises scientific and pathogenic perspectives (see, e.g., BALL, 2015; BALL; OLMEDO, 2013; EVANS; DAVIES, 2014). Clearly, fostering humanistic and salutogenic perspectives of bodies and health stands out as a challenging task for contemporary physical education teachers.
Another prominent theme in Swedish PEH research is gender. To a great extent, this research originates from the ambition formulated in the general national curriculum, which states that schools have a mission to 'counteract gender patterns'. This mission is attributable to equal opportunity objectives (although it is sometimes interpreted as counteracting gender difference for its own sake). Gender patterns can be seen as resulting from gender(ed) norms that prevent boys and girls from acting on their individual, rather than gender-designated, aspirations and ambitions. This is why these patterns need to be addressed. Further, traditional gender norms typically assume the existence of two genders that are both 'opposite' and complementary; an assumption that can hardly be maintained in today's highly individualised societies, as has been demonstrated by LGBTQI persons.
In a series of studies, Larsson and colleagues have explored gender in PEH practice (e.g. LARSSON; FAGRELL; REDELIUS, 2009). These studies have revealed that PEH teachers do not always interpret gender patterns as problematic in the sense that they restrict students' opportunities to participate on equal terms. In fact, scientific perspectives, in this case about 'sex differences', seem to some extent to cement the practice. The studies by Joy & Larsson (2019) and Larsson, Fagrell and Redelius (2011) indicate that stereotypical gendered behaviours are often 'under the radar' of PEH teachers, possibly because teachers regard them as 'natural' or because challenging them would require considerable effort and could be seen as 'risky business' in relation to the ambition to keep students 'busy, happy and good' (PLACEK, 1983).
However, a study by Larsson, Quennerstedt and Öhman (2014) reveals some of the strategies that teachers could use to counteract gender(ed) norms and behaviour patterns, such as leaving room for students to challenge the dominating norms and taking students' queries seriously. Allowing students to challenge the dominating norms may, for example, mean favouring student-centred explorative teaching strategies rather than teacher-centred instructive strategies. Taking students' queries seriously may include being attentive to the often implicit norms that student resistance makes explicit, for instance heteronormativity, i.e. the taken-for-granted assumption that all students are (or will become) heterosexual until proven otherwise (LARSSON; QUENNERSTEDT; ÖHMAN, 2014).
TEACHING AND LEARNING MOVEMENT CAPABILITY
Much of the last decade's research has been devoted to showing how the educational perspective of the PEH practice could be strengthened. This includes both conventional researcher-led research and teacher-led research. Researcher-led research has mainly been conducted in the form of observations, including video filming, of PEH classes (QUENNERSTEDT et al., 2014) and has shed light on the 'nitty-gritty' of the PEH practice, such as the ways in which lessons unfold and how different forms of practice can affect how students learn. For instance, Larsson and Karlefors (2015) show that most lessons take place within the framework of a 'training session' (i.e. with warm-ups, a main activity and some form of closing activity), where movement activities are mainly reproduced by the students themselves with little communication between the teachers and the students about what the latter are actually expected to learn. This included lessons with physical training and games, but not lessons with dance. On the contrary, dance lessons included quite a lot of deliberation between teachers and students about both the purpose and the learning objectives. Additionally, the students were invited to explore movements and to create movement sequences, rather than mimic pre-established ones. Larsson and Karlefors (2015) suggest that this way of teaching dance lessons could serve as inspiration for all PEH teaching and that such an approach could also contribute to student learning and a greater focus on the educational perspectives of PEH.
One particular aspect of PEH that has been prominent in research is the development of movement skills, although within a PEH framework this is not necessarily the same as improving skills in different sports. In order to create some distinction from conventional thinking about movement skills, whether sports-influenced or science-influenced (i.e. sports skills, motor skills), Nyberg (2014) developed the term 'movement capability' in a series of studies, in a way that the author believed would better match the educational aspirations of PEH. Notwithstanding, Nyberg commenced her work by exploring the capabilities of 'movement experts', although the exploration did not start from the observer's point of view but from that of the practitioner. The basic research question was: What does one know when one knows how to do X? This was also Nyberg's point of departure when exploring the movement capabilities of expert pole vaulters and free skiers. The question displaced the focus from technique to capability. Technique is traditionally formulated based on information that is external to the practitioner (objective description), while capability integrates both objective description and subjective experience. For instance, Nyberg (2014) found that movement experts seemed to develop their ability to discern ways of moving, navigate awareness and find alternative ways of moving or solving movement problems.
Consequently, one possible way of enhancing the educational perspective of PEH could be to develop teaching methods that allow students to discern their ways of moving, navigate their awareness and solve movement problems in order to develop movement capability. This could involve moving beyond implicit, and often quite narrow, standards of excellence (i.e. what counts as 'good performance'), so that students develop movement capability regardless of their present abilities (LARSSON; NYBERG, 2016).
Teacher-led research has been conducted within the framework of a particular graduate school for PEH teachers as part of their service in schools. Several of the projects have been interventions in which a lesson sequence with a particular focus, e.g. on health education as part of PEH (GRAFFMAN-SAHLBERG; BRUN SUNDBLAD; LUNDVALL, 2014; VESTERLUND, 2018), has been planned, taught and documented. In these studies, the focus was on the qualitative aspects of learning, i.e. how the students developed more differentiated and nuanced ways of understanding and relating to the object of learning. In contrast to the often quite short lesson sequences that conventionally characterise PEH (typically two lessons with the same content), these interventions included more extensive lesson sequences (about six to eight lessons with the same content). The results indicated that prolonged lesson sequences facilitated learning in the sense that the students had enough time to understand what the objective of the unit was and to explore and practise what they were expected to learn. They also revealed that student learning benefitted from teachers complementing the conventional 'how do I teach x' question with the question 'what does it mean for students to know x?' Highlighting this question seems to be more appropriate in a goal-related or criterion-referenced system, where specific learning outcomes are formulated for each subject or learning area.
Taken together, research on general educational issues and on subject-specific issues of PEH, whether researcher-led or teacher-led, has revealed a number of challenges for PEH teachers and teacher educators. Many of these challenges are prompted by changes in school governance, for example the transition from a governance based on content (a school knowledge perspective) to one based on goals or learning outcomes (a student learning perspective). For a subject like PEH, this has also meant a greater emphasis on education or learning. Our interpretation is that these challenges typically derive from the difficulty of re-evaluating practices that teachers mostly take for granted. Most teachers endeavour to change the practice within the recognised framework, i.e. they try to adapt new goals to the current practice, rather than trying to change the framework, which would mean opening up for substantial change. Some teachers seem to be able to change the framework, although this is generally regarded as a highly challenging endeavour (see, e.g. GIBBS; QUENNERSTEDT; LARSSON, 2017; GRAFFMAN-SAHLBERG; BRUN SUNDBLAD; LUNDVALL, 2014; see also CASEY; LARSSON, 2018). Changing the framework is a very demanding task that requires comprehensive cooperation between many teachers, teacher educators, researchers and others involved in forming the PEH practice.
One area that is greatly influenced by the neoliberal reforms described in the introduction is that of assessment and grading. A greater emphasis should now be placed on what students actually learn and their level of knowledge in relation to stated goals. Whether PEH has been transformed from a subject for 'fun and recreation' into a subject for 'learning' should, in that sense, be detectable in the way teachers handle the assignment to assess and grade students. Therefore, we now turn to the topic of assessment and grading, which has been a particularly prioritised issue in Swedish PEH research.
RESEARCH ON ASSESSMENT AND GRADING IN SWEDISH PEH
Before presenting and evaluating some of the current research on assessment and grading, we will briefly describe the function of grades in the Swedish school system. One official and important function of grades is to serve as selection instruments for the next educational level. All grades have an equal value in this respect (and add up to an average point), which means that grades in PEH are high-stakes and just as important as grades in other subjects. In a goal-related school system, such as the Swedish one, students are graded for accountability reasons (do teachers/schools make sure that students reach the goals?), to provide information about what kind of knowledge students have (to parents and to students themselves)
and to increase their motivation to learn (although whether or not grades have this function is a debated issue). In all cases, it is important that grading is trustworthy and that students are assessed and graded on equal grounds regardless of gender, who their teacher is and which school they attend. Assessment and grading also have informal functions. What is assessed is an indicator of what is valued in a subject. In that sense, assessment sends a powerful message about what counts as legitimate knowledge (HAY; PENNEY, 2013). A relevant question to ask here is therefore whether students are being awarded grades in line with these intentions.
In a goal- or criterion-referenced grading system, grades should be awarded on the basis of how well students meet the stated knowledge criteria or learning outcomes. The grading criteria only include the knowledge that students are expected to acquire. The PE curriculum stresses the importance of knowledge about health, how the body works and a healthy lifestyle. Students are also expected to be able to participate in games, dance, sports and other activities and to adjust their movements appropriately to a task. However, several studies of Swedish PE have shown that how students behave is just as important as the kind of knowledge and skills they have (ANNERSTEDT; LARSSON, 2010; REDELIUS; FAGRELL; LARSSON, 2009), which indicates that teachers do not always grade in accordance with the steering documents.
In a study by Wiker (2017) about student perspectives on PEH teaching, it was apparent that the students thought they needed to have a special talent in PE before they started their PE education. The students saw prior knowledge as a requirement for obtaining a high grade in PE, which according to them was not the case in other subjects. Where did they learn that prior ability was a prerequisite for a high grade? Perhaps because teaching practices in a school subject are often deeply rooted in habits, traditions and customs, and teachers normally regard the content as natural and obvious. There is still a heavy focus on doing, rather than on paying attention to what is to be learned. From this perspective, it is quite natural for students to imagine that they need to have learned things before a PEH class.
Other studies on students' perspectives of assessment and grading show that many students are unsure about the grading criteria and how teachers assess and determine the grades (REDELIUS; HAY, 2009; 2012). However, when asked what they thought was important in order to receive a high grade in PEH, students came up with a number of suggestions, many of which were not in line with the official criteria. Instead, students had the impression that trying hard, doing their best, being positive, always attending class and having the right attitude were important factors for receiving a high grade in PEH. Again, the following question can be asked: how did the students get the impression that a certain attitude was important? Öhman and Quennerstedt (2008) have shown that teachers' exhortations and encouraging cries, such as 'good, come on, work hard, keep going, you can do it', largely pervade PEH lessons. Their results demonstrated that the primary foci in the teaching of PE were physical exertion and the fostering of good character. Furthermore, the character-building elements of teachers' comments mainly seemed concerned with the development of a willingness to be physically active (ÖHMAN; QUENNERSTEDT, 2008). In other words, students have largely understood that the subject is about working hard, doing their best and cooperating (and doing it with a smile). The students may have assumed, in the absence of sufficient reasons to think otherwise, that the values, beliefs and expectations promoted through the daily pedagogy formed the basis on which judgements were made. From this perspective, the students would have expected consistency between what they experienced in class and the judgements of their teachers.
In another study relating to the overall question posed in the introduction (is PEH a subject for learning today?), the focus was on examining whether and how aims and learning goals were communicated by teachers in PEH. The study was based on a socio-cultural perspective, with a special focus on scrutinising how teaching practices were framed in terms of whether and how the aims and learning goals were made explicit to students. The results showed that many of the students taking part in the study did not understand what they were supposed to learn in PE. However, and not surprisingly, if the goals were well articulated by the teachers, the students were more likely both to understand and to be aware of the learning outcomes and what to learn. The opposite was also true: if the goals and objectives were not clarified, the students found it difficult to state the learning objectives and to know what they were expected to learn.
ASSESSMENT EFFICACY - OR NOT?
In order to evaluate the above findings, we will use the four interdependent conditions that Hay and Penney (2009) promote for assessment efficacy in PE as a starting point. The first is a primary focus on assessment for learning. Hay and Penney emphasise that even if an assessment is done for summative purposes, such as grading and reporting, it should still be done in a way that promotes student learning. The second condition, and an overarching one, is validity. Unless the proposed assessment is valid, it is useless and does not serve its purpose. When determining grades, teachers need to make sure that they are reliable, reflect the requirements and are free from irrelevant factors, such as students' dispositional and behavioural characteristics. The third condition is authentic and integrated assessment. Authenticity in assessment is concerned with the relationships between learning content and the world outside the PE classroom. Teachers should try to find tasks that are meaningful and have a value beyond the instructional context. The fourth and last condition is called socially just assessment. It concerns the opportunities that all students are given to engage in the assessment, receive attention, and have the chance to demonstrate knowledge and what they have learned. In this respect it is also important that students are "let in on the secrets of teachers' grading criteria through access to understandable criteria" (HAY; PENNEY, 2009, p. 399). In addition, students should be given multiple opportunities in varying contexts to demonstrate their knowledge.
The key interest in proposing these conditions is to limit the negative consequences that assessment may have, such as students' reduced sense of capacity in the PE field, their disconnection from physical culture in and beyond the classroom, and the learning of undesirable ideologies such as sexism and elitism.
From the research that has been conducted on Swedish PEH, we can conclude that the four conditions proposed by Hay and Penney (2009) are not always met. Students are generally not able to describe what teachers base their assessment on, and they are not sure about the grading criteria. In particular, the conditions of validity and socially just assessment need to be attended to. Consequently, assessment efficacy has not been achieved, and several educational challenges emerge from evaluating the research on assessment and grading practices.
CONCLUDING COMMENTS
It should be noted that although much of the research presented above highlights educational challenges concerning the subject, PEH remains in fact one of the most popular subjects in Swedish schools (LUNDVALL; BRUN SUNDBLAD, 2016). PEH is also regarded as important by the Swedish government. Recently, an additional 100 hours were added to the previous 500 hours for the nine-year compulsory school, thereby increasing the mandated time allocation to 600 hours. Thus, the critical approach of the research does not necessarily indicate a generally poor quality of teaching. To some extent it reflects the challenges that emerge from the reforms and the new neoliberal agenda introduced over the last 30 years, which have not been accompanied by appropriate education and in-service training for teachers. The first wave of PEH research, during the first decade of the 2000s, was largely researcher-driven and lacked systematic attempts to offer guidelines for teachers. The second wave of PEH research, during the 2010s, which we have presented here, has had a more interventionist approach, at least in the teacher-led research within the graduate school for PEH teachers. Here, PEH teacher researchers have endeavoured to offer possible ways forward, through intervention studies, and to gain knowledge about what happens in practice when teachers try to change it. Thus, the research can offer not only critique, but also a concrete basis for further development. Taking gender as one example, illustrating in research how gender norms benefit and hinder students in different ways seems not to be enough. Through interventionist approaches, research also needs to offer guidelines and advice on how to deal with gender issues in PEH teaching.
The critical approach guiding PEH research is also influenced by a 'critical friend' perspective, i.e. "a trusted person who asks provocative questions, provides data to be examined through another lens, and offers critique of a person's work as a friend" (COSTA; KALLICK, 1993, p. 50). This approach has developed as an alternative to the conventional idea of researchers as experts who know better than practitioners what the practice should look like. A critical friend of PEH poses challenging questions to practitioners and offers new perspectives on how to approach them. However, being a critical friend, rather than someone who merely criticises practitioners, requires effort and imagination on the part of researchers if they are to offer challenging questions in a constructive and prospective way (compare CASEY; LARSSON, 2018). It is our sincere hope that the research presented here will both have an impact and contribute constructively to developing the PEH practice into the 2020s. | 2020-04-30T09:01:25.286Z | 2020-04-21T00:00:00.000 | {
"year": 2020,
"sha1": "9679e0a1ed3562002eea95e99d7f78d769568844",
"oa_license": "CCBY",
"oa_url": "https://seer.ufrgs.br/Movimento/article/download/98869/56794",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7867340bd0b7d742ad078bba00d00b0fc8e5744d",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Political Science"
]
} |
267066364 | pes2o/s2orc | v3-fos-license | Deficiency of copper responsive gene stmn4 induces retinal developmental defects
As part of the central nervous system (CNS), the retina senses light and also conducts and processes visual impulses. Damaged development of the retina not only causes visual impairment but also leads to epilepsy, dementia and other brain diseases. Recently, we reported that copper (Cu) overload induces retinal developmental defects and down-regulates microtubule (MT) genes during zebrafish embryogenesis, but whether the down-regulation of microtubule genes mediates Cu stress-induced retinal developmental defects is still unknown. In this study, we found that the microtubule gene stmn4 exhibited obviously reduced expression in the retina of Cu overload embryos. Furthermore, stmn4 deficiency (stmn4−/−) resulted in retinal defects similar to those seen in Cu overload embryos, while overexpression of stmn4 effectively rescued the retinal defects and cell apoptosis occurring in Cu overload embryos and larvae. Meanwhile, stmn4-deficient embryos and larvae exhibited reduced mature retinal cells, down-regulated expression of microtubule and cell cycle-related genes, and mitotic cell cycle arrest of the retinal cells, which subsequently tended toward apoptosis independently of p53. The results of this study demonstrate that Cu stress might lead to retinal developmental defects via down-regulating the expression of the microtubule gene stmn4, and that stmn4 deficiency leads to an impaired cell cycle and to the accumulation of retinal progenitor cells (RPCs) and their subsequent apoptosis. The study provides a reference for understanding how copper overload regulates retinal development in fish. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1007/s10565-024-09847-8.
• Stmn4 knockout affects cell cycle progression and the subsequent cell apoptosis.
• The down-regulated expression of stmn4 might be another contributor to Cu overload-induced retinal developmental defects.
Background
Stathmin family proteins exert their functions by promoting mitotic spindle disassembly and the subsequent exit from mitosis (Jourdain et al. 1997). The absence of Stathmin expression leads to accumulation of cells in the G2/M phases and is associated with severe mitotic spindle abnormalities and difficulty in the exit from mitosis (Rubin and Atweh 2004), indicating that Stathmins are important for microtubule (MT) dynamics (Charbaut et al. 2001) and crucial in the processes of mitosis, cell cycle and cell differentiation (Mistry and Atweh 2002). Stathmins are involved in neuronal development, plasticity and regeneration (Chauvin and Sobel 2015), are regarded as neuronal microtubule-regulatory proteins (Burzynski et al. 2009; Levy et al. 2011; Shih et al. 2014), and play crucial roles in mitosis (Belletti and Baldassarre 2011). The overexpression or downregulation of Stathmins disrupts the correct completion of cell division, and they are important targets of the main M-phase regulatory factor, cyclin-dependent kinase 1 (CDK1) (Belletti and Baldassarre 2011). Some notable findings about STMN4 have been reported recently: STMN4 is the only Stathmin family protein to respond to optic nerve (ON) axotomy in rats (Nakazawa et al. 2005) and the only one to induce the differentiation of PC12 cells in vitro (Beilharz et al. 1998), suggesting that STMN4 may play a crucial role in the neuronal development process, especially in optic development. Meanwhile, STMN4 possesses a unique N-terminal domain, which gives full-length STMN4 not only MT-destabilizing activity similar to that of other Stathmins but also an enhanced binding affinity for MTs (Nakao et al. 2004). However, whether and how Stathmins, especially STMN4, act in optic or retinal development has rarely been studied.
Recent studies have reported that Cu overload causes developmental defects of retinal cells in zebrafish embryos and larvae (Li et al. 2023; Zhao et al. 2020), and that stmn4 is significantly down-regulated in Cu overload hematopoietic stem and progenitor cells (HSPCs) (Li et al. 2023). However, whether Cu overload induces the developmental defects of retinal cells by down-regulating stmn4 expression, and the potential mechanisms of stmn4 in regulating the development of zebrafish retinal cells, remain unknown.
The retina is a specialized somatosensory tissue for sensing light, which is required for the regulation of survival behaviors such as foraging and avoiding natural enemies, and thus matters for healthy aquaculture in fish. Zebrafish, as a model organism with the advantages of in vitro fertilization, rapid development, and embryonic transparency, has long served in embryonic developmental studies (Malicki et al. 2016). During retinal cell development, cell proliferation and differentiation are precisely coordinated for the development and growth of zebrafish eyes (Easter and Malicki 2002; Stenkamp 2015).
In this study, we found down-regulated expression of stmn4 in the retina of Cu overload embryos and larvae, and asked whether the down-regulation of stmn4 mediated the copper overload-induced developmental defects of retinal cells. Here, we unveiled that stmn4 was required for the differentiation of retinal progenitor cells (RPCs). Stmn4 knockout led to small eyes and reduced retinal cells by affecting the cell cycle progression and differentiation of RPCs and the subsequent cell apoptosis during zebrafish embryogenesis. Our findings reveal the critical roles of stmn4 in responding to Cu overload in retinal cell development.
Behavior assays
In this study, stmn4 −/− and WT larvae at 96 hpf or 120 hpf in 48-well plates (one larva per well, at least 2-3 repeats for each group) were put into the Zebrafish behavior tracking system (ViewPoint Life Sciences, Montreal, Canada), and the larval behaviors were recorded for 30 min after the larvae had been adapted for 10 min. Meanwhile, in touch response assays, stmn4 −/− and WT larvae at 96 hpf or 120 hpf were placed in 48-well plates and stimulated with toothpicks, and video images of their motor behaviors and escape responses after touch stimulation were recorded by the Zebrafish behavior tracking system (ViewPoint Life Sciences, Montreal, Canada). The video was broken down with QuickTime Player software (version 10.4, Apple Inc.) to obtain each frame at different time points, and the time of each frame is displayed on each panel.
Morpholino (MO) and mRNA injection
The p53 morpholino was purchased from Gene Tools LLC (Philomath, Oregon, USA) and dissolved in ddH2O at 3 mM (stock solution). The full-length stmn4 was amplified with specific primers (F: 5' ATG ACC TTG GCA GCA TAT CGA GAC A 3'; R: 5' CTA CCG AAC TGA AAA GCT ACC AGA A 3'), and the full-length stmn4 mRNA was synthesized using the Ambion MAXIscript T7 Kit (Cat#AM1344, Invitrogen, USA) as instructed by the manufacturer. In all experiments, the MOs and mRNAs were injected into one-cell stage embryos, with the p53 MO at 0.6 mM and the stmn4 mRNA at 200 ng/µL.
Whole-mount in situ hybridization (WISH)
WISH was performed as previously described (Zhang et al. 2020). Probes for myelin basic protein a (mbp), proteolipid protein 1a (plp1a) and vimentin (vim) were synthesized as we have done previously (Zhang et al. 2020), and probes for the other genes tested in this study were synthesized using T7 in vitro transcription polymerase (Roche Molecular Biochemicals, Germany) and a DIG RNA labeling kit (Roche Molecular Biochemicals, Germany); the sequences of all primers used in this study are listed in Table S2. The images were captured by an optical microscope (Leica M205FA, Germany). Data quantification and visualization were carried out using ImageJ software (NIH, Bethesda, Maryland) and GraphPad Prism 8.0, respectively. A minimum of 15 embryos per group were used for WISH analysis, and three independent experiments were performed. A representative image from each group is shown.
Immunofluorescence and Hematoxylin-eosin (H&E) assays
Embryos and larvae at 24 hpf, 48 hpf, 72 hpf, 96 hpf and 7 dpf were fixed with 4% PFA overnight at 4 °C and then dehydrated with 30% sucrose PBS solution for 2 h at room temperature. Next, the permeated embryos were embedded in TissueTek® O.C.T. compound (Sakura Finetek, USA) for cryosectioning at 6-8 μm thickness with a freezing microtome (Thermo Scientific, USA). After drying at 4 °C, the sections were used for Hematoxylin and Eosin (H&E) staining, immunofluorescence assays, and in situ hybridization (ISH) assays, respectively. The H&E staining and ISH assays were performed as reported previously (Niu et al. 2014; Zhao et al. 2020). Then, high-resolution images of the H&E staining and ISH assay sections were obtained under a microscope (ZEISS Axio Imager A2) after the staining was completed.
RNA-Sequencing (RNA-Seq) and analysis
In this study, fifty zebrafish embryos each of control and stmn4 −/− mutants at 16 hpf and 24 hpf were collected separately and used for RNA extraction and RNA sequencing (RNA-Seq). RNA-Seq was performed on an Illumina HiSeq2000 platform by Novogene (Beijing, China). Genes with significant alterations due to stmn4 deletion (adjusted P < 0.05) were defined as differentially expressed genes (DEGs). Enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis was conducted for each sample using KOBAS v.2.0 based on the lists of DEGs. Gene Ontology (GO) analysis was conducted on the lists of DEGs with GOseq Release 2.12. Hierarchical clustering was performed with TIGR MultiExperiment Viewer (MeV) to generate the heatmaps.
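As an illustration of the DEG-filtering criterion above (adjusted P < 0.05), the following Python sketch assumes a DESeq2-style results table with hypothetical column names ("gene", "log2FoldChange", "padj"); the actual Novogene pipeline may differ.

import pandas as pd

def filter_degs(results_csv, alpha=0.05):
    """Return genes with adjusted P < alpha, annotated by direction of change."""
    df = pd.read_csv(results_csv)
    degs = df[df["padj"] < alpha].copy()           # significance cut-off
    degs["direction"] = ["up" if x > 0 else "down"
                         for x in degs["log2FoldChange"]]
    return degs.sort_values("padj")

# Hypothetical file name for the 24 hpf contrast:
degs = filter_degs("stmn4_vs_wt_24hpf.csv")
print(degs["direction"].value_counts())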
Statistical analysis
Samples for RNA extraction, protein extraction and other experiments were collected, with about 50 embryos in each group. The sample size for the different experiments in each group was larger than 10 embryos (n > 10), with 3 biological replicates for the WISH test. The data were quantified by ImageJ and analyzed and visualized by GraphPad Prism 8.0. The results were evaluated by t test and post hoc Tukey's test in SPSS (20.0) software. The statistical significance between groups was determined at P < 0.05 (*), P < 0.01 (**) or P < 0.001 (***). Data are expressed as the mean ± standard deviation (SD) for normal distributions and as the median (range) for non-normal distributions.
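The comparisons above were run in SPSS 20.0; purely as an illustration of the same tests, here is a re-implementation with SciPy and statsmodels on simulated, hypothetical measurements (none of the numbers come from the study).

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
wt = rng.normal(100, 10, 15)     # hypothetical WT measurements (n > 10)
mut = rng.normal(85, 10, 15)     # hypothetical stmn4-/- measurements
res = rng.normal(95, 10, 15)     # hypothetical rescue group

# Two-sample t test between WT and mutant groups
t, p = stats.ttest_ind(wt, mut)
print(f"t = {t:.2f}, P = {p:.4g}")   # compare against 0.05 / 0.01 / 0.001

# Post hoc Tukey test across the three groups
values = np.concatenate([wt, mut, res])
groups = ["WT"] * 15 + ["stmn4-/-"] * 15 + ["rescue"] * 15
print(pairwise_tukeyhsd(values, groups, alpha=0.05))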
Results
Stmn4 −/− exhibits eye developmental defects
We have recently unveiled that microtubule-related DEGs were significantly enriched in the GO terms in both Cu overload zebrafish embryos and their HSPCs at 24 hpf and 33 hpf, among which stmn4 was significantly down-regulated (Li et al. 2023). Meanwhile, Cu overload zebrafish embryos and larvae exhibit dysfunctional locomotor behavior, microphthalmia, and retinal developmental defects (Zhao et al. 2020), and stathmins have been reported to be required in neural cell development (Beilharz et al. 1998; Levy et al. 2011; Zhao et al. 2020). Thus, in this study, we asked whether the down-regulation of stmn4 mediated the retinal developmental defects that occur in Cu overload zebrafish embryos. The expression of stmn4 was significantly down-regulated in the retina of Cu overload zebrafish embryos (Figs. 1A1 − A6), and stmn4 transcripts were predominantly distributed in the head of zebrafish embryos and larvae (Figs. S1A1 − A16). A stmn4 knockout zebrafish line with an 8 bp deletion in exon 2 (stmn4 −/−) has been constructed and reported recently (Li et al. 2023). In this study, we observed an obvious decrease in the protein and mRNA levels of Stmn4 in stmn4 −/− embryos and larvae (Fig. 1B1 − B3). Meanwhile, the mRNA transcripts of stathmin family genes (stmn1a, stmn1b, stmn2a, stmn2b, stmn3) were tested further, and only stmn2b exhibited up-regulated expression in the mutants while the others were not changed (Fig. S1B). Compared with their siblings, stmn4 −/− mutants showed almost identical morphology but with smaller eyes (Figs. 1C1 − C13), and cells in the outer nuclear layer (ONL), inner nuclear layer (INL), and ganglion cell layer (GCL) were obviously reduced in stmn4 −/− mutants at 48 hpf (Figs. 1D1 − D6), and at 72 hpf and 7 dpf (Figs. S1C1 − C11). Meanwhile, stmn4 −/− larvae generally responded more slowly to touch compared with their WT siblings (Figs. S1D1 − D8). Next, this study further examined the development of other neural cells in stmn4 −/− embryos and larvae at different developmental stages. The gene marker labelling neural progenitor and stem cells, sox2, was significantly increased in the whole mutants at 24 hpf, 48 hpf, and 72 hpf (Fig. 3A − C, S2A) and in stmn4 −/− retina (Fig. 3C, S2A), while the marker genes labelling mature neurons derived from neural precursors (elavl3, rbfox3a, otx2b) all exhibited reduced expression in the mutants (Figs. S2B − D). Meanwhile, the gene markers slc1a3a for astrocyte precursors and sox10 for oligodendrocyte precursors also exhibited increased expression in the mutants (Fig. S3A, S4A), while the expression of genes labelling mature glial cells (gfap, vimentin, plp1a, mbp) was significantly down-regulated in the mutants (Figs. S3B, S4B − C), suggesting that generally impaired differentiation of neural cells occurred in stmn4 −/− embryos and larvae. These observations are similar to the report that Cu overload larvae exhibit CNS developmental defects (Zhang et al. 2020), further suggesting that the down-regulated expression of stmn4 might mediate Cu overload-induced retinal and neural system developmental defects in zebrafish embryos and larvae. In this study, we focused on the impaired differentiation of retinal cells in the stmn4 −/− mutants.
The lamination of the retina is initiated by the migration of neurons through mitosis to the different cell layers, where they become mature neurons and form synapses linking the various cell layers (Amini et al. 2017), and all differentiated neurons are derived from neural precursors. Retinal stem and progenitor cells (RSPCs) are distributed at the most peripheral region of the retina, which is considered to be a stem-cell niche (Cerveny et al. 2010; Pujic et al. 2006). Sox2, marking RSPCs, exhibited an obvious increase at the mRNA (Fig. 3A1 − A9) and protein levels (Fig. 3B1, B2) in the whole mutants, and at the most peripheral retina (Fig. 3C1 − C14), suggesting abnormal accumulation of RSPCs in stmn4 −/− retina. Meanwhile, the signals of the retinal progenitor cell (RPC) markers vsx2 and ccnd1 were obviously upregulated in stmn4 −/− at 24 hpf (Fig. 3D1 − D9), 48 hpf and 60 hpf (Fig. 3E, S5A), but the signals of the neuronal marker crx were reduced in stmn4 −/− at 48 hpf and 60 hpf (Fig. 3F), and the signals of the mature neuron markers elavl3 and rbfox3a were also reduced in stmn4 −/− mutants at 48 hpf (Figs. S2B − C), suggesting that stmn4 deficiency damaged the differentiation of crx-labelled neurons derived from vsx2- and ccnb1-labelled RPCs in the retina and the subsequent maturation steps.
The obviously reduced expression of the mature neuron marker elavl3 (Fig. 4A1 − A4) and the neuronal marker crx (Fig. 4A10 − A13), together with the obviously increased expression of the RPC marker ccnd1 (Figs. S5C1 − C4), was also observed in the retina of Cu overload embryos and larvae. Conversely, overexpression of stmn4 mRNA obviously increased the expression of the retinal markers opn1mw1, rhodopsin, and opn1sw2 (Figs. S5B) in WT zebrafish embryos and larvae. Meanwhile, overexpression of stmn4 mRNA not only effectively rescued the increased expression of sox2 to a nearly normal level in the retina of stmn4 −/− embryos (Fig. 4B1 − B9) and the reduced expression of the retinal genes opn1mw1, opn1sw2, and rhodopsin in the mutants (Figs. S5B), but also effectively rescued the increased expression of ccnb1 and the reduced expression of elavl3, while only slightly rescuing the expression of crx, in Cu overload embryos and larvae (Fig. 4A1 − A18, S5C). Additionally, overexpression of stmn4 mRNA also effectively rescued the increased retinal cell apoptosis in Cu overload embryos (Fig. S6). Taking all of the aforementioned results together with the report that Cu overload induces retinal rod and cone cell developmental defects via stress-induced cell apoptosis (Zhao et al. 2020), we speculated that the down-regulated expression of stmn4 might be another potential contributor to Cu overload-induced retinal developmental defects and cell apoptosis.
M-Phase arrest in retinal cells in stmn4 −/−
The accumulation of RPCs occurring in stmn4 −/− retina suggests that the RPCs are either hyper-proliferative or arrested in the cell cycle; thus, we investigated cell cycle progression in the retina using BrdU incorporation as the S-phase marker and phosphorylated histone H3 (PH3) as the M-phase marker. It is known that the cell cycle length of retinal cells in the early stages of development is approximately 6 to 8 h (Li et al. 2000; Wehman et al. 2007). In this study, we observed zebrafish cell death in the most peripheral retina (Fig. 5C1-C7), where the Sox2+ cells might reside (Fig. 3C8-C14).
We next wondered whether the expression of cell cycle-related regulators had also been affected by stmn4 deficiency. The transcriptome of stmn4 −/− embryos was examined, and the data showed that GO terms related to apoptosis (green box) and mitosis (red box) were enriched for DEGs (Fig. 6A), and that cell cycle-related regulators exhibited differential expression and were down-regulated in stmn4 −/− embryos (Fig. 6B1 − B2); this was verified further by qRT-PCR assays, which showed significant down-regulation at both 16 hpf and 24 hpf (Fig. 6C1 − C2). The protein levels of key cell cycle regulators (Cdc25b, Cdk1, Ccnb1 and Tubulin) (Wang 2022) were tested further in stmn4 −/− mutants, and they were all reduced in whole stmn4 −/− embryos at 48 hpf (Fig. 6D1 − D5). In particular, Tubulin protein was reduced more significantly at 24 hpf in the mutants (Fig. 6D1, D5). These results indicated that stmn4 could also indirectly regulate cell cycle processes by regulating the expression of cell cycle-related proteins.
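For readers unfamiliar with how relative expression values from qRT-PCR verification are typically derived, the sketch below shows the standard 2^-ddCt (Livak) calculation. The excerpt does not name the reference gene or report raw Ct values, so the reference gene is assumed and all Ct values here are hypothetical.

def ddct_fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    """Relative expression (mutant vs WT) by the 2^-ddCt method."""
    d_ct_mut = ct_target_mut - ct_ref_mut   # normalize to the reference gene
    d_ct_wt = ct_target_wt - ct_ref_wt
    return 2.0 ** -(d_ct_mut - d_ct_wt)

# Hypothetical Ct values for one cell cycle gene at 24 hpf:
print(f"fold change = {ddct_fold_change(26.0, 18.0, 24.5, 18.0):.2f}")
# ~0.35, i.e. down-regulated relative to WT, matching the trend described.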
Next, we wondered whether the P53 pathway was mainly involved in the apoptosis of retinal cells in stmn4 −/−. Thus, we injected p53 morpholino (MO) (which blocks the translation of p53 transcripts) into zebrafish embryos (Li et al. 2023; Robu et al. 2007) to block P53 signaling in the stmn4 −/− mutants and the corresponding WT zebrafish, respectively. TUNEL GFP-positive signals could be observed in red Sox2+ cells in the retina of stmn4 −/− mutants (Fig. 8A10) and of the mutants co-injected with p53 MO (Fig. 8A20), and more TUNEL-positive signals were observed in the p53 MO co-injected mutants (Fig. 8A1 − A22), not only suggesting that RPC Sox2+ cells underwent apoptosis, but also suggesting that knockdown of p53 in stmn4 −/− zebrafish further deteriorated the impaired differentiation of RPCs and led to greater accumulation of RPCs, and that p53 MO could not rescue the apoptosis of retinal cells in the mutants. Together, these results suggested that activation of the P53 pathway might be a compensatory mechanism for the impaired cell cycle and RPC accumulation occurring in the retina of the mutants, and might not be responsible for the observed apoptosis.
Discussion
Recently, we reported that Stathmin family genes, especially stmn4, are differentially expressed and exhibit down-regulated expression in copper overload zebrafish embryos and HSPCs (Li et al. 2023).
It has been suggested that stmn4 is closely related to the regulation of embryonic development and that Stathmins are required for neural cell development and retinal regeneration (Beilharz et al. 1998; Levy et al. 2011). Cu overload induces retinal developmental defects in zebrafish embryos and larvae via triggering cell apoptosis (Zhao et al. 2020). Thus, in this study, we wondered whether the down-regulated expression of stmn4 mediates Cu overload-induced retinal developmental defects. Here, we unveil the reduced expression of stmn4 in the retina and show that ectopic expression of stmn4 mRNA could effectively rescue the retinal developmental defects and cell apoptosis in Cu overload embryos and larvae. Meanwhile, functional deficiency of stmn4 induces an impaired cell cycle in retinal cells and leads to the accumulation of RPCs, which is also observed in Cu overload embryos and larvae, further suggesting that the down-regulated expression of stmn4 in the retina of Cu overload embryos and larvae might be another potential contributor to their retinal developmental defects. Cu has been shown to be in spatial proximity to F-actin, especially at the base of dendritic protrusions, suggesting that Cu might potentially modulate microtubule morphology in dendrites and spines (Domart et al. 2020). Additionally, microtubule remodeling in response to Cu elevation has been directly demonstrated in bone marrow mesenchymal stem cells (Chen et al. 2020). Recently, we unveiled that Cu overload down-regulates the expression of microtubule genes and damages cytoskeleton morphology, leading to an impaired cell cycle and impaired proliferation of hematopoietic stem and progenitor cells (HSPCs) during fish embryogenesis (Li et al. 2023). Normal expression of microtubule genes preserves cell morphology, and orderly and accurate microtubule rearrangements help advance the cell cycle (Heng and Koh 2010; Nunes and Ferreira 2021), while disruption of their integrity leads to cell cycle stagnation (Blajeski et al. 2002; Heng and Koh 2010). Studies have reported that retinal cells in zebrafish become abnormal due to cell cycle defects, and that failure of progenitor cells to exit the cell cycle results in the accumulation of RPCs (Baye and Link 2007).
In this study, we unveil that the down-regulated expression of the microtubule gene stmn4 might be another contributor mediating Cu overload-induced accumulation of RPCs and the subsequent retinal cell apoptosis via regulating the cell cycle.
In this study, we demonstrate novel roles of stmn4 in retinal cell development and in the cell cycle process. Stmn4 −/− embryos and larvae exhibit touch response defects and developmental defects of retinal cells, and show general differentiation impairments of neural cells, such as neurons, astrocytes, and oligodendrocytes, suggesting that normal functional stmn4 is required for general neural cell differentiation.
Mature neural cells are the functional units that ensure normal behavioral expression in vertebrates; thus, retinal cell developmental defects and general neural cell differentiation defects might potentially contribute to the touch response defects in stmn4 −/−, implying the pivotal roles of the neural system in regulating fish behaviors (Portugues and Engert 2009). Meanwhile, stmn2b is up-regulated in the stmn4 −/− mutants. Studies have shown that stmn2b is mainly expressed in the anterior central nervous system (the forebrain region, retina, optic tectum and hindbrain) and cranial ganglia starting from 48 hpf in zebrafish (Burzynski et al. 2009); the up-regulated expression of stmn2b may compensate for the retinal developmental defects in stmn4 −/− mutants as a genetic compensation response (GCR), as reported recently (El-Brolosy et al. 2019; Ma et al. 2019). A small amount of Stmn4 protein is still detected in the mutants at 24 hpf; we speculate that stmn4 is a maternal factor and that maternal Stmn4 protein might persist in the mutants, as reported in recent studies (Hu et al. 2016; Li et al. 2021; Song et al. 2023). In this study, depletion of stmn4 in zebrafish leads to disorders in spindle assembly and in cell cycle exit, as well as M-phase arrest in the retina, and results in further cell apoptosis and reduced retinal cells in the stmn4 −/− mutants. Meanwhile, genes related to the regulation of microtubule dynamics exhibit differential expression and are significantly enriched in stmn4 −/− mutants. Stathmins are required to facilitate mitosis and cell cycle progress by acting as microtubule destabilizers (Charbaut et al. 2001). Stathmin family members possess the SLD-like domain, which allows the family proteins to have similar regulatory functions in cells and to be involved in mitosis by participating in the polymerization of microtubules (Chauvin and Sobel 2015). Stmn4 has been less studied than other family genes, but it has been reported to play a role in activity-induced neuronal plasticity and neuronal differentiation (Beilharz et al. 1998), and has been shown to influence cell cycle progress via regulating the G2/M phase to regulate midbrain development before 24 hpf (Lin and Lee 2016), which is consistent with our finding that stmn4 deficiency causes cell cycle arrest in the M phase and induces neuronal cell development defects during zebrafish embryogenesis. Disruption of microtubules can induce cell cycle arrest in the G2/M phase and the formation of abnormal mitotic spindles (Kaur et al. 2014), and M-phase arrest in retinal cells of stmn4 −/− mutants is observed in this study, suggesting that the dynamics of polymerization and depolymerization of microtubules (Gardner et al. 2008; Grenningloh et al. 2004) are damaged in these cells, further demonstrating the essential roles of Stathmin in the cell cycle process (Hanash et al. 1988; Luo et al. 1991) via participating in microtubule assembly and regulating the depolymerization dynamics of microtubules (Desai and Mitchison 1997; Wäsch and Engelbert 2005).
In this study, we find that the expression of the cell cycle functional proteins (Cdc25b, Ccnb1 and Cdk1) is significantly down-regulated in stmn4 −/− mutants, consistent with the report that Stmn4 can indirectly control the entry of neuronal cells into the G2/M phase by regulating the expression of Cdc25a in the zebrafish midbrain (Lin and Lee 2016). The normal expression of CDK1/CCNB1 is the basic condition for cell exit from mitosis (Wäsch and Engelbert 2005). The down-regulated expression of Cdk1/Ccnb1 protein, the increased expression of RPC markers together with the reduced expression of mature neuron and rod/cone cell markers, and the increased PH3+ cells in the retina are observed in stmn4 −/− mutants, suggesting that RPCs encounter difficulties in exiting mitosis, which leads to the arrest of RPCs in the M-phase. Combined with the aforementioned detection of the expression level of Tubulin and the observations of the mitotic process of RPCs, we conclude that the severely affected expression and function of Tubulin would lead to difficulty in spindle formation.
Taking the above points together, we demonstrate that stmn4 deficiency induces cell cycle impairments in the retina via down-regulating the key cell cycle regulators Cdc25a, Ccnb1 and Cdk1 and damaging microtubule assembly dynamics, which jointly contribute to the final developmental defects of the retina and the resulting touch response defects in the mutants.
The outcome of cells with an arrested cell cycle is cell death, and microtubule dysfunction can easily lead to cell cycle arrest and even apoptosis with activated apoptotic signals in the cells (Iuchi et al. 2009; Liu et al. 2019; Nagireddy et al. 2022). Studies have shown that cell cycle arrest of RPCs may easily lead to cell apoptosis (Baye and Link 2007; Li et al. 2019, 2021) and that long-term stagnation in the M-phase naturally leads to cell apoptosis (Mc Gee 2015; Vitale et al. 2011; Vitovcova et al. 2020). In this study, we observe the enrichment of apoptosis-related GO terms and the increase of the apoptosis positive regulators P53 and cleaved Caspase3, together with the decrease of the apoptosis negative regulator Bcl-2, in stmn4 −/− mutants; these changes are prone to occur in response to the cell cycle arrest caused by defective microtubule expression.
Fig. 4 Stmn4 mRNA could effectively rescue retinal developmental defects in both Cu stressed embryos and stmn4−/− mutants. (A) Stmn4 mRNA effectively rescued the changed expression of elavl3 and crx in the retina of Cu stressed embryos (A1 − A8, A10 − A17), with calculations of the relative expression of elavl3 and crx in embryos from each group (A9, A18). (B) Stmn4 mRNA effectively rescued the changed
Fig. 6 Expression of cell cycle genes in stmn4 −/− mutants. (A) GO pathways of apoptosis and mitosis were enriched for DEGs in stmn4 −/− mutants at 24 hpf. (B) Heatmaps of the down-regulated cell cycle related DEGs (B1, B2). (C) qRT-PCR assays for the cell cycle related genes in zebrafish embryos at 16 hpf
Fig. 7 Stmn4 deficiency led to changed expression of apoptotic proteins. (A) Expression of P53, Bcl-2, and Cleaved Caspase3 in the head of WT and stmn4 −/− mutants at 24 hpf and 48 hpf (A1), respectively, and the calculation of the protein levels | 2024-01-23T06:17:29.276Z | 2024-01-22T00:00:00.000 | {
"year": 2024,
"sha1": "eb9692350a654b00d30a2ecace8e2a9d68ee0643",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "52b912a89f774e4a78213cda672eb496eb4be5b5",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
674120 | pes2o/s2orc | v3-fos-license | Nonequilibrium Phenomena in Liquid Crystals
This paper summarizes a talk presented at the April NATO ASI on Spatiotemporal Chaos in Complex Fluids, in Santa Fe, NM. The paper gives reasons that make complex fluids good material systems for conducting experiments on pattern formation and other nonequilibrium phenomena. Much of the discussion focuses on the different phenomena observed in solidification and how the increasing complexity of fluid systems decreases the velocity scale for achieving "rapid" solidification. Five systems are compared to illustrate this point: simple fluids, simple alloys, thermotropic liquid crystals, lyotropic liquid crystals, and diblock copolymers. Finally, an example is given of the kinds of transitions that may be observed in rapid solidification.
electric field. The transition is really a supercritical bifurcation from one uniform state to a second, stationary state. This is perhaps the simplest of bifurcations, and one can immediately expect to see all of the universal behavior associated with such transitions.
(For example, the maximum deflection of molecules grows as the square root of the distance above threshold, as suggested in Fig. 3a.) In 1971, Schadt and Helfrich [27] modified the Freedericksz experiment slightly by rotating one plate 90° with respect to the other - thereby twisting the molecules in the sample - and by adding crossed polarizers. In this configuration, as shown in Fig. 2, the transmitted light-intensity curve follows that of the molecular distortion. The configuration was the basis for the first commercially successful liquid crystal display and is still extensively used for small displays where a limited amount of information is to be shown.
In 1982, it was found that if the twist angle is increased past 270°, the bifurcation becomes subcritical [32,28]. (See Fig. 3.) Although the resultant hysteresis causes difficulties for display switching, the limiting case of a 270° twist angle is useful. As Fig. 3c shows, the transmission curve switches more abruptly for such "tricritical" bifurcations than for supercritical bifurcations. (An elementary analysis shows that the intensity now rises as the distance from threshold to the 1/4 power [1].) This "supertwist" display is the dominant one used for the large flat-screen displays found in notebook computers.
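To see why the 1/4-power law switches more sharply, it helps to tabulate the two response curves side by side. The sketch below sets all prefactors to one, so only the scaling with the reduced distance above threshold (eps) is meaningful; it is illustrative, not a model of any particular display.

import numpy as np

eps = np.linspace(0.0, 0.1, 6)   # reduced control parameter, e.g. V/V_c - 1
supercritical = eps ** 0.5       # deflection/intensity ~ eps^(1/2)
tricritical = eps ** 0.25        # intensity ~ eps^(1/4): steeper near threshold

for e, s, t in zip(eps, supercritical, tricritical):
    print(f"eps = {e:.2f}:  supercritical = {s:.3f}   tricritical = {t:.3f}")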
I have outlined the history of liquid-crystal displays in some detail because - at least in hindsight - simple ideas from nonequilibrium science are relevant. A good display requires a sharp transition from the "off" state to the "on" state. Thus, it makes sense to use a supercritical transition, as opposed to a design in which the intensity is an analytic function of the control parameter. Changing the bifurcation from supercritical to sub- or tricritical further speeds the switching.
Simple ideas from nonequilibrium science can thus be combined with the special properties of complex fluids (birefringence, electric-field alignment of the optical axis) to create useful devices. The large markets for such devices - well over $3 billion per year for liquid-crystal displays [24] - certainly justify continued research into understanding and further cataloguing of analogous special effects. Other special effects I could have cited include drag reduction in turbulent flows by adding small amounts of polymer [13], which has been used to make fire hoses shoot farther and submarines move faster; the giant swelling transition in gels [25], which promises robotic "fingers" that can grasp delicate parts without damage; and electrorheological fluids [18], which are being tested in active automobile suspensions.
In this conference, K. Amundson [2], R. Larson [20], and H. R. Brand [7] have discussed other interesting polymer effects. I could go on, but I hope the point is clear.
In addition to the "bestiary" of special effects, there is a second, more fundamental reason to study nonequilibrium phenomena in complex fluids. Nonequilibrium science can loosely be characterized as the systematic exploration of systems as some "stress" is increased. And, simply put, complex fluids are easier to drive out of equilibrium than simple ones.
To understand this remark, consider what I shall call - with no disapproval implied - the "conventional" view of the progression of nonequilibrium phenomena. This view, largely shaped by work in fluid dynamics, is sketched in Fig. 4: unstressed or lightly stressed systems are in a simple "laminar" state. As the stress is increased, the system undergoes a sequence of bifurcations that results in a time-dependent, chaotic state with limited temporal but full spatial coherence. As one further increases the stress, a second series of transitions - less well understood - progressively destroys the spatial coherence of the system and results in a fully turbulent flow. Well-studied examples that illustrate this progression include Rayleigh-Bénard convection and Taylor vortex flow [11], where "stress" is measured by the Rayleigh and the Reynolds numbers, respectively.
At first glance, the behavior in complex fluids would seem to parallel that of simple fluids.
For example, when the Freedericksz experiment is performed on a nematic that tries to align perpendicularly to the applied field, convective motion is observed. (See, for example, W. Zimmerman's contribution to these proceedings [34].) I want to suggest, though, that there is an important difference between the behavior of complex and simple fluids when driven out of equilibrium: in simple fluids, for reasonable driving stresses, the fluid is always in local - but not global - thermodynamic equilibrium. For simple fluids, this observation has a number of consequences. If, during an experiment on simple fluids (e.g., Rayleigh-Bénard convection using water), you were to sample the fluid used, you would find its material properties to be the same as in equilibrium. Moreover, at the end of the day, when you switched off the experiment, the fluid would settle down to its equilibrium state. Water that has been churned about at Reynolds numbers of 10^5 cannot be distinguished from water that has spent all day sitting at rest in a glass. Such observations - trivial as they may be - stand in contrast to the case of complex fluids where, I shall argue, modest driving forces can push a system out of equilibrium on length and time scales comparable to the microscopic scales that characterize the structure of the fluid.
Rather than discuss fluid dynamics, I want to focus on a phenomenon that is equally rich and about which I have personal experience: solidification. As is well-known, freezing fronts are often unstable to shape undulations. (See Fig. 5.) This instability was first analyzed in detail by Mullins and Sekerka [23] and is relevant whenever front growth is controlled by diffusive processes (typically, these are either the diffusion of latent heat or of chemical impurities away from the interface). If one freezes more rapidly, however, one finds another regime, the kinetics-limited regime, where front behavior is controlled by local ordering processes at the interface itself. As we shall see, the velocity separating the diffusion-limited from the kinetics-limited regimes, v_0, sets the scale for nonequilibrium phenomena. Fronts moving with v ≪ v_0 are nearly in equilibrium, while fronts moving with v ≳ v_0 are strongly out of equilibrium. I shall call the former regime one of slow solidification and the latter regime one of rapid solidification.
To understand why v_0 sets the scale for nonequilibrium "stress" in solidification, we need to recall two facts. On the one hand, fronts have a finite thickness ℓ. This means that an interface moving at velocity v will take a time t_p = ℓ/v to pass a given observation point. On the other hand, a front may be viewed as an "ordering wave" that propagates through the fluid. As the front passes through an observation point, fluid molecules that were formerly in a disordered state now have to order. The ordering takes time - call it t_0. If the ordering time t_0 ≪ t_p, then we have slow solidification, since the front has ample time to order. If t_0 ≳ t_p, then the front will have already passed through the observation point before the ordering is complete, and one may expect new phenomena to be observed. Equating the two time scales gives the velocity v_0 ∼ ℓ/t_0 described above.
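This time-scale comparison is simple enough to put into a few lines of code. The sketch below is illustrative only; the numbers are the order-of-magnitude values used in this section (ℓ of about an angstrom, an ordering time set by the slowest relaxational process).

def regime(l_cm, t0_s, v_cm_per_s):
    """Classify a front as slow or rapid by comparing t_0 with t_p = l/v."""
    t_p = l_cm / v_cm_per_s    # time for the front to pass one point
    v0 = l_cm / t0_s           # characteristic solidification speed
    label = "slow solidification" if t0_s < t_p else "rapid solidification"
    return f"v_0 = {v0:.1e} cm/s, t_p = {t_p:.1e} s -> {label}"

# Simple pure fluid: l ~ 1e-8 cm; t_0 set by heat diffusion (~1e-13 s),
# giving v_0 ~ 1e5 cm/s -- roughly the sound speed quoted below.
print(regime(1e-8, 1e-13, 1e2))   # a front at 1 m/s is deep in the slow regime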
The characteristic solidification speed of a front, v_0, is the ratio of a microscopic length, ℓ, to a microscopic ordering time, t_0. For simple fluids, this scale velocity turns out to be roughly the sound speed, and one can imagine that concocting a controlled experiment on fronts moving a kilometer a second is not easy! It turns out, though, that in a complex fluid, v_0 can be dramatically reduced, so that controlled experiments become feasible. Next, consider a simple alloy, made of a mixture of two simple fluids. The fundamental length scale is still about an angstrom (ℓ ≈ 10^-8 cm), but now the solid phase is formed with an additional constraint: not only must energy be removed from the interface, but the A and B molecules must also be arranged in a precise pattern in the solid phase. In addition, the relative concentration of B and A molecules will differ in the two phases. Thus, freezing an alloy requires rearranging atoms, so that the time scale is set by mass diffusion and not by heat diffusion. Since the mass diffusivity D ≈ 10^-5 cm^2/sec is a hundred times smaller than the heat diffusivity, we expect t_0 ≈ 10^-11 sec and v_0 ≈ 10^-5/10^-8 ≈ 10^3 cm/sec (10 m/sec).
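The same estimate, written out explicitly as a back-of-the-envelope check using only the order-of-magnitude values quoted above (note that v_0 = ℓ/t_0 = D/ℓ once t_0 = ℓ²/D):

l = 1e-8        # cm, molecular/interface length scale
D = 1e-5        # cm^2/s, mass diffusivity (the slowest relaxational process)

t0 = l**2 / D   # ordering time: ~1e-11 s
v0 = D / l      # scale velocity: ~1e3 cm/s

print(f"t_0 ~ {t0:.0e} s, v_0 ~ {v0:.0e} cm/s = {v0/100:.0f} m/s")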
Indeed, rapid solidification experiments on metallic alloys do show interesting phenomena when fronts move faster than about 10 m/sec [8]. Notice that the microscopic time scale t_0 determining v_0 is set by the slower of the two relaxational processes (heat and mass diffusion). This is a general feature of complex fluids: the slowest relaxational process sets the microscopic ordering time scale. Notice, too, that although the length and time scales both increase as we go from a simple pure fluid to a simple alloy, the ratio v_0 decreases. This, too, is general.
Next, we consider thermotropic liquid crystals, which are pure materials made up of rigid, anisotropic molecules. In most cases, the molecules are rod-shaped, but disk-shaped molecules also form liquid-crystal phases [9]. In my own work, I have studied the solidification of thermotropic liquid crystals with Patrick Oswald, Adam Simon, and Albert Libchaber [5]. Our directional solidification apparatus allowed a maximum speed of about 300 µm/sec. This is still somewhat slower than the scale speed of v_0 ≈ 1 cm/sec, but already interesting phenomena were observed. In particular, we observed that in addition to a velocity threshold above which a flat interface destabilized, there was a second threshold above which the flat interface reappeared. In fact, the original study of a flat interface had predicted that for large freezing velocities and for large thermal gradients, the front would restabilize. The front restabilization velocity is indirectly linked to v_0 and occurs at about 300 µm/sec for the nematic-isotropic interface of a thermotropic liquid crystal lightly doped with ordinary impurities (i.e., impurities that are themselves simple molecules). A typical stability curve is shown in Fig. 8. These observations were significant in that the restabilization velocity of simple alloys is on the order of meters/sec. We were thus able to explore the entire bifurcation diagram, while previous experiments had probed just a small piece of it. We tested the linear stability analysis in the restabilization regime and also found a number of interesting secondary instabilities in the interior of the bifurcation diagram (parity breaking, traveling waves, breathing modes, etc.) [29,14]. One answer, then, to the question "why use liquid crystals and other complex fluids to study nonequilibrium phenomena" is that they can facilitate the study of instabilities that were already known in the context of simpler fluids. A second answer is that they allow access to the locally nonequilibrium regime. What can one expect to see here? In contrast to the usual nonlinear regime, much less is known, and I can only suggest what is to be learned. If we consider the case of solidification, we see that if we were to freeze a liquid instantaneously, the disorder of the fluid would be quenched in and produce a glassy state. One possibility, then, is that in the kinetics-limited regime, the ordered state will be progressively disrupted as the velocity is increased. The defect density in the ordered phase would then be a smoothly increasing function of the freezing velocity [30].
Another - and to my mind, more interesting - possibility is that the route from the ordered to the disordered state proceeds by a series of discrete transitions, and we have studied a system that displays such behavior [6,31]. As illustrated in Fig. 9, we have proposed that a rapidly moving front can split into two separately moving fronts, one dividing the disordered phase (phase 0) from a new metastable phase (phase 1), the second dividing this metastable phase from the ordered, thermodynamically stable phase (phase 2). A necessary condition for the front to split is that the velocity of the leading edge v_10 exceed that of the trailing edge. In the meantime, poor man's versions of the splitting transition have been observed in thermotropic liquid crystals. The transition is not between two thermodynamically distinct phases but between two configurations of the nematic phase. In Fig. 10, I show a side view of the meniscus of the nematic-isotropic (NI) interface discussed above. The glass plates are treated to align surface molecules perpendicular to the plates (homeotropic orientation).
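Before turning to the experimental realization below, a toy calculation makes the splitting condition concrete. The velocities are hypothetical (v10 labels the leading, phase-1/phase-0 edge; v21 is my label for the trailing, phase-2/phase-1 edge, which the text leaves unnamed): the metastable band of phase 1 widens linearly in time whenever v10 > v21.

def band_width(v10, v21, t):
    """Width of the metastable phase-1 band after time t (zero if no split)."""
    return max(0.0, (v10 - v21) * t)

for t in (0.0, 10.0, 20.0):                      # seconds
    print(f"t = {t:4.1f} s: width = {band_width(50.0, 30.0, t):6.1f} um")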
There is another, globally incompatible condition at the NI interface itself. The resulting frustration forces a singularity in the nematic phase. (See Fig. 10a.) Topologically, the defect can either be next to the interface or be deep in the nematic phase. (See Fig. 10b.) In the latter situation, the twisted region has a higher elastic energy than the homeotropic region. The defect line will then move back towards the NI interface at a velocity v_defect set by the nematic's viscosity and elastic constants. However, if the isotropic phase is moving faster than the defect line, the defect cannot catch up and we have the splitting transition described above. In this case, the isotropic is phase 0, the homeotropic phase 2, and the new (planar) orientation of the nematic is the metastable phase 1. If the freezing velocity v is low, we expect to see a homeotropic-isotropic interface (a '20' interface). For v > v_defect, we would expect to see the defect line peel back, creating a widening region of phase 1.
In fact, something slightly different happens (Fig. 11). The defect line detaches only when v substantially exceeds v_defect, and then only when the interface passes through a dust particle. The interface detaches locally, and a planar region spreads out, creating a triangular shape that is a record of the space-time history of the new domain. Note that in Fig. 11 there are simultaneously "20" and "10" interfaces present. This means that the splitting transition here is hysteretic. Finally, while physicists tend to be intrigued by the triangular shape of the domain, metallurgists are distinctly unimpressed: in the rapid casting of metal alloys, they see these shapes all the time.
Summing up, Fig. 12 shows what the complete spectrum of behavior of a front might be as the driving force is systematically increased. In the near-equilibrium regime, the front is unstable to undulations whose size decreases with velocity. Above v_0, one can expect to see front splitting and, eventually, disordering of the low-temperature phase. For lack of time, my discussion of rapid solidification has been incomplete, and I regret not talking about oscillatory instabilities [19] and solute trapping [3,33].
"year": 1993,
"sha1": "3e51fe253f39682d4f5cd72b54f93f92831ebe1e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4895d0b977983df7fe3394fd9243980f25ba98d1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Molecular and Quantitative Genetic Differentiation in Sitobion avenae Populations from Both Sides of the Qinling Mountains
Quantitative trait differences are often assumed to be correlated with molecular variation, but the relationship is not certain, and empirical evidence is still scarce. To address this issue, we sampled six populations of the cereal aphid Sitobion avenae from areas north and south of the Qinling Mountains, and characterized their molecular variation at seven microsatellite loci and their quantitative variation in nine life-history traits. Our results demonstrated that southern populations had slightly longer nymphal developmental times but much higher lifetime fecundity than northern populations. Of the nine quantitative characters tested, eight differed significantly among populations within regions, as well as between the northern and southern regions. Genetic differentiation in neutral markers was likely caused by founder events and drift. Subdivision in quantitative characters was increased in northern populations but reduced in southern populations; no such pattern was found for molecular characters, suggesting a decoupling of molecular and quantitative variation. The pattern of relationships between FST and QST indicated divergent selection and suggested that local adaptation plays a role in the differentiation of life-history traits in the tested S. avenae populations, particularly in those traits closely related to reproduction. The dominant role of natural selection over genetic drift was also supported by strong structural differences in G-matrices among S. avenae populations. However, cluster analyses did not yield two groups corresponding to the northern and southern regions. Genetic differentiation between northern and southern populations in neutral markers was low, indicating considerable gene flow between them. The relationship between molecular and quantitative variation, as well as its implications for the differentiation and evolution of S. avenae populations, is discussed.
Introduction
Barriers to gene flow presented by mountain ranges can have significant consequences for the ecology and evolution of various organisms, and can even lead to the isolation of populations. Such isolation may in turn cause allopatric speciation in different environments, where reproductive isolation between populations could arise gradually and incidentally as a result of mutation and drift, and of indirect effects of local selection (causing local adaptation) [1][2]. The Qinling Mountains (highest peak 3,767 m) in Shaanxi Province of China, which extend for nearly 2,500 kilometers in the east-west direction, provide a unique scenario for studying such effects. The mountain ranges are located in the transition zone between two macroclimatic regimes (i.e., the subtropical and warm-temperate zones) [3]. The regions south of the mountains have subtropical characteristics with wet summers and warm winters, while the regions north of them belong to warm-temperate and temperate zones with relatively dry summers and cold winters [4].
Significant geographic variation in life-history traits (e.g., developmental time and fecundity) can occur among populations of a species separated by environmental barriers like the Qinling Mountains [4]. Determining the mechanisms that cause such variation is one of the basic objectives of ecological and evolutionary studies. It has long been recognized that geographic variation in the life-history traits of an organism can include both genetic and environmental components [5]. The genetically determined components of life-history trait variation are a significant factor driving evolution, supplying the raw material for evolutionary change. Genetic variation can be affected not only by gene flow, but also by natural selection, genetic drift, colonization history, mutation, and their interactions [6][7]. Among these causes, natural selection is generally considered the major force generating changes in the level of genetic variation within and among populations (especially in heterogeneous environments), and thus driving adaptation and evolutionary change [7][8][9]. A population's ability to respond to changing environments depends on additive genetic variation in ecologically relevant quantitative traits [10]. Due to technical and logistic difficulties in measuring such variation, many studies rely solely on molecular markers to evaluate adaptive genetic variation; however, the degree to which molecular variation reflects quantitative genetic variation between populations is still controversial [10].
The cosmopolitan aphid Sitobion avenae (Fab.), a serious pest of cereal crops [11], offers a good model organism for studying how differentiation can evolve among populations of small organisms with assumed high dispersal ability, for several reasons. Sitobion avenae is widely distributed in areas around the Qinling Mountains, which can be a significant barrier to its dispersal, since its occurrence on wild hosts is rare above 2,000 m in the mountains (D. L., personal observation). This aphid reproduces by cyclical or obligate parthenogenesis, and can be maintained clonally for an indefinite period under controlled laboratory conditions. Quantitative traits of interest can therefore be evaluated using clonal individuals, and phenotypic variation can be partitioned into its genetic and environmental components. Finally, microsatellite markers have been developed for this species, allowing evaluation of its molecular variation and population structure. In our previous study [4], considerable divergence in life-history traits among S. avenae populations from north and south of the Qinling Mountains was identified, but the relative importance of different evolutionary forces in shaping and maintaining genetic variation in those populations remains unclear. We hypothesize that the separation imposed by the Qinling Mountains causes increasing quantitative trait divergence and local adaptation of S. avenae populations in northern and southern Shaanxi Province, thus creating different population structuring patterns in those areas. To test this hypothesis, we compared populations of S. avenae collected from areas north of the Qinling Mountains (hereafter referred to as northern populations) to those from areas south of the mountains (hereafter referred to as southern populations) using both Q_ST (an index of differentiation in quantitative traits [12]) and F_ST (an index of differentiation in neutral genetic markers [12]). The aims of the present study were: (a) to characterize patterns of genetic differentiation among S. avenae populations in molecular traits (evaluated using seven microsatellite markers) and in quantitative life-history traits (measured under common laboratory conditions), (b) to assess the relationship between molecular and quantitative trait variation, and (c) to evaluate the relative importance of selection and drift in shaping and maintaining genetic variation between northern and southern populations.
Materials and Methods
Aphid sampling and colony establishment

From April to May 2013, individuals of S. avenae were sampled from six locations, three each in areas north and south of the Qinling Mountains. The aphid clones came from wheat fields in the north (collected at Fuping, 34°46'46"N, 109°01'56"E; Tongchuan, 35°07'05"N, 109°06'58"E; and Luochuan, 35°46'26"N, 109°24'29"E) and south (collected at Jinshui, 33°16'10"N, 107°47'13"E; Longting, 33°12'41"N, 107°38'31"E; and Mianxian, 33°11'36"N, 106°56'55"E) of the Qinling Mountains (no particular permissions were needed for collecting at the sites mentioned above, and no endangered or protected species were involved in the collecting activities). Aphid samples were collected from the field, and insect colonies were maintained in the laboratory as described in detail in [9]. Briefly, over 15 wingless adults of S. avenae were collected in each of five or more fields randomly selected at each location. Winter wheat seeds (Triticum aestivum cv. Aikang 58) were planted, and aphid colonies were kept in rearing rooms. To eliminate environmentally induced effects, the S. avenae populations were reared under common laboratory conditions for three generations before the bioassays. Even without genetic changes among populations, maternal effects can produce plastic responses that may be mistaken for local adaptation, but after three generations of rearing under common laboratory conditions such effects become negligible [13].
Life-history data collection
Plants used in the life-history tests were grown, and aphid life-history data were collected, as described in detail in [9]. Briefly, aphids were transferred from rearing plants onto wheat seedlings at the one- to two-leaf stage (one aphid per plant). Each plant with an aphid individual was enclosed in a transparent plastic cylinder (6 cm in diameter, 15 cm in height) and maintained in environmental growth chambers at 20±1°C, a relative humidity of 65±2%, and a photoperiod of 16:8 (L:D). Thirty clones were randomly selected from the aphid colonies of each location for use in the tests, and three to four replicates were conducted for each clone. Test aphids were observed twice daily from birth until the onset of reproduction, and molting occurrences and mortality were recorded. After reproduction started, the numbers of offspring and dead aphids were recorded daily, and offspring were removed, until all test aphids had died.
Quantitative trait analysis
Developmental times of the first to fourth nymphal instars, the total developmental time of nymphs, adult lifespan, reproductive time, post-reproductive time, and lifetime fecundity were calculated as described in [9]. These quantitative traits were analyzed with nested analyses of variance (nested ANOVAs; sources of variance: 'region', 'location' nested within 'region', and 'clone' nested within 'location') in SAS [14]. The variance explained by 'clone' (δ²_clone) was determined. When the overall variation in an ANOVA was significant, separation of treatment means was carried out using Tukey tests at α = 0.05. Data were log-transformed when needed to meet the ANOVA assumptions of normality and homoscedasticity (i.e., homogeneity of variance).
Principal component analysis (PCA) was performed on all quantitative traits measured above after the raw data were log-transformed [14]. The purpose of the PCA was to quantify the contribution of the variables that explain the differences in the abovementioned quantitative traits, and to identify similarities in the data structure among groups [9]. The first two PCA components (PC1 and PC2) were plotted using the PROC GPLOT procedure. The PC1 score of each replicate was calculated and used as a composite life-history factor (i.e., PC1) in subsequent analyses.
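As a concrete illustration of this step, the following is a minimal sketch of deriving a composite life-history factor as PC1 scores. The simulated trait matrix and the use of scikit-learn (rather than SAS) are our own assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
traits = rng.lognormal(mean=1.0, sigma=0.3, size=(30, 9))  # 30 clones x 9 traits
X = np.log(traits)                         # log-transform, as in the text
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize before PCA

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)
print("variance explained:", pca.explained_variance_ratio_)
pc1 = scores[:, 0]   # composite life-history factor used in later analyses
```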
Our bioassays used clonal genotypes, and this design allowed us to estimate the total variance of a particular quantitative trait (V_P), which can be partitioned into an among-clone genetic component V_G (i.e., the broad-sense genetic variance) and a within-clone component V_E (i.e., environmental or residual variance) [9]. Broad-sense heritabilities were calculated as described previously in [9]. Genetic variance and covariance estimates for life-history traits were obtained with the restricted maximum likelihood (REML) method implemented in the software VCE 6.0.2 [15]. The genetic correlation between traits x and y was calculated from the genetic covariance estimate cov(x, y) and their additive variances as r = cov(x, y)/(v_x × v_y)^0.5. The Flury hierarchical method was utilized to compare the resulting G matrices in the software CPCrand [16]. Based on maximum likelihood, this method can identify structural differences among G matrices by analyzing their eigenvectors and eigenvalues as described in [17]. Briefly, the method tests, in order, the models of unrelated structure, partial common principal components, common principal components, proportionality, and equality (see also [17]). The statistical significance of genetic correlations and broad-sense heritabilities was assessed with likelihood-ratio tests (LRTs) (for more details, see [9,17]). Q_ST values were evaluated using genetic variances and defined as Q_ST = V_GB/(V_GB + 2V_GW), where V_GB is the genetic component of the variance between population means and V_GW is the average genetic variance within populations [18]. Q_ST was estimated between each pair of populations, as well as between the northern and southern regions. Differentiation among populations within a region was evaluated similarly as Q_SR. Pairwise population Q_ST values (a global index of differentiation) were also computed for the composite life-history trait, i.e., the first component obtained from the PCA performed on the abovementioned quantitative traits. Following Chapuis et al. [7], Q_ST was considered different from the corresponding F_ST if their respective 95% confidence intervals did not overlap. Another measure of genetic variance, useful for comparison among characters and populations, is the coefficient of genetic variance (CV_G = V_G^(1/2)/X, where X is the mean phenotype) [19].
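To make the variance-component formulas above concrete, here is a minimal sketch of Q_ST, the genetic correlation, and CV_G exactly as defined in the text; the numeric variance components are invented for illustration and are not the study's estimates.

```python
import math

def q_st(v_gb, v_gw):
    """Q_ST = V_GB / (V_GB + 2*V_GW), as defined in the text."""
    return v_gb / (v_gb + 2.0 * v_gw)

def genetic_correlation(cov_xy, v_x, v_y):
    """r = cov(x, y) / sqrt(v_x * v_y)."""
    return cov_xy / math.sqrt(v_x * v_y)

def cv_g(v_g, trait_mean):
    """Coefficient of genetic variance: sqrt(V_G) / mean phenotype."""
    return math.sqrt(v_g) / trait_mean

# Invented variance components for one trait (not the study's estimates):
print(q_st(v_gb=0.8, v_gw=1.5))              # ~0.21
print(genetic_correlation(-0.4, 1.2, 0.9))   # negative value => a trade-off
print(cv_g(1.2, 25.0))
```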
Microsatellite data collection and analysis
Over 40 clones per population were randomly selected and genotyped at seven microsatellite loci (Sm10, Sm12, S17b, Sm17, S16b, Sa4∑ and S5L) following the procedures described in Simon et al. [20]. Using these loci, population differentiation was assessed by calculating pairwise F_ST (θ) [21] between populations and between the two regions. Deviations from Hardy-Weinberg equilibrium and unbiased estimates of F_IS (f) [21] were estimated. Microsatellite diversity was assessed by calculating unbiased estimates of gene diversity (H_E) [22] for each sample. Region-specific F_ST (hereafter referred to as F_SR), F_IS, H_E, Ho (observed heterozygosity), and R_S (allelic richness) were also estimated, and their differences between regions were tested by performing 1000 permutations of alleles among individuals. These F-statistics and the related tests were conducted in FSTAT version 2.9.3 [23].
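For readers unfamiliar with the gene-diversity statistic, the sketch below computes an unbiased expected heterozygosity (H_E) at a single locus with a small-sample correction. The allele counts are fabricated, and this is only our illustration of the general statistic, not necessarily the exact estimator implemented in FSTAT.

```python
# Unbiased expected heterozygosity (gene diversity) at one locus.
from collections import Counter

def unbiased_he(alleles):
    """alleles: list of allele labels, two entries per diploid individual."""
    n_copies = len(alleles)                     # 2N gene copies
    freqs = [c / n_copies for c in Counter(alleles).values()]
    h = 1.0 - sum(p * p for p in freqs)
    return h * n_copies / (n_copies - 1)        # small-sample correction

sample = ["a1", "a2"] * 30 + ["a3"] * 20        # 40 diploids = 80 gene copies
print(round(unbiased_he(sample), 3))
```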
We also analyzed the microsatellite data with an individual based principal component analysis (PCA) using the program PCA-GEN version 1.2.1 [24]. This analysis uses allele frequencies to define new variables (components) that can characterize the neutral genetic variation among populations. PCA is a preferable method for visualizing microsatellite data in situations of potential gene flow because traditional bifurcating trees constructed from genetic distances can be difficult to interpret [25].
Differences in quantitative traits
Northern populations showed significantly shorter developmental times for the first, second and third nymphal instars, but not for the fourth instar (Table 1). The total developmental time of nymphs was also significantly shorter for northern populations than for southern populations. Southern populations had much higher lifetime fecundity than northern populations. The post-reproductive time and adult lifespan of southern populations were much longer than those of northern populations, whereas the reproductive time of southern populations was only slightly longer. Similarly, all of the abovementioned quantitative traits showed significant variation among populations within a region, except the developmental time of fourth instar nymphs. Of the nine quantitative characters, eight differed significantly among populations within regions, as well as between the two regions.
Quantitative genetic differentiation
Genetic differences between northern and southern populations are shown in Table 2. There were no significant differences between northern and southern populations in genetic variance (δ²_clone). However, the coefficient of genetic variance (CV_G) and the mean broad-sense heritability (H²) were both significantly higher for populations in the northern region than for those in the southern region. Significant genetic correlations were found between life-history traits for S. avenae populations from both regions (Table 3). A significantly negative correlation (i.e., a trade-off) between DT1 and DT2 was found for southern populations, but not for northern populations. DT1, DT2 and DT3 were all significantly and negatively correlated with FEC, POS, SPA and RET for northern populations, whereas all these genetic correlations were non-significant for southern populations except that DT2 was significantly correlated with FEC and SPA. DT5 was significantly and negatively correlated with FEC, POS, SPA and RET for northern populations, whereas it was only significantly correlated with FEC for southern populations. The correlations of POS-FEC and POS-RET were significantly positive for northern populations, but significantly negative for southern populations. The correlations of FEC-SPA, FEC-RET and SPA-RET were all significantly positive for populations from both regions. G-matrix comparisons by Flury's method with the jump-up approach (that is, at each step in the hierarchy, the hypothesis is tested against the hypothesis of unrelated structure) showed significant structural differences among the G-matrices of the tested populations. (Note to Table 2: the genetic variance (δ²_clone), the coefficient of genetic variance (CV_G) and the broad-sense heritability (H²) were measured for each region over all quantitative traits; allelic richness (R_S), observed heterozygosity (Ho), and inbreeding coefficient (F_IS) were estimated for each region over all loci, and significance tests between the northern and southern regions were performed by randomization procedures in FSTAT.)
Microsatellite differentiation
Microsatellite variation among populations (Table 5) was evident, with overall gene diversity for southern populations ranging from 0.285 in the Mianxian population to 0.766 in the Jinshui population. Overall gene diversity for individual northern populations varied from 0.393 in the Luochuan population to 0.538 in the Fuping population. Thus, variation in gene diversity among populations within a region tended to be higher in the south than in the north. Levels of gene diversity tended to be lower in the north (0.466 ± 0.049) than in the south (0.530 ± 0.086), although the difference was not significant (P > 0.05). The differences in allelic richness (R_S), observed heterozygosity (Ho), and inbreeding coefficient (F_IS) between the two regions were not significant (all P > 0.05, Table 2). The Q_ST values for the developmental times of the first to fourth instar nymphs and for the total developmental time of nymphs were not significantly higher than the corresponding F_ST values (Fig. 1). However, lifetime fecundity, post-reproductive time, adult lifespan and reproductive time all showed significant differentiation among populations and significantly exceeded the neutral expectation set by F_ST (i.e., Q_ST > F_ST).
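The decision rule used throughout — calling Q_ST different from F_ST when their 95% confidence intervals do not overlap — can be sketched as follows; the interval endpoints are invented for illustration.

```python
# Sketch of the non-overlapping-CI criterion used to call Q_ST > F_ST.
def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

qst_ci = (0.18, 0.34)   # hypothetical 95% CI for Q_ST of one trait
fst_ci = (0.00, 0.05)   # hypothetical 95% CI for neutral F_ST

if not intervals_overlap(qst_ci, fst_ci) and qst_ci[0] > fst_ci[1]:
    print("Q_ST significantly exceeds F_ST -> consistent with divergent selection")
else:
    print("No significant difference -> drift alone cannot be ruled out")
```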
The magnitude of subdivision in quantitative traits (Q_SR) was significantly higher for northern populations than for southern populations, but there were no significant differences in molecular subdivision (F_SR) between the northern and southern regions (P > 0.05, Fig. 2). Quantitative trait differentiation among populations (Q_SR) in the north was significantly higher than molecular subdivision among populations (F_SR) in the same region. F_SR appeared to be higher than Q_SR in the south, but no significant difference was found between the two parameters.
Principal component analyses (PCA)
The first two components from the PCA of life-history traits explained 91.4% of the total data variability (78.4% and 13.0% for PC1 and PC2, respectively; Fig. 3, upper panel). PCA on the microsatellite data showed that the first two axes accounted for 95.6% of the total inertia (77.7% and 17.9% for PC1 and PC2, respectively; Fig. 3, lower panel). This analysis clustered four populations (Longting, Mianxian, Luochuan and Tongchuan) together in the middle left of the plot, whereas the Jinshui and Fuping populations fell in the upper and lower right of the plot, respectively.
Relationship between molecular and quantitative variation
Many studies use molecular markers as a surrogate for adaptive genetic variation [10], and very few studies have addressed the relationship between molecular and quantitative subdivision across populations of a species. Sitobion avenae provides a good model to explore this relationship because its clonal individuals can be reared under common laboratory conditions for several generations (to minimize confounding environmental effects) and then phenotyped [4], allowing better estimates of heritable variation and comparisons between Q_ST and F_ST.
In our study, northern and southern populations showed significantly different quantitative traits and G-matrices, so high quantitative variation was evident between populations from the two regions. However, the microsatellite data did not show the expected variation between the two types of populations. In addition, populations within the northern region had increased subdivision in quantitative traits (shown by higher Q_SR and CV_G), but not in molecular characters. The decoupling of molecular and quantitative variation was also evident in the PCA analyses, where the clustering patterns from molecular and quantitative trait data were distinct. The conversion of non-additive variation underlying quantitative traits to additive variation can occur through isolation and drift following bottlenecks, increasing quantitative variation as a result [10,26]. Frequent bottlenecks may therefore contribute to the decoupling of molecular and quantitative variation in our case, since S. avenae lives in highly seasonal and sometimes unpredictable habitats [27]. In our analyses of microsatellite markers, the overall molecular difference between northern and southern populations appeared to be very low (F_ST = 0.018), indicating that populations from the two regions shared the majority of identified genotypes. Therefore, the high dispersal ability of S. avenae and the resulting gene flow may be an important factor retarding molecular differentiation between populations from the two sides of the Qinling Mountains. The different patterns for molecular and quantitative traits might also be explained simply by differences in mutation rates, because mutation rates for quantitative characters are typically several orders of magnitude higher than those for molecular characters [28]. (Fig 2. Comparison between molecular (F_SR) and quantitative genetic (Q_SR) divergence within each region; *, significant differences between Q_SR and F_SR based on non-overlapping confidence intervals; NS, non-significant differences between Q_SR and F_SR; F_SR of the northern region was not significantly different from that of the southern region, P = 0.458.)
Indeed, the extent to which molecular markers reflect quantitative genetic subdivision between populations is still controversial [10]. Our results suggest that molecular variation may not be a reliable indicator of quantitative variation, and may provide only a conservative estimate of adaptive divergence. The decoupling of molecular and quantitative variation in our study is consistent with the findings of McKay and Latta [29], Palo et al. [30] and Chapuis et al. [7] that no significant correlations exist between molecular and quantitative variation, although positive correlations have been reported in both plants and animals [19][31][32]. Our results thus caution against using molecular metrics of variation as a surrogate for the ability of S. avenae populations to respond to environmental changes. Further studies are still needed to clarify the relationship between molecular and quantitative variation in insects, as well as in other organisms.
Differentiation and evolution of S. avenae populations
In our current study, all tested life-history characters but one (DT4) differed significantly between the northern and southern regions, as well as among populations within a region. However, northern populations showed a higher DT4 than southern populations in our previous study [4]. Compared to those from northern populations, individuals from southern populations generally had longer developmental times, but higher fecundity, longer adult lifespan and longer reproductive time in this study. In contrast, no significant differences in DT2 or reproductive time were found between northern and southern populations in our previous study [4]. The seemingly contradictory results for some life-history traits may be attributed to the different sampling times of the two studies, since environmental conditions can vary significantly at the same location in the same month of different years. Despite these inconsistencies, consistent differences between northern and southern populations were found in DT1, post-reproductive time, adult lifespan, and lifetime fecundity in both studies. Northern and southern populations also differed significantly in the genetic correlations between quantitative traits and in G-matrices. Thus, genetic variation among S. avenae populations from the two sides of the Qinling Mountains was evident.
Between-population differentiation for neutral genetic markers - seven microsatellite loci - varied, with F_ST ranging from 0 to 0.1228 (F_ST > 0.1 implies a very high degree of differentiation among populations; see Simon et al. [20] and Vialatte et al. [33]). The high genetic differentiation in neutral markers found between particular populations can be partially explained as a founder effect: rapid growth to large population sizes after populations were founded by a limited number of individuals. In several pairwise comparisons (e.g., Luochuan vs. Tongchuan), no significant differences were found between F_ST and Q_ST, so it cannot be ruled out that genetic drift alone accounts for the differentiation of S. avenae populations in those cases.
The role genetic drift plays in the genetic differentiation of S. avenae populations is well documented [33][34][35], but the impact of natural selection has received little attention. Sitobion avenae shows a high potential to respond to natural selection, and rapid changes in ecologically relevant traits (e.g., phenotypic plasticity) have been reported in response to changes in selective pressure [27]. However, empirical data on the actual occurrence of local adaptation are scarce. In our study, differentiation in four quantitative traits closely related to reproduction (lifetime fecundity, post-reproductive time, adult lifespan and reproductive time) exceeded the neutral expectation set by F_ST (i.e., Q_ST > F_ST), indicating divergent selection among S. avenae populations. Over half of the pairwise population comparisons in the Q_ST-F_ST analysis for the composite life-history trait yielded Q_ST values higher than the corresponding F_ST estimates (i.e., more genetic divergence in the trait than expected from neutral genetic drift alone), providing additional evidence of divergent selection. Populations of S. avenae that are better adapted to seasonal and ephemeral habitats are expected to have higher fitness under our experimental conditions (e.g., higher lifetime fecundity). In this study and in [4], southern populations showed much higher fitness than northern populations, suggesting the occurrence of local adaptation. Our results thus provide substantial evidence for patterns that suggest local adaptation in S. avenae. Another way to test for local adaptation is to perform reciprocal transplants of individuals between wild populations; such experiments are logistically demanding and extremely difficult to carry out [7,36], but they could further substantiate the occurrence of local adaptation in S. avenae.
The occurrence of local adaptation in S. avenae is not unexpected in our study, because the separation imposed by the Qinling Mountains might cause the isolation of its populations. In addition, the pattern of natural selection can vary greatly between areas north and south of the Qinling Mountains because of different environmental conditions (especially weather). The mountains produce harsh winter weather in the north and mild winters in the south. Differences in winter weather can have significant consequences for the life history of S. avenae; for example, the reproductive mode of S. avenae clones in Romania (harsh winters) differs from that in France (mild winters) [34]. Thus, local adaptation, as a result of natural selection under local conditions, appears to be a common phenomenon for this insect. In this study and in [4], natural selection seemed to favor individuals with smaller DT1, shorter post-reproductive time and shorter adult lifespan in the north of the mountains; on the contrary, those with larger DT1, longer post-reproductive time and longer adult lifespan were selected for in the south. We observed significant heritabilities, which show that the populations have the potential to respond to local selection. As a result, it is likely that some individuals have adapted to local conditions.
Despite the likely occurrence of local adaptation in northern and southern populations, S. avenae individuals seemed able to disperse across the Qinling Mountains, because cluster analyses using both quantitative trait data (in this study and [4]) and microsatellite data did not yield two groups corresponding to the northern and southern regions. The amount of dispersal of S. avenae individuals across these high mountains might be expected to be very low, given that the Tianshan Mountains in Xinjiang, China, led to clear geographic separation between northern and southern populations of a similar wheat aphid, D. noxia [37]. On the contrary, a substantial amount of S. avenae dispersal across the Qinling Mountains could occur, because the overall low genetic differentiation between northern and southern populations (F_ST = 0.018) indicates a significant amount of gene flow between them. Therefore, the Qinling Mountains have not resulted in the isolation of S. avenae populations in our study area, and allopatric speciation in S. avenae appears to be highly unlikely.
Sitobion avenae presents a good model for studying how differentiation can evolve among populations of small organisms with high dispersal ability, which is crucial to understanding speciation and the match between environment and phenotype [12]. Further experiments are needed to determine which specific factors are important in the differentiation and evolution of S. avenae populations. Temperature is a particularly likely candidate cause of the differences between populations identified in our study. Because development is closely tied to temperature, natural selection might cause individuals from colder, northern climates to develop more rapidly than southern ones when raised under common laboratory conditions. The structure of the G-matrix for S. avenae's quantitative traits also has implications for its population differentiation and evolution. Interestingly, quite a few negative covariances (i.e., trade-offs) were found between the quantitative traits measured in our study. These trade-offs (also shown by negative genetic correlations) may play a role in slowing the evolution of habitat-specific S. avenae genotypes and thus slowing the differentiation of S. avenae populations. The G-matrix structure of quantitative traits for aphid genotypes has complex relationships with factors like genotype specialization, trade-offs, and genotype-by-environment interactions [38]. The strong structural differences identified in the G-matrices of S. avenae populations represent a rare phenomenon, because empirical studies have generally supported the stability of G-matrices among populations of the same species [39]. Further studies are needed to explore the implications of G-matrix structure and stability for shaping and maintaining genetic differentiation in S. avenae, as well as for its evolutionary potential.
"year": 2015,
"sha1": "dc203c38842f15ae9ad6abf8e66d18ed665d05cc",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0122343&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc203c38842f15ae9ad6abf8e66d18ed665d05cc",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
A Theory of Measurement Uncertainty Based on Conditional Probability
A theory of measurement uncertainty is presented, which, since it is based exclusively on the Bayesian approach and on the subjective concept of conditional probability, is applicable in the most general cases. The recent International Organization for Standardization (ISO) recommendation on measurement uncertainty is reobtained as the limit case in which linearization is meaningful and one is interested only in the best estimates of the quantities and in their variances.
Introduction
The value of a physical quantity obtained as the result of a measurement has a degree of uncertainty, due to unavoidable errors whose source one can recognize but whose exact magnitude one can never establish. The uncertainty due to so-called statistical errors is usually treated using the frequentistic concept of confidence intervals, although the procedure is rather unnatural and there are known cases (of great relevance in frontier research) in which this approach is not applicable. On the other hand, there is no way, within this framework, to handle uncertainties due to systematic errors in a consistent way.
Bayesian statistics, however, allows a theory of measurement uncertainty to be built which is applicable to all cases. The outcomes are in agreement with the recommendations of the Bureau International des Poids et Mesures (BIPM) and of the International Organization for Standardization (ISO), which has also recognized the crucial role of subjective probability in assessing and expressing measurement uncertainty.
In the next section I will make some remarks about the implicit use in science of the intuitive concept of probability as degree of belief. Then I will briefly discuss the part of the BIPM recommendation which deals with subjective probability. The Bayesian theory of uncertainty which provides the mathematical foundation of the recommendation will be commented upon. Finally I will introduce an alternative theory, based exclusively on the Bayesian approach and on conditional probability. More details, including many practical examples, can be found in [1].
Claimed frequentism versus practiced subjectivism
Most physicists (I deal here mainly with physics because of personal biases, but the remarks and the conclusions could easily be extended to other fields of research) have received a scientific education in which the concept of probability is related to the ratio of favorable to possible events, and to relative frequencies of the outcomes of repeated experiments. Usually the first "definition" (combinatorial) is used in theoretical calculations and the second one (frequentistic) in empirical evaluations. The subjective definition of probability, as "degree of belief", is instead viewed with suspicion and usually misunderstood. The usual criticism is that "science must be objective" and hence that "there should be no room for subjectivity". Some even say: "I do not believe something. I assess it. This is not a matter for religion!"
It is beyond the purposes of this paper to discuss the issue of the so-called "objectivity" of scientific results. I would just like to remind the reader that, as well expressed by the science historian Galison [2], "Experiments begin and end in a matrix of beliefs. ... beliefs in instrument types, in programs of experiment enquiry, in the trained, individual judgements about every local behavior of pieces of apparatus ...".
In my experience, and after interviewing many colleagues from several countries, physicists use (albeit unconsciously) the intuitive concept of probability as "degree of belief", even for "professional purposes". Nevertheless, they have difficulty in accepting such a definition rationally, because - in my opinion - of their academic training. For example, apart from a small minority of orthodox frequentists, almost everybody accepts statements of the kind "there is 90% probability that the value of the top quark mass is between ...". In general, in fact, even the frequentistic concept of a confidence interval is usually interpreted in a subjective way, and the correct statement (according to the frequentistic school) of "90% probability that the observed value lies in an interval around µ" is usually turned around into "90% probability that µ is around the observed value" (µ indicates hereafter the true value). The reason is rather simple. A physicist - to continue with our example - seeks to obtain some knowledge about µ and, consciously or not, wants to understand which values of µ have high or low degrees of belief, or which intervals ∆µ have large or small probability. A statement concerning the probability that a measured value falls within a certain interval around µ is sterile if it cannot be turned into an expression which states the quality of the knowledge of µ itself. Unfortunately, few scientists are aware that this can be done in a logically consistent way only by using Bayes' theorem and some à priori degrees of belief. In practice, since one often deals with simple problems in which the likelihood is normal and the uniform distribution is a reasonable prior (in the sense that the same degree of belief is assigned to all the infinite values of µ), the Bayes formula is formally "by-passed" and the likelihood is taken as if it described the degrees of belief for µ after the outcome of the experiment is known (i.e., as the final probability density function, if µ is a continuous quantity).
BIPM and ISO Recommendation on the measurement uncertainty
An example which shows how this intuitive way of reasoning is so natural for the physicist can be found in the BIPM recommendation INC-1 (1980) about the "expression of experimental uncertainty" [3]. It states that
The uncertainty in the result of a measurement generally consists of several components which may be grouped into two categories according to the way in which their numerical value is estimated:
A: those which are evaluated by statistical methods; B: those which are evaluated by other means.
Then it specifies that
The components in category B should be characterized by quantities u_j², which may be considered as approximations to the corresponding variances, the existence of which is assumed. The quantities u_j² may be treated like variances and the quantities u_j like standard deviations.
Clearly, this recommendation is meaningful only in a Bayesian framework. In fact, the recommendation has been criticized because it is not supported by conventional statistics (see e.g. [4] and references therein). Nevertheless, it has been approved and reaffirmed by the CIPM (Comité International des Poids et Mesures) and adopted by ISO in its "Guide to the expression of uncertainty in measurement" [5] and by NIST (National Institute of Standards and Technology) in an analogous guide [6]. In particular, the ISO Guide recognizes the crucial role of subjective probability in Type B uncertainties. The BIPM recommendation and the ISO Guide deal only with definitions and with "variance propagation", performed, as usual, by linearization. A general theory has been proposed by Weise and Wöger [4], which they maintain should provide the mathematical foundation of the Guide. Their theory is based on Bayesian statistics and on the principle of maximum entropy. Although the authors show how powerful it is in many applications, the use of the maximum entropy principle is, in my opinion, a weak point which prevents the theory from being as general as claimed (see the remarks later in this paper on the choice of the priors) and which makes the formalism rather complicated. I show in the next section how it is possible to build an alternative theory, based exclusively on probability "first principles", which is very close to the physicist's intuition. In a certain sense the theory proposed here can be seen as nothing more than a formalization of what most physicists unconsciously do.
A genuine Bayesian theory of measurement uncertainty
In the Bayesian framework, inference is performed by calculating the degrees of belief in the true values of the physical quantities, taking into account all the available information. Let us call x = {x₁, x₂, ..., x_nx} the n-tuple ("vector") of observables, µ = {µ₁, µ₂, ..., µ_nµ} the n-tuple of the true values of the physical quantities of interest, and h = {h₁, h₂, ..., h_nh} the n-tuple of all the possible realizations of the influence variables Hᵢ. The term "influence variable" is used here with an extended meaning, to indicate not only external factors which could influence the result (temperature, atmospheric pressure, etc.) but also any possible calibration constants and any source of systematic errors. In fact the distinction between µ and h is artificial, since they are all conditional hypotheses for x. We separate them simply because the aim of the research is to obtain knowledge about µ, while h are considered a nuisance.
The likelihood of the sample x being produced from h and µ is

f(x | µ, h, H∘), (1)

where H∘ is intended as a reminder that likelihoods and priors - and hence conclusions - depend on all explicit and implicit assumptions within the problem, and, in particular, on the parametric functions used to model priors and likelihoods. (To simplify the formulae, H∘ will no longer be written explicitly.) Notice that (1) is to be understood as a function f(·|µ, h) over all possible values of the sample x, with no restrictions beyond those given by coherence [7]. Using Bayes' theorem we obtain, given an initial f∘(µ) which describes the different degrees of belief in all possible values of µ before the information on x is available, a final distribution for each possible set of values of the influence variables h:

f(µ | x, h) = f(x | µ, h) f∘(µ) / ∫ f(x | µ, h) f∘(µ) dµ. (2)

Notice that the integral over a probability density function (instead of a summation over discrete cases) is used just to simplify the notation. To obtain the final distribution of µ one needs to re-weight (2) with the degrees of belief in h:

f(µ | x) = ∫ f(µ | x, h) f(h) dh. (3)

The same comment on the use of integration, made after (2), applies here. Although (3) is seldom used by physicists, the formula is conceptually equivalent to what experimentalists do when they vary all the parameters of a Monte Carlo simulation in order to estimate the "systematic error".
Notice that an alternative way of getting f(µ) would be to first consider an initial joint probability density function f∘(µ, h) and then to obtain f(µ) as the marginal of the final distribution f(µ, h). Formula (3) is reobtained if µ and h are independent and if f∘(µ, h) can be factorized into f∘(µ) and f(h). But this could be interpreted as an explicit requirement that f(µ, h) exists, or even that the existence of f(µ, h) is needed for the assessment of f(x|µ, h). As stated previously, f(x|µ, h) simply describes the degree of belief in x for any conceivable configuration {µ, h}, with no constraint other than coherence. This corresponds to what experimentalists do when they first give the result with "statistical uncertainty" only and then look for all possible systematic effects and evaluate their related contributions to the "global uncertainty".
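A minimal numerical sketch of Eqs. (2)-(3) may help: for a Gaussian likelihood, a uniform prior on µ, and a Gaussian belief on a single influence variable (an offset z), the marginalization over z can be carried out on a grid. All numbers below are illustrative.

```python
# Grid-based sketch of Eqs. (2)-(3): posterior for mu after marginalizing
# a systematic offset z (illustrative parameters only).
import numpy as np

sigma0, sigma_z, x_obs = 1.0, 0.5, 10.0
mu = np.linspace(5, 15, 2001)
z = np.linspace(-5 * sigma_z, 5 * sigma_z, 1001)

def gauss(u, s):
    return np.exp(-0.5 * (u / s) ** 2) / (np.sqrt(2 * np.pi) * s)

# Eq. (2) with a uniform prior on mu, for each value of z; Eq. (3) then
# weights the conditional posteriors by f(z) and integrates z out.
like = gauss(x_obs - mu[:, None] - z[None, :], sigma0)   # f(x|mu,z)
post = like @ gauss(z, sigma_z)                          # ∝ ∫ f(x|mu,z) f(z) dz
post /= np.trapz(post, mu)

mean = np.trapz(mu * post, mu)
std = np.sqrt(np.trapz((mu - mean) ** 2 * post, mu))
print(mean, std, np.hypot(sigma0, sigma_z))   # std ≈ sqrt(sigma0^2 + sigma_z^2)
```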
Some comments about the choice of the priors

I don't think that the problem of the prior choice is a fundamental issue.
My view is that one should avoid pedantic discussions of the matter, because the idea of "universally true priors" reminds me terribly of the Byzantine "angels' sex" debates. If I had to give recommendations, they would be:
• the a priori probability should be chosen in the same spirit as a rational person placing a bet, seeking to minimize the risk of losing;
• general principles may help, but, since it is difficult to apply elegant theoretical ideas to all practical situations, in many circumstances the guess of the "expert" can be relied on for guidance;
• in particular, I think - and in this respect I completely disagree with the authors of [4] - that there is no reason why the maximum entropy principle should be used in an uncertainty theory just because it is successful in statistical mechanics. In my opinion, while the use of this principle in the case of discrete random variables is as well founded as Laplace's indifference principle, in the continuous case there exists the unavoidable problem of the choice of the right metric ("what is uniform in x is not uniform in x²"). It seems to me that the success of maximum entropy in statistical mechanics should simply be considered a lucky instance in which a physical scale (the Planck constant) provides the "right" metric in which the phase space cells are equiprobable.
In the following example I will use uniform and normal priors, which are reasonable for the problems considered.
An example: uncertainty due to an unknown systematic error in the instrument scale offset

In our scheme any influence quantity whose exact value we do not know is a source of systematic error. It will change the final distribution of µ and hence its uncertainty. Let us take the case of the "zero" of an instrument, the value of which is never known exactly, due to the limited accuracy and precision of the calibration. This lack of perfect knowledge can be modeled by assuming that the "true value" Z of the zero is normally distributed around 0 (i.e., the calibration was properly done!) with a standard deviation σ_Z. As far as µ is concerned, one may attribute the same degree of belief to all of its possible values. We can then take a uniform distribution defined over a large interval, chosen according to the characteristics of the measuring device and to our expectation of µ. An alternative choice of vague prior could be a normal distribution with large variance and a reasonable average (the values have to be suggested by the best available knowledge of the measurand and of the experimental devices). For simplicity, a uniform distribution is chosen in this example.
As far as f(x|µ, z) is concerned, we may assume that, for all possible values of µ and z, the degree of belief in each value of the measured quantity x can be described by a normal distribution with expected value µ + z and variance σ∘²:

f(x | µ, z) = (1/(√(2π) σ∘)) exp[−(x − µ − z)²/(2σ∘²)]. (4)

For each value z of the instrument offset we then have a set of degrees of belief in µ:

f(µ | x, z) = (1/(√(2π) σ∘)) exp[−(µ − (x − z))²/(2σ∘²)]. (5)

Weighting f(µ|x, z) with the degrees of belief in z using (3), we finally obtain

f(µ | x) = (1/(√(2π) √(σ∘² + σ_Z²))) exp[−(µ − x)²/(2(σ∘² + σ_Z²))]. (6)
The result is that f(µ) is still Gaussian, but with a variance larger than that due to statistical effects alone. The global standard deviation is the quadratic combination of the one due to the statistical fluctuation of the data and the one due to the imperfect knowledge of the systematic effect:

σ² = σ∘² + σ_Z². (7)

This formula is well known and widely used, although nobody seems to care that it cannot be justified by conventional statistics. It is interesting to notice that in this framework it makes no sense to speak of "statistical" and "systematic" uncertainties as if they were of a different nature: they are all treated probabilistically. But this requires the concept of probability to be related to lack of knowledge, and not simply to the outcome of repeated experiments. This is in agreement with the classification into Type A and Type B components of the uncertainty, recommended by the BIPM.
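The quadratic combination in (7) is easy to check by simulation; the sketch below draws a random offset and a random statistical fluctuation for each simulated measurement and compares the resulting spread with √(σ∘² + σ_Z²). The parameter values are arbitrary.

```python
# Monte Carlo check of Eq. (7).
import numpy as np

rng = np.random.default_rng(1)
sigma0, sigma_z, mu_true = 1.0, 0.5, 10.0
n = 200_000

z = rng.normal(0.0, sigma_z, n)        # unknown scale offset per "experiment"
x = rng.normal(mu_true + z, sigma0)    # observed values

# Spread of the inferred value (here simply x, since the prior is uniform):
print(x.std(), np.hypot(sigma0, sigma_z))   # both ~ 1.118
```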
If one has several sources of systematic error, each related to an influence quantity, and such that their variations around their nominal values produce linear variations in the measured value, then the "usual" combination of variances (and covariances) is obtained (see [1] for details).
If several measurements are affected by the same unknown systematic error, their results are expected to be correlated. For example, considering only two measured values x₁ and x₂ of the true values µ₁ and µ₂, the likelihood is

f(x₁, x₂ | µ₁, µ₂, z) = ∏ᵢ₌₁,₂ (1/(√(2π) σᵢ)) exp[−(xᵢ − µᵢ − z)²/(2σᵢ²)]. (8)

The final distribution f(µ₁, µ₂) is a bivariate normal distribution with expected values x₁ and x₂. The diagonal elements of the covariance matrix are σᵢ² + σ_Z², with i = 1, 2. The covariance between µ₁ and µ₂ is σ_Z², and their correlation coefficient is then

ρ(µ₁, µ₂) = σ_Z² / √((σ₁² + σ_Z²)(σ₂² + σ_Z²)). (9)

The correlation coefficient is positively defined, as the definition of the systematic error considered here implies. Furthermore, as expected, values influenced by the same unknown systematic error are strongly correlated when the uncertainty due to the systematic error is comparable to - or larger than - the uncertainties due to sampling effects alone.
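The correlation in (9) can likewise be checked numerically; in the sketch below, two simulated measurements share one calibration offset, and the empirical correlation is compared with the formula. Parameter values are arbitrary.

```python
# Simulation of Eq. (9): two measurements sharing one calibration offset.
import numpy as np

rng = np.random.default_rng(2)
s1, s2, sz = 1.0, 2.0, 1.5
n = 200_000

z = rng.normal(0, sz, n)               # shared systematic offset
x1 = rng.normal(0, s1, n) + z
x2 = rng.normal(0, s2, n) + z

rho_mc = np.corrcoef(x1, x2)[0, 1]
rho_th = sz**2 / np.sqrt((s1**2 + sz**2) * (s2**2 + sz**2))
print(rho_mc, rho_th)                  # should agree to ~1e-2
```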
Conclusions
Bayesian statistics is closer to the physicist's mentality and needs than one may naïvely think. A Bayesian theory of measurement uncertainty has the simple but important role of formalizing what is often done, more or less intuitively, by experimentalists in simple cases, and of giving guidance in more complex situations.
As far as the choice of the priors and the interpretation of conditional probability are concerned, it seems to me that, although it may look paradoxical at first sight, the "subjective" approach (à la de Finetti) has the best chance of achieving consensus among the scientific community (after some initial resistance due to cultural prejudices).
"year": 1996,
"sha1": "f93660ee4ad560672e977b77c73b860a15a3ca32",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f93660ee4ad560672e977b77c73b860a15a3ca32",
"s2fieldsofstudy": [
"Engineering",
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Using Sentence-Level Neural Network Models for Multiple-Choice Reading Comprehension Tasks
Comprehending unstructured text is a challenging task for machines because it involves understanding texts and answering questions. In this paper, we study the multiple-choice reading comprehension task based on the MCTest datasets and on Chinese reading comprehension datasets, the latter of which we built ourselves. Observing these training sets, we find that "sentence comprehension" is more important than "word comprehension" in the multiple-choice task, and we therefore propose sentence-level neural network models. Our model first uses an LSTM network and a composition model to learn compositional vector representations for sentences, and then trains a sentence-level attention model that computes, by dot product, the attention between the sentence embeddings of the document and the embeddings of the candidate-answer sentences. Finally, a consensus attention is obtained by merging the individual attentions with a merging function. Experimental results show that our model significantly outperforms various state-of-the-art baselines on both multiple-choice reading comprehension datasets.
Introduction
Reading comprehension is the ability to read texts, understand their meanings, and answer questions. When machines are required to comprehend texts, they need to understand unstructured text and reason about it [1][2][3]. It is a major task in the fields of natural language processing and machine learning.
Recently, machine reading comprehension (MC) has drawn increasing attention, and several large reading comprehension datasets have been released. Across these datasets, the task has become progressively more difficult (from the CNN/Daily Mail datasets to SQuAD and then to TriviaQA), as system performance has rapidly improved with each new release. The CNN/Daily Mail dataset [4] poses a cloze-style reading comprehension task, which aims to comprehend a given document and then answer questions based on it, where the answer to each question is a single word inside the document. SQuAD [5] is a question-answering reading comprehension task which further constrains answers - often non-entities and much longer phrases - to be a continuous subspan of the document. Clearly, the question-answering task is more difficult than the cloze-style task. TriviaQA [6] is also a question-answering reading comprehension task, but it is more difficult than SQuAD because answers in TriviaQA are independent of the evidence and belong to a diverse set of types.
Different from the above, the task based on the MCTest datasets [3] is multiple-choice reading comprehension: each example consists of one document and four associated questions, and each question gives four candidate answers, only one of which is correct. In this paper, we focus on this problem of answering multiple-choice questions about documents, and at the same time we release a Chinese reading comprehension dataset for the multiple-choice task. To our knowledge, this is the first Chinese reading comprehension dataset of its kind, and it is even more complex than the MCTest datasets: each example consists of one document and one associated question with five candidate answers. The specific details of this dataset are given in Section 2, and an example is shown in Box 1.

Document: "Ruins" is a derogatory term that, in the minds of many Chinese, has nothing to do with culture or aesthetics; the interpretation of the word "ruins" in the Modern Chinese Dictionary is only "cities and villages turned into desolate places by destruction or natural disasters". There is nothing wrong with this interpretation, but it is not enough if measured against world knowledge. In Europe, the meaning of "ruins" has been enriched and expanded since modern times. It has been endowed with cultural and aesthetic connotations and has become an academic concept. The meaning of "ruins" began to change in Europe with the Renaissance.
Question:
Please choose two incorrect options according to the content of the document.
Choice:
A. One of the purposes of this essay is to correct the misunderstanding of the term "ruins" in the Modern Chinese Dictionary.
B. The ruins of the Great Wall have condensed the vicissitudes of time in China, and they offer a "perception of the intoxicated" just as the ruins of the Acropolis do.
C. The remains of ruins often reveal the extraordinary wisdom and great efforts of our predecessors, bringing future generations the shock and resonance of the soul.
D. Awareness of ruins is related to the aesthetic consciousness of the nation, and it is also conducive to the popularity of "repairing the old as the old".
E. This essay not only contains historical interest but also reflects a concern for reality, expressing the author's desire to enhance the cultural quality of the nation.
Box 1: Example for the multiple-choice reading comprehension for literature (the original data is in Chinese, we translate this sample in English for clarity).kind and is even more complex than MCTest datasets.The example of such dataset consists of one document and one associated question which gives five candidate answers.The specific details of this dataset are in Section 2. Frankly, the multiple-choice reading comprehension task remains quite challenging.For one thing, answers in the form of an optional sentence usually do not appear in the document; for another, finding the correct answer of the given question requires reasoning across multiple sentences.Hence, sentence comprehension is more important than word comprehension in the task of the multiple-choice reading comprehension.
To carry out the task of sentence comprehension, we propose a sentence-level attention model primarily inspired by attention models for cloze-style reading comprehension [7,8]. Unlike the cloze-style setting, however, the answers to multiple-choice questions are optional sentences. Karl et al. [9] train an encoder-decoder model to encode a sentence into a fixed-length vector and then decode the two adjacent sentences, and they demonstrate that the resulting low-dimensional vector embeddings are useful for other tasks. Pichotta et al. [10] present a sentence-level LSTM language model for script inference and show that it is useful for predicting missing information in text. Similar to these models, we present a sentence representation model that uses an LSTM network to learn vector representations of sentences. Moreover, we use a sentence composition model as a second representation, because it can express the hierarchy from words to phrases to sentences. In order to retain the information from both kinds of sentence representation models, we combine them to compose the final sentence vector. We then train a sentence-level attention model between the optional sentences and the sentences in the document, so that the machine learns the relationships between the document and the optional sentences through the attention-based neural network.
Experimental results show that our approach effectively improves performance on the multiple-choice reading comprehension task. In the remainder of the paper, we describe our Chinese reading comprehension dataset, related work, the details of our model, and our experiments, and then analyze the experimental results.
Chinese Reading Comprehension Datasets
In this paper, we focus on the multiple-choice reading comprehension task. Similar to the MCTest dataset, each example consists of one document and one associated question, and each question gives five candidate answers. However, the dataset is more complex than MCTest: it is a literary reading comprehension dataset drawn from the test materials of final exams in senior high school. Box 1 shows an example from the Chinese reading comprehension dataset.
For this dataset, the phrasing of the question is essentially fixed: "Please choose two incorrect options according to the content of the document." Therefore, the role of the question is ignored in the Chinese reading comprehension task. The goal of the task is to understand an individual document and to select the options most consistent with its meaning. Thus the Chinese reading comprehension task can be described as a triple ⟨D, C, A⟩, where D is the document, C denotes the choices, and A is a set in which each element is marked as 0 or 1 according to the document meaning (if an option is consistent with the document meaning, it is labeled 1; otherwise it is labeled 0). In the training stage, we use 769 literary reading comprehension examples collected from the test materials of final exams in senior high school. In the testing stage, the dataset includes three parts: 13 Beijing college entrance examination papers (BCEETest), 12 simulation materials provided by the iFLYTEK company (SBCEETest1), and 52 test materials of final exams in Beijing senior high schools (SBCEETest2). All of the data were collected by the Chinese information processing group of Shanxi University. The statistics of the training and testing data are shown in Table 1.
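To make the triple representation concrete, the following sketch shows how one training example might be held in Python; the class and field names are our own illustration rather than the released data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MCExample:
    """One multiple-choice example: a document D, five choices C, labels A."""
    document: List[str]   # D: the document, split into sentences
    choices: List[str]    # C: the five candidate answer sentences
    labels: List[int]     # A: 1 if a choice fits the document meaning, else 0

example = MCExample(
    document=["'Ruins' is a derogatory term in many Chinese minds.",
              "In Europe, the meaning of 'ruins' has been enriched since modern times."],
    choices=["Choice A ...", "Choice B ...", "Choice C ...", "Choice D ...", "Choice E ..."],
    labels=[1, 0, 1, 1, 0],   # e.g., the two incorrect options are labeled 0
)

assert len(example.choices) == len(example.labels) == 5
```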
Related Work
Machine comprehension is currently a hot topic within the machine learning community. In this section we focus on the best-performing models applied to MCTest and CNN/Daily Mail, corresponding to the two kinds of reading comprehension tasks.
Multiple-Choice Reading Comprehension.
Existing models for MCTest are mostly based on manually engineered features [11][12][13]. These feature-engineered models are extremely effective. However, they often require significant effort on auxiliary tools to extract the features, and their capacity for generalization is limited.
Yin et al. [14] proposed a hierarchical attention-based convolutional neural network for the multiple-choice reading comprehension task. The model considers multiple levels of granularity, from the word to the sentence level and then from the sentence to the snippet level. This model performs poorly on MCTest; a possible explanation is that the dataset is sparse. Nevertheless, neural models avoid the feature-extraction problem, so they are attracting increasing interest for multiple-choice reading comprehension. Recurrent neural networks are commonly used for sequence data, so we propose a recurrent neural network model for multiple-choice reading comprehension. Our model uses a bidirectional LSTM to obtain contextual representations of sentences.
Cloze-Style Reading Comprehension.
Hermann et al. [4] published the CNN/Daily Mail news corpus, in which the content is formed from news articles and their summaries. Cui et al. [7] also released the HFL-RC PD&CFT Chinese reading comprehension datasets, which include a People's Daily news dataset and a Children's Fairy Tale dataset. On these datasets, many neural network models have been proposed for cloze-style reading comprehension tasks. Hermann et al. [4] proposed the attentive and impatient readers: the attentive reader uses bidirectional document and query encoders to compute an attention, while the impatient reader recomputes attention over the document after reading every word of the query. Chen et al. [1] proposed a new neural network architecture for cloze-style reading comprehension; in contrast to the attentive reader, its attention weights are computed with a bilinear term instead of a simple dot product. Kadlec et al. [15] proposed the Attention Sum Reader, which uses attention to pick the answer directly from the context: attention acts as a pointer over discrete tokens in the context document, and the word attention is summed directly across all occurrences of a candidate. Cui et al. [7] presented the consensus attention-based neural network, namely the Consensus Attention Sum Reader, and released Chinese reading comprehension datasets; the model computes an attention at every time slice of the query and forms a consensus attention across the different steps. Cui et al. [8] also proposed the attention-over-attention neural network, namely the Attention-over-Attention Reader, which places another attention over the primary attention to indicate the "importance" of each attention. Dhingra et al. [16] proposed the gated-attention readers for text comprehension, integrating a multi-hop architecture with an attention mechanism based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader.
To summarize, all of these are attention-based RNN models that have been shown to be extremely effective for word-level tasks. At each time step, they take a word as input, update a hidden state vector, and predict the answer. In this paper, we instead propose a sentence-level attention model for multiple-choice reading comprehension, primarily inspired by the attention models for cloze-style reading comprehension.
Sentence-Level Neural Network Reader
In this section, we introduce our sentence-level neural network model for the multiple-choice reading comprehension task, namely the Sentence-Level Attention Reader. Our model is primarily motivated by that of Cui et al. [7], but it directly estimates the answer for an optional sentence from sentence-level attention instead of estimating an entity answer from word-level attention. The layered structure of our model is shown in Figure 1. First, the document is divided into sentences S = {s1, s2, ...} and each sentence embedding is computed by the embedding layer. Second, we use a bidirectional LSTM to obtain contextual representations of the sentences, where the representation of each sentence is formed by concatenating the forward and backward hidden states. Third, the sentence-level attention is computed by a dot product between each sentence embedding in the document and the option embedding. Finally, the individual attentions are merged into a consensus attention by a merging function. A formal description of the proposed model follows.
Sentence Representation.
The input to our model is the sentences of the document and the options, where each sentence is a word sequence. Each sentence is translated into a sentence embedding by the embedding layer, which is composed of an LSTM sentence model and a sentence composition model [17], as illustrated in the embedding layer of Figure 1.
The LSTM sentence model is a single bi-LSTM layer followed by an average pooling layer: the bi-LSTM layer produces contextual representations of the words, and the average pooling layer merges the word vectors into a sentence vector. In parallel, we use the sentence composition model to compose a second sentence vector. This vector is built by a trained neural network that composes constituent vectors into phrase vectors, trained on triples consisting of single-word and phrase vectors (as triple(w1, w2, p)); the sentence composition model is illustrated in Figure 2. We denote by p the final sentence vector. In order to retain the information from both kinds of sentence representation models, we employ a multilayer neural network to compose the final sentence vector, p = f(p1, p2) = W[p1; p2], where p1 is the sentence vector from the LSTM sentence model, p2 is the sentence vector from the sentence composition model, [p1; p2] denotes their concatenation, and W is a parameter matrix.
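The fusion of the two sentence representations can be sketched in NumPy as below. For brevity, the bi-LSTM sentence model is stood in for by plain average pooling and the composition model by a placeholder vector; in the actual model both vectors come from trained networks and W is learned.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 200                                   # word-embedding dimension used in the paper

def lstm_sentence_model(word_vectors: np.ndarray) -> np.ndarray:
    # Stand-in for the bi-LSTM + average-pooling layer: average-pool the
    # (contextualized) word vectors into one sentence vector.
    return word_vectors.mean(axis=0)

words = rng.normal(size=(12, dim))          # 12 word vectors for one sentence
p1 = lstm_sentence_model(words)             # vector from the LSTM sentence model
p2 = rng.normal(size=dim)                   # vector from the composition model (placeholder)

W = rng.normal(size=(dim, 2 * dim)) * 0.01  # parameter matrix (random here, learned in training)
p = W @ np.concatenate([p1, p2])            # p = f(p1, p2) = W [p1; p2]
print(p.shape)                              # (200,)
```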
In addition to the sentence representations described above, the context of each sentence is also important for inference, so the sentence embeddings are fed into the bidirectional LSTM described above. Finally, we take h_s to denote the contextual representations of the sentences in the document, and h_o to denote the sentence embedding of an option, where d denotes the number of options.
Sentence-Level Attention.
In the attention layer, we directly use a dot product of h_o and h_s to compute the "importance" of each document sentence for each option, and we apply the softmax function to obtain a probability distribution. For each sentence in the document, the attention is computed as
α(t) = softmax(h_o · h_s(t)),
where α(t) is the attention weight of the t-th sentence in the document.
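A minimal NumPy sketch of this attention step is given below; the sentence and option vectors are random stand-ins for the trained representations.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())                 # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
n_sent, dim = 8, 200
h_doc = rng.normal(size=(n_sent, dim))      # contextual embeddings of document sentences
h_opt = rng.normal(size=dim)                # embedding of one option sentence

scores = h_doc @ h_opt                      # dot product: "importance" of each sentence
alpha = softmax(scores)                     # attention weights over document sentences
print(alpha.round(3), alpha.sum())          # probability distribution summing to 1
```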
In the merging layer, the consensus attention s is calculated by a merging function over the individual attention weights,
s = merge(α(1), ..., α(n)),
where the merging function is max, avg, sum, or max+avg over the top-k attention weights, k is the number of top attention weights retained, and k < n.
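The merging step can be sketched as follows; the max+avg variant keeps the top-k attention weights and averages them, as evaluated in Section 5 (the function and variant names are ours).

```python
import numpy as np

def merge(alpha: np.ndarray, method: str = "max+avg", k: int = 10) -> float:
    """Merge per-sentence attention weights into one consensus score."""
    k = min(k, alpha.size)                   # guard: k must not exceed n
    if method == "max":
        return float(alpha.max())            # strongest single sentence only
    if method == "avg":
        return float(alpha.mean())           # all sentences, including noisy ones
    if method == "sum":
        return float(alpha.sum())
    if method == "max+avg":
        topk = np.sort(alpha)[-k:]           # the k largest attention weights
        return float(topk.mean())            # average over the top-k sentences
    raise ValueError(method)

alpha = np.array([0.30, 0.25, 0.20, 0.10, 0.08, 0.04, 0.02, 0.01])
for m in ("max", "avg", "sum", "max+avg"):
    print(m, round(merge(alpha, m, k=3), 3))
```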
Output Layer.
Finally, the answer is estimated by the softmax function:
a_i = softmax(W_s · s_i), i = 1, ..., 5, (7)
where W_s is the weight matrix of the softmax layer and a_i is the probability distribution over the answer for option i. The predicted answer labels (such as "1 1 0 1 0") are obtained from these probabilities.
Figure 1 shows the proposed neural network architecture.
Experiments
In this section we evaluate our model on MCTest and on our Chinese reading comprehension datasets. We find that, although the model is simple, it achieves state-of-the-art performance on these datasets.
Experimental Details.
We use stochastic gradient descent with the AdaDelta update rule [18], which uses only first-order information to adaptively update the learning rate over time and has minimal computational overhead. To train the model, we minimize the negative log-likelihood as the objective function. The batch size is set to 5 and the number of iterations is set to 25.
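For reference, a minimal NumPy implementation of the AdaDelta update rule [18] is sketched below on a toy quadratic; the decay and smoothing constants rho and eps are conventional defaults, not values reported here.

```python
import numpy as np

def adadelta_step(param, grad, state, rho=0.95, eps=1e-6):
    """One AdaDelta update: adapts step sizes from first-order info only."""
    eg2, edx2 = state
    eg2 = rho * eg2 + (1 - rho) * grad ** 2                # running average of g^2
    dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * grad  # no global learning rate
    edx2 = rho * edx2 + (1 - rho) * dx ** 2                # running average of dx^2
    return param + dx, (eg2, edx2)

# Toy objective f(w) = w^2, gradient 2w; w moves toward the minimum at 0
w, state = 3.0, (0.0, 0.0)
for _ in range(5000):
    w, state = adadelta_step(w, 2.0 * w, state)
print(w)
```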
For word vectors we use Google's publicly available embedding tool [19], trained on a dataset of 70 thousand literary papers. The dimension of the word embeddings is set to 200. The sentence-level attention reader easily overfits the training data, so we adopt the dropout method [20] for regularization; the dropout rate is 0.1 on the Chinese reading comprehension datasets and 0.01 on the MCTest datasets, respectively. Our model is implemented in Theano [21].
For the multiple-choice task, an answer is predicted according to whether each option is consistent with the document meaning, so we evaluate system performance in terms of precision (P = correctly predicted options / total options).
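A tiny helper (ours, not from the paper's code) makes this metric explicit:

```python
def option_precision(pred_labels, true_labels):
    """P = number of correctly predicted options / total number of options."""
    assert len(pred_labels) == len(true_labels)
    right = sum(p == t for p, t in zip(pred_labels, true_labels))
    return right / len(true_labels)

print(option_precision([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]))  # 0.8
```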
Results on MCTest Dataset.
To verify the effectiveness of our proposed model, we first test it on public datasets. Table 2 presents the performance of feature engineering and neural methods on the MCTest test set; the first four rows are feature engineering methods and the last four rows are neural methods. As we can see, the feature engineering methods significantly outperform the neural methods. One possible reason is that the neural methods suffer from the relative lack of training data, so we plan to analyze the relevant features and add them to our neural network model in future work. Among the neural methods, the attentive reader [4] operates at the word representation level and is a deep model with thousands of parameters, so it performs poorly on MCTest. The neural reasoner [22] has multiple reasoning layers, and all intermediate reasoning affects the final answer representation. The HABCNN-TE [14] is a convolutional architecture; it reduces the parameter count, but it cannot represent context sufficiently. Our method addresses these problems: the recurrent architecture also keeps the parameter count down while representing context at the sentence level, and we use the max+avg method to reduce the impact of irrelevant snippets. The experimental results demonstrate that our method performs better than the other three neural methods.
Results on Chinese Reading Comprehension Datasets.
We set four baselines for the Chinese reading comprehension datasets. One is the HABCNN-TE method, the best-performing method on the MCTest datasets; the other three are as follows.
(i) The first baseline is inspired by Cui et al. [7]. We use the consensus attention-based neural network (called the CAS Reader) over the words of the document and the option. The model computes the attention of each document word directly, with respect to each option word at time t; the final consensus attention for the option is computed by a merging function.
(ii) The second baseline uses a sliding window and matches bags of words constructed from the document and the option, respectively (called the Match Reader). This baseline is inspired by Zhang et al. [23]; a sketch is given after this list.
(iii) The third baseline is a sentence similarity model (called the SM Reader), in which similarity is measured by the cosine similarity between a document sentence and the option sentence; the sentence representations are taken from Tai et al. [24] (see the sketch after this list). The experimental results are given in Table 3.
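As referenced above, a minimal sketch of the Match Reader and SM Reader baselines follows; the scoring functions and toy inputs are our own simplifications of the cited methods [23,24], not the authors' implementations.

```python
import numpy as np

def match_reader(doc_words, option_words, window=20):
    """Sliding-window baseline: best bag-of-words overlap between the option
    and any window of the document (after Zhang et al. [23])."""
    opt = set(option_words)
    best = 0
    for i in range(max(1, len(doc_words) - window + 1)):
        best = max(best, len(opt & set(doc_words[i:i + window])))
    return best / max(1, len(opt))           # normalized overlap score

def sm_reader(doc_sent_vecs, option_vec):
    """Sentence-similarity baseline: maximum cosine similarity between the
    option vector and each document sentence vector."""
    norms = np.linalg.norm(doc_sent_vecs, axis=1) * np.linalg.norm(option_vec)
    sims = doc_sent_vecs @ option_vec / np.maximum(norms, 1e-12)
    return float(sims.max())

doc = "the meaning of ruins has been enriched and expanded since modern times".split()
opt = "the meaning of ruins expanded in modern times".split()
print(match_reader(doc, opt, window=8))

rng = np.random.default_rng(2)
print(sm_reader(rng.normal(size=(6, 50)), rng.normal(size=50)))
```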
The results on the three test sets show that our Sentence-Level Attention Reader gives competitive results against these state-of-the-art baselines. We observe that accuracy on BCEETest is higher than on the other test sets; a possible reason is that the college entrance examination is more standardized than the simulation materials. We also notice that the sentence-level models perform better than the word-level models: on the BCEETest set, the SM Reader (sentence-level) outperforms the Match Reader (word-level) by 3.4% and the Sentence-Level Attention Reader (sentence-level) outperforms the CAS Reader (word-level) by 4.9% in precision, respectively. During experimentation we found that the number of sentences related to an option is very important, so we also evaluated different merging functions, as in the CAS Reader; the results are shown in Table 4. From the results, we see that the avg and sum methods outperform the max method. A possible reason is that the max method is equivalent to keeping a single document sentence instead of the whole document, so a great deal of information is lost. However, using all sentences in the document (the avg method) does not achieve the best performance either. To balance the two, we also use the max+avg method as the merging function, where "max" selects the top-k sentences and "avg" averages over those top-k sentences. Compared with the avg method, the accuracy of the max+avg method increased by around 2% on the three datasets, a result consistent with the error analysis in Section 5.5; we suspect that some sentences interfere with the final answer as negative factors. Figure 3 shows the experiment on the top-k value: we randomly selected 5 options from the 13 Beijing college entrance examination papers (BCEETest) and found that the attention stops increasing at around k = 10, so k is set to 10 in our model. An example is shown in Box 2.
Box 2: Example of sentences related to a choice. Document (excerpt): "Ruins" is a derogatory term that is irrelevant to culture and aesthetics in many Chinese minds, and the only interpretation of the word "ruins" in the Modern Chinese Dictionary is "a city or village changed into a desolate place by destruction or natural disasters". There is no fault in this interpretation, but it is not enough when measured against world knowledge. In Europe, the meaning of "ruins" has been enriched and expanded since modern times. It has been endowed with the connotations of culture and aesthetics and has become an academic concept. ...... Choice: "One of the purposes of this paper is to correct the misunderstanding of the term "ruins" in the Modern Chinese Dictionary." (In the original figure, bold marks the document sentences related to the choice.)
Sentence Representation Model Analysis.
In this paper, we use two models for sentence representation: the LSTM sentence model and the sentence composition model [17]. We therefore tested the contribution of each model to the final model separately; the results are shown in Table 5.
The results on the three test sets show that the precision of the fused model is better than that of either single model. Therefore, we use the fused model in the sentence-level attention neural network.
Error Analysis.
To better evaluate the proposed approach, we performed a qualitative analysis of its errors. Our analysis revealed two major error types, discussed below.
(i) Positioning feature words (such as "the second paragraph ...") often appear in the options. To further analyze the locating ability of our model, we examined the dependence of accuracy on these positioning feature words: when all document sentences are replaced by the sentences indicated by the positioning feature word, accuracy increases by about 3% on the three datasets. The positioning feature words we use are listed below.
[The end of the paper; The second paragraph; The end paragraph; The first paragraph] Based on this observation, we will consider adding more features, such as location features, to our model in future work.
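As an illustration, locating the document region named by a positioning feature word can be done with a simple lookup; the mapping below is our own toy version of this idea, treating paragraphs as sentence indices for simplicity.

```python
import re

def locate(option: str, sentences: list) -> list:
    """Return the document sentences indicated by a positioning feature word."""
    if re.search(r"first paragraph", option):
        return sentences[:1]                 # toy: first "paragraph" = first sentence
    if re.search(r"second paragraph", option):
        return sentences[1:2]
    if re.search(r"end (paragraph|of the paper)", option):
        return sentences[-1:]
    return sentences                         # no positioning word: keep all sentences

doc = ["S1 ...", "S2 ...", "S3 ..."]
print(locate("According to the second paragraph, ...", doc))   # ['S2 ...']
```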
(ii) Our model may make mistakes when an option expresses emotion (such as "This paper not only contains historical interest, but also infiltrates a concern for reality, expressing the author's desire to enhance the cultural quality of the nation."). It is very difficult to compute attention between the emotion of an option and the emotion of the document. To handle such cases correctly, our model will incorporate emotion features in future work; we have collected more than 500 emotion feature words, such as "thought-provoking" and "directly expressing one's mind".
Conclusion
In this paper, we introduced a sentence-level neural network model for multiple-choice Chinese reading comprehension. The experimental results show that our model gives state-of-the-art accuracy on all the evaluated datasets. Using the max+avg merging function increased accuracy by about 2%, and our analysis of positioning feature words showed a further increase of about 3%.
Future work will proceed along two lines. First, we would like to extend our Chinese reading comprehension dataset and release it. Second, we plan to analyze emotion features and add them to our neural network model.
Figure 3: Experiment on the top-k value.
Table 1: Statistics of the multiple-choice reading comprehension datasets: the training set and the three test sets.
Table 3: Comparison of the different reader models on the three test datasets.
Table 4: Results of the different merging functions.
Table 5: Results of the two sentence representation models.
Box 2 legend: bold marks sentences related to the choice; italics mark sentences with little relation to the choice; "......" marks unrelated text. | 2018-08-14T11:26:56.278Z | 2018-07-03T00:00:00.000 | {
"year": 2018,
"sha1": "dd2bac11365aa1da95727cf5a706e59b7e62cca6",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/wcmc/2018/2678976.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dd2bac11365aa1da95727cf5a706e59b7e62cca6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
201283437 | pes2o/s2orc | v3-fos-license | Voxel-based correlation of 18F-THK5351 accumulation and gray matter volume in the brain of cognitively normal older adults
Background Although neurofibrillary tangles (NFTs) mainly accumulate in the medial temporal lobe with human aging, only a few imaging studies have investigated correlations between NFT accumulation and gray matter (GM) volume in cognitively normal older adults. Here, we investigated the correlations between 18F-THK5351 accumulation and GM volume at the voxel level. Materials and methods We recruited 47 amyloid-negative, cognitively normal, older adults (65.0 ± 7.9 years, 26 women), who underwent structural magnetic resonance imaging, 11C-Pittsburgh compound-B and 18F-THK5351 PET scans, and neuropsychological assessment. The magnetic resonance and 18F-THK5351 PET images were spatially normalized using Statistical Parametric Mapping 12. Voxel-wise correlations between 18F-THK5351 accumulation and GM volume were evaluated using the Biological Parametric Mapping toolbox. Results A significant negative correlation (p < 0.001) between 18F-THK5351 accumulation and GM volume was detected in the bilateral medial temporal lobes. Conclusions Voxel-wise correlation analysis revealed a significant negative correlation between 18F-THK5351 accumulation and GM volume in the medial temporal lobe in individuals without amyloid-β deposits. These results may contribute to a better understanding of the pathophysiology of primary age-related tauopathy in human aging.
Introduction
Neuropathological studies have revealed that neurofibrillary tangles (NFTs) accumulate mainly in the medial temporal lobe (MTL) with human aging. Although neurodegeneration is also a feature of aging, in addition to tau pathology, only a few imaging studies have specifically investigated correlations between tau accumulation and brain volume [1,2]. Although 18F-THK5351 was originally developed as a tau-specific tracer, recent studies have clarified its off-target binding to monoamine oxidase B (MAO-B) [3]. Because 18F-THK5351 accumulation reflects astrogliosis in addition to tau pathology, it is now considered a promising biomarker for detecting neuroinflammation [4]. Previously, using region of interest (ROI) analysis, we did not detect significant negative correlations between 18F-THK5351 accumulation and gray matter (GM) volume [5]. In this study, we aimed to re-examine the correlations between 18F-THK5351 accumulation and GM volume at the voxel level.
Participants
We recruited 47 cognitively normal older adults from the Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS) project (grant number 18dm0207017h0005), who underwent structural magnetic resonance imaging and 11C-Pittsburgh compound-B (PiB) and 18F-THK5351 PET scans. All participants underwent cognitive testing that included the Mini-Mental State Examination (MMSE), the global Clinical Dementia Rating Scale (CDR), and the Wechsler Memory Scale-Revised Logical Memory II (WMSR LM-II). The inclusion criteria were as follows: visually negative PiB PET results, a global CDR of 0, an MMSE score ≥ 26, performance within education-adjusted norms for the WMSR LM-II, absence of neurological or psychiatric disorders, and no medications that affect cognition.
Data acquisition
All participants underwent 3D T1-weighted scans with a 3-T magnetic resonance imaging system (Verio; Siemens, Erlangen, Germany). PET scans were acquired using a Siemens Biograph TruePoint16 scanner (3D acquisition mode; 81 image planes; 16.2-cm axial field of view; 4.2-mm transaxial resolution; 4.7-mm axial resolution; 2-mm slice interval). For 11C-PiB imaging, participants were injected with 555 ± 185 MBq of PiB, and a 20-min PET acquisition was performed 50 ± 5 min post-injection. For 18F-THK5351 imaging, participants were injected with 185 ± 37 MBq of THK5351, and a 20-min PET acquisition was performed 40 ± 5 min post-injection. PET/CT data were reconstructed using an iterative 3D ordered-subset expectation maximization algorithm.
Data preprocessing
The 18F-THK5351 PET images were partial-volume corrected using the PETPVE12 toolbox [6] and normalized using SPM12 (Statistical Parametric Mapping 12; Wellcome Department of Cognitive Neurology, London, England). Each participant's 18F-THK5351 PET image was coregistered to their T1-weighted image and normalized with Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL). The transformation matrix was applied to each 18F-THK5351 PET image that had been coregistered to the T1-weighted image. After spatial normalization, standardized uptake value ratios (SUVRs) for the 18F-THK5351 PET images were calculated using each individual's positive mean uptake value of cerebellar GM as the reference region. Finally, each PET image was smoothed with an 8-mm full width at half maximum (FWHM) Gaussian kernel. The GM images segmented using SPM12 were also normalized and smoothed with an 8-mm FWHM Gaussian kernel.
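Computationally, the SUVR step is a voxel-wise division by the mean reference uptake; a minimal sketch using nibabel and NumPy is shown below. The file names and mask are hypothetical, and the partial volume correction and spatial normalization steps described above are omitted.

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: a normalized THK5351 PET image and a cerebellar GM mask
pet = nib.load("thk5351_subject01_norm.nii.gz")
mask = nib.load("cerebellar_gm_mask.nii.gz")

pet_data = pet.get_fdata()
ref = pet_data[mask.get_fdata() > 0.5]        # voxels inside the reference region
ref_mean = ref[ref > 0].mean()                # positive mean cerebellar GM uptake

suvr = pet_data / ref_mean                    # standardized uptake value ratio image
nib.save(nib.Nifti1Image(suvr, pet.affine, pet.header),
         "thk5351_subject01_suvr.nii.gz")
```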
Statistical analyses
The Biological Parametric Mapping (BPM) toolbox allows voxel-level comparisons across imaging modalities, performing regressions based on the general linear model [7]. Using the BPM toolbox, we evaluated the direct correlations between 18F-THK5351 accumulation and GM volume at the voxel level. Results were considered significant at p < 0.001 with an extent threshold of 30 voxels (uncorrected for multiple comparisons).
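Conceptually, BPM fits a regression at every voxel across subjects. The sketch below reproduces that idea as a voxel-wise Pearson correlation between two image stacks on synthetic data; BPM's full general linear model, masking, and cluster-extent machinery are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, nx, ny, nz = 47, 10, 10, 10            # 47 subjects, toy image grid
suvr = rng.normal(size=(n_subj, nx, ny, nz))   # 18F-THK5351 SUVR maps
gmvol = rng.normal(size=(n_subj, nx, ny, nz))  # gray matter volume maps

r_map = np.zeros((nx, ny, nz))
p_map = np.ones((nx, ny, nz))
for idx in np.ndindex(nx, ny, nz):             # a regression at every voxel
    r, p = stats.pearsonr(suvr[(slice(None),) + idx], gmvol[(slice(None),) + idx])
    r_map[idx], p_map[idx] = r, p

sig = (p_map < 0.001) & (r_map < 0)            # negative correlations at p < 0.001
print(int(sig.sum()), "significant voxels")
```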
Results
The participants' demographics are shown in Table 1. Mean age ± standard deviation was 65.0 ± 7.9 years and mean cognitive scores were 29.3 ± 1.1 for MMSE and 13.4 ± 2.9 for WMSR LM-II.
Localized 18F-THK5351 accumulation was detected mainly in the basal ganglia, thalamus, and MTL, extending slightly into the inferior temporal lobe, insula, posterior cingulate cortex/precuneus, and basal frontal lobe (Fig. 1). We found a significant negative correlation between 18F-THK5351 accumulation and GM volume in the bilateral MTL, including the right parahippocampal gyrus (cluster size, 40 voxels; Z-score, 4.68; MNI coordinate [x, y, z],
Discussion
This is the first study to investigate direct correlations between 18F-THK5351 accumulation and GM volume at the voxel level using BPM in cognitively normal older adults. We detected significant correlations between increased 18F-THK5351 accumulation and reduced GM volume in the MTL. Our findings may contribute to a better understanding of the pathophysiology of human aging.
We found significant 18F-THK5351 accumulation mainly in the basal ganglia, thalamus, and MTL, extending slightly into the inferior temporal lobe, insula, posterior cingulate cortex/precuneus, and basal frontal lobe. This distribution corresponds to Braak stage III-IV, which is not consistent with previous neuropathological and tau PET studies reporting that tau pathology is usually localized to the MTL in cognitively healthy participants [8,9]. However, a recent large cohort study [10] showed elevated 18F-THK5351 tau tracer retention in Braak stage III-IV areas in individuals with normal amyloid status, raising the possibility of primary age-related tauopathy (PART) [11]. Our results support those findings and might reflect PART. PART is defined as NFT pathology mostly restricted to the MTL, basal forebrain, brainstem, and olfactory areas in the absence of β-amyloid in the aged brain. Cognitive function is usually normal to mildly impaired in PART, and severe cognitive decline is rarely seen. Although PART is suggested to be an age-related phenomenon distinct from early Alzheimer's disease, its pathophysiology is still unclear [1]. Thus, further studies with tau PET are needed to better understand the pathogenesis of PART. The BPM analysis demonstrated a significant voxel-wise negative correlation between 18F-THK5351 accumulation and GM volume in the MTL, which was not detected in our previous ROI-based analysis [5]. Although ROI analysis is a common approach, it may not accurately assess localized accumulation because of dilution effects. Because the BPM toolbox enables direct comparison across imaging modalities at the voxel level, this voxel-based analysis may be more reliable than traditional ROI-based analyses. Our findings are consistent with neuropathological studies of PART patients showing that a higher Braak NFT stage is associated with hippocampal head atrophy [1]. Similar findings have been reported in recent 18F-AV1451 tau PET studies, namely that higher MTL tau is associated with MTL atrophy [2]. Although those studies obtained their results using a predefined FreeSurfer ROI approach, it is possible that nonspecific 18F-AV1451 accumulation in the choroid plexus adjacent to the MTL was not eliminated.
Fig. 1: Mean SUVR images of 18F-THK5351 in cognitively normal older adults. Localized 18F-THK5351 accumulation was mainly evident in the basal ganglia, thalamus, and medial temporal lobe, extending slightly into the inferior temporal lobe, insula, posterior cingulate gyrus/precuneus, and basal frontal lobe. SUVR, standardized uptake value ratio.
Fig. 2: Voxel-wise correlations between 18F-THK5351 accumulation and gray matter volume in normal older adults. Significant negative correlations between 18F-THK5351 accumulation and gray matter volume were detected in the bilateral medial temporal lobes (voxel threshold of p < 0.001 with a 30-voxel extent threshold). A, anterior; P, posterior; R, right; L, left.
This study has several limitations. First, the number of participants was relatively small. Second, the study lacked pathological confirmation of tau pathology. Third, the high affinity of 18F-THK5351 for MAO-B [3,4] may contribute to the relatively high Braak stage of our findings compared with PART-type pathology, which is usually stage III or lower [11]. MAO-B concentration increases during ongoing astrogliosis, a neuroinflammatory change that occurs in response to brain injury and neurodegenerative disease [12]. A previous PET study in healthy subjects reported global MAO-B increases throughout the whole brain with human aging [13], and astrocytes appear to contribute to low-grade inflammation in the aged brain [14]. Because 18F-THK5351 uptake reflects the combination of astrogliosis and tau pathology, the degree and extent of tracer retention could be higher than that attributable to tau pathology alone. It has recently been reported that aggregation of misfolded proteins, including TDP-43 and α-synuclein in addition to the Aβ and NFTs observed in PART, is common even in cognitively healthy elderly brains [15]. Since mixed pathologies are frequently observed in the aged brain, they could evoke neuroinflammation and increase astrogliosis, resulting in accumulation of 18F-THK5351. The early phase of age-related TDP-43 accumulation, known as "limbic-predominant age-related TDP-43 encephalopathy" (LATE), which tends to extend to limbic areas including the amygdala, could be part of the explanation [16]. In addition, stereological cell counting studies have shown declining neocortical neuronal populations but no change in total astrocyte numbers in aged human brains [17]. Therefore, astrocytes would tend to be relatively concentrated in the atrophied MTL regions, suggesting a contribution of MAO-B in addition to tau pathology.
In summary, we found significant voxel-wise negative correlations between 18F-THK5351 accumulation and GM volume in the MTL. These results may reflect the concept of PART and contribute to a better understanding of the neurobiology of aging. Further studies are needed to confirm whether our findings reflect PART pathology, for example by administering an oral dose of the MAO-B inhibitor selegiline [3] or by using second-generation tau-specific tracers with much less off-target binding. | 2019-08-23T14:45:01.929Z | 2019-08-23T00:00:00.000 | {
"year": 2019,
"sha1": "d82cc7c52ceb9eb8eec54160bf319e925c31c8a4",
"oa_license": "CCBY",
"oa_url": "https://ejnmmires.springeropen.com/track/pdf/10.1186/s13550-019-0552-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d82cc7c52ceb9eb8eec54160bf319e925c31c8a4",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3366424 | pes2o/s2orc | v3-fos-license | Low Temperature CVD Grown Graphene for Highly Selective Gas Sensors Working under Ambient Conditions
In this paper we report on gas sensors based on graphene grown by chemical vapor deposition at 850 °C. Mo was used as the catalyst for graphene nucleation. Resistors were designed directly on pre-patterned Mo using the transfer-free process we recently developed, thus avoiding film damage during transfer to the target substrate. Devices operating at room temperature and 50% relative humidity were tested towards NO2. The sensors proved highly specific towards NO2 and showed current variations up to 6%. Their performance was compared with that of gas sensors based on graphene grown at 980 °C, the usual growth temperature for this material. The findings show that lowering the graphene growth temperature, and consequently the energy consumption, preserves the sensing benefits of these devices.
Introduction
Among the many attractive properties of graphene, its high stability, its extremely high surface-to-volume ratio (~2600 m2 g−1), and the fact that interactions involve only surface atoms make it an ideal candidate for gas sensors operating under ambient conditions [1]. CVD graphene, in particular, is especially promising in terms of high quality and large-scale production [2]. However, for practical applications, the bottleneck of the CVD technique lies in the graphene transfer from the catalyst substrate to the target one [3]. In this regard, we recently reported a transfer-free process (TFP) that prevents any issue related to graphene transfer [4]. In our previous work, graphene-based gas sensors prepared through the TFP achieved an extremely low limit of detection (LOD), in the range of a few hundred ppb of NO2, and were scarcely sensitive towards NH3 [4]. Here, we present the performance of gas chemi-resistors based on graphene fabricated through a process specifically developed to lower the growth temperature to 850 °C. This value is significantly lower than the 980 °C usually adopted for graphene growth [2,4,5]. The choice to reduce the growth temperature is motivated by the fact that a lower growth temperature reduces energy consumption during graphene growth and facilitates large-scale production of these gas sensors. We demonstrate that sensor performance is not affected by lowering the material growth temperature, thus preserving the above-mentioned benefits.
Sensors Preparation
The graphene-based gas sensors presented in this study were fabricated on a 4" Si (100) wafer covered by thermally grown SiO2 (90 nm). A thin film of Mo (50 nm) was sputtered from a pure (99.95%) Mo target. Afterwards, dry etching was used to pattern the Mo layer, as described in our previous work [5]. Graphene growth on the patterned Mo catalyst was carried out in an AIXTRON BlackMagic Pro at 850 °C, using Ar/H2/CH4 as feedstock at a pressure of 25 mbar. The Mo catalyst was then etched away following the transfer-free process (TFP) we developed [5], leaving the graphene film on the SiO2. Evaporated Cr/Au (10/100 nm) electrical contacts were defined on top of the graphene film using a lift-off process.
Sensors Characterization
The devices were electrically characterized by a semi-automatic probe-station with an Agilent 4156C semiconductor parameter analyzer.
Three different tests were performed on the gas sensors in a Gas Sensor Characterization System (GSCS, Kenosistec equipment), with the temperature and RH set at 22 °C and 50%, respectively.
Sensors Test-Protocol Description
The first test, hereafter Test 1, consists of a single 10-min exposure to 1 ppm of NO2, preceded and followed by 20-min baseline and recovery phases, respectively, in an N2 atmosphere.
The second test, Test 2, consists of 5 sequential pulses of NO2 at 1 ppm, analogous to Test 1. Finally, Test 3 consists of 12 sequential pulses of NO2 at concentrations ranging from 1.5 down to 0.12 ppm, each step lasting 4 min. The baseline and recovery phases lasted 20 min each; only the baseline preceding the first step was set 10 min longer than in the other steps, in order to further stabilize the devices in the test chamber.
Results and Discussion
In Figure 1, the I-V characteristics of the fabricated devices (inset) are shown. Red and black lines refer to devices based on graphene grown at 850 °C and 980 °C, respectively. The linear behavior of the two curves proves that Ohmic contacts were successfully realized between the graphene and the Cr/Au electrodes. The different resistance values can be ascribed to differences in material crystallinity.
The chemi-resistors prepared in this way were tested with the three protocols described above. In Figure 2, the real-time current behavior of the chemi-resistors upon exposure to a single pulse (Figure 2a) and to five sequential pulses of NO2 (Figure 2b) is shown. I0 and I represent the current values at the inlet and outlet of the NO2 flow, respectively. For the sensor labeled "850 °C", ΔI/I0 was estimated to be roughly 6%; for comparison, ΔI/I0 for the sensor labeled "980 °C" was around 7%, providing a first indication that sensor performance is not significantly affected by lowering the graphene growth temperature. Figure 2b also attests to the substantial equivalence of the two devices, which show the same overall kinetics in Test 2. In Figure 3, this comparison is addressed further: for both sensors, the current recorded during each gas pulse of Test 2 is compared, with the signal normalized to the value I0 at the gas inlet of each step. It is worth noting that both sensors show the same signal trend and the same decrease in current variation; for instance, for both sensors, the difference between the first and second step is about 2%, while the other steps do not present appreciable differences between the two sensors' performances. Finally, the sensors underwent Test 3 and the results are shown in Figure 4. Black and red lines refer to the "850 °C" and "980 °C" devices, respectively. The curves in Figure 4b, extrapolated from Figure 4a, plot ΔI/I0 as a function of NO2 concentration, where I0 represents the current value at the gas inlet of each pulse. Therefore, within the error bars, Figure 4b definitively discloses the substantial comparability of the findings, despite the different growth temperatures of the graphene.
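The relative response ΔI/I0 can be computed directly from a recorded current trace; the sketch below illustrates the calculation on a synthetic pulse shaped to mimic the reported ~6% response (the trace and time constants are illustrative, not measured data).

```python
import numpy as np

# Illustrative current trace (A): baseline, NO2 exposure, recovery
t = np.arange(0, 50 * 60, 1.0)                 # one sample per second for 50 min
i0 = 1.0e-4                                    # baseline current
current = np.full_like(t, i0)
on, off = 20 * 60, 30 * 60                     # gas inlet at 20 min, outlet at 30 min
current[on:off] -= 6e-6 * (1 - np.exp(-(t[on:off] - on) / 120))

I0 = current[on - 1]                           # current at the gas inlet
I = current[off - 1]                           # current at the gas outlet
response = abs(I - I0) / I0 * 100              # relative response dI/I0 in percent
print(f"dI/I0 = {response:.1f}%")              # ~6% for this synthetic pulse
```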
Conclusions
In this work, we presented gas sensors based on graphene grown by CVD at 850 °C. The sensors were tested towards NO2 under ambient conditions, i.e., room temperature and 50% relative humidity. Their performance was compared with that of devices based on graphene grown at 980 °C, the temperature usually adopted for the growth of this material. The different tests carried out on both sensors definitively revealed that the sensing behavior is not affected by lowering the graphene growth temperature. These results indicate that energy consumption can be significantly reduced during the large-scale production of graphene and graphene-based sensors by the CVD technique.
"year": 2017,
"sha1": "532af2d2d8548a68743795218a57e3ac5a09eae5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-3900/1/4/445/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c959377aed7b84bf7f5e29466e388194e7804536",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
256393333 | pes2o/s2orc | v3-fos-license | Impact of a global leader on pharmaceutical practice and policy around the world
This commentary describes the contributions of a Dutch pharmacist who shaped in a unique manner the development of community pharmacy practice in Europe, the evolution of practice-based research, and its publication. With an interest in pharmaceutical care and in clinical pharmacy, Dr. van Mil visibly changed practice and policy in Europe over recent decades, as documented here through a summary of some of his main written contributions. We write this to honour his memory and to contribute to the preservation of his legacy.
The first reference to pharmaceutical care dates to 1975 [1], and successive definitions and operationalisations of the term have continued to appear across the world. However, many of these were largely descriptive or pilot studies with little impact on practice, or simply refinements of terminology that did not translate into policy, which is essential for changing the scope of practice of pharmacists working in various healthcare settings.
A pragmatic approach, a strong character and a drive to make the world a better place are some of the phrases that could be used to describe Dr. Foppe van Mil, who died on 18 July 2020. This commentary summarises the most relevant achievements of a unique career that has had an impact on the practice, research or policy relating to pharmacy across the world, either working on his own or, more often, collaboratively with academic or practice-based teams. Table 1 summarises many of the projects with which Dr. van Mil was associated alongside some key publications that influenced practitioners, researchers and policy-makers, including his PhD thesis which was a landmark for pharmacy practice in the Netherlands, expanding its impact to many other countries [2].
He was involved in the first two pharmaceutical care research projects in Europe: in the Asthma Intervention Project (TOM) he was a postgraduate student, and in the Care of the Elderly study (OMA) he was a leading member of the research team [3,6]. Both projects took a holistic approach to patient care and involved international collaboration, two aspects characteristic of Foppe's conception of research. The major influence on Foppe's research involvement, and on his influence on colleagues, was his experience and expertise as a community pharmacist; this made him very practical but also very demanding, because for him research was about, and for, the benefit of patients. His reputation reached all countries, and even countries recognised as having advanced practice, including the UK, benefited from visiting his pharmacy in the Netherlands [21].
Foppe had a very visible contribution to the dissemination of research and pharmacy practice, by his various attributes and activities. He was an educator, but a practical one, who once committed, would work tirelessly to Europe It suggests changes in the pharmacy curriculum so that pharmacists can acquire new knowledge and skills. It encourages countries to standardise procedures adopted to guarantee every patient receives PhC. [7] Review article Describes European developments in the implementation of and research into PhC, focusing on the community pharmacy.
Europe
Paper by invitation of the Harvard Health Policy Review (USA). [8] Opinion paper Establishes the need and the rationale for using a classification of DRPs to avoid negative consequences for patients.
Germany
Motivated the adoption of such classifications in pharmacy practice in both primary and secondary care. [9] Terminology papers This sequence of publications establishes the foundations for understanding concepts of PhC, clinical pharmacy and medication review, highlighting the boundaries and similarities between concepts, while reflecting the evolution in practices.
Worldwide, with a special focus in Europe These definitions are used by academics and researchers worldwide to support teaching and educational activities.
World
Currently being used in Belgium, China, Germany, Ghana, Norway, Poland, Portugal, Serbia, Slovenia, Spain, Sweden, and Taiwan.
[ [14][15][16] Review paper Describes the healthcare system context, pharmacy practice and range of services offered in pharmacies in the Netherlands.
The Netherlands
Used widely as reference to frame various research or practice based studies undertaken in this counrty [17] Review paper Describes the healthcare system context, pharmacy practice and range of services offered in pharmacy in Peru.
Peru (South America) Used widely as reference to frame various research or practice based studies undertaken in this counrty [18] Research paper Describes the healthcare system context, pharmacy practice and range of services offered in pharmacy in Europe.
Europe
Provides a framework for countries to benchmark themselves against each other.
[ 19] deliver high quality. He was involved for many years in the Programme Committee for the Continuing Education Programme organised by the Community Pharmacy Section of the International Pharmaceutical Federation (FIP), through which annual sessions were held as presatellite meetings. These sessions aimed at the continuous professional development of pharmacists, particularly in the area of pharmaceutical care, and have led to the improvement of knowledge and practice skills of community pharmacists [22]. The aims, development and impact measurements of this programme were published subsequently [23]. FIP and the World Health Organization (WHO) collaborated to deliver a course on pharmaceutical care in Uruguay and other Latin American countries and chose Foppe to do this. He used a train-the-trainer model, in which theory-based sessions were followed by visits to the settings where the trainers practised to better understand their reality and adapt the learnings to their needs. Two of the main vehicles for the advancement and dissemination of knowledge in the field of pharmaceutical care in Europe are the European Society of Clinical Pharmacy (ESCP) and the Pharmaceutical Care Network Europe (PCNE). As a member of ESCP, Foppe delivered workshops and lectures over many years and helped to promote research through his contributions to the Communication Committee. His unique contribution was cofounding the PCNE (in 1994), and he was truly its backbone; irrespective of the challenges, whether organisational, fiscal or philosophical, Foppe persevered and his belief in the value of PCNE and consequently sustained everyone in it. The definition of pharmaceutical care, the classification of drug-related problems and the conception of medication reviews all depended on Foppe's initiative, determination and implementation. In addition through national organisations, such as the "Förderinitiative Pharmazeutische Betreuung" (Foundation Pharmaceutical Care) in Germany and at national pharmacy conferences, for example, in Poland [24] and at special occasions such as the award ceremony of the Royal Pharmaceutical Society of Great Britain, Foppe was an invited and valued contributor.
Foppe van Mil was an extraordinary person in the Dutch pharmacy practice space and beyond: always vocal, always committed to innovation for the well-being of patients and public health. He was a very straightforward yet unorthodox person, who combined practice and academic work at the University of Groningen in a very original and appealing manner. His drive to make a difference, both in teaching and in research, was amazing and has always been well acknowledged. He was a recipient of the Innovation Prize of the Royal Dutch Association of Pharmacists (KNMP), a unique signature of excellence and leadership. This public recognition of innovative practice with visible benefits for patients contributed to the further dissemination of the concept of pharmaceutical care and its implementation in practice there [2]. Many other institutions and organisations also publicly recognised his contributions to advancing research and practice. In Spain, during the first International Congress of Pharmaceutical Care (Atención Farmacéutica, San Sebastián, 1999), he received, together with Doug Hepler and Linda Strand (both from the USA), the Pharmaceutical Association of Gipuzkoa Award, as the judges considered these three individuals to be, at that time, those who had made the most significant contributions to the advancement of pharmaceutical care internationally and who, as such, stimulated the move to further the concept of pharmaceutical care in Spain, leading to its recognition and establishment in law some years later.
In addition to Foppe's contribution to the dissemination of research and practice innovation to researchers and pharmacy practitioners through his lectures and workshops, he made a very significant contribution to knowledge translation through his efficient and effective editorship of the International Journal of Clinical Pharmacy (IJCP). Throughout his time as editor-in-chief, he encouraged students and practitioners to publish their work. This journal, originally named Pharm. Weekblad – Scientific Edition, which was a scientific publication, became known first as Pharmacy World & Science (PWS) and then, in 2010, was renamed the IJCP. His work in this regard was transformative and led to the remoulding of the journal into one with substantial international impact. Foppe was also a great supporter of evidence-informed pharmaceutical policy at a global level. In 2009, he helped publish a key editorial in Pharmacy World & Science regarding the launch of the journal "Southern Med Review", later renamed the "Journal of Pharmaceutical Policy and Practice", as he believed there was a paucity of journals focused on pharmaceutical policy, which were needed to complement the more practice-based ones [25].
Table 1 (Continued).
- Research paper: Describes the medication review practices in Europe and the associated implementation and remuneration models. (Europe) Provides an assessment of the implementation of medication review in Europe, enabling benchmarking and analysis of the strong points of some of the countries described in more detail. [20]
- Editor-in-Chief of the Int J Clin Pharm: Published high-quality manuscripts that enhanced the visibility of pharmaceutical care and clinical pharmacy in various countries of the world. (Worldwide) These publications have provided evidence-based practice to motivate policy changes in many countries, including in some cases changes in the scope of practice or remuneration models for pharmacy. Pharmacy practice commentary on social media (22 July 2020).
At a later stage of his career, he edited a book aimed at helping practitioners worldwide implement pharmaceutical care. This was indeed the main aim of his life: to transform standard pharmacy practice (or usual care, as it is often called in randomised controlled trials) into pharmaceutical care, and to make this advanced way of constantly optimising medication use the usual practice. In this book, he gathered more than 40 internationally reputed authors and covered all the aspects believed to be essential for practice implementation, from the disease-specific, to the healthcare-setting-specific, to the country-specific, not forgetting university education and continuous professional development [26].
He will be greatly missed by all his friends, colleagues and followers throughout the world. | 2023-01-31T15:07:56.762Z | 2020-08-21T00:00:00.000 | {
"year": 2020,
"sha1": "c718a9967ab7d332dd58165785df0683a7d2aee8",
"oa_license": "CCBY",
"oa_url": "https://joppp.biomedcentral.com/track/pdf/10.1186/s40545-020-00253-z",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c718a9967ab7d332dd58165785df0683a7d2aee8",
"s2fieldsofstudy": [
"Medicine",
"Political Science",
"Business"
],
"extfieldsofstudy": []
} |
225353754 | pes2o/s2orc | v3-fos-license | Assessing year-round habitat use by migratory sea ducks in a multi-species context reveals seasonal variation in habitat selection and partitioning
Introduction
Long-distance migration of wildlife involves a delicate balance between the rewards of occupying favorable habitats year-round and the challenges of the extreme journeys required to access those habitats (Alerstam et al. 2003). High fidelity to specific breeding and wintering sites and migration routes, as well as reliance on evolutionary and hereditary cues, make long-distance migrants highly susceptible to mismatches between the timing of migratory movements and the availability of key resources (Robinson et al. 2009, Liedvogel and Lundberg 2014). In addition, many species of migratory wildlife concentrate in favorable habitat sites during migration, making them vulnerable to localized perturbations (Berger 2004, Buehler and Piersma 2008). Migratory birds, which represent about 19% of all avian species, undertake some of the most complex and challenging migrations of any taxa (Alerstam et al. 2003) and are currently experiencing widespread population declines (Kirby et al. 2008). Despite these negative trends, however, habitat conservation for migratory birds often falls short of levels achieved for nonmigratory species (Runge et al. 2015).
Long-distance migrants present a challenge to traditional, site-based models of habitat conservation developed for non-migratory species (Martin et al. 2007, Singh and Milner-Gulland 2011). Some migrants rely on discrete habitat sites with varying characteristics, which often span multiple nations with varying conservation priorities (Berger 2004, Shillinger et al. 2008). In addition, because populations of migratory species can be affected by factors occurring throughout the annual cycle, improvements to habitat used during one period or season may not be sufficient to meet conservation targets if habitat during another part of the annual cycle is limiting (Marra et al. 2015). As a result of this complexity, conservation strategies for migratory species require a detailed understanding of habitat use throughout the annual cycle in order to develop coordinated management across multiple sites and seasons (Wilcove and Wikelski 2008). Understanding the habitat requirements of migratory birds throughout the annual cycle can help prioritize sites and resources for conservation, as well as identify potential periods of vulnerability (Kirby et al. 2008, Allen and Singh 2016).
As part of improving habitat conservation for migratory species, conservationists may need to account for species co-dependency on the same habitat, which requires sound multi-species approaches to evaluating habitat requirements (Witting and Loeschcke 1995, Mittermeier et al. 1998, Brooks et al. 2006). Evaluating habitat needs across guilds of similar species allows managers to set conservation priorities at the landscape scale, focusing on areas of particular ecological significance to maximize the impact of conservation action (Moilanen et al. 2005, Block et al. 2011, Hindell et al. 2011, Raymond et al. 2015). Multi-species assessments can be useful for evaluating the effects of future habitat loss or environmental change on migratory pathways (Martin et al. 2007), and identifying representative species that can be targeted for monitoring (Lambeck 1997, Bonn and Schröder 2001). While migration strategies vary among birds, the physiological demands imposed by long-distance migration often lead to concentrations of multiple avian migratory species at locations with highly predictable resource availability and favorable geographic features (Roberts et al. 2001, Mehlman et al. 2005), and the resulting interspecific competition for shared resources may have population-level effects (Péron and Koons 2012). Thus, identifying sites or habitat requirements shared by multiple species of long-distance avian migrants at specific periods in the annual cycle can provide important insights into the biogeographic and conservation needs of migratory species that transcend species-specific habitat models (Donovan et al. 2002).
Sea ducks (tribe: Mergini) are a suite of migratory bird species that breed in boreal and arctic habitats and winter along the coasts of large water bodies. Although their nonbreeding ranges broadly overlap and they often congregate in mixed-species flocks during migration and in the winter, different species of sea ducks have evolved to exploit subtly different habitats and resources within shared habitat areas, and these differences may result in interspecific variation in key habitat areas and movement networks (Lamb et al. 2019). Sea ducks also present unique challenges for habitat conservation and management, as their habitat needs vary widely throughout the annual cycle from coastal aquatic non-breeding areas to terrestrial breeding sites far inland (Johnsgard 2010). The long-distance migrations required to travel between those sites must be delicately adjusted to the availability of resources in diverse habitats separated by thousands of kilometers, and often involve migratory bottlenecks in which large portions of the population occupy shared staging areas (Oppel et al. 2009, Boere and Piersma 2012). Across North America, sea ducks are experiencing widespread population declines, leading to increased interest in identifying key habitats and conservation strategies (Bowman et al. 2015). However, since sea ducks spend much of the year in remote, uninhabited breeding and staging areas or offshore wintering sites, direct observations of their abundance and distribution are expensive, labor-intensive, and may be limited in space and time.
In eastern North America, a continental-scale, partnership-based project designed to track numerous individuals from five high-priority sea duck species (common eider Somateria mollissima, black scoter Melanitta americana, surf scoter Melanitta perspicillata, white-winged scoter Melanitta deglandi and long-tailed duck Clangula hyemalis) using satellite telemetry provides a solid foundation for multi-species assessments of annual-cycle habitat selection (Sea Duck Joint Venture Management Board 2014, Lamb et al. 2019). Data from this project have filled large gaps in understanding of sea duck breeding habitat in North America, revealing previously unknown distributional patterns of species (particularly scoters) at boreal and arctic breeding sites and informing the development of surveys in areas difficult to access using traditional methods (Reed et al. 2017). Contributed data have also shown how sea ducks use coastal wintering habitats, and their potential spatial overlap with proposed wind energy installations (Loring et al. 2014, Beuth et al. 2017, Meattey et al. 2019). However, each of these assessments has focused on species-specific, single-season habitat selection. There remains a need to address sea duck habitat requirements throughout the annual cycle in a more comprehensive and integrated manner, both to understand how ecological risks are distributed and to optimize use of limited funding for management and protection (Sea Duck Joint Venture Management Board 2014).
We present a multi-species assessment of year-round habitat use patterns of sea ducks in eastern North America derived from year-round location data on individual sea ducks. Our main goals for the study were 1) to describe single- and multi-species patterns of habitat suitability throughout the annual cycle; 2) to assess between-species habitat partitioning and selection; and 3) to identify key biophysical features selected by sea ducks during different periods of the annual cycle. Based on observed ranges of the focal species, we expected to observe a high degree of habitat partitioning during the breeding season, an intermediate level of partitioning during migration, and a relative lack of partitioning during winter with extensive use of similar habitats. Further, based on previous studies, we expected static physical habitat features (distance to shorelines, elevation and slope) to be the strongest predictors of sea duck occurrence and species counts in both breeding and nonbreeding habitats, with greater reliance on preferred habitat features during migratory staging to facilitate rapid acquisition of energy. Our analyses provide a framework for identifying specific locations, time periods and habitat features for which targeted conservation can provide the greatest shared benefit to species with similar habitat requirements, and for developing coordinated range-wide, multi-species conservation strategies.
Material and methods
We used satellite telemetry to track sea ducks in eastern North America throughout the annual cycle (Lamb et al. 2019). After filtering location data to include only sedentary points, we measured occurrence by season and species across the study area. To relate sea duck occurrence to environmental features, we used a multivariate ordination of available sites based on their ecological characteristics to measure single- and multi-species habitat suitability and evaluate between-species habitat partitioning, and used linear models to examine the relationships of individual ecological covariates to multi-species occurrence.
Study area -we captured sea ducks at multiple areas along the Atlantic coast and Great Lakes of eastern North America during molting, staging and winter (August-March) between 2002 and 2017 (full details of capture locations and species ranges are summarized in Supplementary material Appendix 1 Fig. A4). Since sampling the entire range of each species was not possible, we selected sampling locations to represent locations and time periods with particularly dense species concentrations, which occur primarily during nonbreeding. Capture efforts for long-tailed ducks and surf scoters focused primarily on wintering sites, and transmitters were distributed proportionally based on concentrations of ducks observed during the Atlantic Winter Sea Duck Survey. Capture efforts for white-winged scoters occurred mainly at molting sites, and those for black scoters focused on pre-breeding migration sites in the St Lawrence River, with additional captures of the three scoter species at other annual stages (Sea Duck Joint Venture Management Board 2014). Additional sampling of long-tailed ducks at Lake Michigan was added late in the project to account for gaps in observed data (Fara 2018). Sampling of common eider was limited to one of three eastern subspecies (S. m. dresseri) during breeding and wintering periods.
Transmitter deployment -we captured both sexes of subadult and adult ducks on water using a combination of decoys and netting. We captured the majority of sea ducks using mist nets (1.3 × 18 m, 127-mm mesh), which we positioned either above the water on floating poles to catch ducks in flight (Brodeur et al. 2008), submerged as gillnets to catch birds during dives (Breault and Cheng 1990), or, in the Great Lakes, suspended horizontally underwater and raised to a vertical position just before birds flew over the site, capturing them in flight (Ware et al. 2013). Additional capture techniques included night-lighting and dip-netting for wintering ducks roosting on the water, net-gunning and fishing nets (Lamb et al. 2019). Veterinarians experienced in avian surgery implanted 26-50 g coelomic-implant Platform Transmitter Terminals (PTT) (Microwave Telemetry, Columbia, MD, USA; Telonics, Mesa, AZ, USA; Geotrak, Apex, NC, USA; mention of commercial products does not imply US or Canadian Government endorsement) into the abdominal cavity following implantation techniques described by Korschgen et al. (1996). Individuals were selected for transmitter attachment based on body mass, such that transmitter mass represented less than 5% of overall body mass (Phillips et al. 2003). Transmitters followed varying duty cycles consisting of 2-4 h 'on' periods followed by 10-120 h 'off' periods, resulting in one location every 0.5-5 d (for specific duty cycles by deployment event, see Lamb et al. 2019, Supplementary material Appendix 1). We excluded data collected during the first 14 d following surgery to minimize potential biases in assessments of habitat use patterns and movement dynamics due to surgery (Esler et al. 2000).
Spatial data processing -unprocessed satellite telemetry data vary in quality of location estimates based on the configuration and number of satellites used to obtain each location, and time intervals between location estimates also may vary. Data were initially processed using a hybrid Douglas-Argos filter to remove redundant and erroneous locations (Sea Duck Joint Venture 2015). We then used a switching state-space model (Jonsen et al. 2005) to simultaneously determine the most probable track for each individual given the observation error associated with each location (i.e. error correction) and produce a regular track from irregular data with varying uncertainty (i.e. interpolation). This modeling approach also allowed us to classify successive locations based on patterns in inter-location distances and turning angles. Although behavior varies at finer temporal scales, our ability to detect state changes was limited by the spatiotemporal scale of data collection, which would not have identified fine-scale changes such as foraging movements. We therefore restricted behavioral classification to two day-to-day states: sedentary, in which movements between successive locations were characterized by short distances and frequent directional change, and transient, in which locations were widely spaced and directional change infrequent, corresponding to multiday directed movements such as migration and dispersal.
To allow the model sufficient information to interpolate individual tracks, we removed all individuals that had fewer than 50 good-quality locations (Argos Location Classes 1-3, or < 1500 m error radius; typically, one month or less of data) prior to analysis, leaving 476 individuals (Supplementary material Appendix 1 Table A1). We did not interpolate over time periods of > 7 d between successive locations, because longer temporal gaps produce unrealistic movement trajectories (Jonsen et al. 2005). Based on the duty cycles of transmitters, the maximum programmed period between locations for a correctly-functioning unit was 120 h, with most units sampling more frequently; thus, 90% of locations were separated by ≤ 4 d, and 78% of locations were separated by ≤ 3 d. Average sampling intervals varied among species from 2.3 d (surf scoters) to 3.6 d (white-winged scoters).
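To make those filtering rules concrete, the following is a minimal sketch in R (the language used for the analyses), assuming a hypothetical location table `locs` with columns `id`, `lc` (Argos location class) and `date`; the column names and the dplyr-based approach are illustrative assumptions, not the project's actual code.

```r
# Sketch of the pre-model filtering described above (assumed column names).
library(dplyr)

good_classes <- c("1", "2", "3")   # Argos Location Classes 1-3

filtered <- locs %>%
  group_by(id) %>%
  filter(sum(lc %in% good_classes) >= 50) %>%   # drop birds with < 50 good locations
  arrange(date, .by_group = TRUE) %>%
  mutate(gap_days = as.numeric(difftime(date, lag(date), units = "days")),
         segment  = cumsum(coalesce(gap_days, 0) > 7)) %>%  # break tracks at > 7 d gaps
  ungroup()
```

Splitting tracks into segments at gaps longer than 7 d, rather than deleting points, keeps both sides of a gap available to the model while preventing interpolation across the gap.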
We ran all models in the 'bsam' package (Jonsen et al. 2005, Jonsen 2016) in R (R Core Team) using a switching first difference correlated random walk model with a one-day timestep, 5000 burn-in samples for model training, and 5000 posterior samples for analysis. To reduce autocorrelation, we retained every 5th posterior sample and used a 0.1 smoothing parameter. Model outputs included probable daily locations, as well as a score from 1 to 2 (hereafter, b) indicating the average assignment of the location to either a transient (1) or sedentary (2) behavioral state across all retained posterior samples. We checked model fit and convergence using the 'diag_ssm' function in 'bsam', which includes trace plots, density plots, autocorrelation plots and shrink factor plots, and visually assessed the resulting tracks to ensure that state assignments corresponded to periods of migration and residency. Behavioral state assignments were proportionally similar across Argos location classes (Supplementary material Appendix 1 Fig. A6).
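A hedged sketch of the model fit follows, using the bsam interface as documented (Jonsen 2016); the argument values mirror the settings reported above, but the input format (`id`, `date`, `lc`, `lon`, `lat`) and the summary-extraction step are assumptions that may vary by package version.

```r
# Switching first-difference correlated random walk (DCRWS) with a 1-day timestep.
library(bsam)

fit <- fit_ssm(filtered,
               model   = "DCRWS",
               tstep   = 1,      # one-day timestep
               adapt   = 5000,   # burn-in samples
               samples = 5000,   # posterior samples
               thin    = 5,      # retain every 5th sample to reduce autocorrelation
               span    = 0.1)    # smoothing parameter

diag_ssm(fit)                    # trace, density, autocorrelation, shrink-factor plots
locs_daily <- get_summary(fit)   # daily locations with mean behavioural score b (1-2)
```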
To assess habitat use, we removed all transient locations (b < 1.5) and assigned the remaining sedentary locations to one of four seasons -wintering, pre-breeding staging, breeding and post-breeding staging/molt -based on their timing within the annual cycle and relative to long-distance displacements, thus limiting our analysis to terrestrial and marine habitats. Although aerial habitats used for migrating between breeding, staging and wintering sites are undoubtedly important, measuring flight altitudes and atmospheric covariates was beyond the scope of the present study. Median start dates were 3 November for wintering, 22 April for pre-breeding migration, 1 June for breeding and 25 July for post-breeding migration and molt. Although most flight feather molt takes place following the start of post-breeding migration, breeding females may molt their flight feathers before departing the nest site; thus, although most molt locations are included in post-breeding migration, some may also be included in the breeding season. We then calculated 95% kernel density estimates for all species within each season using the ks package (Duong 2007) in R.
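As an illustration of the kernel step, the sketch below computes a 95% utilisation contour for one species-season using the ks package named above; the `b >= 1.5` threshold for retaining sedentary locations follows the text, while the column names are carried over from the assumed output of the previous sketch.

```r
# 95% kernel density estimate for one species-season subset of sedentary points.
library(ks)

sedentary <- subset(locs_daily, b >= 1.5)      # b < 1.5 = transient, removed
xy  <- cbind(sedentary$lon, sedentary$lat)     # ideally in an equal-area projection
kd  <- kde(xy)                                 # bivariate kernel density estimate
lev <- contourLevels(kd, cont = 95)            # density level enclosing 95% of use
plot(kd, cont = 95)                            # draw the 95% contour
```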
To examine variation in habitat use across the study area, we created a 100 km² hexagonal grid within each season's 95% kernel area to match the resolution of our habitat variables (≤ 0.1 degree, or approximately 10 km). Although suitable habitat likely exists outside the 95% kernel area of occurrence points, we chose not to predict habitat suitability beyond observed use areas in order to ensure that background habitat values accurately represented available habitat characteristics (Franklin 2010). For each season, we calculated habitat suitability for each species and overlaid maps to obtain an estimate of multi-species suitability (see 'Habitat suitability' section). We excluded dresseri common eider from our analysis of breeding season habitat selection, because their breeding habitats occur primarily on offshore islands (Goudie et al. 2000). Breeding eiders forage and raise chicks in the marine environment, and therefore rely on different prey resources than breeding scoters and long-tailed ducks occupying freshwater systems. Moreover, habitat used by breeding eiders remains open year-round, whereas access to inland arctic habitats is limited by freeze/thaw dynamics. Thus, the habitat characteristics driving eider occurrence during breeding are likely very different from those governing occurrence of the other four species (Johnsgard 2010).
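One way to construct such a grid is with sf (an assumption; the paper does not name its grid-building tool). The cell size below is chosen so that each regular hexagon covers roughly 100 km², taking sf's convention that `cellsize` for hexagonal grids is the distance between opposite edges; `kernel95` is an assumed polygon of the seasonal 95% kernel in a metre-based equal-area projection.

```r
# Hexagonal analysis grid of ~100 km2 cells clipped to the seasonal 95% kernel.
library(sf)

cell <- sqrt(2 * 1e8 / sqrt(3))          # edge-to-edge distance giving ~1e8 m2 area
hex  <- st_make_grid(kernel95, cellsize = cell, square = FALSE)
hex  <- st_sf(cell_id = seq_along(hex), geometry = hex)
hex  <- hex[lengths(st_intersects(hex, kernel95)) > 0, ]   # keep cells in the kernel
```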
Environmental covariates -we chose separate sets of predictor variables to predict occurrence in inland habitats (used primarily during breeding) and nearshore aquatic habitats (used primarily during migration and wintering periods) (full details of environmental datasets are given in Supplementary material Appendix 1 Table A2). The variables we used were not strongly collinear (R² < 0.6 for all pairs). In all cases, we used long-term averages to represent dynamic covariates (Supplementary material Appendix 1 Table A2). Since sea ducks have strong interannual fidelity to both breeding (Phillips and Powell 2006, Takekawa et al. 2011) and non-breeding sites (Robertson and Cooke 1999, Lepage et al. 2020), we expect that habitat selection is driven largely by individual life histories and previous years' experience rather than year-to-year conditions. To represent potential predictors of occurrence in inland terrestrial breeding habitats, we used nine variables, including three dynamic climate variables (total annual precipitation, annual maximum temperature and annual minimum temperature [Worldclim2; Fick and Hijmans 2017]) and six static biophysical covariates (distance to marine coastline [World Vector Shoreline; Wessel and Smith 1996]; distance to lake > 50 km² and distance to large lake > 300 km² [North American Rivers and Lakes; U.S. Geological Survey]; elevation [ETOPO2; National Geophysical Data Center 2006]; slope [ETOPO2]; and landcover type [Landsat 2010; U.S. Geological Survey]). We collapsed eleven landcover types into five general categories: barren, polar (sub-polar/polar grassland-lichen-moss, sub-polar/polar shrubland-lichen-moss and sub-polar/polar barren-lichen-moss), taiga (sub-polar taiga needleleaf forest), temperate/sub-polar (mixed forest, temperate/sub-polar broadleaf forest, temperate/sub-polar needleleaf forest, temperate/sub-polar grassland and temperate/sub-polar shrubland), and wetland. We selected biophysical habitat variables to represent the nesting habitats of sea ducks, which generally occur in low-lying habitats with extensive wetland cover (Reed et al. 2017), while climate covariates represented factors potentially affecting vegetation structure, nest site availability and foraging conditions at the regional scale. We assessed dynamic climate variables using annual averages, rather than only during the period of occurrence, since precipitation and temperature prior to the breeding season can also affect vegetation structure and nest site conditions (Fu et al. 2014).
We measured environmental characteristics of nearshore aquatic non-breeding habitat using nine variables. Six of these variables (distance to marine coastline, sea surface temperature, salinity, net primary production, depth and bottom slope; Supplementary material Appendix 1 Table A2) were used to assess habitat across all locations, while three were available only for a subset of locations. We chose these variables to represent a suite of likely drivers of nearshore habitat variation, particularly the distribution of mollusks and other prey populations. Because limited data are available on fine-scale variation in features such as currents and eddies, which have a high degree of short-term variability in coastal areas (Kaltenberg et al. 2010), we used distance to coastline, bottom slope and depth as proxies for these processes. Net primary production, which integrates chlorophyll concentrations over a range of depths (Behrenfeld and Falkowski 1997), provides an index of aquatic productivity that influences the distribution of consumers at higher trophic levels. Salinity and temperature also influence the distribution of aquatic prey species depending on their osmotic and thermal tolerances. Positive values of depth indicate shallower areas.
Data on the remaining three static variables -bottom substrate, tidal current velocity and aquatic vegetation presence -were available only for some sections of our study area, including the Gulf of St Lawrence (bottom substrate only), Great Lakes (bottom substrate and submerged aquatic vegetation) and U.S. Atlantic coast (bottom substrate, seagrass presence and tidal current velocity). Despite the limited spatial extent of these datasets, we expected that these features would likely play a role in structuring sea duck prey distributions and/or foraging habitat suitability. We therefore excluded these covariates from overall habitat selection and suitability analyses, but separately modeled their relationships to sea duck occurrence and species counts over the measured subset of locations (see 'Relationships of sea duck occurrence and species counts to individual environmental covariates').
We standardized all variables using the seasonal 100 km² hexagonal grids used to calculate occurrence. We calculated distance values as the distance from the hexagon centroid to the feature of interest. For all other variables, we resampled the data using the mean value for each hexagon.
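A sketch of those two summary rules, assuming sf/terra objects (`coastline` as a line layer, `sst_raster` as a gridded covariate) in a projected coordinate system; the object and column names are illustrative.

```r
# Distance from hexagon centroids to a feature, and mean-resampling of a raster.
library(sf)
library(terra)

hex$dist_coast <- as.numeric(st_distance(st_centroid(hex), st_union(coastline)))
hex$sst_mean   <- terra::extract(sst_raster, vect(hex),
                                 fun = mean, na.rm = TRUE)[, 2]  # mean per hexagon
```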
Habitat suitability -to map habitat suitability across the study area for individual species and the multi-species assemblage in ecological space, we conducted a multivariate ordination of all habitat variables using a Hill-Smith principal components analysis (PCA) (Hill and Smith 1976), which allows the inclusion of both categorical and continuous variables. For each grid cell, we evaluated species-specific habitat suitability using Ecological Niche Factor Analysis (ENFA: Hirzel et al. 2001), which measures the distance of the cell from the center of the species' distribution (from presence-only data) in multivariate space. We then identified the top 10% and 25% of cells for each species based on suitability values, and overlaid these sets of cells to map multi-species suitability (i.e. the number of species for which each cell fell within either the top 10% or top 25% of suitability values across all available cells for that season).
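The ordination step can be sketched with ade4/adehabitatHS as cited above; here `env` is the hexagon-level covariate table (mixed categorical and continuous) and `pr` an assumed per-cell presence count for one species. The suitability score below is a simplified stand-in, using distance from the centre of the species' used cells in ordination space rather than the full ENFA-derived index.

```r
library(ade4)
library(adehabitatHS)

pc <- dudi.hillsmith(env, scannf = FALSE, nf = 4)   # PCA accepting mixed variable types
en <- enfa(pc, pr, scannf = FALSE)                  # marginality/specialization axes

# Stand-in suitability: (negative) distance of each cell from the centre of the
# species' used cells in PCA space; larger (less negative) = more suitable.
cent  <- colMeans(pc$li[pr > 0, ])
suit  <- -sqrt(rowSums(sweep(pc$li, 2, cent)^2))
top10 <- suit >= quantile(suit, 0.90)               # top 10% of available cells
```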
Habitat selection and partitioning -to characterize species-level habitat partitioning across the suite of habitat variables available for the full study area, we compared niche position and breadth among species using an Outlying Mean Index (OMI) analysis (Dolédec et al. 2000) using the 'adehabitatHS' package (Calenge 2006). Briefly, OMI calculates the direction of maximum marginality of occupied sites (from presence-only data) relative to available habitat in ordination space, allowing comparisons of niche position and breadth among co-occurring species. The position of each habitat characteristic on the first axis of the OMI represents the marginality of occupied sites on that variable, with positive scores indicating higher-than-average values and negative scores indicating lower values. In ecological terms, greater positive or negative OMI values indicate more specialized niches, while values close to zero indicate more generalist use of available habitats. OMI does not assume specific resource selection functions, and allows differences in individual niche selection to be taken into account when describing the distribution of a group of animals. Using the same PCA ordination we used to map habitat suitability, we conducted OMIs for each season on all individuals, then we averaged the scores of individuals of each species on the first OMI axis to calculate niche position for that species, and calculated the 95% confidence interval of the mean as a measure of niche breadth. To assess habitat partitioning among species, we examined the overlap of individual species' niches, with less overlap indicating greater partitioning on that variable.
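A sketch of the OMI step follows, using ade4's `niche()` (the OMI implementation underlying the adehabitatHS tools cited above). `spp` is an assumed cells-by-individuals presence table, and `species_of_individual` an assumed factor mapping each individual to its species.

```r
library(ade4)

omi   <- niche(pc, as.data.frame(spp), scannf = FALSE)  # OMI on the shared PCA
axis1 <- omi$li[, 1]                 # one score per individual (column of spp)

# Niche position = mean of a species' individual axis-1 scores;
# niche breadth = half-width of the 95% CI of that mean.
pos <- tapply(axis1, species_of_individual, mean)
ci  <- tapply(axis1, species_of_individual,
              function(x) qt(0.975, length(x) - 1) * sd(x) / sqrt(length(x)))
```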
Importance of environmental covariates -since our ordination analyses were primarily descriptive, we also quantitatively assessed relationships of species counts to specific environmental covariates using generalized additive models. We modeled the number of species occurring in a given cell (response) as a function of various subsets of smoothed environmental covariates measured across the full study area along with a smoothed latitude-longitude interaction to account for spatial structure in the data (predictors), and compared models in an information theoretic framework. We ran zero-inflated generalized additive models in the 'mgcv' package (Wood 2011, Wood et al. 2016). We selected smoothing parameters using residual maximum likelihood, and used a binomial distribution with a logit link to model zero-inflation and a truncated Poisson distribution with a log link to model the count data. We began with the global model and dropped covariates using backward stepwise selection until dropping additional covariates no longer reduced the Akaike's information criterion (AIC) value of the resulting model. We visually verified the fit of the final models using quantile-quantile plots. Full model selection results are given in Supplementary material Appendix 1 Table A3.
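A hedged sketch of one such model using mgcv's `ziplss` family (Wood et al. 2016), which pairs a truncated-Poisson count model with a binary presence model, matching the description above; the covariate names and the exact smooths in each linear predictor are assumptions.

```r
# Zero-inflated GAM for species counts per hexagon; hex_df is an assumed data frame.
library(mgcv)

m <- gam(list(n_species ~ s(dist_coast) + s(sst) + s(salinity) + s(npp) +
                          s(depth) + s(slope) + te(lon, lat),   # count part (log link)
              ~ s(dist_coast) + s(npp) + te(lon, lat)),         # presence/zero part
         family = ziplss(), method = "REML", data = hex_df)     # REML smoothing

summary(m)   # deviance explained and smooth-term significance
qq.gam(m)    # quantile-quantile check of model fit
```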
For covariates that were not available across the entire study area, we used forward stepwise selection to test whether these additional data improved the explanatory power of our models. For each additional variable, we ran the top model from the full dataset with and without the covariate on a reduced dataset including only grid cells for which the additional covariates were available. We considered additional variables to be useful if their inclusion improved the AIC value of the resulting model by ≥ 2 points (Burnham and Anderson 2004).
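The forward test can be sketched as a pair of fits on the reduced dataset, with the ≥ 2-point AIC rule applied directly; the abbreviated formulas below are placeholders, not the project's selected model.

```r
# Does tidal current velocity improve the top model where it is measured?
library(mgcv)

sub <- subset(hex_df, !is.na(tidal_velocity))

m0 <- gam(list(n_species ~ s(sst) + s(npp) + te(lon, lat), ~ te(lon, lat)),
          family = ziplss(), method = "REML", data = sub)
m1 <- gam(list(n_species ~ s(sst) + s(npp) + s(tidal_velocity) + te(lon, lat),
               ~ te(lon, lat)),
          family = ziplss(), method = "REML", data = sub)

AIC(m0) - AIC(m1) >= 2   # TRUE -> retain the additional covariate
```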
Results

Habitat suitability: breeding -during the breeding season, suitable habitat areas for different species were relatively distinct in space, particularly by latitude (Fig. 1a-d). Surf scoter habitat was primarily located to the east of Hudson Bay, white-winged scoter habitat to the south and west, black scoter habitat at higher latitudes on both sides of the bay, and long-tailed duck habitat primarily to the north and northwest of the bay. Areas of high multi-species breeding suitability occurred primarily in the Barrenlands directly to the west of Hudson Bay, as well as in small areas of northern Quebec (Fig. 1e-f). No grid cells were within the top 10% of highly suitable habitats for all four species (Fig. 1e).
Habitat selection and partitioning: breeding -during breeding, species partitioned habitat primarily according to precipitation, minimum temperature, proximity to marine coastlines, elevation and land cover type (Fig. 2). Long-tailed ducks generally preferred areas nearer to coastlines, with lower temperatures, precipitation and elevation, than did scoters (Fig. 2). Among scoter species, surf scoters nested at greater elevations, precipitation levels and temperatures. White-winged scoters nested comparatively further from marine coasts in areas with greater minimum temperatures, while black scoters nested closer to lakes (Fig. 2). In general, most species selected for areas with lower-than-average precipitation, greater minimum and lower maximum temperatures (i.e. narrower temperature ranges), and lower slopes (Fig. 2). Landcover classes were useful for distinguishing habitat niches: long-tailed ducks nested exclusively in areas with polar landcover types, black scoters occupied areas of polar and taiga cover, white-winged scoters used both taiga and temperate/sub-polar cover, and surf scoters used exclusively temperate/sub-polar landcover categories.
Habitat suitability: non-breeding -during winter, suitable habitats were relatively similar across all species (Fig. 3a-e). Habitats suitable for black and surf scoters extended over a greater latitudinal range (Fig. 3a-b), while white-winged scoters and long-tailed ducks had more suitable habitat areas in the Great Lakes (Fig. 3c-d) and common eiders were largely limited to the northeastern portion of the multi-species range (Fig. 3e). Areas of multi-species overlap were concentrated along the mid-Atlantic coast of the United States (Fig. 3f-g). The waters of southern New England, including Cape Cod Bay and Nantucket Shoals, occurred in the top 10% of available habitats for all five species (Fig. 3f).
During migration, habitat suitability varied by season. Suitable pre-breeding staging habitats were concentrated in the St Lawrence Estuary, Gulf of St Lawrence, Great Lakes and James Bay. Suitable habitats for white-winged scoters and long-tailed ducks were located predominantly in the western part of the study area (Great Lakes and James Bay; Fig. 4c-d), while black and surf scoters and common eiders utilized predominantly eastern habitat areas (St Lawrence Estuary, Gulf of St Lawrence, Nova Scotia and northern New England; Fig. 4a, b, e). Highly suitable habitat for all species co-occurred in the southern portions of the Gulf of St Lawrence and along the eastern coasts of Nova Scotia and New Brunswick (Fig. 4f-g). In contrast, during post-breeding migration and molt, only common eiders extensively used the same habitat areas as during pre-breeding (Fig. 5e). The other four species (black, surf and white-winged scoters and long-tailed ducks) co-occurred along the shorelines of southern Hudson Bay and northern James Bay, particularly around Sanikiluaq, as well as along the Labrador coast and southern Ungava Bay (Fig. 5a-d).
Habitat selection and partitioning: non-breeding -throughout the non-breeding period, sea ducks selected nearshore, shallow-water habitat areas (Fig. 6). Within the occupied habitat area, species showed the greatest amount of habitat partitioning on dynamic habitat variables (net primary production, sea surface temperature and salinity) and bottom slope (Fig. 6).
During winter, species partitioned habitat based on productivity, temperature and bottom slope (Fig. 6a). Black and surf scoters showed strong positive selection for net primary production, weak positive selection for sea surface temperatures, and moderate negative selection for bottom slope. In contrast, white-winged scoter, long-tailed duck and common eider showed weak negative selection for net primary production, strong negative selection for sea surface temperature, and weak positive selection for bottom slope. All species except long-tailed duck selected positively for salinity.
Migratory habitat selection varied between seasons and species. During pre-breeding migration, selection for habitat characteristics was generally weak (i.e. close to zero), with the exception of net primary production, which was strongly positive for all species. Habitat partitioning was limited, with considerable between-species overlap and within-species variation (Fig. 6b). All species showed weak negative selection for sea surface temperature, weak positive selection for salinity (with the exception of long-tailed ducks), and weak or varying positive selection for bottom slope (with the exception of long-tailed ducks). During post-breeding migration, sea ducks displayed relatively strong selection on habitat covariates, with a high degree of partitioning and limited overlap among species (Fig. 6c). All species except long-tailed duck selected strongly for net primary production. For sea surface temperature, salinity and bottom slope, selection varied among species from weak to strong and positive to negative.
Importance of environmental covariates -the best model for species counts during breeding included precipitation, minimum and maximum temperatures, distance to lake, distance to large lake, distance to coast, elevation, landcover and the latitude-longitude interaction, and explained 90.4% of total deviance (Table 1a). The best models for winter and pre-breeding staging species counts across all sites included all covariates (distance to coast, sea surface temperature, salinity, net primary production, depth, slope and the latitude-longitude interaction) and explained 78.5% and 76.2% of total deviance, respectively (Table 1b-c). The best model for species counts during post-breeding migration included all covariates except slope and explained 79.7% of total deviance (Table 1d).
For the subset of the study area where tidal current velocity, aquatic vegetation presence and bottom substrate were measured, models of sea duck species counts were improved in all seasons by the inclusion of the additional covariates (Table 1b-d). Tidal current velocity appeared in all highly supported models for winter and pre-breeding and in two of three highly supported models for post-breeding, with a positive relationship to species counts (Table 2). Aquatic vegetation presence was a positive predictor in the top models for winter and pre-breeding and in one of three highly supported models for post-breeding (Table 2). Substrate appeared in one of three highly supported models for winter species counts, with sand, sediment, silt and hard-bottom habitats favored over clay and gravel (Table 2).
Discussion
Examining habitat use in a seasonal, multi-species framework allowed us to identify covariates that affected occurrence across a suite of similar species, as well as assess differences in habitat associations among five species of sea ducks. We found that both habitat selection and interspecific habitat partitioning varied by season, with weak selection and strong partitioning during the breeding season, strong selection and partitioning during post-breeding migration and molt, moderate selection and partitioning during winter, and weak selection and partitioning during pre-breeding migration.
Habitat suitability -during the breeding season, most sea ducks in our study occupied nesting habitats in areas of subpolar vegetation near or above the northern limits of boreal forests. Individual species differed substantially in their use of breeding habitats. Some areas, including large portions of the Barrenlands west of Hudson Bay, were among the top 25% of suitable habitats for all four species. Our results expand on a previous analysis of black and surf scoters by Reed et al. (2017), which also identified the Barrenlands as an area of high multi-species importance. The areas identified by our models as being of highest multi-species importance, including the Barrenlands and parts of northern Quebec, fall outside the boundaries of most North American breeding bird surveys, including the Waterfowl Breeding Population and Habitat Survey (Roy et al. 2019); however, the Barrenlands are currently being targeted for additional pilot surveys to quantify the extent of use by nesting sea ducks (Reed et al. 2017). In our analysis, no sites were in the top 10% of suitable breeding habitats for all species. This suggests that optimally conserving key breeding habitats would require targeting habitats with high single-species suitability as well as multi-species hotspots.

Hotspots of multi-species non-breeding habitat suitability were located in southern New England south to the Chesapeake Bay (winter); the St Lawrence Estuary, the Gulf of St Lawrence and southern James Bay (pre-breeding); and Hudson Bay and northern James Bay (post-breeding). These areas correspond to key locations identified in other studies (Silverman et al. 2013, Lamb et al. 2019). Multi-species overlap was high during winter, but decreased during migratory periods as migration routes diverged. Long-tailed ducks and white-winged scoters tended to use more inland habitats during pre-breeding migration. During post-breeding migration and molt, long-tailed ducks remained farther north than other species, and dresseri common eiders used unique coastal staging areas. Thus, while multi-species hotspots are useful for identifying key migratory habitats, single-species models may be necessary to fill gaps in important habitats.
It is important to note that we assessed habitat suitability only within areas used by tracked individuals, and not across the full range of each species. Although non-breeding areas used by tracked birds generally corresponded to known eastern North American non-breeding ranges of their respective species, breeding areas occupied by tracked birds differed in some cases from published ranges (Sea Duck Joint Venture 2015). Since many sea duck species have distinct subspecies or subpopulations that winter in either eastern or western North America, further study would be useful to clarify whether these wintering subpopulations also occupy distinct segments of the breeding range, which might help to explain why some known breeding areas were not used by individuals in our study.
Habitat selection and partitioning -overall, the four species included in our analysis of breeding habitat preferred to nest in relatively flat areas near large lakes. This supports a previous predictive model of scoter breeding habitat (Reed et al. 2017), in which lake proximity and area were key predictors of scoter breeding locations. Our results also suggest that sea ducks select for areas of lower annual precipitation and narrower ranges of annual temperature. These milder climate conditions could improve both nest cover and foraging opportunities on invertebrate prey during the breeding season, and may also reduce the energy costs of incubation and the danger of nest and duckling loss from exposure to cold and wet conditions (Mallory 2015).
During non-breeding, geophysical aquatic habitat features that were consistently important across species and seasons included shallow water depths and nearshore locations. These results are consistent with previous studies based on aerial survey data suggesting that non-breeding sea ducks are closely confined to nearshore waters (Silverman et al. 2013, Winiarski et al. 2014, Smith et al. 2019). The sea ducks we investigated generally selected for waters with relatively high productivity across all seasons. Most species also preferred lower temperatures, higher salinity and greater bottom slopes, although selection varied between seasons and species. Previous studies of survey data from the same five species during winter by Zipkin et al. (2010) and Silverman et al. (2013) showed similar species-specific relationships with sea surface temperature and bottom slope to the patterns we observed; however, their models did not include either salinity or net primary production.
Niche partitioning among species was strong during breeding. Of the four species included in our breeding habitat analysis, long-tailed ducks and surf scoters were the most dissimilar, with black and white-winged scoters occupying intermediate habitats. For several habitat variables, including temperature, precipitation and elevation, long-tailed ducks and surf scoters occupied the maximum and minimum values among the four species included in our analysis. A previous assessment of sea duck breeding habitat suitability by Reed et al. (2017) grouped all three scoter species and did not include long-tailed ducks; our work builds on this analysis by suggesting additional differentiation between scoter species on several variables. We also observed between-species habitat partitioning during winter, with black and surf scoters preferring higher productivity, higher temperatures and flatter bottom slopes than the other three species.
Our analysis suggests a contrast in habitat selection and partitioning between pre- and post-breeding migrations. Niche specialization and habitat partitioning were greater during post-breeding migration and molt than during pre-breeding migration. The strong and consistent selection for preferred habitats we observed during post-breeding migration and molt reinforces previous work suggesting the importance of this period of the sea duck annual cycle (Lamb et al. 2019), during which sea ducks molt their flight feathers and are temporarily flightless (Salmonsen 1968). To complete the molt, birds must elevate their rates of energy consumption despite their limited mobility (Scott et al. 1994). For female sea ducks that have successfully raised young, post-breeding migration follows the energetically demanding nesting period (Mallory 2015); therefore, they are also using staging and molt sites to replenish depleted resources to complete long-distance migration to wintering areas. Thus, access to habitats with abundant resources is particularly crucial during post-breeding migration and molt, restricting the range of suitable habitats available. In contrast, pre-breeding staging areas are used for shorter periods of time and do not include a flightless period (Johnsgard 2010).
Habitat use during migration is also subject to environmental and phenological constraints. During pre-breeding migration, sea ducks time their northward movement to the spring thaw in order to arrive at breeding sites as soon as possible after snowmelt (Takekawa et al. 2011); thus, the timing and spatial extent of sea ice breakup may limit access to aquatic habitats. Indeed, factors contributing to the likelihood of ice cover (sea surface temperature, salinity and tidal currents) were more important predictors of habitat selection during pre-breeding migration than during post-breeding migration and molt. Previous work on several species of eiders has demonstrated that sea ice plays a key role in structuring pre-breeding migration patterns (Mosbech et al. 2006, Petersen 2009), habitat use at stopovers (Solovieva 1999, Oppel et al. 2009) and prey composition and availability along migratory routes (Lovvorn et al. 2015). In addition, while pre-breeding staging sites are typically located along flyways between breeding and wintering sites, some segments of sea duck populations undertake additional migratory movements in order to molt at preferred sites (Salmonsen 1968, Lepage et al. 2020) and show high site fidelity to molt locations (Phillips and Powell 2006, Lepage et al. 2020). The relatively stronger habitat preferences and selection exhibited by all species during post-breeding migration in our study are consistent with patterns of avian migration in multiple taxa, in which post-breeding migrants prioritize maximizing energy intake, while pre-breeding migrants prioritize speed and timing to ensure early arrival at breeding sites (Morris et al. 1994, La Sorte et al. 2016).

Table 1. Model selection values for sea duck occurrence and species counts during a) breeding, b) wintering, c) pre-breeding staging and d) post-breeding staging and molt, 2002-2017. The best model for the full dataset was chosen using backward stepwise selection, and additional non-breeding habitat covariates (tidal current velocity, aquatic vegetation presence and substrate) were added to the top model for the subsets of locations for which they were available. Bold text denotes final models that were highly supported.

While we were able to characterize some important aspects of habitat partitioning in our study, additional differences in habitat use may have occurred below the spatial (100 km²) and temporal (seasonal averages) scales of our analysis (Holm and Burger 2002, Mohd-Azlan et al. 2014, Péron et al. 2019). Individual species may respond differently to changes in density-dependent competition, resource availability and environmental conditions at fine spatiotemporal scales, which would further allow them to partition resources in shared habitat areas. Additional study of the behavioral dynamics of mixed flocks of sea ducks could provide interesting insights into whether and how resource partitioning may occur even in areas where there is no apparent spatial partitioning of habitat.

Importance of environmental covariates -our analysis included several variables that were available for only some portions of the study area: tidal current velocity, aquatic vegetation presence and bottom substrate. Tidal current velocity and aquatic vegetation presence were positive predictors of species counts during all seasons. High tidal velocities may be associated with high primary productivity, and some sea duck species have been observed to forage in or near tidal currents (Holm and Burger 2002).
Previous studies have shown strong associations between waterfowl and seagrass, which is either frequent or incidental in the diet of several sea duck species and also serves as habitat for other aquatic prey (Kollars et al. 2017). Our results further suggest that aquatic vegetation may be a particularly critical food source during pre-breeding migratory staging. Bottom substrate was also a positive predictor of winter species counts, supporting previous studies suggesting that bottom sediments have an important influence on prey availability and foraging habitat selection for wintering sea ducks (Loring et al. 2013, Žydelis and Richman 2015). Developing these three data layers at a continental scale could help to improve modeling and prediction of sea duck occurrence.
Conservation recommendations -analyzing migration patterns in a multi-species framework can help identify key areas for conservation and restoration (Lamb et al. 2019); however, choosing sites within these areas and implementing habitat management activities requires a detailed understanding of habitat use and preferred habitat features. Areas of high multi-species suitability identified in our study could be useful targets for conservation and monitoring. Notably, our analysis of breeding habitat shows multi-species hotspots outside the range of traditional breeding surveys, suggesting that accurate assessment of population and habitat change may require expanding survey areas. The importance of climate characteristics to breeding habitat selection further suggests that these areas may be vulnerable to future changes in climate conditions, which additionally emphasizes the urgency of monitoring in these habitats.

In non-breeding habitat areas, our results suggest that management of aquatic habitats for sea ducks likely depends on the specific timing of species occurrence. Given the strong selection we observed during post-breeding migration, habitats preferred by multiple species during this period -shallow, nearshore environments with high productivity -may be high-impact targets for conservation or restoration. Conversely, weaker selection at pre-breeding migratory staging sites suggests that factors other than the measured covariates, such as varying sea ice cover or ephemeral resources, may be driving patterns of habitat use. Given that occurrence and intensity of use do not always reflect habitat quality (Winker et al. 1995), further studies of individuals occupying suitable and less-suitable sites could be used to establish whether conserving preferred habitats provides fitness benefits to sea ducks, and during which seasons habitat conservation would be maximally beneficial.

Finally, while large-scale telemetry studies such as ours can provide important insights into annual-cycle conservation strategies, they are often too cost- and labor-intensive to be practical for monitoring long-term impacts of future environmental and habitat change or the success of conservation measures. Thus, understanding how telemetry data can contribute to population models that incorporate other, more readily available datasets (Oppel and Powell 2010, Arnold et al. 2018) is crucial to developing cost-effective and feasible annual-cycle monitoring strategies for waterfowl. | 2020-09-03T09:11:39.976Z | 2020-08-18T00:00:00.000 | {
"year": 2020,
"sha1": "7161c2a747e9cd8435e787e58098008bac9c4efc",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ecog.05003",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "46ff5ae6c0898fe1010daff7def676214d8d93c5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
4590228 | pes2o/s2orc | v3-fos-license | Differences in opinions of occupational physicians on the required competencies by field of practice: results of an international Delphi study
Background The activities and work demands of medical professionals, including occupational physicians (OPs), fall into three categories: clinical, academic, and administrative. The work demands of an OP consist of these three categories plus additional specialty-specific roles and competencies. Research on the core competencies and skills required of OPs has identified high levels of consensus amongst OPs internationally; however, these opinions have not been compared between groups defined by area of practice. Furthermore, it has been identified that, to a large extent, it is academics who define the skills required of OPs. The aim of this study is to compare the opinions of OPs grouped by field of practice on the common core competencies required for occupational health (OH) practice, using results from an international survey. Methods An international modified Delphi study conducted among OPs, completed in two rounds (Rating-Round 1; Ranking-Round 2) using questionnaires developed from the specialist training syllabi of a number of countries and expert discussions. Respondents were categorised as Physician, Manager/Physician, and Academic/Physician, based on self-reported job titles and place of work. Results There was good agreement between the Physician and Manager/Physician groups, with the Academic/Physician group deviating the most. The top three and bottom three principal domains (PDs) were in good agreement across all groups. The top three were clinically based and would be considered core OH activities. The PDs with considerable intergroup variance were Environmental Issues Related to Work Practice and Communication Skills, categories which may reflect direct relevance and relative importance to the job tasks of the respective groups. Conclusion This study demonstrated general agreement between the three occupational groups. Academic/Physician opinions deviated the most, while good agreement was observed between the Physician and Manager/Physician groups. The findings of this study can help identify potential gaps in training requirements for OPs and be used as a stepping stone to developing training programmes that are reflective of practice and tailored for those predominantly undertaking these specific roles. Electronic supplementary material The online version of this article (10.1186/s12909-018-1139-9) contains supplementary material, which is available to authorized users.
Background
The range of activities required of medical professionals is well recognised. Previous research has shown that the professional work demands of physicians fall into three categories: clinical, academic, and administrative [1], and this diverse nature of work has been identified as an important driver in the selection of one's medical specialty [2]. Occupational Health (OH) practice is no exception. The work demands of an Occupational Physician (OP) consist of these three recognised categories and additional specialty-specific roles and competencies, including, for example, assessment of workplace hazards, knowledge of health & safety and disability legislation, and environmental assessments [3][4][5][6][7][8][9]. Research studies on OP competencies have been undertaken internationally [4,6,10]. Delclos et al. [4] demonstrated important differences in competencies and curricula between OH professional groups. In a European study, Macdonald et al. [6] demonstrated that advancements in OH practice were not aligned with the views of OPs on their training needs, which appeared to be focused on traditional core competencies. In a recent follow-up international study, it appears that consensus on common core competencies and priorities for OPs worldwide has not changed significantly, although it is acknowledged that the meaning and interpretation of what these competencies now encompass have evolved to accommodate the changing workplace and to include the concept of medical risk assessment [10]. A difference in opinion regarding competencies for OPs is recorded when the customer perspective (e.g. that of employers and employees) is examined [8]. Occupational Hazards to Health was consistently ranked as the most important competency by OPs [6,10], whereas customers of OH services considered Law and Ethics the most important skill for OPs [8].
Research is an increasingly important aspect of higher medical training for many doctors [1,[11][12][13][14], both in terms of teaching and educational supervision and in terms of research in context-specific areas [13][14][15]. Academic medicine plays an important role, and several initiatives have attempted to enhance the number of clinical academics and the quality of the research conducted while safeguarding service provision [15][16][17]. Working concurrently in clinical practice and academia offers opportunities for multidisciplinary and translational work, from practice to theory and vice versa, and to influence industry, policy and practice [18]. Clinical academics are often also involved in medical and non-medical teaching [18]. A recent report by the Medical Schools Council stated that "Clinical academics are responsible for delivering the undergraduate curriculum, inspiring and educating the next generation of doctors, and they contribute substantially to postgraduate medical training" [18]. In the UK, it has been reported that approximately 5% of medical consultants work as clinical academics [18].
The importance of good management skills is becoming increasingly apparent within health care [5]. It has been identified that management skill affects not only financial management but also the quality of care provided [5]. Differences in management skills and management approaches may impact quality of service in different ways, including possible effects on the stress and/or wellbeing of staff which, in turn, is directly related to the quality of care produced [19].
Occupational medicine (OM) can contribute significantly to good management in healthy enterprises. The OP's role is to protect and promote the health and work ability of workers. If OPs are to make a maximum contribution to employees' work ability and to health and safety at work, they need to have the appropriate skills. OM is distinctive from other medical specialties in that, in the UK, it is largely practised outwith the National Health Service (NHS) and in industry. While previous studies have examined the professional development of OPs in terms of practice competencies and curricula, they have not examined intra-OP group differences, specifically with regard to job title or predominant area of practice [4,6,10]. Furthermore, a report for the EEF (The Manufacturers' Organisation) and the Health and Safety Executive noted that "it is still largely academics who define the skills of the occupational physicians who will be employed by industry" [8]. Given the evolution of OH practice, with increasing focus on service provision and reduced emphasis on academic activities, we aim to compare occupational physicians' perspectives on their required competencies by field of practice, using results from an international survey [10]. The specific focus of this paper is on the differences expressed between OPs undertaking predominantly clinical, predominantly managerial and predominantly academic roles, the three main categories within the scope of OH practice. The findings of this study can help identify differences in opinions and potential gaps in training requirements for OPs.
Methods

Delphi questionnaires
An international modified Delphi study was conducted among OPs in various countries around the world [10]. The study was completed in two rounds using questionnaires developed from the specialist training syllabi of a number of countries, expert panel reviews and conference discussions. Details of the conducted Delphi study are described elsewhere [10]. The first round was the 'rating' round, and respondents were asked to indicate the relative importance of the included items [10]. The 'rating' questionnaire comprised 12 principal domains covering the different topic areas of OH practice, and within these were subsection items detailing specific competencies relating to each domain. The round 2 'ranking' questionnaire retained the same 12 principal domains from the first round but included new subsection items suggested by round 1 respondents [10].
Respondents were asked to rank the principal domains and their subsection items. Both questionnaires were circulated to the same key contacts, who were asked to distribute the online questionnaire to their networks using a SMART survey link [10]. Specialist OPs who received the link were invited to participate irrespective of whether they had taken part in round 1 or not. Information on the study was embedded at the beginning of each electronic survey, and participants were required to complete a consent question to proceed with survey completion.
Participants were asked to choose their area of practice (more than one option allowed) and to provide their job title. These were screened, and respondents were categorised into three main job categories after assessment of the self-declared job titles and places of work provided. If respondents were solely involved in clinical OH practice, they were categorised as Physicians; if they had a management title, they were labelled Manager/Physicians; and if they had an academic role, the Academic/Physician category was applied. Areas of current OH practice comprised work in a healthcare setting, for example a hospital (healthcare), public/private sector organisations (industry), participation in teaching or research (academic), or any work sector not covered by these (other).
Ethics approval to undertake this study was provided by the University of Glasgow, College of Medical, Veterinary & Life Sciences Ethics Committee [200130150].
Round 1
Respondents were first asked whether OPs should be competent in the principal domains, answering Yes, No, or Not Relevant. If they answered Yes, they were then asked to give each domain subsection item a separate score from 1 to 5 relating to the importance of the subject. A score of 1 indicated that the item was of least importance and 5 that it was absolutely necessary. The data were analysed using SPSS Statistics 21 [20]. The 12 principal domains were treated separately. For each item in each of these subsections, the scores were averaged across all respondents and per job-title grouping. Analyses were carried out to identify possible variations in responses between OPs identified as having different roles/titles. These groups comprised Physicians, Manager/Physicians, Academic/Physicians and Trainees. Comparisons of the relative importance to respondents of the 12 principal domains per job grouping were made with the Wilcoxon signed-rank test.
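Although the original analysis was run in SPSS, the averaging and paired comparison can be illustrated in R; `ratings` is an assumed long-format table with columns `respondent`, `group`, `domain` and `score`.

```r
# Mean rating per respondent and domain, then a paired Wilcoxon signed-rank test
# comparing two domains within one occupational group (assumed column names).
library(dplyr)

domain_means <- ratings %>%
  group_by(group, domain, respondent) %>%
  summarise(resp_mean = mean(score), .groups = "drop") %>%
  arrange(respondent)   # keep respondent order identical across domains for pairing

phys <- subset(domain_means, group == "Physician")
wilcox.test(phys$resp_mean[phys$domain == "Good Clinical Care"],
            phys$resp_mean[phys$domain == "Health Promotion"],
            paired = TRUE)
```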
Round 2
Responses to the second questionnaire were analysed by summing the rank orders to produce a mean score for each item within each of the 12 domains. As some sections had as many as 12 items and some as few as two, the mean scores were standardised to a 1-10 scale to allow comparison of the relative importance of items in different subsections [10]. Subsection mean standardised scores were subsequently weighted using a scale from 1 to 12 based on the ranking order of their respective principal domain [10].
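A sketch of that standardisation and weighting, assuming `item_scores` holds the mean rank score per item (`mean_rank`), its domain, and the domain's overall rank converted to a weight from 1 to 12 (`domain_weight`); the min-max rescaling formula is an assumption consistent with mapping each domain's scores onto a 1-10 scale.

```r
# Rescale each domain's mean item scores to 1-10, then weight by domain rank.
library(dplyr)

rescale_1_10 <- function(x) 1 + 9 * (x - min(x)) / (max(x) - min(x))

item_scores <- item_scores %>%
  group_by(domain) %>%
  mutate(std_score = rescale_1_10(mean_rank)) %>%
  ungroup() %>%
  mutate(weighted_score = std_score * domain_weight)  # domain_weight in 1..12
```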
Demographics
In total, 336 responses were received in round 1 of the Delphi survey and 232 in round 2 [10]. After removing responses with missing job descriptions, 332 and 232 respondents remained in rounds 1 and 2, respectively, for this analysis. In both rounds the majority of respondents were Physicians (69% (n = 228) and 71% (n = 165), respectively), followed by Manager/Physicians (18% (n = 60) and 19% (n = 43), respectively), then Academic/Physicians (12% (n = 39) and 10% (n = 23), respectively), with the Trainee group contributing less than 2% (n = 5 in round 1; n = 1 in round 2) of the responses. Respondent demographic details, area of practice, and years of experience by job group are shown in Table 1. The sample size of the Trainee group was insufficient to draw valid conclusions, and this group was excluded from further analysis.
Across all analysis groups the majority of OPs were male, and the majority were in the 45-64 age range for all professional groupings (Table 1). In terms of experience, the Academic/Physician group had the most years of experience, followed by the Manager/Physician, Physician and, last, the Trainee group. Area of practice followed the pattern of the groupings, with the majority of the Physician and Manager/Physician groups split between the healthcare and industry sectors and the majority of the Academic/Physician group in the academic sector.
Statistical analysis did not identify any statistically significant differences in the distributions of gender (p > 0.05), age group (p > 0.05), job practice (p > 0.05), or years of experience (p > 0.05) between round 1 and round 2 respondents within each occupational group.
Round 1-rating
All 12 domains were considered important (90% and above 'yes' response) by all the professional groups (see Additional file 1: Table S1). A small number of OPs indicated that some categories were not relevant competencies for their field of practice. 'Teaching and Educational supervision' was not considered a relevant competency by 5% of the Physician group; 'Management skills' was similarly not considered relevant by 5% of the Academic/Physician group; and 'Research methods' was considered not relevant by 7% of the Manager/Physician group.
Respondents were asked to rate each subsection of the principal domains on a five-point Likert scale ranging from least important (rating: 1) to absolutely necessary (rating: 5). For each principal domain the subsection scores were averaged, and an overall principal domain average was then estimated (Table 2). For all three groups, 'Good Clinical Care' was the principal domain with the highest score in terms of importance (Table 2). 'Health Promotion' scored the lowest for the Physician group and was joint lowest with 'Teaching and Educational supervision' for the Manager/Physician group. 'Teaching and Educational supervision' was also the lowest-scoring domain for the Academic/Physician group. Physicians and Manager/Physicians had the best agreement when principal domains were ranked by average rating score (Table 2), while Academic/Physicians diverged from the other two groupings.
Intra-group rating
Comparisons of the relative importance of the 12 principal domains to respondents within each group were made with the Wilcoxon signed-rank test. The results of these tests are shown in Tables 3, 4, and 5. For all three groups, 'Good Clinical Care' was always considered most important when compared with the other principal domains. For the Physician and Manager/Physician groups the domain of 'Health Promotion' was always considered least important, while 'Environmental issues related to work' and 'Assessment of disability & fitness for work' were often not considered as important as the other domains (Tables 3 and 4). For the Academic/Physician group the domain almost always considered least important was 'Assessment of disability & fitness for work' (Table 5), while the 'Environmental issues related to work' and 'Health Promotion' domains were often considered less important than the other domains.

Round 2-ranking

There were no statistically significant differences in the distributions of gender, age group, job practice, or years of experience between the respondents of the first and second rounds [10]. The standardised mean rank score was used to obtain the overall rank of the principal domains (Additional file 1: Table S2), and the radar chart in Fig. 1 demonstrates the agreement/disagreement observed in the principal domain ranking for the three groups. Points closer to the centre indicate greater importance. Figure 1 shows that there is good agreement between the Physician and Manager/Physician groups; the group deviating the most is the Academic/Physicians. The top three and bottom three principal domains are in good agreement across all groups. The principal domains with the most variance were 'Communication Skills' (ranked fourth for the Physician and Manager/Physician groups but eighth for the Academic/Physician group); 'Environmental Issues related to Work Practice' (ranked in 6th-7th place for Physicians, 9th for Manager/Physicians and 4th for Academic/Physicians); 'Health Promotion' (ranked 8th for the Physician and Manager/Physician groups but 5th for the Academic/Physician group); and 'Clinical Governance/Clinical Improvement' (ranked 9th for the Physicians and the Academic/Physicians but 7th for the Manager/Physicians).
The principal domain subcategory ranking reflects what each group considered the highest principal domain. For Physicians and Academic/Physicians the subcategory 'B1. Understand and apply the principles of risk assessment-i.e. recognition of potential hazards in the work environment, evaluating risks and providing advice and information on control measures' was ranked highest (Additional file 1: Table S3). Furthermore, these two occupational groups each had four subcategories from the 'General principles of assessment & management of occupational hazards to health' domain (B1, B3, B7 and B2) featured in their top five subcategories. For the Manager/Physician group the subcategory 'A2. Take and analyse a clinical and occupational history including an exposure history in a relevant, succinct and systematic manner' was ranked as the most important. The top five subcategories for the Manager/Physician group were from the 'Good Clinical Care' domain (A2, A1, A4, A3), apart from one, 'C1. Assessing and advising on impairment, disability and fitness for work' (Additional file 1: Table S2). The top ten subcategories demonstrate considerable variability: four of them (B1, A2, B2, C1) feature within the top ten, in various rankings, of all three groups, while six (B5, A4, A3, A8, A5 and A6) feature in the top ten of only one group (Additional file 1: Table S3).
Summary of findings
In this study, we compared OPs' views on competencies by field of practice. By consensus, all the competency domains were regarded as important by respondents, with 'yes' scores of 90% and over in all 12 identified domains. In general, there was good agreement between the Physician and Manager/Physician groups, and the group deviating the most was the Academic/Physicians. The top three and bottom three principal domains were in good agreement across all groups. All groups agreed on the top three ranked principal domains, which were all clinically based and would be considered core OH activities, namely 'General principles of assessment & management of occupational hazards to health', 'Good Clinical Care' and 'Assessment of disability and fitness for work'. There was similar agreement in the bottom three ranked principal domains, which included management and academic related competencies ('Management skills', 'Teaching & Educational Supervision', 'Research methods'). The principal domains with considerable intergroup variance were 'Environmental Issues Related to Work Practice' and 'Communication Skills'. The variance in the first of these domains may reflect the direct importance of environmental issues for understanding and assessing exposures and impacts for research (Academic/Physicians) and practice (Physicians) purposes; OPs with managerial roles are likely to be further removed from direct pathways of exposure and impact, which translates into the lower rank. The second can be explained by the fact that Physicians and Manager/Physicians are in more 'front facing' roles, regularly engaging with a range of stakeholders (including employees, employers, administrative staff and other clinicians within the multidisciplinary OH team), for which effective communication is essential. Furthermore, since their work is fundamentally clinical, and with communication and the clinician-patient relationship increasingly recognised as important determinants of patient satisfaction [21] and improved clinical outcomes [22,23], these two groups are required to demonstrate advanced competence and skills in this domain and to ensure continuing development of this skill throughout their careers [24][25][26][27]. The analysis of the subsection priorities demonstrated that for Physicians and Academic/Physicians 'Understanding and applying the principles of risk assessment' was ranked the highest subcategory, whereas the Manager/Physician group had a more operational, service-delivery orientated focus, with 'Taking and analysing a clinical and occupational history including an exposure history in a relevant, succinct and systematic manner' ranked as the most important category.
No obvious bias was observed amongst managers and academics towards the competency set most applicable to their own practice, with 'Management skills' ranked 10th by the Manager/Physician group and 'Research methods' ranked 12th by the Academic/Physician group.
The principal domain of 'Research Methods' was the least important category across all three OP professional groups (Table 2) and was considered not relevant by 7% of the Manager/Physician group. A recent study by Heikkila et al. (2015) of Finnish doctors demonstrated that the 'opportunity to carry out research' was one of the least favourable motives for choosing a specialty, and that this was less important for female doctors [2]. Hoving et al. [28] showed that "physicians are inclined to use evidence based medicine and believe that the use of evidence based medicine improved the quality and attractiveness of their work", and Salter and Kothari [29] state that practice based on sound research is often considered best practice and allows decision making to be part of a "logical, explicit, transparent, and measurable process". Our findings suggest that while respondents consider direct involvement in research to be of low priority, they acknowledge the importance of an evidence base in their clinical practice. This supports the need to maintain a strong academic base for the speciality.
Strengths and limitations
Strengths
Previous studies have assessed the professional development of OPs in terms of practice competencies and curricula, but to our knowledge this is the first study to examine inter-OP group differences by job/field of practice. Furthermore, consensus was derived from OPs working across a range of countries, both developed and developing, and with a large spectrum of expertise, from newly specialised to highly experienced OPs (0 to 50 years of experience). We also previously demonstrated that rank comparisons between continents were highly correlated, as were comparisons by age, gender and years of experience [10].
Limitations
The relatively low response rate is a limitation of our study, and earlier comparable studies have reported similar challenges [30][31][32]. Language barriers may have been an influencing factor. The stronger European response [10] may also bias the wider representativeness of our findings, but the questionnaire was distributed across a range of networks and the sources of response were beyond our control. The low number of trainee responses did not allow any meaningful conclusions to be drawn and could feasibly be explained by the distribution networks, whose members were more likely to be established OPs. A comparison of trainees' views on competency requirements with those of OPs experienced in different areas of occupational health practice could provide invaluable information on potential knowledge gaps in current curricula. It is important to acknowledge that there was a high degree of crossover in OH practice and that the categorisation process was based on self-reported job titles rather than on specific job roles and tasks. Although management functions may be the primary role of a Manager, they may also be involved in some teaching or research activities. Likewise, Academic/Physicians may predominantly undertake research and teaching but may have some management and clinical roles as well.
Comparison with previous studies
Given its novel perspective, there are no published studies for direct comparison with our study. Our findings are, however, consistent with those of other studies, notably the earlier European study of Macdonald et al. [6,13], where 'occupational hazards to health' was the highest ranked principal domain. In Macdonald et al. [6], however, 'research methods' was considered a higher priority, ranked fourth. 'Law & ethics', although ranked outside the top three in our study, ranked second highest in Macdonald et al. [6] and highest in a study of UK customers' views on required OH competencies [15]. The intra-group comparison results of our study showed that 'environmental issues related to work practice' was often least important within a group's respondents, and that 'health promotion' was least important for the Physician and Manager/Physician groups but varied for the Academic/Physician group. This is in line with the findings of the European study [6], which showed that environmental medicine was significantly less important and that health promotion showed greater variation across the subsections. It is difficult to compare the findings of the Delclos et al. [4] study due to design and classification differences. In that study, the competency skill sets reported most commonly were administrative/management (health & safety, legal, regulatory considerations), then professional practice (ethical considerations), followed by research [4].
Conclusion
This study has compared perspectives on the competencies required of OPs by field of practice. It has observed general agreement between the three OP professional groups. The group diverging the most in opinion is the Academic/Physicians, while good general agreement is seen between the Physician and Manager/Physician groups. Recognition of these differences in perspective by OP professional group is important for those directly involved in defining and developing competency standards, particularly if these tasks continue, to a large extent, to be undertaken by academics [8]. The findings of this study can help to fine-tune, following basic OP training, more focussed and tailored training programmes with emphasis on the bespoke skills and competencies required for these specialised fields (for example, more advanced communication training for the Physicians and Manager/Physicians, and more comprehensive environmental impact training for the Academic/Physician and Physician groups) that are reflective of practice. The international perspective facilitates scope for common training/curricula development at national, regional and international level. Further research opportunities could include qualitative intergroup evaluations exploring the reasons for higher and lower priority choices. | 2018-04-04T00:23:05.117Z | 2018-04-02T00:00:00.000 | {
"year": 2018,
"sha1": "013ba711748ccf9a1f1b13a5406944e7a8ab1221",
"oa_license": "CCBY",
"oa_url": "https://bmcmededuc.biomedcentral.com/track/pdf/10.1186/s12909-018-1139-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab916567719a0765290e8686b19357feb86860c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
119259818 | pes2o/s2orc | v3-fos-license | Current and Nascent SETI Instruments
Here we describe our ongoing efforts to develop high-performance and sensitive instrumentation for use in the search for extra-terrestrial intelligence (SETI). These efforts include our recently deployed Search for Extraterrestrial Emissions from Nearby Developed Intelligent Populations Spectrometer (SERENDIP V.v) and two instruments currently under development: the Heterogeneous Radio SETI Spectrometer (HRSS) for SETI observations in the radio spectrum and the Optical SETI Fast Photometer (OSFP) for SETI observations in the optical band. We will discuss the basic SERENDIP V.v instrument design and initial analysis methodology, along with instrument architectures and observation strategies for OSFP and HRSS. In addition, we will demonstrate how these instruments may be built using low-cost, modular components and programmed and operated by students using common languages, e.g. ANSI C.
Background
By far the most common type of SETI experiment is the search for narrowband continuous-wave radio signals originating from astronomical sources. These searches are based on a number of fundamental principles, many first described by Frank Drake in the early 1960s (Drake 1961). Paramount among them is the fact that sufficiently narrow signals are easily distinguishable from astrophysical phenomena and would thus be a reasonable choice for a deliberate beacon from an advanced intelligence. The spectrally narrowest known astrophysical sources of electromagnetic emission are masers, with a minimum frequency spread of about one kHz. Additional support for the possible preference for narrowband interstellar transmissions by ETI includes the immunity of narrowband signals to astrophysical dispersion and similarities to our own terrestrial radio communication systems. Further encouragement is provided by the existence of a computationally efficient matched filter for searching for narrowband signals, the Fast Fourier Transform. More recently, searches have begun targeting other signal types, such as broadband dispersed radio pulses (Siemion et al. 2010).
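The matched-filter property mentioned above is easy to demonstrate: a narrowband tone far below the noise per time sample stands out clearly in a long FFT power spectrum. A minimal illustration (all parameters are made up for the example):

```python
# A 0.05-amplitude tone in unit-variance noise: invisible per sample,
# obvious after a ~1 Hz resolution FFT over the whole band.
import numpy as np

fs, n = 1_000_000, 2**20                  # 1 MHz band, ~0.95 Hz bins
t = np.arange(n) / fs
voltage = 0.05 * np.cos(2 * np.pi * 123_456.7 * t) \
          + np.random.normal(size=n)

power = np.abs(np.fft.rfft(voltage)) ** 2
hits = np.flatnonzero(power > 20 * power.mean())
print(hits * fs / n)                      # recovered frequency in Hz
```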
Optical SETI, the name usually ascribed collectively to SETI operating at optical wavelengths, was first proposed in 1961 by Schwartz and Townes (Schwartz & Townes 1961) shortly after the development of the laser. Searches have been conducted for both pulsed emission, e.g. Howard et al. (2004), and continuous narrowband lasers, e.g. Reines & Marcy (2002). Pulsed optical SETI rests on the observation that humanity could build a pulsed optical transmitter (using, for example, a U.S. National Ignition Facility-like laser and a Keck Telescope-like optical beam former) that could be detectable at interstellar distances. When detected, the nanosecond-long pulses would be a factor of ∼1000 brighter than the host star of the transmitter during their brief flashes (Howard et al. 2004). Such nanosecond-scale optical pulses are not known to occur naturally from any astronomical source (Howard & Horowitz 2001).
Extant SETI Searches
Our group is involved in a variety of ongoing searches for signatures of extraterrestrial intelligence, spanning the electromagnetic spectrum from radio to optical wavelengths. The most publicly well known of these is our distributed computing effort, SETI@home (Anderson et al. 2002). Launched in 1999, SETI@home has engaged over 5 million people in 226 countries in a commensal sky survey for narrow-band and pulsed radio signals near 1420 MHz using the Arecibo radio telescope. SETI@home is currently operating over a 2.5 MHz band on the seven-beam Arecibo L-band Feed Array (ALFA). Participants in the project are generating the collective equivalent of 200 TeraFLOPs/sec and have performed over 1.4 × 10^22 FLOPs to date.
Another of our radio SETI projects, the Search for Extra-Terrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP) (Werthimer et al. 1995), is now in its fifth generation and is currently being conducted as a collaboration between UC Berkeley and Cornell University (Table 1). In June 2009 we commissioned SERENDIP V.v, the newest iteration of the three-decade-old SERENDIP program. This project utilizes a high-performance field programmable gate array (FPGA)-based spectrometer attached to the Arecibo ALFA receiver to perform a high-sensitivity sky survey for narrow-band signals in a 300 MHz band surrounding 1420 MHz. The SERENDIP V.v spectrometer analyzes time-multiplexed signals from all seven dual-polarization ALFA beams, commensally with other telescope users, effectively observing 2 billion channels across seven 3 arc-minute pixels. A copy of this instrument is currently deployed by the Jet Propulsion Laboratory on a 34-m Deep Space Network (DSN) dish, DSS-13, in Barstow, California.
Our optical pulse search (Lampton 2000) is based at UC Berkeley's 30-inch automated telescope at Leuschner Observatory in Lafayette, California. The detector system consists of a custom-built photometer, employing three photomultiplier tubes (PMTs) fed by an optical beamsplitter to detect the concurrent (within ∼1 ns) arrival of incoming photons across a wavelength range λ = 300-650 nm. This "coincidence" detection technique improves detection sensitivity by reducing the false alarm rate from spurious and infrequent pulses observed in individual PMTs. PMT signals are fed to three high-speed amplifiers, three fast discriminators and a coincidence detector (Figure 1), where detections are measured by a relatively slow (1 MHz) Industry Standard Architecture (ISA) counter card. The photometer features a digitally adjustable threshold level to set the false alarm rate for a particular sky/star brightness. During a typical observation, the telescope is centered on a star and detection thresholds are adjusted so that the false alarm rate is sufficiently low. Currently we record three types of events: single events, when an individual PMT output is greater than the voltage threshold; double events, when any two of the PMT outputs exceed the threshold in the same nanosecond-scale time period; and triple events, when all three PMTs concurrently exceed threshold. Voltage thresholds are set so that false triple events are very rare and false double events occur only a few times in a 5 minute observation. A duplicate of this instrument is in place at Lick Observatory near San Jose, California (Stone et al. 2005).
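The power of the coincidence technique can be seen with a back-of-envelope estimate: for independent PMTs with singles rate r above threshold and coincidence window τ, the accidental n-fold coincidence rate is roughly n·r^n·τ^(n−1). A sketch with illustrative numbers (not measured values for this photometer):

```python
# Accidental coincidence rates for three PMTs: n * r**n * tau**(n-1)
r = 1e3        # singles rate above threshold per PMT [Hz], illustrative
tau = 1e-9     # coincidence window [s]

doubles = 2 * r**2 * tau     # ~2e-3 Hz: under one per 5-minute observation
triples = 3 * r**3 * tau**2  # ~3e-9 Hz: essentially never by accident
print(f"doubles: {doubles:.1e} Hz, triples: {triples:.1e} Hz")
```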
New Instruments
Historically, the level of technology and engineering expertise required to implement a SETI instrument was quite high. As a result, SETI programs have been limited to just a handful of institutions. Our group is developing two new instruments possessing several advantages over previous generations of SETI instrumentation: an Optical SETI Fast Photometer for optical SETI and the Heterogeneous Radio SETI Spectrometer for observations in the radio. Both are constructed from widely available modular components with relatively simple interconnects. The designer logs into the instrument using Linux and programs in C, obviating the need for cumbersome interfaces (e.g. JTAG) and languages (e.g. VHDL/Verilog). Further, these instruments are scalable and easily upgradable by adding additional copies of commercially available parts, in contrast to the costly and time-consuming upgrades of previous instruments, which involved complete redesigns of PC boards and ASICs. Collectively these advances will enable much wider participation in SETI science.
Our next generation Optical SETI Fast Photometer (OSFP) is based on the same front-end optics and photodetectors as the original Berkeley OSETI instrument, but adds a flexible digital back-end based on the Center for Astronomy Signal Processing and Electronics Research (CASPER, see below) DSP instrument design system. The programmable FPGA-based digital back-end will allow us to improve sensitivity by implementing sophisticated real-time detection algorithms, capture large swaths of raw sampled voltages for diagnostics or centroiding and perform efficient rejection of interference based on pulse profiles.
Our newest radio SETI instrument, the Heterogeneous Radio SETI Spectrometer (HRSS), is also CASPER-based. HRSS will take advantage of the wide bandwidth capabilities of a high-speed analog-to-digital converter (ADC) paired with an FPGA to digitize, packetize, and transmit coarse channelized spectral regions to flexible, off-the-shelf CPUs and graphics processing units (GPUs) for fine spectroscopy and RFI rejection. This architecture will not only provide for economical entry into cutting-edge SETI research (Table 2, below), but its use of standard C programming on CPUs and GPUs will also make the DSP instrument internals accessible to students with only modest instrumentation experience. The HRSS architecture is highly scalable and inexpensive, paving the way for future spectrometers with very large bandwidths (many GHz) covering many beams simultaneously. The complete instrument system for both HRSS and OSFP, including digitization and packetization hardware, digital signal processing (DSP) algorithms and control software, will be made publicly available for students and researchers worldwide.
Open Source Hardware Infrastructure
All of the instruments discussed here take advantage of the open-source, modular DSP instrumentation framework developed by the Center for Astronomy Signal Processing and Electronics Research (CASPER) (Werthimer et al. 2011). This international collaboration seeks to shorten the astronomy instrument development cycle by designing modular, upgradeable hardware and a generalized, scalable architecture for combining this hardware into a signal-processing instrument. Employing FPGAs, chip-independent FPGA signal processing libraries, and packetized data routed through commercially available switches, CASPER instrument architectures resemble a Beowulf cluster, with reconfigurable, modular computing hardware in place of CPU compute nodes. Thus, a small number of easily replaceable and upgradeable hardware modules may be connected with as many identical modules as necessary to meet the computational requirements of an application, known colloquially as "computing by the yard." Such an architecture can provide orders of magnitude reduction in overall cost and design time and closely tracks the early adoption of state-of-the-art IC fabrication by FPGA vendors.
The Berkeley Emulation Engine (BEE2) system was CASPER's first attempt at providing a scalable, modular, economical solution for high-performance DSP applications (Chang et al. 2005). The BEE2 system consists of three hardware modules: the main BEE2 processing board, a high-speed ADC board for data digitization and an iBOB board primarily responsible for packetizing ADC data onto the Ethernet protocol. Communication between hardware modules takes place over standard 10 Gbit Ethernet (10 GbE) links, allowing for the relatively simple integration of commercial switches and processors.
The current generation Virtex-5-based ROACH board (Reconfigurable Open Architecture for Computing Hardware) replaces, but interoperates with, both the BEE2 and iBOB boards. ROACH includes a single Xilinx Virtex-5 FPGA (SX95T, LX110T or LX155T), four 10 GbE-CX4 ports, two ADC ports, up to 8 GB of DDR2 memory, 72 Mbit of QDR and an independent control and monitoring PowerPC processor. ROACH remains compatible with all current and next generation ADC boards.
All CASPER boards may be programmed via a set of open-source libraries for the Simulink/Xilinx System Generator FPGA programming language. These libraries abstract chip-specific components to provide high-level interfaces targeting a wide variety of devices. Signal processing blocks in these libraries, such as polyphase filterbanks, Fast Fourier Transforms, digital down converters and vector accumulators, are parameterized to scale up and down to arbitrary sizes, and to have selectable bit widths, latencies and scaling.
SERENDIP V.v
Over the last 30 years SERENDIP spectrometer development has closely tracked the Moore's Law growth of the electronics industry, with new spectrometers processing ever-larger bandwidths while achieving finer spectral resolution. SERENDIP V.v is the most powerful spectrometer yet built as part of the SERENDIP project. SERENDIP V.v was installed at Arecibo Observatory in June 2009 and operates commensally with other experiments on the ALFA multi-beam receiver. Currently, the spectrometer multiplexes beam-polarizations through a single-beam 200 MHz digital signal processing chain via a computer-controlled RF switch.
The SERENDIP V.v system architecture and dataflow are shown in Figure 2. ALFA signals for all 14 beam-polarizations are fed into an RF switch, with a single output fed into a high-speed ADC sampling at 800 Msps. An iBOB board mixes the sampled signal down to baseband, decimates to a 200 MHz bandwidth and transmits the serialized data stream to a BEE2 via a high-speed digital link. Processing on the BEE2 is split into four stages, each of which occupies a separate FPGA on the board. The data stream is (1) coarse channelized via a 4096-point polyphase filter bank (PFB); (2) matrix transposed by a corner turner to facilitate a second stage of channelization; (3) fine channelized using a conventional 32768-point Fast Fourier Transform (FFT); and finally (4) thresholded, in which each fine frequency bin (1.49 Hz wide) is compared against a scaled coarse-bin average to pick out fine bins of interest. Local averages are calculated per PFB channel by averaging the same data being fed to the FFT, in parallel with the transform. In this way, the total power in each PFB bin can be accumulated while the FFT is being computed (via Parseval's theorem).
The thresholding process triggers "hits" for fine/FFT bins whose power is greater than or equal to the threshold. For practical reasons, the number of hits reported per coarse/PFB bin is capped via a software-adjustable setting, usually set to report fine bins between 15 and 30 times the average power. The reported hits are assembled into UDP packets on board the BEE2 and transmitted to a host PC. The host PC combines spectrometer data with meta-information, such as local oscillator settings and pointing information, and writes the complete science data stream to disk. To date, SERENDIP V.v has commensally observed for approximately 900 hours. Analysis efforts are underway, in parallel, at both UC Berkeley and Cornell.
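The two-stage channelization and thresholding logic described above can be sketched in a few lines, with a plain FFT standing in for the hardware PFB and toy sizes in place of the 4096- and 32768-point transforms:

```python
# Two-stage channelization with per-coarse-bin thresholding; a plain FFT
# stands in for the hardware polyphase filter bank.
import numpy as np

coarse_n, fine_n, thresh = 64, 256, 20.0
x = np.random.normal(size=2 * coarse_n * fine_n).view(np.complex128)

# Stage 1: coarse channels (rows: time, columns: channel), then transpose
coarse = np.fft.fft(x.reshape(-1, coarse_n), axis=1).T

# Stage 2: fine FFT per coarse channel; threshold each fine bin against
# the channel's average power (obtainable in parallel via Parseval)
for c, stream in enumerate(coarse):
    fine = np.abs(np.fft.fft(stream)) ** 2
    for b in np.flatnonzero(fine > thresh * fine.mean()):
        print(f"hit: coarse bin {c}, fine bin {b}")
```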
While both SERENDIP V.v and SETI@home operate simultaneously and commensally on the same RF signal, SERENDIP V.v differs in the key respect that the computationally intensive Fourier Transform is performed internally, rather than through distributed computing. This forces the SERENDIP V.v spectrometer to use a much simpler search algorithm than SETI@home employs. However, since the SERENDIP spectrometer is collocated with the telescope, it has access to a much larger bandwidth. SERENDIP and SETI@home are thus complementary, in that together they can look with both a panoramic gaze across many MHz and with microscopic precision near the 21 cm "watering hole."

Fig. 2: SERENDIP V.v instrument architecture. Analog signals from the ALFA receiver, mixed down to IF, are fed to a computer-controlled switch. One copy of the input is relayed to the SETI@home data recorder and a time-multiplexed beam is sent to the SERENDIP V.v spectrometer. The spectrometer samples the incoming IF signal at 800 Msamples/sec, digitally down converts the data to a complex baseband representation, performs a two-stage channelization (yielding ∼1 Hz spectral resolution) and outputs over-threshold frequency channels to a host PC.
Heterogeneous Radio SETI Spectrometer
The HRSS instrument system bridges our previous radio SETI programs by connecting open-source FPGA-based signal processing hardware and software to an easily programmable, GPU-equipped multicore CPU back-end, thus achieving an economical, student-friendly SETI instrument. The low-cost, scalable architecture used in HRSS will enable more widespread deployment than previous instruments, potentially increasing both the sky and frequency coverage of the radio SETI search space. With previous instruments, difficulty in programming the hardware precluded implementing intricate algorithms directly in the real-time data flow. The flexibility of the CPU/GPU back-end of HRSS will readily enable arbitrarily sophisticated algorithms in the real-time processing pipeline, including dynamic interference rejection and immediate follow-up.
The prototype for HRSS is the existing Packetized Astronomy Signal Processor (PASP) (McMahon 2008), based on the CASPER iBOB. This reconfigurable FPGA design channelizes two signals, each of 400 MHz bandwidth (digitizing at 800 Msps), packetizes, and distributes the channels to different IP addresses using a runtime programmable schema. Figure 3 shows a block diagram of the PASP instrument. Two signals (e.g. two polarizations) are fed into an iBOB using a dual ADC board. Each polarization is sent through a PFB, which channelizes the streams. Individual channels are buffered into packets and sent out over 10 GbE links to a cluster of servers via a 10 GbE switch or a single backend server directly connected to the iBOB.
The PASP design is highly reconfigurable. The number of channels, the number of IP addresses, and the packet size can all be easily adjusted in the instrument's Simulink design. This design can support a variety of back-end processing options simply by adjusting these three parameters. The number of channels sets the size of the sub-band handled by each processing element. Dividing the 400 MHz band into 16 channels creates large 25 MHz sub-bands, which may require a faster server, but this can be balanced by increasing the number of channels and thereby reducing the size of the sub-bands and the processing demand. The number of IP addresses also controls the bandwidth each back-end server receives. In a 16-channel design with only 8 IP addresses, each IP will receive 2 channels (see the mapping sketch below). In a server with multiple processing elements (e.g. multiple CPU cores or GPUs), these channels can be processed in parallel. The FPGA portion of HRSS largely consists of a port of the PASP design to the new CASPER ROACH board, taking advantage of the larger FPGA, enabling a larger bandwidth and an improved interface. The iBOB can be difficult to interface with, requiring a JTAG connection to reprogram the board and a very limited shell program to interact with the FPGA. In contrast, ROACH provides a full Linux OS.
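At its core, the runtime-programmable distribution schema is a mapping from coarse channel number to destination IP address. One plausible round-robin realization of the 16-channel, 8-address example above (the addresses are hypothetical):

```python
# Round-robin mapping of coarse channels onto back-end IP addresses
n_channels = 16
ips = [f"10.0.0.{i + 1}" for i in range(8)]           # hypothetical
dest = {ch: ips[ch % len(ips)] for ch in range(n_channels)}
print(dest[0], dest[8])   # channels 0 and 8 share a back-end server
```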
Additional software running on connected CPUs/GPUs will finely channelize the sub-bands and identify possible events for further processing. This software will initially be developed in ANSI C to allow maximum portability. Once the C-based system is fully prototyped, we will optimize it for GPU hardware using specialized languages to extract more processing power from servers with graphics capabilities. We are investigating both OpenCL and CUDA as target languages. CUDA will provide excellent performance but can only be compiled for NVIDIA GPUs. OpenCL is designed to compile for generic CPU and GPU platforms, but it may not provide the performance efficiency of an architecture-specific language like CUDA. Figure 4 shows an example configuration of HRSS with a cluster of servers on the back-end: a PASP configured for 64 channels and 16 IPs, a 10 GbE switch, and a cluster of back-end servers. The reconfigurability of the PASP design makes the required size and computing power of the back-end processing cluster highly elastic, scaling from a single server to a cluster of high-powered servers.
As shown in Table 2, HRSS is extremely cost effective compared to other SETI spectrometers with ∼1 Hz spectral resolution. HRSS is less expensive than SERENDIP V.v, primarily because it uses a newer single-FPGA ROACH board paired with commodity computing hardware instead of an iBOB with a 5-FPGA BEE2 board. The HRSS architecture can be easily scaled up to process a 1.5 GHz dual polarization signal on a single ROACH board using currently available dual 3 Gsps ADCs and multiple fine channelization nodes. The cost of each additional 125 MHz/dual polarization module is about a factor of three less than the first module.
Optical SETI Fast Photometer
The forthcoming Optical SETI Fast Photometer (OSFP) is based on the same front-end optics and photodetectors as the original Berkeley OSETI instrument, but adds a flexible digital back-end based on the CASPER DSP instrument design system. This instrument will significantly improve our sensitivity to pulsed optical signals and lower some of the barriers to wider engagement in optical SETI searches. The digital back-end for the instrument, Figure 5, will be constructed from modular CASPER components, directly sampling the PMT outputs with two dual 8-bit, 1500 Msps ADC boards and using a single ROACH board for DSP. This board features a variety of interfaces for connection to a control computer, accommodating a range of experiment parameters. For high-threshold, low-event-rate searches, the ROACH's 100 Mbit Ethernet should be sufficient for data acquisition. For low thresholds, or characterization of instrument PMTs, the ROACH's 10 GbE interfaces can be used for transferring many events and/or large swaths of raw sampled voltages.
The programmable FPGA-based digital back-end will allow us to improve sensitivity by implementing sophisticated real-time detection algorithms. In our existing system at Leuschner Observatory, the detection algorithm is very simple: all three PMT signals must be above a programmable threshold to trigger an event. With OSFP, one can implement more sophisticated detection algorithms. For example, a multistage trigger could be implemented that requires the sum of the three digitized PMT outputs to exceed a threshold as well as requiring that the signal levels in the three streams be similar to each other. The ability to perform significant computations on the data streams in real time is a crucial aspect of this design. We envision searching for multiple signal types simultaneously, including weak pulse trains with repetition times from ns to ms and violations of Poisson statistics in photon arrival times (indicating a non-astrophysical source). False positive signals can also be efficiently rejected based on pulse profiles (a capability sorely lacking in the current threshold-based instrument).
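A sketch of the multistage trigger idea described above, requiring both that the summed PMT levels exceed a threshold and that the three streams carry comparable signal (the thresholds are illustrative, not instrument settings):

```python
# Multistage trigger: summed level over threshold AND balanced streams
import numpy as np

def trigger_indices(s1, s2, s3, sum_thresh=60.0, balance=0.5):
    total = s1 + s2 + s3
    hits = []
    for i in np.flatnonzero(total > sum_thresh):
        levels = np.array([s1[i], s2[i], s3[i]])
        # each PMT must carry a comparable share of the pulse
        if levels.min() > balance * levels.mean():
            hits.append(i)
    return hits
```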
The large amount of DRAM available on the ROACH board will enable buffering of raw PMT waveforms and triggered write-to-disk based on high-confidence events. Such a capability will enable detailed analysis of an event, including precise determination of pulse arrival times using centroiding. Upon detection of a coincidence event, a user-adjustable section of the corresponding waveform, along with microsecond time-tagging provided by a GPS one-pulse-per-second system, will be packetized and transmitted to a host computer over one of the ROACH's Ethernet interfaces. A parallel, streaming DSP design will enable the instrument to operate at 100% duty cycle with reasonable waveform buffers. Should a significant event be detected, the software system will automatically alert the observer to the possible signal detection and can optionally command the telescope to continue observing the sky coordinates at which it was pointed when the reported flash arrived.
In anticipation of the real-time computing capabilities of OSFP, we have performed preliminary simulations of several pulse detection algorithms. These simulations model the entire front-end of the system, including the optical beam splitter, PMTs, and ADCs. In initial work, it appears the optimal algorithm involves thresholding the cross-correlation of each pair of PMT waveforms, but we continue to evaluate tradeoffs in sensitivity and false alarm rate. Future simulations will allow us to improve our algorithms by incorporating more elaborate detection criteria.
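The pairwise cross-correlation detector found promising in these simulations might look as follows in outline, correlating each pair of waveforms over a short lag range and requiring all three pair correlations to exceed a threshold (a sketch, not the simulation code used):

```python
# Pairwise cross-correlation trigger over a small lag range
import numpy as np
from itertools import combinations

def xcorr_trigger(waveforms, max_lag=3, thresh=50.0):
    """waveforms: three equal-length 1-D arrays of PMT samples."""
    for a, b in combinations(waveforms, 2):
        peak = max(np.dot(a, np.roll(b, k))
                   for k in range(-max_lag, max_lag + 1))
        if peak <= thresh:
            return False          # every pair must exceed threshold
    return True
```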
Use of CASPER hardware and gateware for this instrument guarantees an upgrade path when faster ADCs become available, eventually allowing full Nyquist sampling of the PMT bandwidth. All optics and detector components for the existing front-end are available off-the-shelf from Hamamatsu and Edmunds Industrial Optics. The entire assembly can be constructed without special tools, and complete instructions and parts lists are available at http://seti.berkeley.edu/opticalseti.

Fig. 5: The OSFP signal chain. Incoming light is split among three photomultiplier tubes (PMTs) using optical beamsplitters. The PMT outputs are digitized directly using 1.5 Gsamp/sec ADCs and transmitted to a ROACH board for processing. Onboard the ROACH, voltage samples are copied into a 4 Gb DRAM ring buffer that feeds a programmable event detection circuit. When triggered, the ring buffer contents are read out to a host computer, capturing raw event data. The digital logic for the instrument is fully compatible with the CASPER open-source instrument design tool flow.
Acknowledgments
The Berkeley SETI projects are funded by grants from NASA and the National Science Foundation, and by donations from the friends of SETI@home. We acknowledge generous donations of technical equipment and tools from Xilinx, Fujitsu, Hewlett Packard and Sun Microsystems. | 2011-09-06T10:33:06.000Z | 2011-09-06T00:00:00.000 | {
"year": 2011,
"sha1": "6af4ece2a817e278b55676a393e36eea6021522d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6af4ece2a817e278b55676a393e36eea6021522d",
"s2fieldsofstudy": [
"Physics",
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
16036974 | pes2o/s2orc | v3-fos-license | Graded Majorana spinors
In many mathematical and physical contexts spinors are treated as Grassmann odd valued fields. We show that it is possible to extend the classification of reality conditions on such spinors by a new type of Majorana condition. In order to define this graded Majorana condition we make use of pseudo-conjugation, a rather unfamiliar extension of complex conjugation to supernumbers. Like the symplectic Majorana condition, the graded Majorana condition may be imposed, for example, in spacetimes in which the standard Majorana condition is inconsistent. However, in contrast to the symplectic condition, which requires duplicating the number of spinor fields, the graded condition can be imposed on a single Dirac spinor. We illustrate how graded Majorana spinors can be applied to supersymmetry by constructing a globally supersymmetric field theory in three-dimensional Euclidean space, an example of a spacetime where standard Majorana spinors do not exist.
Introduction
One of the key ingredients to a deep understanding of the mathematical concept of spinor fields has been the complete classification of all possible types of reality conditions that can be imposed on spinors in a given spacetime. If spinors are treated as ordinary fields, this classification of possible reality conditions, normally referred to as Majorana conditions, has been given in [1]. However, though this classification of Majorana conditions nicely extends to spinors treated as Grassmann odd valued fields, as is the case for example in supersymmetric theories, it turns out not to be complete. To see this, note first that the components of such Grassmann odd valued spinor fields are given by anticommuting supernumbers. Since a Majorana condition relates a spinor to its complex conjugate, extending the notion of a Majorana condition to such anticommuting spinor fields implies that one first has to extend the notion of complex conjugation to supernumbers. There is, however, an ambiguity in defining this extension, leading to at least two inequivalent notions of complex conjugation of supernumbers. These we will refer to as standard complex conjugation [2] and pseudo-conjugation [3], respectively. While standard complex conjugation essentially leads to the classification of Majorana conditions as given in [1], we show that pseudo-conjugation makes it possible to define a genuinely new type of Majorana spinor, which we will refer to as graded Majorana.
It should be pointed out that the existence of such reality conditions in the special case of four-dimensional Euclidean space has already been discussed in [4,5,6]. In this paper we will show how this special case is part of the wider and more general scheme of graded Majorana spinors which, as we shall see, are entirely complementary to standard Majorana spinors.
Pseudo-conjugation
Let us first briefly comment on the properties of standard complex conjugation and pseudo-conjugation, respectively. While the operation of standard complex conjugation on supernumbers is an involution, pseudo-conjugation in contrast is a graded involution. Denoting the operation of standard complex conjugation by * and pseudo-conjugation by ⋄ we thus have

$$z^{**} = z, \qquad z^{\diamond\diamond} = (-1)^{\epsilon_z}\, z. \qquad (2.1)$$

Here ε_z = 0 if z is an even (commuting) supernumber, and ε_z = 1 if z is odd (anticommuting). It is this property of pseudo-conjugation which will enable us later to define a new kind of Majorana spinor. Additionally, standard complex conjugation and pseudo-conjugation, respectively, satisfy the properties

$$(z + w)^* = z^* + w^*, \qquad (z + w)^{\diamond} = z^{\diamond} + w^{\diamond}, \qquad (2.2a)$$

$$(zw)^* = w^* z^*, \qquad (zw)^{\diamond} = z^{\diamond} w^{\diamond}. \qquad (2.2b)$$

Note that both types of conjugation reduce to ordinary complex conjugation on ordinary numbers. A general supernumber can be expanded in the generators ζ_i, i = 1, …, N, of a Grassmann algebra as

$$z = z_0 + z_i\, \zeta_i + \tfrac{1}{2!}\, z_{ij}\, \zeta_i \zeta_j + \cdots. \qquad (2.3)$$

Here the coefficients z_0, z_i, … are ordinary complex numbers. With respect to standard complex conjugation the generators will be taken to be real, i.e., ζ_i^* = ζ_i. However, imposing a similar reality condition on the generators using pseudo-conjugation would be inconsistent with Eq. (2.1). Instead, without loss of generality, we will impose on the generators the condition of Eq. (2.4) (see the illustrative pairing below). This requires the number N of Grassmann generators to be even or, as one normally considers in the context of supersymmetric theories, infinite. Note that ζ_i^{*⋄} = ζ_i^{⋄*}, from which it follows that standard complex conjugation commutes with pseudo-conjugation on arbitrary supernumbers.
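The explicit condition of Eq. (2.4) did not survive extraction. One generator pairing consistent with everything stated (antilinearity, the graded involution property, incompatibility with ζ_i^⋄ = ζ_i, and the need for an even number of generators) is the following illustrative choice; the paper's exact convention may differ:

```latex
\zeta_{2k-1}^{\,\diamond} = \zeta_{2k}, \qquad
\zeta_{2k}^{\,\diamond} = -\zeta_{2k-1}, \qquad k = 1, \dots, N/2 ,
% which is indeed a graded involution on the generators:
\zeta_{2k-1}^{\,\diamond\diamond} = \zeta_{2k}^{\,\diamond} = -\zeta_{2k-1},
\qquad
\zeta_{2k}^{\,\diamond\diamond} = -\zeta_{2k-1}^{\,\diamond} = -\zeta_{2k} .
```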
As we shall see, it will be convenient to split the supernumber z into a sum of two parts, z = z_1 + z_2 (Eqs. (2.5a, 2.5b); see the sketch below). Using this splitting we define an invertible map f on even supernumbers z; using the fact that z_1^⋄ = z_1^* and z_2^⋄ = −z_2^*, it follows that imposing a pseudo-reality condition z = z^⋄ on an arbitrary even supernumber z is equivalent to imposing the standard reality condition f(z) = f(z)^* on the supernumber f(z) = z̃.
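The definitions behind Eqs. (2.5a, 2.5b) and the map f are likewise missing above; here is one realization consistent with the stated properties, offered as an assumption rather than the paper's definition. Since * and ⋄ commute and each squares to the identity on even supernumbers, their composition is an involution there, and one may split along its eigenspaces:

```latex
z_1 = \tfrac{1}{2}\bigl(z + z^{*\diamond}\bigr), \qquad
z_2 = \tfrac{1}{2}\bigl(z - z^{*\diamond}\bigr), \qquad z = z_1 + z_2 ,
% so that z_1^\diamond = z_1^* and z_2^\diamond = -z_2^*.  With
f(z) = z_1 + i\, z_2 ,
% the condition z = z^\diamond reads z_1 = z_1^* and z_2 = z_2^\diamond,
% which is exactly the condition f(z) = f(z)^*.
```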
In Section 4 we will consider how pseudo-conjugation may be used to impose reality conditions on spinors, the components of which are taken to be anticommuting supernumbers. However, we first need to recall some results about Clifford algebras, as discussed in [1].
Clifford algebras in d-dimensions
The Clifford algebra in d spacetime dimensions is given by

$$\{\gamma^\mu, \gamma^\nu\} = 2\,\eta^{\mu\nu}\,\mathbb{1}, \qquad \eta^{\mu\nu} = \mathrm{diag}(+,\dots,+,-,\dots,-),$$

with t plus signs, s minus signs and d = t + s. The γ^μ are represented by 2^⌊d/2⌋ × 2^⌊d/2⌋ matrices, which may be chosen to be unitary, with the t timelike γ's hermitian and the s spacelike γ's antihermitian. Defining A = γ^1 ⋯ γ^t we then have

$$A\,\gamma^\mu A^{-1} = (-1)^{t+1}\,\gamma^{\mu\dagger}.$$

In even dimensions we can introduce the matrix Γ^5 ∝ γ^1 γ^2 ⋯ γ^d, with the proportionality constant a phase fixed by requiring (Γ^5)^2 = 𝟙; Γ^5 is, up to proportionality, the unique matrix which anticommutes with all γ^μ, μ = 1, …, d. As ±γ^{μ*} form an equivalent representation of the Clifford algebra, there exists an invertible matrix B such that

$$B\,\gamma^\mu B^{-1} = \eta\,\gamma^{\mu *}, \qquad \eta = \pm 1,$$

where η can be shown to depend on the signature of the metric, see Table 1.
Note that in even dimensions, where t − s will also be even, we always have a choice of η = ±1, whereas in odd dimensions η is fixed. B is unitary and satisfies the condition

$$B^* B = \epsilon\,\mathbb{1}, \qquad \epsilon = \pm 1,$$

where ε depends on the signature of the metric as well as on the value of η, as displayed in Table 1. Note that B is only defined up to an overall phase. The charge conjugation matrix C is defined by

$$C = B^T A.$$

Using the properties of A and B one finds that C†C = 𝟙, along with further relations fixing the behaviour of γ^μ under conjugation by C and the symmetry properties of C and Cγ^μ. These relations will be important when considering super Poincaré algebras in different signatures, see Section 5.
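The relations elided above follow from the definitions already in hand; as a derivation sketch (the signs depend on the hermiticity convention reconstructed above and should be checked against the original):

```latex
% Assuming A\gamma^\mu A^{-1} = (-1)^{t+1}\gamma^{\mu\dagger},
% B\gamma^\mu B^{-1} = \eta\,\gamma^{\mu*}, B^T = \epsilon B, C = B^T A:
C\gamma^\mu C^{-1}
  = B^T\!\left(A\gamma^\mu A^{-1}\right)B^{-T}
  = (-1)^{t+1}\, B^T \gamma^{\mu\dagger} B^{-T}
  = (-1)^{t+1}\,\eta\,\gamma^{\mu T} ,
% using \gamma^{\mu\dagger} = \eta\,B^{-1}\gamma^{\mu T}B (the dagger of
% the B relation); the two factors of \epsilon from B^T = \epsilon B cancel.
% A similar computation gives the symmetry of C itself,
C^T = \epsilon\,\eta^{t}\,(-1)^{t(t-1)/2}\, C .
% For t = 1 these combine to (C\gamma^\mu)^T = \epsilon\, C\gamma^\mu,
% matching the Minkowski statement quoted in Section 4.
```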
In even dimensions, as there is a choice of η = ±1, let us define B± such that

$$B_{\pm}\,\gamma^\mu B_{\pm}^{-1} = \eta_{\pm}\,\gamma^{\mu *}, \qquad B_{\pm}^* B_{\pm} = \epsilon_{\pm}\,\mathbb{1}.$$

Here η± = ±1 and ε± is the value of ε corresponding to η± in a given signature.
Correspondingly we define C± = B±^T A. Interestingly, B+ and B− are related by

$$B_{+} = \lambda\, B_{-}\,\Gamma^5, \qquad (3.13)$$

where λ is an arbitrary phase factor. This relation seems to have been overlooked in the literature. To prove Eq. (3.13) note that

$$(B_{-}^{-1} B_{+})\,\gamma^\mu = \frac{\eta_{+}}{\eta_{-}}\,\gamma^\mu\,(B_{-}^{-1} B_{+}) = -\,\gamma^\mu\,(B_{-}^{-1} B_{+}),$$

hence B−^{-1}B+ anticommutes with all the gamma matrices and as such must be proportional to Γ^5. Unitarity of both B± and Γ^5 restricts λ such that |λ|^2 = 1.
Standard and symplectic Majorana conditions
Let us first consider signatures in which there exists a matrix B for which ε = +1, i.e. B^*B = 𝟙, see Table 1. We may use this matrix B to impose the standard Majorana condition

$$\psi = B^{-1}\psi^{*}. \qquad (4.1)$$

Note that imposing such a condition will not be consistent if ε = −1, since iterating it gives

$$\psi = B^{-1}\left(B^{-1}\psi^{*}\right)^{*} = (B^* B)^{-1}\,\psi = \epsilon\,\psi.$$

In those signatures where there are only matrices B for which ε = −1, one normally introduces a pair (or more generally an even number) of Dirac spinors ψ^{(i)}, i = 1, 2, and imposes the symplectic Majorana condition

$$\psi^{(i)} = \epsilon^{ij}\, B^{-1}\psi^{(j)*}, \qquad (4.2)$$

where ε^{ij} = −ε^{ji} with ε^{12} = +1. This condition reduces the degrees of freedom of the pair of spinors down to that of a single spinor with no reality condition imposed. Therefore, since a second spinor is initially introduced in order to impose the symplectic Majorana condition, the number of degrees of freedom is not in effect reduced.
Graded Majorana conditions
We shall now show that in signatures in which there exists a matrix B for which ε = −1, i.e. B^*B = −𝟙, we can, by making use of pseudo-conjugation, define an alternative Majorana condition that, unlike the symplectic one, does not require duplicating the number of fields, but instead can be imposed on a single spinor. We propose the condition

$$\psi = B^{-1}\psi^{\diamond}. \qquad (4.3)$$

Now, since the components of ψ are anticommuting supernumbers, we have ψ^{⋄⋄} = −ψ, and iterating the condition gives

$$\psi = B^{-1}\left(B^{-1}\psi^{\diamond}\right)^{\diamond} = (B^* B)^{-1}\,\psi^{\diamond\diamond} = -\epsilon\,\psi,$$

which is consistent precisely when ε = −1. Note that here we have used B^⋄ = B^*, since B is a matrix of ordinary complex numbers. As pseudo-conjugation is a graded involution, we will refer to spinors satisfying Eq. (4.3) as graded Majorana spinors.
To be complete we also note here that, in those signatures for which there exists a matrix B for which ε = +1, pseudo-conjugation may be used to define a graded symplectic Majorana condition,

$$\psi^{(i)} = \epsilon^{ij}\, B^{-1}\psi^{(j)\diamond}. \qquad (4.4)$$

In the next section we will show how reality conditions using standard complex conjugation and pseudo-conjugation, respectively, can be thought of as equivalent in terms of the number of constraints they impose on a spinor.
Equivalence of reality conditions
Just as the standard Majorana condition of Eq. (4.1) is covariant under Lorentz transformations so, too, is the graded Majorana condition of Eq. (4.3). For the purpose of analyzing the number of constraints, however, we shall also consider more general reality conditions that may not necessarily be so. Let us introduce 2^⌊d/2⌋ × 2^⌊d/2⌋ matrices M and N satisfying M^*M = +𝟙 and N^*N = −𝟙, respectively (where we require d > 1 for the matrix N to exist). Then consider reality conditions of the form ψ = M^{-1}ψ^* and ψ = N^{-1}ψ^⋄, encompassing the standard and graded Majorana conditions, respectively. In particular, these conditions shall be replaced with the corresponding Majorana conditions, Eqs. (4.1, 4.3), whenever the appropriate matrices B exist.
In order to show that the number of constraints imposed on a spinor is the same for both ψ = M^{-1}ψ^* and ψ = N^{-1}ψ^⋄, we will use an argument analogous to that for an even supernumber as discussed in Section 2. Consider the split of Eqs. (2.5a, 2.5b) applied to each of the components of the spinor ψ, resulting in

$$\psi = \psi_1 + \psi_2. \qquad (4.5a)$$

Using the fact that ψ_1^* = ψ_2^⋄ and ψ_2^* = −ψ_1^⋄, it is easily seen that the following two equivalences hold:

$$\psi = M^{-1}\psi^{*} \;\Longleftrightarrow\; \psi_1 = M^{-1}\psi_2^{\diamond}, \;\; \psi_2 = -M^{-1}\psi_1^{\diamond}, \qquad (4.6)$$

$$\psi = N^{-1}\psi^{\diamond} \;\Longleftrightarrow\; \psi_1 = -N^{-1}\psi_2^{*}, \;\; \psi_2 = N^{-1}\psi_1^{*}. \qquad (4.7)$$

In those signatures where there exists a matrix B such that B^*B = −𝟙, Eq. (4.7) shows how a graded Majorana condition imposed on the spinor ψ can be restated as a symplectic Majorana condition imposed on the split fields ψ_{1,2} of Eq. (4.5a). Note, however, that the symplectic Majorana condition is being imposed on the internal supernumber structure of a single spinor. Conversely, in those signatures where there exists a matrix B such that B^*B = 𝟙, we see from Eq. (4.6) that the standard Majorana condition is equivalent to a graded symplectic Majorana condition being imposed on the split fields ψ_{1,2}. Also in this case the symplectic condition is imposed on the internal supernumber structure of a single spinor.
Let us now define the quantity ψ̃ via Eq. (4.8), where μ is some non-zero, ordinary complex constant. The relationship of Eq. (4.8) may be inverted to give ψ in terms of ψ̃. To see this, note that if we split ψ̃ as in Eqs. (4.5a, 4.5b) we obtain Eqs. (4.9a, 4.9b), where we have used that ψ_1^* = ψ_2^⋄ and ψ_2^* = −ψ_1^⋄. We then find Eqs. (4.10a, 4.10b), where Δ ≡ (μ^*)^2 𝟙 + μ^2 (M^*N)^2. For Δ to be invertible we must choose μ such that ±iμ^*/μ is not an eigenvalue of M^*N, which is always possible. Hence, we find ψ in terms of ψ̃, Eq. (4.11). We can now show that a reality condition on ψ using pseudo-conjugation is, in terms of the number of constraints imposed, equivalent to a reality condition on ψ̃ using standard complex conjugation. From Eqs. (4.9a-4.10b) and the fact that ψ_1^* = ψ_2^⋄ and ψ_2^* = −ψ_1^⋄ we obtain Eq. (4.12). Now, combining Eqs. (4.6, 4.7) with Eq. (4.12), we find that a graded reality condition on ψ translates into a standard reality condition on ψ̃. As there exists an invertible map between ψ and ψ̃, this proves that a reality condition using pseudo-conjugation imposes the same number of constraints as does a reality condition using standard complex conjugation.¹
Dirac equation and spinor actions
If η = +1, see Table 1, the Dirac equation for the corresponding Majorana spinors is not consistent with a mass term [1]. It will therefore be necessary to distinguish between the Majorana conditions corresponding to the two possible cases η = ±1. Consider first the standard Majorana condition. If η = −1 the spinor will simply be referred to as Majorana (M). If however η = +1 the spinor will be called pseudo-Majorana (M′). Similarly, for the graded Majorana condition, the spinor will be called graded Majorana (gM) if η = −1 and pseudo-graded Majorana (gM′) if η = +1. See Table 1 for a summary. Consequently pseudo-Majorana spinors must be massless to be consistent with the Dirac equation, and the same is true for pseudo-graded Majorana spinors. Now one should note that the Dirac equation for Majorana spinors cannot always be derived from an action. Whether or not this is possible depends on the respective Majorana condition used and on the symmetry properties of Cγ^μ and C. The Lagrangian for both standard and graded Majorana spinors will be of the form

$$\mathcal{L} = \psi^T C\left(\gamma^\mu \partial_\mu - m\right)\psi. \qquad (4.14)$$

In the case of standard Majorana spinors one easily finds that for the action to be non-vanishing one has to require Cγ^μ to be symmetric and, if massive, we further require the charge conjugation matrix C to be antisymmetric [7]. In the case of graded Majorana spinors the same conditions apply. Note that in Minkowski spacetimes we have (Cγ^μ)^T = εCγ^μ; therefore an action involving graded Majorana spinors (ε = −1) will vanish. In Euclidean or other signatures, however, this need not be the case. In Euclidean signatures, for example, an action involving standard Majorana spinors is non-vanishing only if d = 0, 1, 2 mod 8, whereas an action involving graded Majorana spinors is non-vanishing only if d = 2, 3, 4 mod 8. If instead we consider parity violating Lagrangians of the form

$$\mathcal{L} = \psi^T C\,\Gamma^5\left(\gamma^\mu \partial_\mu - m\right)\psi, \qquad (4.15)$$

we require CΓ^5γ^μ to be symmetric and, in the case of massive spinors, we also require CΓ^5 to be antisymmetric (note that d must be even for Γ^5 to exist). Now in Minkowski spacetimes we have (CΓ^5γ^μ)^T = −ε(−1)^{d/2} CΓ^5γ^μ. Therefore such an action involving graded Majorana spinors will be non-vanishing in Minkowski spacetimes only if d = 0 mod 4, whereas in the case of standard Majorana spinors we require d = 2 mod 4. Finally let us consider the Dirac action for a pair of symplectic Majorana spinors. In this case we have

$$\mathcal{L} = \epsilon_{ij}\,\psi^{(i)T} C\left(\gamma^\mu \partial_\mu - m\right)\psi^{(j)}. \qquad (4.16)$$

For the action to be non-vanishing we require that Cγ^μ be antisymmetric, and in the massive case we additionally require C to be symmetric.
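The symmetry requirements invoked here follow from a short counting argument for anticommuting components:

```latex
% For Grassmann-odd \psi and any constant matrix S,
\psi^T S\,\psi = S_{\alpha\beta}\,\psi^\alpha\psi^\beta
             = -S_{\alpha\beta}\,\psi^\beta\psi^\alpha
             = -\psi^T S^T \psi ,
% so a mass term \psi^T C \psi survives only for antisymmetric C.  In the
% kinetic term, the antisymmetric part of C\gamma^\mu multiplies
% \psi^\alpha\partial_\mu\psi^\beta - \psi^\beta\partial_\mu\psi^\alpha
%   = \partial_\mu(\psi^\alpha\psi^\beta),
% a total derivative, so only the symmetric part of C\gamma^\mu contributes.
```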
¹ Note that ψ̃ must have the appropriate transformation properties under the Lorentz group in order to be regarded as a spinor. In the cases where t − s = 2 mod 4, both ψ and ψ̃ can be chosen to transform as spinors.
Standard and graded Majorana-Weyl conditions
Note that in even dimensions, where we have a choice of matrices B± for η = ±1, it is possible to simultaneously impose the two corresponding reality conditions. Such spinors will be massless due to the fact that a pseudo-(graded) Majorana condition has been imposed. There are four possible cases, which we shall analyze separately. If t − s = 0 mod 8 we can impose both M and M′ conditions, giving

$$\psi = B_{-}^{-1}\psi^{*} = B_{+}^{-1}\psi^{*}.$$

Using Eq. (3.13) we see that a consequence of these two conditions is that

$$\psi = \lambda\,\Gamma^5\,\psi. \qquad (4.18)$$
If t − s = 4 mod 8 we can impose both the gM and gM′ conditions. Again we have as a consequence of these equations that ψ must satisfy the Weyl condition, Eq. (4.18), with helicity λ = ±1 for consistency. We refer to such spinors as graded Majorana-Weyl (gMW). If t − s = 2 mod 8 we can impose both the gM and M′ conditions. The Weyl condition, Eq. (4.18), is no longer satisfied, due to the mixed nature of the Majorana conditions. Instead, a consequence of these two conditions is the modified condition of Eq. (4.21), where for consistency we must have λ = ±i. Note that, although ψ is not a true Weyl spinor, if we split ψ = ψ 1 + ψ 2 as in Eqs. (4.5a, 4.5b), then the combinations ψ 1 ± iψ 2 are Weyl. However, the physical interpretation of the condition in Eq. (4.21) remains unclear.
If t − s = 6 mod 8 we have both M and gM ′ conditions. This case is very similar to t − s = 2 mod 8. Table 2 summarizes which reality conditions may be imposed in each of the most interesting spacetimes.
Four-dimensional Euclidean space
It is worth mentioning here that when working in even dimensions it is common to use the Weyl representation for spinors. The Weyl representation can be defined in full generality for arbitrary signature in any even-dimensional spacetime; however, it is perhaps most familiar in four-dimensional Minkowski space, where the use of two-component spinors with dotted and undotted indices is quite standard. Here, however, we shall briefly discuss the case of four-dimensional Euclidean space, demonstrating how the reality conditions imposed in [4,5,6] fit into the general scheme of graded Majorana spinors.
The four-dimensional Euclidean gamma matrices are taken to be of the standard form, where i = 1, 2, 3 and σ i are the standard Pauli matrices. We choose the matrices B ± in this representation in terms of ε = iσ 2. We see from the form of Γ 5 that the four-component Dirac spinor decomposes into left- and right-handed two-component spinors, φ and χ. The graded Majorana conditions, ψ = B±⁻¹ψ⋄, then take the simple two-component form of Eq. (4.25). Note that with this choice of the matrices B ±, imposing both graded Majorana conditions implies χ = 0, and hence the resulting spinor will be a left-handed graded Majorana-Weyl spinor. If we had chosen the opposite relative sign between B + and B −, the resulting spinor would have been right-handed.
Introducing indices a, b, . . . = 1, 2 for left-handed spinors, and a′, b′, . . . = 1, 2 for right-handed spinors, we find the index form of Eq. (4.25) upon displaying the indices explicitly. These expressions may be compared to the reality conditions imposed in [4,5,6]. Note that in this signature pseudo-conjugation does not change the index type from primed to unprimed. This is due to the fact that the left-handed and right-handed components of Spin(4) do not mix under conjugation [4,6], a situation which can be contrasted with, for example, four-dimensional Minkowski space, where conjugation acts to interchange the left-handed and right-handed components of Spin(1, 3) [8].
Real forms of the super Poincaré algebra
We shall now investigate how these new reality conditions can be imposed to give real forms of super Lie algebras, which will subsequently allow the derivation of supersymmetric field theories involving graded Majorana spinors. Let us define the graded commutator [K, L] = KL − (−1)^{ǫ_K ǫ_L} LK, where ǫ_K = 0 if K is even and ǫ_K = 1 if K is odd (and similarly for L). The generators of the general N = 1 super Poincaré algebra satisfy the relations of Eqs. (5.1a-5.1d), where all other commutators vanish. Here the even generators M µν and P µ, generating rotations and translations, respectively, form the Poincaré subalgebra, and Q α are the odd supersymmetry generators forming a 2^{⌊d/2⌋}-component spinor. We choose (γ µ) α β to correspond to the components of the gamma matrices and C αβ to correspond to the components of the charge conjugation matrix C. Note that with these index conventions the components of the inverse are written (C −1) αβ. We have σ µν = (1/4)(γ µ γ ν − γ ν γ µ), and k appearing in Eq. (5.1d) is a constant phase factor which will be determined when considering a specific real form of the algebra. Note that if there is no matrix C available such that γ µ C −1 is symmetric, see Eq. (3.10), it is not possible to write down such an N = 1 algebra. One may, however, instead consider an N ≥ 2 algebra.
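As a hedged illustration of Eqs. (5.1a-5.1d), which are not reproduced above, the standard form of such an algebra can be written in LaTeX as follows; the signs and index placements are conventions assumed here, with only the structure of the {Q, Q} bracket fixed by the surrounding text.

% N = 1 super Poincare algebra (a sketch; sign conventions assumed)
[M_{\mu\nu}, M_{\rho\sigma}] = \eta_{\nu\rho} M_{\mu\sigma} - \eta_{\mu\rho} M_{\nu\sigma} - \eta_{\nu\sigma} M_{\mu\rho} + \eta_{\mu\sigma} M_{\nu\rho}
[M_{\mu\nu}, P_{\rho}] = \eta_{\nu\rho} P_{\mu} - \eta_{\mu\rho} P_{\nu}
[M_{\mu\nu}, Q_{\alpha}] = -(\sigma_{\mu\nu})_{\alpha}{}^{\beta} Q_{\beta}
\{Q_{\alpha}, Q_{\beta}\} = k\, (\gamma^{\mu} C^{-1})_{\alpha\beta} P_{\mu}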
The general element of the super Poincaré algebra is given by Eq. (5.2). Here ω µν and x µ are even supernumbers, and θ α are odd supernumbers forming a Dirac conjugate spinor. In order to define a real form of the algebra, these coefficients must be constrained by reality conditions such that the algebra still closes. This can be achieved by using standard complex conjugation or pseudo-conjugation, respectively. To impose reality conditions using pseudo-conjugation we require that there exists a matrix B = (B αβ) for which ǫ = −1. A consistent choice of reality conditions is then given by Eq. (5.3). For consistency with Eq. (5.3), the coefficient of Q β on the right-hand side must satisfy the corresponding covariance relation, which is easily checked using the fact that (σ µν)* = Bσ µν B −1. Further, we see from this that the condition (θ α)⋄ B αβ = θ β is Lorentz covariant. Finally, let us consider Eq. (5.1d). For the algebra to close under the reality conditions, Eq. (5.3), the coefficient of P µ on the right-hand side of the equation must be real with respect to pseudo-conjugation. Using Eq. (3.11) we find that, provided we choose k such that k = k*η^{t+1}, the algebra closes under the reality conditions, Eq. (5.3), which therefore give a real form of the algebra. One can alternatively use standard complex conjugation in order to define a real form of the algebra of Eq. (5.2). A consistent choice of reality conditions on the coefficients is, in this case, given by the analogue of Eq. (5.3), provided, of course, that B is now such that ǫ = +1. That the super Poincaré algebra also closes under these conditions can be proven analogously to the case of pseudo-conjugation. In this case, however, we find k = −k*η^{t+1}.
In even dimensions we have the possibility of imposing two Majorana conditions on the coefficients θ α. Due to the resulting Weyl condition, if t − s = 0, 4 mod 8 we must, in these signatures, replace Eq. (5.1d) with a modified anticommutator, which is possible provided that both Γ 5 γ µ C −1 and γ µ C −1 are symmetric (note that here C is a particular choice of C ± = (B ±)^T A). It is then possible to define a real form of the algebra by imposing MW or gMW conditions on the Dirac conjugate spinor θ α, with corresponding reality conditions on the ω µν's and x µ's. For example, let us consider t − s = 4 mod 8. The algebra will close if we impose the gMW condition along with the conditions (ω µν)⋄ = ω µν and (x µ)⋄ = x µ. If t − s = 2, 6 mod 8 we may consistently impose both a graded and a standard Majorana condition on the coefficients θ α. However, the physical interpretation of such mixed reality conditions remains unclear.
Three-dimensional Euclidean field theory
In order to illustrate the applications of graded Majorana spinors to supersymmetric field theories, let us construct a simple example in three-dimensional Euclidean space (i.e., t = 3, s = 0). From Table 1 we see that ǫ = −1, and so no standard Majorana spinors exist. We choose the gamma matrices to be the standard Pauli matrices γ i = σ i = (σ i) α β, i = 1, 2, 3, and we take B = ε = (ε αβ). Here α = −, + are two-spinor indices and the quantity ε αβ is the invariant antisymmetric tensor with ε −+ = +1. We use ε αβ to raise indices, with the convention ψ α = ε αβ ψ β, and indices are lowered using ε αβ, ε −+ = +1, with the convention ψ α = ψ β ε βα. If we define J i = −(1/2) ǫ ijk M jk, then the N = 1 super Poincaré algebra can be rewritten in terms of J i, P i and Q α. Writing the general element of the algebra as X = ϕ i J i + x i P i + θ α Q α, we obtain a real form by imposing the reality conditions (ϕ i)⋄ = ϕ i, (x i)⋄ = x i and (θ α)⋄ B αβ = θ β. Exponentiating the algebra gives the super Poincaré group, SΠ, from which we form the coset space SΠ/SO(3), where SO(3) is the rotation group generated by the J i. Following the method discussed in [9], we consider a coset representative such that (x i, θ α) are coordinates on the coset space. We hence have SΠ/SO(3) = Ê^{3|2}, where reality is defined with respect to pseudo-conjugation as given above.
The left action of SΠ on the coset representative induces a transformation on the coordinates (x i, θ α) → (x i + δx i, θ α + δθ α). Using this, we can find the differential operator representation of the generators of the superalgebra; in particular, we obtain the operator form of Q α. An invariant vielbein (E i, E α) and spin-connection Ω i on Ê^{3|2} can be constructed from the coset representative as in Eq. (5.14). We find that Ω i = 0, and so the inverse vielbein determines the covariant derivatives, which are given in Eq. (5.15). For an even superscalar field Φ(x, θ), satisfying Φ⋄ = Φ, let us consider the action I of Eq. (5.16). It is easily seen that [Q α, D β] = 0, from which it follows that this action will be invariant under supersymmetry transformations δΦ = β α Q α Φ. We can expand Φ in component fields as Φ(x, θ) = A(x) + θ α ψ α(x) + (1/2) θ α θ α F(x) (5.17). The condition Φ⋄ = Φ yields A = A⋄, F = F⋄ and ψ α = (B −1) αβ (ψ β)⋄. Hence we see that ψ is a graded Majorana spinor. The action I can be rewritten in terms of the component fields. Upon elimination of the auxiliary field F via its equations of motion, and integrating out the θ coordinates, I becomes the action for a real scalar field coupled to a graded Majorana spinor in three-dimensional Euclidean space. For an example of a supersymmetric action involving Dirac spinors in this signature, see [10]. Note that, as Cγ µ is symmetric in this signature, a supersymmetric action containing a symplectic term of the form of Eq. (4.16) does not exist.
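A hedged LaTeX sketch of the resulting component action is given below; the normalisation, signs and index placement are conventions assumed here, since Eq. (5.16) and its component form are not reproduced above.

% Component action after eliminating the auxiliary field F (a sketch)
I \;\sim\; \int d^{3}x \left( \partial_{i} A\, \partial_{i} A \;+\; \psi^{\alpha} (\gamma^{i})_{\alpha}{}^{\beta} \partial_{i} \psi_{\beta} \right)
% F drops out via its algebraic equation of motion, leaving a real scalar A
% coupled to the graded Majorana spinor psi.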
Conclusions and Outlook
We have seen how the classification of possible reality conditions on Grassmann-odd valued spinors should be extended by what we call a graded Majorana condition. In contrast to the symplectic Majorana condition, which, in order to be imposed, requires an even number of spinor fields, the graded Majorana condition can be imposed on a single spinor. In fact, as we showed in Section 4.3, the graded Majorana condition imposes the same number of constraints on a spinor as does a standard Majorana condition.
In order to illustrate the use of graded Majorana spinors in supersymmetric field theories we constructed an action involving such spinors in the case of three-dimensional Euclidean space. In globally curved space an example of the use of graded Majorana spinors is obtained by considering field theories on the supersphere S 2|2 = UOSp(1|2)/U (1), as investigated in [11]. Graded Majorana spinors could also play an important role in the construction of supergravity theories. In this context, an interesting example of a spacetime where no standard Majorana spinors exist is 11-dimensional Euclidean space. It will be very interesting to investigate whether the existence of graded Majorana spinors may account for a physically sensible supergravity theory in this spacetime. | 2014-10-01T00:00:00.000Z | 2005-01-31T00:00:00.000 | {
"year": 2005,
"sha1": "4d302ad0cb16a03d17ea3890fe73a4f683efdb41",
"oa_license": null,
"oa_url": "http://arxiv.org/abs/hep-th/0501252",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4d302ad0cb16a03d17ea3890fe73a4f683efdb41",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221079135 | pes2o/s2orc | v3-fos-license | Inhibition of PIKfyve kinase prevents infection by Zaire ebolavirus and SARS-CoV-2
Significance The membrane fusion proteins of viral pathogens as diverse in their replication strategies as coronaviruses and filoviruses depend, for their functional activity, on proteolytic processing during cell entry. Endosomal cathepsins carry out the cleavages. We have constructed chimeric forms of vesicular stomatitis virus (VSV) bearing the fusion proteins of Zaire ebolavirus (ZEBOV) or SARS coronavirus 2 (SARS-CoV-2) and shown that two small-molecule inhibitors of an endosomal lipid kinase (PIKfyve) inhibit viral infection by preventing release of the viral contents from endosomes. Both inhibitory compounds cause distension of Rab5 and Rab7 subcompartments into small vacuoles. One of them (Apilimod) also inhibits infection of cells by authentic SARS-CoV-2. The results point to possibilities for host targets of antiviral drugs.
Virus entry is a multistep process. It initiates when the virus attaches to the host cell and ends when the viral contents reach the cytosol. Genetically unrelated viruses can subvert analogous subcellular mechanisms and use similar trafficking pathways for successful entry. Antiviral strategies targeting early steps of infection are therefore appealing, particularly when the probability for successful interference through a common step is highest. We describe here potent inhibitory effects on content release and infection by chimeric vesicular stomatitis virus (VSV) containing the envelope proteins of Zaire ebolavirus (VSV-ZEBOV) or severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (VSV-SARS-CoV-2) elicited by Apilimod and Vacuolin-1, small-molecule inhibitors of the main endosomal phosphatidylinositol-3-phosphate/phosphatidylinositol 5-kinase, PIKfyve. We also describe potent inhibition of SARS-CoV-2 strain 2019-nCoV/USA-WA1/2020 by Apilimod. These results define tools for studying the intracellular trafficking of pathogens elicited by inhibition of PIKfyve kinase and suggest the potential for targeting this kinase in developing small-molecule antivirals against SARS-CoV-2.
COVID-19 | SARS-CoV-2 | ZEBOV | APILIMOD | Vacuolin-1 Membrane-enveloped viruses deliver their contents to cells via envelope protein-catalyzed membrane fusion. Binding of virus to specific host cell receptor(s) triggers membrane fusion, which can occur directly at the plasma membrane or following endocytic uptake. Viruses that require endocytic uptake can use different initial trafficking routes to reach the site of membrane fusion. In endosomes, acidic pH serves to trigger conformational rearrangements in the viral envelope proteins that catalyze membrane fusion, as seen for influenza A virus and vesicular stomatitis virus (VSV). For Zaire ebolavirus (ZEBOV), proteolytic processing of the envelope protein by host cell proteases (1) is necessary to expose the receptor binding domain prior to engagement of Niemann-Pick disease type C1 (NPC1, or NPC Intracellular Cholesterol Transporter 1), the late endosomal-lysosomal receptor protein (2). Proteolytic processing is also required for severe acute respiratory syndrome coronavirus (SARS-CoV) (3,4), and for the current pandemic SARS-CoV-2 (5). Lassa fever virus (LASV) uses a different mechanism, binding alpha-dystroglycan at the plasma membrane (6) for internalization, with a subsequent pH-regulated switch that leads to engagement of lysosomal-associated membrane protein 1 for membrane fusion (7). Lymphocytic choriomeningitis virus (LCMV) also uses alpha-dystroglycan (6) and is internalized in a manner that depends on endosomal sorting complexes required for transport proteins (8), although it remains unknown whether a second receptor is required.
A hallmark of the endolysosomal system is controlled dynamic trafficking of vesicular carriers among its various subcompartments. Phosphoinositides are markers for defining the identity of these subcompartments, because they are restricted in their distribution to specific intracellular membranes (reviewed in ref. 9). Although it is one of the least abundant of the phosphoinositides in cells, PI(3,5)P2 is particularly important for endomembrane homeostasis. It is produced by PI-3P-5-kinase (PIKfyve), which phosphorylates the D-5 position in phosphatidylinositol-3-phosphate (PI3P) to yield phosphatidylinositol 3,5-bisphosphate (PI(3,5)P2) (10). First cloned as mammalian p235 (11), PIKfyve is a 240-kDa class III lipid kinase, present on the cytosolic face of endosomal membranes (12,13) as part of a ternary complex with the PI(3,5)P2 5-phosphatase Sac3 and ArPIKfyve (14).
Ablation of PIKfyve function by genetic (12,15) or pharmacological means (16-20) causes endosomal swelling and vacuolation of late endosomes and endolysosomes. It is thought that these changes result from decreased membrane fission and concomitant interference in endosomal traffic (13,21). Small-molecule inhibitors of PIKfyve, all of which have some structural resemblance to each other, have been studied as potential drugs for treating cancer and autoimmune diseases. These inhibitors include Apilimod (19), Vacuolin-1 (18), a series of 30 Vacuolin-related molecules (22), YM201636 (16), and WX8 chemical family members (20). Physiological effects of these compounds in cells include inhibition of autophagy (17,22,23), reduced generation of IL-12/IL-23 (24), and reduced dendritic cell infiltration in psoriasis (25).
Apilimod also inhibits infection by several viruses, including ZEBOV. Although it does not alter the pH of endosomes nor inhibit cathepsin B or L (26), Apilimod blocks entry of ZEBOV and other pathogenic filoviruses (27). Several groups reported that Apilimod prevents colocalization of VSV-ZEBOV pseudoviruses with the ZEBOV endosomal receptor NPC1, but does not prevent colocalization with early endosomal antigen 1 (EEA1) (5,27,28). Apilimod also inhibits entry of pseudotyped viruses bearing the spike proteins of Middle East respiratory syndrome CoV, SARS-CoV, and SARS-CoV-2, as well as of authentic mouse hepatitis virus particles (5).
Here, we have studied the effects of Apilimod on infection of VSV-eGFP-SARS-CoV-2 and VSV-eGFP-ZEBOV chimeras and showed that Apilimod blocks infection of both, with a concentration that inhibits response by 50% (IC 50 ) of ∼50 nM. Apilimod and Vacuolin-1 also prevented entry and infection of VSV-MeGFP-ZEBOV and many of the internalized VSV-MeGFP-ZEBOV virions colocalized with NPC1 in the distended, vacuolated endosomes. This suggests that blocking PIKfyve kinase has the same downstream effects on these viruses, even though VSV-eGFP-SARS-CoV-2 does not require interaction with NPC1 for membrane fusion. Apilimod also inhibits infection by authentic SARS-CoV-2 strain 2019-nCoV/USA-WA1/2020 virus, with an IC 50 slightly lower than the IC 50 for the VSV-eGFP-SARS-CoV-2. We suggest that Apilimod, which has passed safety tests in previous human clinical trials for nonviral indications (24,25,29,30), is a potential starting point for developing small-molecule entry inhibitors of SARS-CoV-2 that could limit infection and disease pathogenesis.
Results
Apilimod Inhibits Infection of VSV-MeGFP-LCMV and VSV-ZEBOV. We inoculated SVG-A cells with VSV chimeras expressing the viral matrix protein (M) fused to eGFP (MeGFP). The chimeras include VSV (VSV-MeGFP, which initiates fusion at pH < 6.2), VSV-V269H GP (VSV-MeGFP-V269H, a variant of VSV GP that initiates fusion at a lower pH), and chimeras bearing the GPs of rabies virus (VSV-MeGFP-RABV), LASV (VSV-MeGFP-LASV), LCMV (VSV-MeGFP-LCMV), and ZEBOV (VSV-MeGFP-ZEBOV). Apilimod and Vacuolin-1 potently inhibited VSV-MeGFP-ZEBOV infection (Fig. 1C). These results agree with results obtained by others with Apilimod (26,31) in different cell types infected with murine leukemia virus (MLV) pseudotyped with ZEBOV GP or with Ebola virus itself (26,27,32). Apilimod was a less effective inhibitor of VSV-MeGFP-LCMV infection, and Vacuolin-1 had no effect at the concentration used. In contrast, Apilimod and Vacuolin-1 failed to prevent infection by VSV-MeGFP, VSV-MeGFP-V269H, VSV-MeGFP-RABV, or VSV-MeGFP-LASV (Fig. 1C). IN1 (33), an inhibitor of the phosphoinositide kinase Vps34, the main endosomal generator of PI3P, also interfered with VSV-MeGFP-LCMV and VSV-MeGFP-ZEBOV infection (Fig. 1C). All of these viruses require low pH to trigger viral membrane fusion with the endosomal membranes, and, as expected, infection was fully blocked by Bafilomycin A1, which inhibits the vacuolar type H+-ATPase (V-ATPase) acidification activity (Fig. 1C). In subsequent experiments, we assessed RNP delivery, as monitored by single-cell fluorescence microscopy imaging (experimental protocol summarized in Fig. 2). As expected, Bafilomycin A1 blocked entry of all viruses (images in Fig. 2C and quantification in Fig. 2D). Sensitivity to U18666A, a potent inhibitor of NPC1 (Fig. 4F), showed that NPC1-Halo remained active as a cholesterol transporter. Using live-cell spinning disk confocal microscopy (Figs. 3 and 4), we monitored the presence of virus particles in the fluorescently tagged endosomes by colocalization with the fluorescent spots from the virus-incorporated MeGFP. We monitored entry by carrying out the experiments in the presence of cycloheximide, thus ensuring that any MeGFP fluorescent signal at the nuclear margin originated only from MeGFP molecules carried by incoming viral particles (Fig. 3 B and F). All cells were maintained at 37°C throughout all phases of the experiment to ensure normal and undisturbed intracellular trafficking. All control experiments performed in the absence of inhibitors showed arrival of VSV-MeGFP, VSV-MeGFP-V269H, or VSV-MeGFP-ZEBOV virus particles to early (Rab5c and EEA1) (Figs. 3E and 4E) or late (Rab7a or NPC1) endosomes and lysosomes (Figs. 3I and 4 C and E). MeGFP released from all viruses appeared at the nuclear margin, showing effective RNP release. NPC1, the receptor for VSV-MeGFP-ZEBOV entry, is required for fusion from endosomes (2). The successful VSV-MeGFP-ZEBOV infection observed in the absence of drug in cells expressing NPC1-Halo alone or in combination with mScarlet-EEA1 indicates that NPC1-Halo is capable of facilitating infection and that VSV-MeGFP-ZEBOV trafficked to NPC1-Halo-containing endosomes.
Apilimod and Vacuolin-1 treatment of the SVG-A cells led to enlargement and vacuolization of their endosomes and lysosomes tagged with fluorescent EEA1, Rab5c, Rab7a, or NPC1 (Figs. 3-5), in agreement with earlier PIKfyve ablation studies (13,21). VSV-MeGFP and VSV-MeGFP-V269H (fluorescent dots, white) reached all tagged species of enlarged endolysosomes and successfully penetrated into the cytosol, as indicated by MeGFP at the nuclear margin (Fig. 3 E and I). VSV-MeGFP-ZEBOV also trafficked to all tagged species of enlarged endolysosomes (Fig. 3 E and I), often reaching one of the numerous NPC1-containing vacuoles enriched in EEA1 (Figs. 4E and 5 B and C). VSV-MeGFP-ZEBOV in EEA1-containing endosomes increased in the presence of Apilimod, as also reported for VLP ZEBOV (27). While able to reach NPC1-containing functional endosomes in cells treated with Apilimod, the virions failed to penetrate into the cytosol. Apilimod Blocks Infection of VSV-SARS-CoV-2. Using a recombinant VSV expressing soluble eGFP (VSV-eGFP) in which the glycoprotein (GP) was replaced with that of ZEBOV (VSV-eGFP-ZEBOV) or SARS-CoV-2 S (VSV-eGFP-SARS-CoV-2), we inoculated MA104 cells with these chimeric viruses and tested the effects of Apilimod on infection by flow cytometry (Fig. 6A). We found potent inhibition of VSV-eGFP-SARS-CoV-2 infection by Apilimod and confirmed that the compound also inhibits VSV-eGFP-ZEBOV infection (Fig. 6B). The dose-response curves indicated similar effects for VSV-eGFP-ZEBOV and VSV-eGFP-SARS-CoV-2 (IC 50 s of ∼50 nM), in contrast to the absence of any detectable inhibition of VSV-eGFP infection, used here as a negative control.
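For readers who wish to reproduce this kind of analysis, dose-response data of this type are commonly summarised by fitting a four-parameter logistic function and reading off the IC 50. The following Python sketch illustrates the idea with hypothetical concentrations and percent-infection values; it is not the study's data or code.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, top, bottom, ic50, hill):
    # Four-parameter logistic: percent infection as a function of drug dose c
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Hypothetical Apilimod concentrations (nM) and percent-infected readouts
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])
infected = np.array([98.0, 95.0, 80.0, 45.0, 12.0, 4.0, 2.0])

params, _ = curve_fit(four_pl, conc, infected, p0=[100.0, 0.0, 50.0, 1.0])
print("estimated IC50 (nM): %.1f" % params[2])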
Discussion
Coronaviruses, filoviruses, and arenaviruses have different replication strategies and unrelated surface GPs that engage different receptor molecules during entry (1, 2, 5-8). Coronavirus and filovirus surface GPs share a requirement for entry-associated proteolytic processing for activation as fusogens (1). Filoviruses require passage through low-pH compartments where cathepsins are active. Coronaviruses may enter directly by fusion at the plasma membrane or following receptor-mediated endocytosis. Cell entry of SARS-CoV and SARS-CoV-2 depends on the protease TMPRSS2 in conjunction with ACE2 (34-37), and, when TMPRSS2 is present, the entry pathway becomes insensitive to cathepsin inhibition (34,37,38). The common inhibition of viruses from all three groups by Apilimod is a consequence of perturbing their shared entry pathway. Moreover, it is not the cathepsin activity itself that these compounds affect, judging from the outcome of the assays with Apilimod and Vacuolin-1 showing they inhibit VSV chimeras bearing the surface GPs of ZEBOV and LCMV and, to a lesser extent, LASV. Apilimod also inhibits infection of cells by VSV-SARS-CoV-2 as well as by authentic SARS-CoV-2; neither compound blocks infection by wild-type VSV. For VSV-ZEBOV, we have shown that the virus reaches a compartment enriched in NPC1, the ZEBOV coreceptor, and often also enriched in EEA1, but that it nonetheless fails to release internal proteins into the cytosol. Apilimod does not inhibit cathepsin (26), but Apilimod (39) and Vacuolin-1 (17,23) can interfere with cathepsin maturation, as evidenced by an increase in procathepsin in treated cells; they do not influence endosomal pH (18,26,40), although other studies report that Apilimod decreases cathepsin activity (41) and Vacuolin-1 increases pH (17,23). Irrespective of this discrepancy, both Apilimod and Vacuolin-1 inhibit PIKfyve (17,19), a three-subunit complex (14) with a PI-3P-binding FYVE domain (10,11) that recognizes the endosomal marker, PI-3-P. Functional ablation of this enzyme by genetic means (12,15) gives rise to the same cellular phenotype as treatment with either compound (17-19). The similar dose-response curves for Apilimod inhibition of the ZEBOV and SARS-CoV-2 chimeras (IC 50 of ∼50 nM) and of authentic SARS-CoV-2 virus (IC 50 of ∼10 nM) are in good agreement with the IC 50 of ∼15 nM for Apilimod inhibition of PIKfyve in vitro (19). We therefore suggest that perturbation of normal endosomal trafficking through inhibition of PIKfyve activity is the mechanism by which Apilimod and Vacuolin-1 block entry of such a diverse set of viral pathogens. One of the most striking consequences of PIKfyve inhibition, and hence of PI-3,5-P2 restriction in endosomal membranes, is the swelling of endosomes into small, spherical vacuoles, the phenomenon that gave Vacuolin-1 its name (18). Our imaging data with VSV-MeGFP-ZEBOV chimeras show that the virus particles accumulating in these structures, many of which also contain the NPC1 coreceptor (2,42), often appear to be relatively immobile and adjacent to the endosomal limiting membrane. One possible explanation is that, when a virion reaches these distended endosomes, it can bind or remain bound to the limiting membrane, but not fuse. Another is that virions may fuse with smaller intraluminal vesicles in the endosomal lumen (43), but that PI-3,5-P2 depletion prevents back-fusion of these vesicles with the endosomal limiting membrane and inhibits release into the cytosol of the viral genome.
Inhibition of infection by authentic SARS-CoV-2 shows that the blocked release of the viral genome from a vacuolated endosome is independent of the shape, size, and distribution of spike protein on the virion. The assay we used to determine effects on infectivity of authentic virus measured release of virions after multiple rounds of infection, rather than entry, which we monitored in the VSV-SARS-CoV-2 experiments by detecting eGFP synthesis in the cytosol. Nevertheless, the IC 50 of Apilimod in experiments with authentic virus is remarkably similar to (indeed, slightly lower than) that obtained with chimeric VSV-SARS-CoV-2.
Although cathepsin L inhibitors block SARS-CoV and SARS-CoV-2 infection in cell culture (4,5), they have less pronounced effects when tested in animals (44). This may be because another protease, TMPRSS2, on the surface of cells in relevant tissues appears to prime SARS-CoV (44) and SARS-CoV-2 (37) spike proteins for efficient entry. As the effectiveness of Apilimod and Vacuolin-1 does not depend on cathepsin inhibition, their capacity to block entry of several distinct families of viruses is likely to be independent and downstream of the protease that primes their surface GP for fusion. Phase I and phase II clinical trials have shown that Apilimod is safe and well tolerated (24,25,29,30). The trials were discontinued because of lack of effectiveness against the autoimmune condition for which the drug was tested. We suggest that one of these compounds, or a potential derivative, could be a candidate broad-spectrum therapeutic for several emerging human viral pathogens, including SARS-CoV-2.
Genome Editing. Individual cell lines of SVG-A were gene edited in both alleles using the CRISPR-Cas9 system to incorporate fluorescent tags into the N terminus of Rab5c (TagRFP), Rab7a (TagRFP), EEA1 (mScarlet), or the C terminus of NPC1 (Halo). The NPC1-Halo-expressing cells were further gene edited to incorporate mScarlet-EEA1, creating SVG-A cells simultaneously expressing mScarlet-EEA1 and NPC1-Halo.
A free PCR strategy (49,50) was used to generate small guide RNAs (sgRNA) with target sequences for either Rab5c, Rab7a, NPC1, or EEA1 (Table 1). The genomic DNA fragments of the Rab5c, Rab7a, NPC1, or EEA1 genes fused with either TagRFP, Halo, or mScarlet were cloned into the pUC19 vector (donor constructs), which then served as homologous recombination repair templates for the Cas9 enzyme-cleaved genomic DNA. Donor constructs were obtained by ligation of PCR amplification products from the genomic DNA fragments and the TagRFP, Halo, and mScarlet sequences. Primers F1-R1 and F3-R3 amplified ∼800 base pairs of genomic sequence upstream and downstream of the start codon of Rab5c, Rab7a, or EEA1 or the stop codon of NPC1, respectively (Table 1). Primers F1 and R3 contain sequences complementary to the pUC19 vector linearized using the SmaI restriction enzyme (lowercase in the primer sequences). The TagRFP sequence containing the GGS peptide linker was amplified using primers F2-R2 from a TagRFP mammalian expression plasmid used as a template. The F2 primer contains sequences complementary to the 3′ end of the F1-R1 fragment, while the F3 primer contains sequences complementary to the 3′ end of the TagRFP sequence.
PCR products (fragments F1-R1, F2-R2, and F3-R3) were subjected to electrophoresis in 1% agarose and gel-purified using a purification kit from Zymogen. The PCR fragments were cloned into the linearized pUC19 vector using the Gibson Assembly Cloning Kit (E5510S; New England Biolabs).
SVG-A cells (1.5 × 10^5 cells) were cotransfected with 0.8 μg of Streptococcus pyogenes Cas9, 0.8 μg of free PCR product coding for the target sgRNA, and 0.8 μg of pUC19 vector using Lipofectamine 2000 reagent (Invitrogen) according to the manufacturer's instructions. Transfected cells were grown for 7 d to 10 d and sorted for TagRFP, Halo, or mScarlet expression using fluorescence-activated cell sorting (FACS) (SH-800S; Sony). Prior to FACS, NPC1-Halo cells were labeled for 15 min with Janelia Fluor 647 (JF647). Single cells expressing the desired chimera were isolated, clonally expanded, and then screened by genomic PCR for TagRFP, Halo, or mScarlet insertion into both alleles (primers listed in Table 2).
Infection Assays. SVG-A cells were plated at about 30 to 40% confluency into 24-well plates and incubated for 1 d at 37°C and 5% CO2. At the start of the experiment, cells were incubated with the indicated drug or dimethyl sulfoxide (DMSO) at 37°C for 1 h. Following this, cells were incubated for 1 h at 37°C with VSV, VSV-MeGFP-V269H, VSV-MeGFP-RABV, VSV-MeGFP-LASV, VSV-MeGFP-LCMV, or VSV-MeGFP-ZEBOV in drug- or DMSO-containing infection medium (α-MEM, 50 mM Hepes, 2% FBS). Cells were then washed to remove nonadsorbed viruses and further incubated at 37°C in medium containing the drug or DMSO, with experiments ending at the indicated times by fixation with 3.7% formaldehyde in phosphate-buffered saline (PBS). Fluorescent intensity from 20,000 single cells from a single round of infection was determined by flow cytometry using a BD FACSCanto II equipped with the DIVA software package. (Table 1 note: the sgRNAs with target sequences specific for Rab5c, Rab7a, NPC1, or EEA1 were generated using a free PCR strategy with a U6 promoter-containing primer and reverse primers including Rab5c, Rab7a, NPC1, or EEA1 specific target nucleotide sequences, at the positions indicated with N.)
MA104 cells were pretreated for 1 h with the indicated concentration of Apilimod or DMSO. Pretreated cells were inoculated with VSV-eGFP, VSV-eGFP-ZEBOV, or VSV-eGFP-SARS-CoV-2 at a multiplicity of infection (MOI) of 1 (based on titers in MA104 cells) in the presence of Apilimod or DMSO for 1 h at 37°C. At 6 to 8 h postinfection, cells were collected and fixed in 2% paraformaldehyde (PFA) and then subjected to flow cytometry. The percentage of GFP-positive cells was determined using FlowJo software (Tree Star Industries).
Vero E6 cell monolayers were pretreated for 1 h at 37°C with serial dilutions of Apilimod at the indicated concentrations. Next, SARS-CoV-2 was diluted to an MOI of 0.01 focus-forming units per cell in Apilimod-containing medium and added to Vero E6 cells for 1 h at 37°C. After adsorption, cells were washed once with PBS, and medium containing the respective concentration of Apilimod was added. Cells were incubated for 24 h at 37°C, at which time cell culture supernatants were removed and used for determination of viral titer by focus-forming assay.
SARS-CoV-2 Focus-Forming Assay. Cell culture supernatants from virus-infected cells were diluted serially 10-fold, added to Vero E6 cell monolayers in 96-well plates, and incubated at 37°C for 1 h. Subsequently, cells were overlaid with 1% (wt/vol) methylcellulose in MEM supplemented with 2% FBS. Plates were harvested 30 h later by removing overlays and fixed with 4% paraformaldehyde in PBS for 20 min at room temperature. Plates were washed and sequentially incubated with 1 μg/mL CR3022 anti-spike antibody (51) and HRP-conjugated goat anti-human IgG in PBS supplemented with 0.1% saponin and 0.1% bovine serum albumin (BSA). SARS-CoV-2-infected cell foci were visualized using TrueBlue peroxidase substrate (KPL) and quantitated on an ImmunoSpot microanalyzer (Cellular Technologies). Data were processed using Prism software (GraphPad Prism 8.0), and viral titers are reported as percent inhibition relative to mock-treated SARS-CoV-2-infected cells.
Entry Assay and Intracellular Traffic. SVG-A cells plated on glass #1.5 coverslips at about 30 to 40% confluency 1 d prior to the experiment were treated with drug or DMSO for 1 h at 37°C. Following this, cells were incubated at 37°C with VSV, VSV-MeGFP-V269H, VSV-MeGFP-RABV, VSV-MeGFP-LASV, VSV-MeGFP-LCMV, or VSV-MeGFP-ZEBOV in drug- or DMSO-containing infection medium. After this, cells were washed, then further incubated in medium containing the drug or DMSO at 37°C, with the experiment ending at the indicated time by fixation for 20 min at room temperature with 3.7% formaldehyde in PBS. This was followed by a 10-min incubation with 5 μg/mL Alexa647-labeled wheat germ agglutinin in PBS to label the outline of the cells.
Cells were imaged using a spinning disk confocal microscope with optical planes spaced 0.3 μm apart (52). The entry assay scored the presence of MeGFP at the nuclear margin in each cell. Trafficking of viruses to endosomal compartments was observed by live-cell imaging on the spinning disk confocal microscope. Chemical fixation tends to eliminate the large endolysosomal vacuoles generated by Vacuolin-1 or Apilimod and reduces the colocalization with viral particles contained within. Time series with images taken every 3 s for 3 min in a single optical plane with the appropriate fluorescent channels (52) were acquired from nonfixed samples imaged at the end of the experimental period. For experiments containing NPC1-Halo, the Halo-tagged cells were labeled with either 250 nM JF549 or JF647 dye in media for 30 min at 37°C. Following labeling, cells were washed three times with media. The microscope was operated using the Slidebook 6.4 software package (3I), and images were also displayed using this software.
Statistical Tests. To compare the means from cells with different treatments, one-way ANOVA and post hoc Tukey test analysis were used to take into account unequal sample sizes, as indicated in the legends of Figs. 1, 2, and 6.
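A minimal Python sketch of this statistical workflow, using scipy for the one-way ANOVA and statsmodels for the post hoc Tukey test; the group values below are simulated placeholders, and the unequal sample sizes mirror the situation described above.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Simulated per-cell infection scores for three treatment groups
dmso = rng.normal(1.00, 0.10, 40)
apilimod = rng.normal(0.20, 0.08, 35)
vacuolin = rng.normal(0.30, 0.09, 30)

f_stat, p_val = stats.f_oneway(dmso, apilimod, vacuolin)
print("one-way ANOVA: F = %.2f, p = %.3g" % (f_stat, p_val))

# Tukey's HSD (Tukey-Kramer for unequal group sizes) for pairwise comparisons
values = np.concatenate([dmso, apilimod, vacuolin])
groups = ["DMSO"] * 40 + ["Apilimod"] * 35 + ["Vacuolin-1"] * 30
print(pairwise_tukeyhsd(values, groups))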
Data Availability. The VSV chimeras are available from the corresponding author S.P.W. upon request.
All study data are included in the article and SI Appendix and Movies S1-S3. | 2020-08-09T13:06:12.107Z | 2020-08-06T00:00:00.000 | {
"year": 2020,
"sha1": "c6b307e452753c2399d61eda08941329115fe2da",
"oa_license": "CCBY",
"oa_url": "https://www.pnas.org/content/pnas/117/34/20803.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7b989eaa6c93f8323372d31384260dfcd4150090",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248178111 | pes2o/s2orc | v3-fos-license | Ensuring accurate stain reproduction in deep generative networks for virtual immunohistochemistry
Immunohistochemistry is a valuable diagnostic tool for cancer pathology. However, it requires specialist labs and equipment, is time-intensive, and is difficult to reproduce. Consequently, a long term aim is to provide a digital method of recreating physical immunohistochemical stains. Generative Adversarial Networks have become exceedingly advanced at mapping one image type to another and have shown promise at inferring immunostains from haematoxylin and eosin. However, they have a substantial weakness when used with pathology images as they can fabricate structures that are not present in the original data. CycleGANs can mitigate invented tissue structures in pathology image mapping but have a related disposition to generate areas of inaccurate staining. In this paper, we describe a modification to the loss function of a CycleGAN to improve its mapping ability for pathology images by enforcing realistic stain replication while retaining tissue structure. Our approach improves upon others by considering structure and staining during model training. We evaluated our network using the Fr\'echet Inception distance, coupled with a new technique that we propose to appraise the accuracy of virtual immunohistochemistry. This assesses the overlap between each stain component in the inferred and ground truth images through colour deconvolution, thresholding and the Sorensen-Dice coefficient. Our modified loss function resulted in a Dice coefficient for the virtual stain of 0.78 compared with the real AE1/AE3 slide. This was superior to the unaltered CycleGAN's score of 0.74. Additionally, our loss function improved the Fr\'echet Inception distance for the reconstruction to 74.54 from 76.47. We, therefore, describe an advance in virtual restaining that can extend to other immunostains and tumour types and deliver reproducible, fast and readily accessible immunohistochemistry worldwide.
Introduction
Haematoxylin and Eosin (H&E) is the default stain in pathology. It is used for a wide variety of tasks and underpins approaches for their automation, such as segmentation of glands in the prostate [11], classification of early pancreatic cancer [10] and staging of colorectal cancer [3]. Deep networks can perform these tasks on H&E alone, suggesting that a broad scope of information is available from H&E; however, it indiscriminately stains both tumour and non-tumour cells. Therefore, it can be challenging to diagnose cases with ambiguous histology or poor tumour differentiation with the unaided eye, as humans have difficulty distinguishing the subtle changes in colouring. In these cases, immunohistochemistry (IHC) is employed. It visualises the location and quantities of specific molecules that are present in tissue using an antigen-antibody reaction [9]. It is widely used in diagnosis, especially in challenging cases, because tumours can express specific antigens that can be highlighted by IHC [2]. This allows the tumour cells to be differentially distinguished from non-tumour cells.
However, IHC is complex, consumes more tissue, is more expensive than H&E and is hard to reproduce. This suggests a need for a more accessible, economical and reproducible method to attain an improved level of contrast. Through virtual stain translation, deep learning can accomplish this by converting between H&E and the desired IHC stain. Progress has already been made in this space, such as a virtual SOX10 stain for melanoma [8], a virtual PAS stain for renal slides [12] and virtual immunofluorescent images [1].
Deep learning models typically belong to one of two groups. They are either generative or discriminative [15]. Discriminative models typically perform tasks like classification or image segmentation. They usually predict the probability that a given input belongs to one class or another [4]. The role of generative models is not to predict a class or label but to learn the distribution of the input data and then recreate new examples. Nearly all state-of-the-art approaches to virtual stain translation use some form of deep generative network.
In recent years, Generative Adversarial Networks (GANs) have become particularly popular for image translation [6]. GANs were introduced by Goodfellow et al. in 2014 [5]. They are network architectures that apply an adversarial loss function during training. They consist of two subnetworks. The first is termed the Generator (G); its goal is to generate data in the target domain Y, using either input data from a domain X or random noise. The next is the discriminator (D); its goal is to differentiate between domain Y and generated examples. The generator and discriminator are typically deep networks, but, in theory, they can be any function approximator that is differentiable [6]. The generator is trained to reproduce the input dataset. The discriminator is usually a binary classifier that attempts to distinguish generated examples from the actual ground truth. The two models are trained in competition, aiming for the generator to make synthetic images that the discriminator cannot effectively distinguish from real ones. The training process is essentially a min-max optimisation problem [4] and should terminate when the generator's loss has reached a minimum and the discriminator has reached a maximum. By enforcing the comparison between ground truth data and the generated samples, GANs can learn to create new examples of many data types, including images. This capacity for realistic approximation of the input data is powerful and engaging and explains their use in prior virtual staining attempts. However, in pathology, particularly when used for virtual stain translation, this ability to produce a highly realistic image is also perilous. The weakness exists because GANs can potentially fabricate tissue and cell structures that may not exist in the input data but look so natural that the discriminator does not detect them. This is a significant liability for medical use when a diagnosis critically depends on precise and faithful reporting of structural details in the tissue. Figure 1 shows several source H&E images and the corresponding IHC image from a serial section, along with virtual stains generated by two deep networks. Column (c) shows examples where a GAN has recreated the IHC version of the source tissue. Comparing the tissue shown in the red circles with the source tissue in column (a) demonstrates that the GAN has wholly changed the tissue structure.
One type of Generative Adversarial Network, a CycleGAN, is designed for image translation [16] and can alleviate this issue. It is trained using an adversarial process involving generators and discriminators, but a CycleGAN has two pairs. The first generator translates an image from the source domain to the target. Its discriminator is guided to become increasingly adept at recognising fakes in the target domain. This forces the generator to produce realistic images as it competes with the discriminator. The next generator in the cycle translates the inferred target image from the first to an image in the source domain. Its discriminator attempts to distinguish this image from the ground truth, again constraining the generator to produce realistic images from the source domain. The final step is that a loss term is added to the network to apply a mean absolute error calculation between the output of the second generator and the source image, thus forcing the network to make them alike. This is known as cycle consistency loss [18]. For the source and output image to be identical, both generators must maintain the structural detail throughout the process. This ensures that the first generator maintains the structure when translating from the source to the target domain and mitigates the drawback of invented structural details. Column (d) of figure 1 shows examples of a CycleGAN recreating the IHC version of the source tissue. Comparing the tissue structure of the virtual patches with the source tissue in column (a) reveals that it has exactly reproduced the structural detail. However, an unaltered CycleGAN still has no reason to enforce detailed reproduction of the correct staining as long as the image behaves realistically. Comparing the tissue highlighted by the orange circles with the source H&E in column (a), the structure is identical. However, the stained areas are considerably different from the ground truth IHC image in column (b).
One characteristic of real H&E and IHC stains is that they can vary significantly in intensity and hue depending on the preparation and age of the stain and the type of light source used when imaging, among many other factors. This variation in stain colour means that colour normalisation is required to ensure the input data matches the training data when deep networks are used to infer data gathered on a different day or from a different lab. Several such methods are used: the Macenko method [13], which uses singular value decomposition in optical density space to estimate vectors that express the stain ratios of each pixel, which can then be adjusted to match the desired target; Reinhard's method [14], which converts the slide to LAB colour space and then shifts the mean and standard deviation of each channel of the source image to match the target; and the Vahadane structure-preserving colour normalisation method [17], which uses sparse non-negative matrix factorisation to determine a matrix of stain ratios in optical density space. The inverse of this can be computed and used to calculate staining intensity. Converting the RGB image to a stain colour space allows all pixel values to be altered equally in each channel and leaves the relative values unaltered, preserving the structure. Each has its advantages and drawbacks, but whichever is chosen, some process must be used to normalise the stain colour to ensure the slides belong to the same colour domain before use in training or inference. Additionally, implementing a technique to convert from RGB whole slide images to a stain colour space allows for an analysis of the intensity of each virtual stain and the determination of its similarity to the ground truth, which is particularly useful when assessing virtual IHC. This paper introduces a new technique to produce a quantitative value representing the accuracy of a virtual IHC image based on colour deconvolution, intensity thresholding and the Sorensen-Dice coefficient.
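As a minimal Python sketch of the Reinhard-style normalisation just described (the original method operates in the lαβ colour space; CIELAB, used here via scikit-image, is a common stand-in, and the function name is ours):

import numpy as np
from skimage import color

def reinhard_normalise(src_rgb, tgt_rgb):
    # src_rgb, tgt_rgb: float RGB arrays in [0, 1].
    # Shift each LAB channel of the source so its mean and standard
    # deviation match those of the target image.
    src, tgt = color.rgb2lab(src_rgb), color.rgb2lab(tgt_rgb)
    out = np.empty_like(src)
    for ch in range(3):
        s_mu, s_sd = src[..., ch].mean(), src[..., ch].std()
        t_mu, t_sd = tgt[..., ch].mean(), tgt[..., ch].std()
        out[..., ch] = (src[..., ch] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return np.clip(color.lab2rgb(out), 0.0, 1.0)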
We have observed that GANs can produce realistic pathology images with an adequate quantity of correctly normalised input data. However, they have a significant weakness when used for pathology data: the potential to invent tissue. A CycleGAN solves the problem of invented structural details but fails to ensure accurate reproduction of the tissue colouring. This is the problem for which this paper suggests a solution. We propose an amendment to the CycleGAN loss function where the goal is to balance the network's focus between stain and structural reproduction and allow for the creation of reliable virtual IHC images. This can deliver fast, affordable and more readily accessible immunohistochemistry, using a process that could be extended to many types of cancer and stains.
Dataset
To train a neural network to translate from H&E to our chosen stain, we required a ground truth dataset of structurally paired H&E and AE1/AE3 slides. There was no available dataset with these attributes; therefore, we generated one. We did this by cutting serial sections at a close separation to ensure the tissue was congruent between slices with minimal deformation. We then stained the first slice with H&E and the second with our desired IHC stain, pan-cytokeratin AE1/AE3. The Glasgow Tissue Research Facility supplied the source tissue along with the NHSGGC Biorepository (ethics no 22/ws/0207). They fully anonymised the slides and metadata before use in this study. The Edwards group in the Institute of Cancer Sciences at the University of Glasgow cut pairs of serial sections at 2.5-micron intervals from eight CRC resections, resulting in sixteen slides in total. The first section of each pair was stained with H&E and the second with pan-cytokeratin AE1/AE3. The Edwards group and histology services at the Beatson Institute for Cancer Research carried out the AE1/AE3 IHC staining. Prepared slides were scanned with a Hamamatsu S60 slide scanner, with an LED light source, at 20x magnification.
A test dataset was also required to validate the virtual IHC slides and compare model performance. Therefore, we had two additional slides cut and stained using the same process as above but performed on a different day to ensure the stain preparation was distinct; this effectively simulated real variations in staining. The Glasgow Tissue Research Facility cut the sections and carried out H&E staining, and members of the Edwards group performed the AE1/AE3 IHC staining. We then carried out post-processing steps in the same manner as the primary dataset, and we reserved these slides as unseen test data for network evaluation and comparison.
Our raw dataset had not yet satisfied two quality and accuracy requirements for training a deep network for virtual IHC. The first was that the H&E and AE1/AE3 tissue structure had to be identical between sections, matched right down to the cellular level; the second was that the training, validation and test input data all had to belong to the same colour domain as the training data. This meant that we had to address the variation in colour profile across slides.
We settled on using the Vahadane stain normalisation method to standardise the colour profile across slides. We amended this method in several ways; these were mainly to improve reliability and avoid slide artefacts affecting the resulting stain matrix. Our first modification to the technique was to sample pixels across the entire slide to compute our stain matrix rather than just a few points. We created a tissue mask by thresholding a low-resolution image of the slide in LAB colourspace. We could then subsample points from all of the tissue, and it became more reliable to compute the stain matrix on slides with unusual artefacts. The next and most significant difference in our application of this method is that we used non-negative matrix factorisation rather than sparse non-negative factorisation. The original Vahadane method employed sparse factorisation as they speculated that a pixel is composed of exclusively one stain or the other. However, we propose that it more accurately reflects the true biology if a ratio of the stains expresses the light absorbed by the tissue at any one pixel.
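A sketch of the tissue-masking step described above; the L-channel cut-off is an assumption and in practice would be tuned per scanner and stain batch, and the function names are ours.

import numpy as np
from skimage import color

def tissue_mask(thumbnail_rgb, l_max=90.0):
    # Background glass is near-white (very high lightness L in LAB space),
    # so keep only pixels below an L threshold as candidate tissue
    lab = color.rgb2lab(thumbnail_rgb)
    return lab[..., 0] < l_max

def sample_tissue_pixels(thumbnail_rgb, n=10000, seed=0):
    # Subsample pixels from across all detected tissue so the stain matrix
    # is estimated from the whole slide rather than a few points
    mask = tissue_mask(thumbnail_rgb)
    ys, xs = np.nonzero(mask)
    idx = np.random.default_rng(seed).choice(len(ys), size=min(n, len(ys)),
                                             replace=False)
    return thumbnail_rgb[ys[idx], xs[idx]]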
We applied this modified Vahadane stain normalisation method to our raw slide dataset and finally carried out image registration and alignment using a mutual information-based image similarity metric. We shall describe this process in detail in another paper. This pipeline provided a normalised and aligned set of paired source and target patches to train our deep networks. This dataset can be made available upon request.
GAN based Virtual Immunohistochemistry
For use as a reference in comparison with our model, we implemented a virtual IHC stain translation network based on a Generative Adversarial Network (GAN), as described in section 1. The details of the GAN loss function are given in equation 1: min_G max_D V(D, G) = E_x∼pdata(x)[log D(x)] + E_z∼pz(z)[log(1 − D(G(z)))] (1). It demonstrates that we trained the discriminator D to maximise the probability of assigning the correct label to both real examples and samples from G. We simultaneously trained the generator G to minimise log(1 − D(G(z))) to fool the discriminator into assigning the real label to generated images. The generator must learn to make its output as realistic as possible.
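A minimal TensorFlow sketch of this adversarial objective (the non-saturating variant that is common in practice; the framework choice and function names are ours):

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(real_logits, fake_logits):
    # D is pushed to label real examples 1 and generated examples 0
    return bce(tf.ones_like(real_logits), real_logits) + \
           bce(tf.zeros_like(fake_logits), fake_logits)

def generator_loss(fake_logits):
    # G is pushed to make D assign the "real" label to its output
    return bce(tf.ones_like(fake_logits), fake_logits)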
CycleGAN for Virtual Immunohistochemistry
For use as a control, we implemented an unaltered CycleGAN as described in the original 2017 paper by Zhu et al. [18]. The architecture of the network and the training method are as described in that paper and section 1. The Cycle Consistency Loss proposed by Zhu et al. is given in equation 2, where G is the generator from the first pair and F is the generator from the second pair: L_cyc(G, F) = E_x∼pdata(x)[||F(G(x)) − x||_1] + E_y∼pdata(y)[||G(F(y)) − y||_1] (2).
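A short sketch of this cycle term in the same TensorFlow style (the function name is ours):

import tensorflow as tf

def cycle_consistency_loss(real_x, cycled_x, real_y, cycled_y):
    # Mean absolute error between each input and its round-trip
    # reconstruction F(G(x)) or G(F(y))
    return tf.reduce_mean(tf.abs(real_x - cycled_x)) + \
           tf.reduce_mean(tf.abs(real_y - cycled_y))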
Mid-cycle loss CycleGAN for Virtual Immunohistochemistry
Our modification to the unaltered CycleGAN's loss function was that we added a term to minimise the differences between the generated image and the real image at the midpoint of the cycle, as shown in equation 3. We refer to this term as the "mid-cycle loss". In the unaltered CycleGAN, the cycle loss is multiplied by a scaling factor to weight it with regard to the adversarial loss, usually called lambda 1 [18]. We also added a weighting coefficient to the mid-cycle loss, lambda 2.
We propose that mid-cycle loss will balance the network and maintain the desired stain. Simultaneously, the cycle consistency loss should enforce the structure, and the adversarial loss ensures that the images are as realistic as possible.
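A hedged sketch of how such a mid-cycle term and the combined objective might look; the exact forms of equations 3 and 4 are not reproduced in the text above, so the L1 penalty and the summation of terms are our reconstruction from the description, with λ1 = 10 and λ2 = 50 as stated later.

import tensorflow as tf

def mid_cycle_loss(virtual_ae, real_ae):
    # L1 distance between the virtual AE1/AE3 image at the cycle midpoint
    # and the structurally paired ground-truth AE1/AE3 patch
    return tf.reduce_mean(tf.abs(virtual_ae - real_ae))

def total_generator_loss(adv_ae, adv_he, cycle, mid, lam1=10.0, lam2=50.0):
    # Adversarial terms plus weighted cycle-consistency and mid-cycle terms
    return adv_ae + adv_he + lam1 * cycle + lam2 * mid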
Our network architecture for the virtual IHC CycleGAN with mid-cycle loss is given in figure 2. Displayed within are the AE1/AE3 Generator (G AE) and AE1/AE3 Discriminator (D AE), and the H&E Generator and Discriminator (G HE) and (D HE). The generators and discriminators are configured to train using adversarial loss, and the two generators are linked for Cycle Consistency Loss. For the generators, we used an encoder network with nine 2D convolutional layers, each followed by batch normalisation and a leaky rectified linear unit. The output feature maps halved in spatial dimensionality after each layer, going from 256x256x64 to 1x1x512 by the time the latent space was reached, starting with 64 filters per layer and doubling until a limit of 512 was reached. The decoder reversed this pattern, restoring the original dimensions using nine 2D convolutional transpose layers, this time halving the number of filters at each layer, ranging from 512 to a lower limit of 64 by the second-last layer. The last layer of the decoder was a convolutional transpose layer with a tanh activation function and three output filters to reproduce the desired RGB image.
The discriminator networks consisted of three sets of convolutional, batch normalisation and rectified linear unit layers followed by a dense layer for probability output. A modified loss function was used in the final network, combining the adversarial, cycle consistency and mid-cycle terms as in equation 4, where λ1 and λ2 are adjustable coefficients to weight each loss, with λ1 set to 10 and λ2 set to 50.
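A compact Keras sketch of a generator matching this description; the kernel size, the stride pattern and the single stride-1 block that reconciles nine transpose layers with a 256-pixel output are our assumptions.

import tensorflow as tf
from tensorflow.keras import layers

def build_generator(base=64, cap=512):
    inp = layers.Input(shape=(256, 256, 3))
    x, filters = inp, base
    # Encoder: nine Conv2D/BatchNorm/LeakyReLU blocks down to a 1x1x512 code
    for _ in range(9):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
        filters = min(filters * 2, cap)
    # Decoder: nine Conv2DTranspose blocks back to 256x256x3; the first is
    # stride 1 so that eight stride-2 blocks restore the spatial size
    for i in range(8):
        filters = max(filters // 2, base)
        x = layers.Conv2DTranspose(filters, 4,
                                   strides=1 if i == 0 else 2,
                                   padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    out = layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                 activation="tanh")(x)
    return tf.keras.Model(inp, out)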
The network was trained using the Adam optimiser with a beta 1 value of 0.5 and a cosine-decay learning rate scheduler with an initial value of 1e-3, decreasing to 0 by the end of training.
Our best performing network was trained for 200 epochs, with a batch size of 8. The training dataset consisted of 93,119 pairs of H&E / AE1/AE3 patches. This was divided into an 80/10/10% training, validation and test split. The final network selection was based on a combination of the lowest overall loss as given by equation 4, the highest Staining Dice Coefficient and the lowest Fréchet Inception distance.
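A sketch of the optimiser configuration described above; the total step count is an estimate derived from the stated 80% training split and batch size of 8.

import tensorflow as tf

steps = 200 * (int(93119 * 0.8) // 8)   # 200 epochs over the training split
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=steps)  # decays towards 0
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule, beta_1=0.5)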
Analysis
To assess the performance of our virtual IHC networks, we had to push beyond a solely visual assessment and adopt methods that generate quantitative values. These provide concrete evidence of improvement in inference resulting from alterations to the loss function or network hyperparameters. We developed a method to assess the accuracy of a virtual stain based on stain deconvolution and the Sorensen-Dice coefficient, which we termed the Staining Dice Coefficient. The second method was the Fréchet Inception distance, which is widely used in the assessment of generative networks [7]. We describe our implementation of these two methods in the following section.
The Staining Dice Coefficient (SDC)
We required a way to quantitatively assess the accuracy of the virtual stains produced by our networks. We reasoned that the best method to evaluate stain reproduction was to separate the stains digitally and then assess how each stain compares with the target image. Separating the stains requires colour deconvolution. Existing methods for stain separation, often used for stain normalisation, are discussed in section 1. We settled on the Vahadane structure-preserving colour normalisation method [17], which uses sparse non-negative matrix factorisation to determine a matrix of stain components in optical density space. The inverse of this matrix can be computed and used to calculate staining intensity, converting the RGB image to a stain colourspace. This method was ideal for our needs: it permitted us to separate the RGB images into their stain components in both the real and virtual images and then use this stain colourspace to determine the correlation with the ground truth and, therefore, the accuracy of the virtual stain.
We made several amendments to the Vahadane method, mainly to improve reliability over various slide types and to avoid artefacts affecting the resulting matrix. One significant difference is that we used plain non-negative rather than sparse non-negative factorisation. Vahadane et al. use sparse factorisation on the basis that a pixel contains either one stain or the other; we propose that it more accurately reflects the true biology if the tissue can have components of both stains at any one pixel. We also sampled pixels from across the whole slide to compute our stain matrix, rather than at just a few points, by computing a tissue mask from a low-resolution image of the slide in LAB colour space. We could then subsample pixels over all of the tissue, making stain matrix computation more reliable on slides with unusual artefacts. This method could decompose the RGB whole slide images into a stain colour space, where each channel represents the intensity of staining at each pixel.
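A minimal sketch of this non-negative factorisation variant, assuming scikit-learn's NMF and a standard optical-density transform (the published Vahadane method and our production amendments include details omitted here):

```python
import numpy as np
from sklearn.decomposition import NMF

def estimate_stain_matrix(tissue_pixels_rgb, n_stains=2):
    """Estimate a stain matrix from subsampled tissue pixels via plain
    (non-sparse) non-negative matrix factorisation of optical densities."""
    od = -np.log((tissue_pixels_rgb.astype(np.float64) + 1.0) / 256.0)
    od = od[(od > 0.15).any(axis=1)]        # discard near-background pixels
    model = NMF(n_components=n_stains, init="random",
                random_state=0, max_iter=500)
    model.fit(od)
    stains = model.components_               # rows = stain vectors in OD space
    return stains / np.linalg.norm(stains, axis=1, keepdims=True)

def to_stain_space(image_rgb, stain_matrix):
    """Project an RGB image into per-pixel stain intensity channels."""
    h, w, _ = image_rgb.shape
    od = -np.log((image_rgb.reshape(-1, 3).astype(np.float64) + 1.0) / 256.0)
    conc, *_ = np.linalg.lstsq(stain_matrix.T, od.T, rcond=None)
    return conc.T.reshape(h, w, stain_matrix.shape[0])
```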
Once we had a reliable method to separate the slides into their component stains, we used it to create an evaluation metric for our virtual immunohistochemical images, the Staining Dice Coefficient (SDC). The SDC of a virtual slide was calculated by thresholding the staining intensity in each of the AE1/AE3 and Haematoxylin channels, repeated for both the target and the virtual IHC images. The resulting binary masks were used to calculate a Sorensen-Dice coefficient, often termed the "Dice coefficient", for each stain; this statistic is commonly used to gauge the correlation between two samples, and here it represents the accuracy of the overlap of the virtual stain with the ground truth. The SDC was computed from the binary masks of staining intensity using the Sorensen-Dice formula given in equation 5, where X is the binary mask of a stain in the target slide and Y is the binary mask of the staining intensity in the virtual slide:

DSC(X, Y) = 2|X ∩ Y| / (|X| + |Y|)    (5)
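A minimal sketch of the SDC computation for a single stain channel; the threshold value is a placeholder, as the text does not state one:

```python
import numpy as np

def staining_dice_coefficient(target_intensity, virtual_intensity,
                              threshold=0.15):
    """SDC for one stain channel; `threshold` is an assumed placeholder."""
    x = target_intensity > threshold    # X: binary mask, target slide
    y = virtual_intensity > threshold   # Y: binary mask, virtual slide
    denom = x.sum() + y.sum()
    return 2.0 * np.logical_and(x, y).sum() / denom if denom else 1.0
```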
Fréchet Inception distance
The Staining Dice Coefficient evaluated the accuracy of overlap of stained areas. Still, we needed a second metric to scrutinise the features and, therefore, the structural detail of the generated images. The Fréchet Inception distance (FID) is a widely used metric for deep generative networks that fits this requirement. It was initially proposed by Heusel et al. in 2017 [7].
The FID uses the Inception V3 model with pre-trained weights from the ImageNet dataset. The model is loaded and its output classification layer removed, leaving the last global pooling layer as the output. This output is a feature vector of length 2048 that captures the features of an input image.
This configuration can then be employed to evaluate the quality of generated images. A 2048-element feature vector is predicted for a collection of real images from the target domain, providing a reference for the features of authentic images. Feature vectors are then calculated for the generated images, resulting in two groups of feature vectors: one for genuine and one for generated images.
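A sketch of this feature extractor, assuming the standard Keras InceptionV3 application:

```python
import numpy as np
import tensorflow as tf

# InceptionV3 with ImageNet weights and the classification head removed;
# global average pooling exposes the 2048-wide feature vector.
inception = tf.keras.applications.InceptionV3(
    include_top=False, pooling="avg", weights="imagenet",
    input_shape=(299, 299, 3))

def feature_vectors(images_uint8):
    x = tf.keras.applications.inception_v3.preprocess_input(
        images_uint8.astype(np.float32))
    return inception.predict(x, verbose=0)    # shape: (n_images, 2048)
```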
The FID score can then be calculated to measure the effective distance between the distributions of the two collections. The score is calculated using the formula given by Heusel et al., shown in equation 6:

d² = ||µ1 − µ2||² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^(1/2))    (6)

The FID score is referred to as d² in the paper to show that it is a distance with squared units [7]. µ1 and µ2 are the feature-wise means of the real and inferred images, and Σ1 and Σ2 are the covariance matrices of the real and inferred feature vectors. Tr is the linear algebra trace operation, the sum of the elements along the main diagonal of a square matrix.
This gives a quantitative way to summarise how similar the two collections are statistically: a lower FID score indicates that the two collections of images are more alike, with a perfect score of 0.0 indicating identical distributions. Combined with the Staining Dice Coefficient, this provides a method to assess both the correctness of stained regions in virtual IHC images and the authenticity of the image features.
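A minimal NumPy/SciPy sketch of the FID calculation in equation 6, taking feature vectors as produced above:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_features, generated_features):
    """FID between two collections of 2048-wide Inception feature vectors."""
    mu1, mu2 = real_features.mean(axis=0), generated_features.mean(axis=0)
    sigma1 = np.cov(real_features, rowvar=False)
    sigma2 = np.cov(generated_features, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # discard small imaginary numerical noise
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```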
Dataset Generation
To assess the usefulness of GANs [5] in virtual immunohistochemistry of colon tumours, we used a training dataset made up of consecutive pairs of serial sections, with one stained with H&E and the other with pan-cytokeratin AE1/AE3. Restained slides, in which the same section is stained first with H&E then by IHC, were unavailable, so patches were aligned computationally.
We applied a stain normalisation technique over our paired whole slide dataset to mitigate the variations in colour profile and stain intensity present in our training and test datasets. Initial attempts used the Reinhard stain normalisation technique, which shifts the mean and standard deviation of the LAB colour space channels of a source image to match a target image [14]. However, when used over entire slide images, this shifts the background values towards the mean, discolouring them. To use the Reinhard method effectively, the tissue must therefore be separated from the background and normalised independently, which can cause image artefacts around the borders of the tissue; while mainly successful, this is not desirable for deep learning. Therefore, we sought a structure-preserving colour normalisation technique.
We found that the Vahadane [17] technique of structure-preserving colour normalisation was superior to the Reinhard method and adopted it as our method of colour deconvolution for stain normalisation. However, when used with real-world slides, its described patch-sampling method was susceptible to large artefacts distorting the calculation of the stain matrix and throwing off the colour of the normalised slide. We countered this by calculating the stain matrix from a subsample of pixels drawn from all tissue in the whole slide image, allowing robust automated stain matrix estimation and normalisation across large datasets.

Figure 1 shows our initial experiments with a generic Generative Adversarial Network (GAN) [5]. It produced remarkably credible pathology images: the generated images were very similar to the actual tissue and, in isolation, would pass a pathologist's inspection as authentic. However, the inferred slides tended to contain areas that were realistic in appearance but fictional in staining or cell structure, as the network would often change the number and shape of cell nuclei between the source and target image. A pathology tool that invents areas of staining or structure is unsuitable, and indeed dangerous, precisely because of its realism and the trust it could instil. For example, small areas of altered tissue could change the resulting diagnosis, especially for processes such as tumour bud scoring, where the presence or absence of a single tumour cell can make a significant difference. Once we identified the inclination of GANs to invent features, we looked for alternative network architectures that promote consistency of tissue in addition to realism.
CycleGAN Based Virtual IHC
The CycleGAN architecture is well suited to meeting the requirement for structural consistency. However, cycle loss alone is insufficient to constrain a deep generative network to reproduce the correct staining (figure 4). A CycleGAN will very accurately recapitulate the cell morphology of input tissue during virtual stain translation, but it has a similar weakness to GANs in that it can omit or incorrectly fabricate the stain colour. Examples of this can be seen in figure 4, where inaccurate staining produced by an unaltered CycleGAN is shown in panels (b) and (e), compared with the actual AE1/AE3 stained tissue in panels (a) and (d). We therefore identified routes to modify the CycleGAN to reproduce the stain correctly.
CycleGAN with Mid-Cycle Loss Based Virtual IHC
We reasoned that we could constrain the CycleGAN's virtual IHC stain to match the target in a manner comparable to the process for structure preservation. This was achieved by adding a new term to the CycleGAN loss function, designated the "mid-cycle" loss, which compares the generated virtual stain to the ground truth at the mid-point of the training cycle. For this term we chose an error minimisation function that compares overall similarity: the mean absolute error. It concentrates on the aggregate of errors across the image and therefore, for this use case, on the overall stain reproduction. This works globally because the cycle loss concentrates on reproducing the structural detail. Weighting terms are applied to the structure-preserving cycle loss and the stain-preserving mid-cycle loss; these can be adjusted to fine-tune the network's prioritisation, making it highly effective at H&E to IHC stain translation while retaining the cellular detail of the source image. The addition of mid-cycle loss to the CycleGAN improved the training, validation and test metrics. The CycleGAN with mid-cycle loss produced superior visual results and had lower, more stable training and validation losses. In addition to these values, we used two other metrics to determine the best model weights for inference.
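As a sketch, the mid-cycle term described above reduces to a weighted mean absolute error between paired patches (TensorFlow/Keras assumed):

```python
import tensorflow as tf

mae = tf.keras.losses.MeanAbsoluteError()

def mid_cycle_loss(real_ihc, generated_ihc, lambda_2=50.0):
    # Mean absolute error between the virtual stain and its paired ground
    # truth at the cycle mid-point, weighted by lambda 2.
    return lambda_2 * mae(real_ihc, generated_ihc)
```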
We held back ten 5000x5000x3 source and target patches for use as validation data. These were already aligned and stain-normalised from our training slides. They were split into smaller patches with a shape relevant to their intended use. For inference in our CycleGAN, they were of shape 512x512x3, and for use with a pre-trained Inception V3 network, they were of shape 299x299x3. These could then be utilised to evaluate the trained models with our chosen validation metrics to select the best epoch for inference.
The first metric of interest was the staining dice coefficient. This was calculated from the binary masks of stained pixels in the inferred and target patches. We generated the masks from a threshold of stain intensity values obtained by deconvolving the RGB images into a stain colour space. This allowed the determination of virtual staining accuracy by comparing the overlap in stained areas in the virtual image and the real target patch.
The second validation metric was the Fréchet Inception distance (FID), calculated using the following process. First, the source 5000x5000x3 H&E patches from the validation dataset were input into our CycleGAN to generate a virtual AE1/AE3 IHC slide, which was split with the same patch coordinates as our target validation patches. Next, an Inception V3 model was prepared by loading pre-trained weights from the ImageNet dataset; the classification head was removed and the preceding 2048-wide pooling layer used as an output feature vector, representing the features present in an image. We then used this configuration to compute two collections of feature vectors: one for our real target patches and one for our inferred virtual patches.
The Fréchet Inception distance is then calculated from these two collections. The precise method of calculation is given in section 2.3.2. However, an important observation is that the closer the two feature vectors are in distribution, the lower the FID and the higher the visual correspondence between the real and virtual patches.
The staining dice coefficient and Fréchet Inception distance gave a quantitative way to evaluate the output virtual IHC images and to directly compare the performance of an unaltered CycleGAN with one trained using mid-cycle loss. In combination, they allow evaluation of both the accuracy of the stained areas and the realism of the output features. Figure 5 panels (a) and (b) show graphs of the AE1/AE3 and H&E dice coefficients. The dice coefficients for the CycleGAN with mid-cycle loss are higher for both stains, showing improved accuracy in the overlap of virtual stained areas with the real target tissue. Additionally, the training runs of the CycleGAN with mid-cycle loss improved less erratically and reached stability faster than the unaltered CycleGAN, perhaps indicating a more stable and superior training process.
In figure 5 panel (c), a graph of the Fréchet Inception distance shows the difference between the virtual and target patches for the unaltered CycleGAN and our CycleGAN with mid-cycle loss. The mid-cycle loss network is again less erratic and quickly reaches a much lower FID. This demonstrates an improved training process and that the resulting features of the mid-cycle loss CycleGAN are closer to the target patches than the unaltered CycleGAN.
Finally, we evaluated our CycleGAN with mid-cycle loss against an unaltered CycleGAN using test data. We created a new dataset from two new pairs of serial sections to ensure that it was entirely unseen data. We assessed the two CycleGANs using the staining dice coefficient and Fréchet Inception distance. This allowed us to evaluate the correctness of the stained areas between the virtual and target tissue and the realism of the output features. The resulting test scores on the two leading networks are available in figure 6.
Discussion
This work demonstrates that virtual immunohistochemistry is a viable alternative that can successfully recapitulate the characteristics of physical IHC. We have generated large numbers of virtual IHC slides and have confidence in the ability of a deep network to reproduce IHC staining on the tumour core, in small clusters and for well-defined single cells. However, training on consecutive slices might not be the most suitable approach for achieving high accuracy on single tumour cells: over the 2.5-micron interval of our sections, we observed significant changes in tissue morphology. Nevertheless, the accuracy attained using deformably realigned sections exceeded expectations, and the approach is viable when restaining the same slide is not an option.
The implemented modifications to the Vahadane structure-preserving colour normalisation technique performed well and were robust across large datasets, with the advantage of being computationally simple enough to run over the whole slide images of a large dataset within a reasonable period. Additionally, normalising the input H&E intensity in the stain colour space ensured that the slides belonged to the same domain without introducing the boundary artefacts present in other methods. Normalising the target AE1/AE3 Pan-Cytokeratin slides produced an idealised output stain that the network could learn to replicate.
Our proposed technique for assessing virtual IHC staining accuracy, the staining dice coefficient, provides a quantitative metric for evaluating the effectiveness of deep networks specialised to translate stains for brightfield microscopy. Combined with the Fréchet Inception distance, networks can be trained, iterated over, compared and improved with concrete metrics relating to the accuracy of stained areas and the realism of the resulting virtual images. Quantitative metrics for evaluating stain accuracy and structural detail are critical when developing deep networks for pathology. We hope to use them to guide future network design by altering network hyperparameters and observing the effect on these metrics. A tangible value for assessment should help steer hyperparameter selection and improve the accuracy and reliability of deep learning applications in virtual immunohistochemistry.
The modifications we have suggested to the CycleGAN loss function exhibit the ability to enforce realistic staining characteristics in a virtual immunohistochemical slide and improve all evaluated metrics compared with an unaltered CycleGAN. Their success lies in calculating the mean absolute error between the generated IHC patch and its corresponding target IHC patch at the mid-point of the cycle. This, in combination with the original CycleGAN cycle consistency loss, can train a network to produce a realistic virtual IHC stain. We hope this will be accurate enough for use in real-world cancer pathology after refinement using improved and expanded datasets. A CycleGAN with our virtual IHC specific loss function overcomes the tendency of standard GANs to fabricate tissue structures and should allow near real-time access to immunohistochemistry and the increased contrast that it can provide. Virtual IHC will allow for extensive use, improving the accuracy of diagnosis and unlocking more opportunities for research while reducing the cost and complexity of its adoption.
"year": 2022,
"sha1": "debb0800c19474af7659d10bac098de116d87850",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "debb0800c19474af7659d10bac098de116d87850",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Biology"
]
} |
The Ability of Select Probiotics to Reduce Enteric Campylobacter Colonization in Broiler Chickens
Campylobacter is the leading cause of foodborne illness worldwide and is often associated with consumption and/or mishandling of contaminated poultry products. Probiotic use in poultry has been an effective strategy for reducing other enteric foodborne pathogens but has not proven consistent for Campylobacter. As Campylobacter resides in intestinal mucus and utilizes mucin for growth, selecting isolates on the basis of mucin utilization might be a strategy to screen for efficacious probiotic bacteria. In this study, bacterial isolates demonstrating increased growth rates in mucin in vitro (trials 1 and 2), or isolates demonstrating a reduction of Campylobacter counts when co-incubated with mucin in vitro (trials 3 and 4), were selected and evaluated for their ability to reduce Campylobacter colonization in four bird trials. In trials 1 and 2, ninety day-of-hatch chicks were randomly divided into 9 treatment groups (n=10 chicks/treatment) and treated individually with one of four bacterial isolates demonstrating increased growth in media containing mucin. The treatments included a positive Campylobacter control (no isolate) and the four isolates grown in media with or without mucin prior to inoculation. In trials 3 and 4, sixty day-of-hatch chicks were divided into six treatment groups (n=10 chicks/treatment) receiving either no isolate (positive Campylobacter control) or one of five individual isolates, all demonstrating the ability to reduce Campylobacter counts when co-incubated with mucin in vitro; these isolates were grown in media containing mucin prior to inoculation. In all four trials, birds were gavaged with individual isolates at day-of-hatch and orally challenged with a four-strain mixture of C. jejuni on day 7. Ceca were collected at day 14 for Campylobacter enumeration. Results from the first two trials demonstrated that two individual isolates, one grown in the presence of mucin and one incubated without mucin, consistently reduced cecal Campylobacter counts (1.5 to 4 log reduction) compared with controls. In the follow-up trials with isolates selected for their ability to directly reduce Campylobacter counts when co-incubated with mucin in vitro, one isolate consistently reduced cecal Campylobacter counts by approximately 1.5 logs. These results support the potential use of mucin to preselect isolates for their ability to reduce enteric Campylobacter colonization.
INTRODUCTION
Food-borne illness is one of the greatest problems in public health today, which is mostly due to consumption of food contaminated with bacteria, viruses, parasites and/or toxins (WHO, 2011). It has been reported that food-borne illnesses are likely to occur in developing and underdeveloped countries due to poor sanitation and poor socioeconomic conditions prevailing in those countries (WHO, 2012a). However, recent studies have documented an increasing number of food-borne diseases in developed countries (Humphery et al., 1993;CDC, 2011).
Even with one of the safest food supplies in the world, it has been reported that 1 in 6 Americans get sick due to food-borne illness, resulting in 128,000 hospitalizations and 3,000 deaths each year in the United States (CDC, 2011). It has been estimated that food-borne illnesses cause economic losses worth $77.7 billion per annum in the United States (Scharff, 2012).
Campylobacter infection in humans is one of the leading causes of food-borne illness worldwide (WHO, 2012b). In the United States alone, approximately 13.85 cases per 100,000 people were reported in 2013 (CDC, 2014), and the infection is estimated to cause economic losses of $1.7 billion annually (Hoffmann et al., 2012). Similarly, in the European Union approximately nine million human campylobacteriosis cases are estimated to occur each year, with a resulting economic loss of €2.4 billion annually.
Historical overview
In 1886, Theodor Escherich first observed unique spiral-shaped bacteria in stool samples from infants with diarrhea (Escherich, 1886); these were later identified as Campylobacter. During the early 1900s, McFadyean and Stockman first isolated Campylobacter spp. from the fetal tissues of aborted sheep (Skirrow, 2006). Since then, several additional Campylobacter species have been identified.
In vitro growth requirements
Campylobacter spp. are fastidious organisms, needing complex growth media and microaerophilic environmental conditions for growth (Buck and Smith, 1987; Kelly, 2001; Garénaux et al., 2008). Optimal growth is observed at 42°C under microaerophilic conditions (5% oxygen, 10% carbon dioxide and 85% nitrogen). Despite the high temperature requirement for growth, C. jejuni displays physiological activity even at 4°C (Hazeleger et al., 1998). It has been reported that C. jejuni can resist environmental stressors by changing its morphology from spiral-bacillary to coccoid forms, a change characterized by loss of culturability (Kelly, 2001) while the cells remain viable (Rollins and Colwell, 1986). Unfavorable environmental growth conditions, such as changes in temperature, pH or osmolarity and loss of nutrients in the medium, are responsible for the transition from the spiral form to the coccoid, viable but nonculturable (VBNC) state (Rollins and Colwell, 1986; Lázaro et al., 1999; Moore, 2001).
It has been found that this VBNC state possesses the ability to infect hosts (Saha et al., 1991;Cappelier et al., 1999a;Baffone et al., 2006). However, contradictory opinions have been proposed about the ability of VBNC forms to become metabolically active and produce disease upon exposure to favorable conditions (Jones et al., 1991;Beumer et al., 1992;Korsak and Popowski, 1997;Cappelier et al., 1999a;Cappelier et al., 1999b). The molecular mechanism underlying the VBNC state development and resuscitation are still unknown (Pinto et al., 2013).
Environmental reservoirs and sources of Campylobacter infection.
Campylobacter spp. are ubiquitous and normally found in a wide range of warm-blooded animals (Humphrey et al., 2007), including food-producing animals such as pigs (Nesbakken et al., 2003), sheep (Firehammer and Myers, 1981; Stanley and Jones, 2003), beef cattle, turkeys (Zhao et al., 2001) and chickens (King, 1962). Apart from food-producing animals, contact with pets can also serve as a source of human Campylobacter infections (Deming et al., 1987; Kapperud et al., 1992). Consumption of untreated water may be another route of human Campylobacter infection (Vogt et al., 1982; Palmer et al., 1983; Taylor et al., 1983; Hopkins et al., 1984). Some reports also suggest raw or unpasteurized milk as a source of Campylobacter infection resulting in human gastroenteritis (Blaser et al., 1979; Porter and Reid, 1980; Robinson and Jones, 1981). Consumption of fruits and vegetables (Evans et al., 2003) and mushrooms (Doyle and Schoeni, 1986) has been reported as a minor route of Campylobacter transmission to humans (Rosef and Kapperud, 1983). Among the various causes of human infection, poultry is considered the primary source (King, 1962). Handling of Campylobacter-contaminated chicken and consumption of undercooked chicken are the major sources of human campylobacteriosis (Hopkins and Scott, 1983; Ikram et al., 1994; Neimann et al., 2003). It has also been found that cross-contamination of other uncooked food items by raw chicken during food preparation in the kitchen is a major route for Campylobacter infection in humans (Boer and Hahne, 1990; Luber, 2009).
Human Incidence
Campylobacter spp. were shown to cause infections in animals in the early 1900s; however, they were not reported in humans until about 1980 (Silva et al., 2011). With the development of filter techniques (Dekeyser et al., 1972) and selective media, Campylobacter has been recognized as the most common cause of food-borne diarrheal illness in humans (Allos, 2001; EFSA/ECDC, 2013; CDC, 2014). Cases of human campylobacteriosis are seen in both developed and developing countries and are most likely to occur in children, immunocompromised persons and the elderly (Tauxe et al., 1992; Corry and Atabay, 2001). In the United States, the incidence in infants (24.08 per 100,000) is higher than in adults (14.54 per 100,000) (CDC, 2014). The male-to-female ratio of Campylobacter infections is approximately 1.2:1 (Louis et al., 2005; Olson et al., 2008), although the reason for the higher incidence in males is not fully understood (Friedman, 2000). It has been reported that incidence may vary with season, with peak incidence occurring from June to August in North America, Europe, the UK and Canada (Nylen et al., 2002; Nelson and Harris, 2011; Lal et al., 2012).
In the United States, the CDC reported 13.83 cases per 100,000 people in 2013, which is greater than previous reports (CDC, 2014). This reported incidence may be lower than the actual incidence owing to many underdiagnosed and underreported cases (Mead et al., 1999; Samuel et al., 2004). The European Food Safety Authority and the European Centre for Disease Prevention and Control jointly estimated 220,201 Campylobacter cases in 2011, 2.2% more than in 2010. Campylobacteriosis was also reported as the most frequent zoonotic disease after salmonellosis in 2011 (EFSA/ECDC, 2013). In Australia, approximately 225,000 annual cases have been reported, and in England and Wales approximately 500,000 (Hall et al., 2008; Nichols et al., 2012). Other countries, such as Germany, the Netherlands and Finland, have also reported increasing cases of campylobacteriosis (Nakari et al., 2010).
Pathogenesis of Campylobacter
The molecular mechanisms involved in the pathogenesis of Campylobacter are not clear (Ketley, 1997; Svensson et al., 2014). However, it is believed that adhesion to, colonization of and invasion of the host intestinal epithelium play a pivotal role in producing the symptoms associated with campylobacteriosis (Ketley, 1997). Campylobacter possesses the fibronectin-binding proteins CadF and FlpA and the periplasmic or membrane-associated protein PEB1, which are responsible for host cell binding and colonization (Konkel et al., 1997; van Vliet and Ketley, 2001; Young et al., 2007; Konkel et al., 2010). Host cell invasion and gastroenteritis are mediated by protein secretion via the flagellar type III secretion system (Larson et al., 2008), and infection is further aided by flagellar-driven motility and the Campylobacter invasion antigens (Dasti et al., 2010). It has been found that internalization of C. jejuni into host cells is triggered by the combined effects of the microfilaments and microtubules of host cells (Biswas et al., 2003). Johnson and Lior (1988) reported that C. jejuni produces a toxin called cytolethal distending toxin (Cdt). Cdt causes host cell cycle arrest, preventing cells from entering the M phase and inducing host cell apoptosis (Whitehouse et al., 1998; Dasti et al., 2010). The genes encoding Cdt were sequenced for C. jejuni in the late 1990s (Pickett et al., 1996; Bang et al., 2001) and for C. coli and C. fetus in 2007-2008 (Asakura et al., 2007, 2008).
Human Infections
Campylobacter infection is one of the leading causes of bacterial gastroenteritis in humans (Allos, 2001; CDC, 2014). Children and immunocompromised people are suggested to be more susceptible to Campylobacter infections (Allos, 2001). Thermotolerant Campylobacter spp., specifically C. jejuni and C. coli, together account for approximately 95% of human campylobacteriosis cases worldwide (EFSA/ECDC, 2013). An infective dose as low as 500-800 live cells may be sufficient to cause illness in humans (Robinson, 1981; Black et al., 1988). The incubation period ranges from 2-5 days but has been reported to be as long as 10 days. In most patients, symptoms may include diarrhea, abdominal cramps, malaise, myalgia and fever.
Diarrhea may be loose, watery or bloody, suggestive of ulcerative colitis, owing to the invasive nature of C. jejuni (Blaser, 1997). Extra-intestinal manifestations, including meningitis (Goossens et al., 1986), osteomyelitis (Vandenberg et al., 2003) and neonatal sepsis, are less frequently seen. Campylobacteriosis is generally a self-limiting disease, and affected patients may recover without any treatment. However, some cases have been associated with serious post-infectious complications such as Guillain-Barré syndrome, reactive arthritis, irritable bowel syndrome and inflammatory bowel disease (Gumpel et al., 1981; Spiller and Garsed, 2009).
Guillain-Barré syndrome.
Guillain-Barré syndrome (GBS) is one of the potential long-term severe complications of Campylobacter infection. It is a neuromuscular disease characterized by ascending paralysis that causes weakness of the limbs and respiratory muscles and loss of reflexes (Allos, 1997). It has been identified that 20-40% of GBS cases were associated with a preceding C. jejuni infection (Mishu and Blaser, 1993). Approximately 1 in 1000 cases of Campylobacter infection may develop into GBS (Allos, 1997). Patients usually develop GBS 1-3 weeks after the onset of Campylobacter enteritis. Approximately 20% of GBS patients require hospitalization in the intensive care unit for respiratory ventilation (WHO, 2012b).
It has been postulated that molecular mimicry between the lipooligosaccharides of C. jejuni and host GM1 gangliosides may cause the development of autoantibodies and play a role in the pathogenesis of GBS (Yuki et al., 1993, 2004). There are four subtypes of GBS: 1) acute motor axonal neuropathy (AMAN); 2) acute inflammatory demyelinating polyradiculoneuropathy (AIDP); 3) acute motor and sensory axonal neuropathy (AMSAN); and 4) Miller Fisher syndrome. Among these, Campylobacter infections are most frequently associated with the AMSAN subtype (Kuwabara, 2004).
Reactive arthritis.
Reactive arthritis (ReA) is a spondyloarthropathy that occurs subsequent to microbial gastrointestinal infections, including Campylobacter (Carter and Hudson, 2009; Wu and Schwartz, 2008). Symptoms of ReA may include inflammation of the joints, tissues, skin and tendons (Townes, 2010). It has been estimated that 1-5% of Campylobacter cases may result in reactive arthritis, although estimates of up to 16% have been reported. Though children are more likely to get Campylobacter infections, ReA is more common in adults (Carter, 2006). The pathophysiology of this disease is still not clear; one hypothesis involves antibody production against pathogens with affinity for HLA-B27, and another holds that impaired cellular immunity (decreased interleukin-2 production) against the inciting microorganism correlates with disease development (Wu and Schwartz, 2008).
Irritable Bowel Syndrome.
Irritable bowel syndrome (IBS) is a recurring functional gastrointestinal disorder characterized by frequent abdominal pain (three or more episodes per month) or discomfort linked with defecation or a change in bowel habit, and abdominal bloating (Quigley et al., 2009). The prevalence of IBS in North America and Europe ranges from 10-16% (Quigley et al., 2009). It is believed that Campylobacter infection as an antecedent infection accounts for about 10% of IBS cases (Spiller and Garsed, 2009). The exact mechanism by which Campylobacter causes the symptoms of IBS is not completely understood; however, Campylobacter spp. are known to produce cytotoxins, some of which are believed to be associated with the development of IBS (Thornley et al., 2001).
Inflammatory Bowel Disease.
Inflammatory bowel disease (IBD) is a collective term for ulcerative colitis and Crohn's disease (Papadakis and Targan, 1999). It is a chronic relapsing disease characterized by diarrhea, constipation, tenesmus, abdominal cramps, fever, pain and/or rectal bleeding with bowel movements (Bernstein et al., 2009). Campylobacter jejuni has been isolated from 10% of IBD cases (Gradel et al., 2009). Campylobacter promotes translocation of non-invasive bacteria by disrupting transcellular transport across the intestinal epithelium, playing a role in the pathogenesis of IBD (Kalischuk et al., 2009).
Treatment
Campylobacter infections are generally self-limiting and do not require antibiotic therapy; if antimicrobial therapy is needed, fluoroquinolones are the drugs of choice (Allos, 2001).
However, in the past few years, fluoroquinolone-resistant Campylobacter strains have been emerging (Allos, 2001). Campylobacter spp. may also be resistant to other antibiotics, including ciprofloxacin, bacitracin, novobiocin, rifampin, trimethoprim, vancomycin and tetracycline (Taylor and Courvalin, 1988; Kuschner et al., 1995; Engberg et al., 2001). Currently, erythromycin is used most frequently to treat Campylobacter infection owing to its low toxicity, narrow spectrum and low cost (Allos, 2001).
Epidemiology of Campylobacter in poultry
Campylobacter is ubiquitous in poultry flocks, and the percentage of broiler flocks colonized with Campylobacter varies from country to country (Newell and Fearnley, 2003). In the United States and Great Britain, nearly 90% of flocks are colonized with Campylobacter (Evans and Sayers, 2000), compared with 41.1% in Germany (Atanassova and Ring, 1999) and 47.5% in Japan (Haruna et al., 2012). In Europe, prevalence rates vary from 18 to 90%, with the northernmost countries having remarkably lower percentages than the southernmost countries (Newell and Fearnley, 2003).
Many research findings have shown variability in Campylobacter contamination of retail poultry products. Factors such as sample collection, detection methodology, season, geographical location and production practices may contribute to this variability (Lee and Newell, 2006). An epidemiological study in Greater Washington D.C. suggested that approximately 70.7% of raw chicken meat was contaminated with Campylobacter (Zhao et al., 2001). Other studies have revealed that as much as 90-100% of raw chicken meat is contaminated with Campylobacter (Suzuki and Yamamoto, 2009).
Campylobacter colonization in birds
Campylobacter is generally nonpathogenic in poultry (Stern et al., 1988). Environmental contamination is the primary source of infection in newly placed chicks (Shane, 1992). Chicks around the age of 2-3 weeks become colonized with Campylobacter in their intestinal tract as a commensal organism. The infectious dose for chickens has been reported to be as low as 50 organisms (Achen et al., 1998; Knudsen et al., 2006).
Campylobacter predominantly resides in the lower part of the intestine, notably the ceca, where concentrations may reach up to 10^8 CFU per gram of cecal contents (Stern et al., 1988; Achen et al., 1998). A study conducted in the UK reported no seasonal variation in the prevalence of Campylobacter in broiler flocks (Humphery et al., 1993).
Nevertheless, some studies have found a summer peak in the prevalence of positive flocks (Wallace et al., 1997; Nylen et al., 2002). The mechanism of colonization in the bird's intestine is not fully elucidated. However, it is hypothesized that chemoattraction of C. jejuni to mucin plays a significant role in colonization; C. jejuni uses mucin as a substrate and colonizes in high numbers in the cecal crypts. Similarly, an immunological investigation of the host immune response to Campylobacter in chickens suggested that down-regulation of certain host genes by Campylobacter plays a vital role in the persistently high level of colonization (Meade et al., 2009). In most cases, Campylobacter localizes in the intestines; however, systemic invasion of organs such as the liver, spleen, heart and lungs has also been reported (Young et al., 1999; Meade et al., 2009).
Transmission
Horizontal transmission. Several studies have shown a wide range of hosts for Campylobacter: wild birds, domestic birds (Luechtefeld et al., 1980; Glünder et al., 1992), rodents (Cabrita et al., 1992) and insects (Jacobs-Reitsma et al., 1995). Horizontal transmission is the predominant mode of transmission of Campylobacter in poultry (Silva et al., 2011). Poultry flocks naturally become colonized from the above-mentioned sources, and Campylobacter-positive birds rapidly shed the organisms in their feces, which act as a source for other birds (Jacobs-Reitsma et al., 1995; Achen et al., 1998; Mead, 2002); the organism then spreads rapidly from bird to bird until the entire flock is contaminated (Horrocks et al., 2009). The rapid shift from an uncolonized flock to almost 100% colonization is aided by the coprophagic behavior of chicks and by contamination of food and water sources (Montrose et al., 1985; Keener et al., 2004).
Vertical transmission. Transmission of C. jejuni from parent hen to chicks is controversial.
Vertical transmission of any bacteria can take place by either primary (contamination of egg contents in the hen's reproductive tract) or secondary (contamination of the eggshell with fecal material after lay) infection of the egg (Sahin et al., 2003a). Many researchers have found Campylobacter in various parts of the male and female reproductive tracts of poultry (Cox et al., 2002; Cole et al., 2004), indicating the possibility of vertical transmission of Campylobacter to chicks, and several investigations have been conducted to verify this possibility (Doyle, 1984; Clark and Bueschkens, 1985; Shanker et al., 1986). In their earlier studies, Clark and Bueschkens (1985) inoculated fertile eggs with C. jejuni and demonstrated that 11% of the resulting chicks had Campylobacter in their intestinal tract. Under natural conditions, however, it is not easy for Campylobacter to reach the egg contents via eggshell penetration, and even if it does contaminate the egg contents, it is not likely to survive more than 48 h of storage at room temperature (Doyle, 1984; Shanker et al., 1986). In contrast to these findings, some researchers demonstrated that Campylobacter can remain viable inside the egg yolk for up to 14 days, but <8 days inside the air sac and albumen (Clark and Bueschkens, 1986).
PREHARVEST CONTROL STRATEGIES OF CAMPYLOBACTER IN POULTRY
With increasing cases of human campylobacteriosis, the development of intervention strategies is necessary to control and reduce Campylobacter in poultry and poultry products and thereby minimize human infections. Reducing the bacterial concentration in poultry prior to processing would be beneficial, as cross-contamination between fecally contaminated carcasses and meat may occur during processing. A risk assessment study conducted by Rosenquist and colleagues (2003) predicted that a 2 log reduction of Campylobacter on chicken carcasses could reduce the human incidence 30-fold. Many pre-harvest intervention strategies have been evaluated with varying results. Some of them are briefly described below as potential control measures to reduce Campylobacter counts in poultry and poultry products.
Biosecurity
Biosecurity is the protection of farm animals from infectious agents through measures such as the use of protective clothing, cleaning and disinfection of the farm house, provision of clean water, and restriction of the movement of people or animals between farms (Silva et al., 2001; Vandeplas et al., 2008). Studies have demonstrated that adopting standard biosecurity methods led to an approximately 50% reduction in Campylobacter prevalence in broiler flocks. Similarly, a report from two Dutch broiler farms suggested that the introduction of hygiene measures significantly reduced Campylobacter prevalence (van de Giessen et al., 1998). A review of biosecurity-based interventions by Newell and colleagues (2011) suggested that diligent application of biosecurity measures is required to reduce flock prevalence. However, complete elimination of Campylobacter from flocks is unlikely (Wagenaar et al., 2006; Vandeplas et al., 2008).
Moreover, the costs involved in adopting such strict on-farm biosecurity measures limit their practicality (Fraser et al., 2010).
Bacteriocins
Bacteriocins are small, biologically active protein compounds of approximately 5-6 kilodaltons, produced by some strains of bacteria, that can inhibit the growth of other closely related bacteria (Klaenhammer, 1993; Cleveland et al., 2001). Both Gram-positive and Gram-negative bacteria, such as Lactobacillus, Lactococcus, Pediococcus, Carnobacterium, Enterococcus, Escherichia, Bacillus, Paenibacillus, Staphylococcus, Pseudomonas and Clostridium, have been reported to produce bacteriocins (Svetoch and Stern, 2010). The application of bacteriocins in poultry processing began in 1994 with tests against Listeria monocytogenes using nisin (Mahadeo and Tatini, 1994). A literature review on bacteriocins likewise suggests their potential use for the reduction or elimination of many food-borne pathogens (Joerger, 2003). Bacteriocins produced by certain strains of Bacillus circulans and Paenibacillus polymyxa were found to inhibit Campylobacter growth in vitro. Stern and colleagues (2005) reported that a bacteriocin (B602) produced by P. polymyxa reduced cecal C. jejuni to undetectable levels in chickens.
Subsequently, in a second study, inclusion of a microencapsulated bacteriocin (OR 7) in chicken feed for 3 days (day 7 to day 10) reduced cecal Campylobacter counts from 1.3 log CFU/g to undetectable levels in 10-day-old broiler chickens, whereas control groups were colonized at 7-8 log CFU/g. An additional study utilizing bacteriocins produced by P. polymyxa and Lactobacillus salivarius showed reductions of cecal Campylobacter coli to undetectable levels in turkey poults (Cole et al., 2006). Although research has shown promising results for bacteriocins against Campylobacter in poultry, bacteriocins can be degraded easily in the host gut owing to their proteinaceous nature (Joerger, 2003). Techniques such as microencapsulation, used to prevent enzymatic digestion of bacteriocins in the gastrointestinal tract, are expensive to adopt (Joerger, 2003; Svetoch et al., 2005). In addition, implementation requires approval from the U.S. Food and Drug Administration (FDA), which would require extensive and expensive safety and efficacy studies. So far, only one bacteriocin (nisin) has GRAS (generally recognized as safe) status (Joerger, 2003). Moreover, it has been found that Campylobacter develops resistance against bacteriocins, which further limits their use in poultry (Hoang et al., 2011a,b).
Bacteriophage
Bacteriophages are viruses capable of infecting and killing specific bacteria (Huff et al., 2005; Hagens and Loessner, 2007). Bacteriophages that infect and replicate in bacteria, subsequently killing the host cells, are virulent bacteriophages, which are particularly important for reducing pathogenic bacteria (Huff et al., 2005). Campylobacter-specific phages have been isolated from chicken excreta, retail poultry, abattoir effluent, sewage and other animal as well as human sources (Atterbury et al., 2003b; Connerton et al., 2004). Several studies have evaluated the potential application of bacteriophage to reduce cecal colonization of Campylobacter in broiler chickens (Wagenaar et al., 2005; El-Shibiny et al., 2009; Carvalho et al., 2010), and Loc Carrillo and co-workers (2005) observed reductions in cecal C. jejuni following phage administration. Although a sharp decrease in cecal C. jejuni is noted immediately after phage administration, C. jejuni re-establishes itself over time (Wagenaar et al., 2005). This phenomenon suggests that applying bacteriophage a few days before slaughter could be more effective for reducing Campylobacter counts in market-age birds (Wagenaar et al., 2005).
In addition, it has been observed that bacteriophage administration in the feed is more effective than oral gavage (Carvalho et al., 2010). Introduction of Campylobacter-specific bacteriophage onto chicken skin artificially contaminated with Campylobacter showed promising results, reducing recoverable Campylobacter cells from treated skin samples (Atterbury et al., 2003a). In contrast, another study applying bacteriophage to chicken meat samples stored at 4°C did not reduce Campylobacter counts (Orquera et al., 2012).
Bacteriophage application in food products is safe for human health (Hagens and Loessner, 2010). However, consumer acceptability and the narrow host range of phages (Janež and Loc-Carrillo, 2013) remain obstacles to their wider use in poultry.
Vaccination
Vaccination could be another way to reduce or eliminate Campylobacter. It has been proposed that maternal antibody against Campylobacter plays an important role in preventing Campylobacter colonization early in the chicken's life (Sahin et al., 2003b). Several investigations of possible vaccines have been conducted but have had limited success. One study reported a cecal Campylobacter reduction of approximately 2 log CFU/g of cecal contents after intraperitoneal administration of a killed C. jejuni whole-cell and flagellin vaccine at 16 and 29 days of age. Similarly, a formalin-inactivated C. jejuni vaccine administered orally to broiler chickens reduced intestinal colonization by 16 to 93% compared with a non-vaccinated control group (Rice et al., 1997). A study conducted by Wyszyńska and colleagues (2004) found that oral immunization (on the day of hatch and two weeks after primary immunization) with an avirulent Salmonella vaccine strain carrying the C. jejuni cjaA gene significantly reduced cecal Campylobacter counts.
In addition, researchers have demonstrated an increase in anti-Campylobacter secretory IgG with inactivated whole-cell vaccines (Rice et al., 1997) and in both IgG and IgA after recombinant Salmonella vaccination (Wyszyńska et al., 2004). Recombinant vaccine candidates that elicit a better humoral response have produced better results in recent studies (Wyszyńska et al., 2004; Layton et al., 2011). Vaccines against Campylobacter colonization in poultry are not yet commercially available. Further research into the development of effective vaccines against C. jejuni is warranted and should be economically feasible and practical for use in the poultry industry.
Natural compounds
Medium Chain Fatty Acids. Medium chain fatty acids (MCFAs) such as caproic, caprylic, capric and lauric acids possess antimicrobial activity against various microorganisms, making them a viable alternative to antibiotics (Bergsson et al., 1998; Decuypere and Dierick, 2003). One extensively studied MCFA is caprylic acid (an eight-carbon saturated fatty acid), also known as octanoic acid, which is naturally present in coconut oil, palm-kernel oil, and bovine and breast milk (Jensen et al., 1990; Sprong et al., 2001; Jensen, 2002). Caprylic acid is classified as a generally regarded as safe (GRAS) compound.
Plant extracts.
In the last few decades, consumer awareness of and preference for organic food products, in addition to increased pressure to find alternatives to antibiotic use in animals, has led researchers to evaluate the antimicrobial properties of plant extracts (Atterbury et al., 2003b; Sirsat et al., 2009). Phytochemicals from various medicinal plants possess antimicrobial properties (Cowan, 1999). Use of medicinal plants by humans has a long history, and it has been observed that other primates also repeatedly consume certain plants with medicinal properties (Glander, 1994; Baker, 1996; Halberstein, 2005). A study (2010) on the effect of trans-cinnamaldehyde, eugenol, carvacrol and thymol against C. jejuni in cecal contents demonstrated significant reductions of C. jejuni in vitro. However, these compounds did not produce consistent results in bird studies conducted by various researchers (Metcalf, 2008; Arsi et al., 2014). Similarly, Woo-Ming (2012) reported efficacy of cranberry extracts against C. jejuni in vitro, but not in vivo. It has been suggested that the failure of plant extracts in these in vivo trials may be because the compounds are absorbed in the upper digestive tract and fail to reach the target site (the ceca) in concentrations adequate to reduce C. jejuni counts (Woo-Ming, 2012). More research is needed to determine the most effective plant extracts and the appropriate administration strategy to reduce cecal Campylobacter counts in poultry production.
Probiotics
'Probiotic' means 'for life' in Greek, and the term has been described in many ways by multiple scientists over time (Fuller, 1992). Almost a century ago, Metchnikoff (1907) first described the beneficial effect of consuming fermented milk on human health. Lilley and Stillwell (1965) first used the term 'probiotic' to describe secretory substances produced by one microorganism that promote the growth of other microorganisms. Later, 'probiotic' was redefined as a "microbial growth stimulating tissue extract" (Sperti, 1971) or "microorganisms and substances that contribute to intestinal microbial balance" (Parker, 1974). The terminology has been refined over the last few decades. The most widely used definition of a probiotic, given by Fuller, is "live microorganisms which when administered in adequate amounts can confer beneficial effects on host health". Salminen and colleagues (1998) redefined probiotics as "a live microbial food ingredient which is beneficial to health".
More than four decades ago, Nurmi and Rantala (1973) demonstrated that administration of probiotics (an undefined mixture of bacteria from adult birds) at an early age can prevent the colonization of Salmonella Infantis in chickens. The precise mechanism by which probiotics produce beneficial effects has not been clearly elucidated. However, some researchers suggest that probiotics act by producing bacteriocins, reducing pH through the production of metabolites such as organic acids (Sanders, 1993), competing for substrates or attachment sites, or increasing macrophage-mediated phagocytic activity. Initially, this concept was used to control Salmonella infection in poultry (Nurmi and Rantala, 1973; Nurmi et al., 1992; Blankenship et al., 1993; Stavric and D'aoust, 1993; Hume et al., 1998). More recently, it has been used to reduce the prevalence of various enteric pathogens such as E. coli, Clostridium perfringens (La Ragione and Woodward, 2003) and C. jejuni (Soerjadi-Liem et al., 1984). Even though probiotic strains have reduced Campylobacter in vitro, most of them failed to demonstrate similar efficacy against Campylobacter in vivo (Robyn et al., 2012). One possible reason for such variability in in vivo studies may be the failure of probiotics to survive the acidic pH of the upper gastrointestinal tract. Recent studies from our laboratory demonstrated that protecting probiotic isolates from stomach acids by making them available in the lower intestinal tract via intracloacal transfer significantly reduced C. jejuni colonization in broiler chickens (Arsi et al., 2015).
Several investigations of probiotics against Campylobacter jejuni colonization have emphasized the need to develop effective probiotics through better screening methods and/or more effective methods of probiotic administration. It has been found that mucin (mucus glycoprotein) acts as a chemoattractant for C. jejuni and provides a source of carbon and energy for the growth of Campylobacter (Berry et al., 1988; Hugdahl et al., 1988). Research findings also suggest that the affinity of Campylobacter spp. for mucin is an essential factor in both colonization and infection (Slomiany et al., 1987; Sylvester et al., 1996). We hypothesized that probiotic bacteria with an affinity for mucin may competitively inhibit Campylobacter at its preferred sites of colonization. Thus, selecting bacterial isolates with the ability to utilize and grow in the presence of mucin could be an effective strategy to reduce Campylobacter jejuni in poultry.
INTRODUCTION
Campylobacter infections are one of the leading causes of bacterial gastroenteritis in humans worldwide (WHO, 2011; CDC, 2013). In the United States alone, 1.3 million cases of human Campylobacter infection are reported annually (CDC, 2013). More than 17 Campylobacter spp. have been identified, of which Campylobacter jejuni alone is responsible for approximately 95-99% of human campylobacteriosis cases (Friedman, 2000). Most Campylobacter enteritis cases are self-limiting (Coker et al., 2002); however, some severe post-infectious sequelae, such as Guillain-Barré syndrome and reactive arthritis, have been reported (Rhodes and Tattersfield, 1982). Various sources of Campylobacter have been identified, among which poultry is regarded as the principal source of infection for humans (King, 1962; CDC, 2013). It has been reported that more than 90% of US poultry flocks are contaminated with Campylobacter jejuni, which potentially presents a serious threat to humans.
Hence, reduction or elimination of Campylobacter in poultry flocks would significantly reduce the human incidence of campylobacteriosis. Several preharvest intervention strategies, such as biosecurity, bacteriocins, bacteriophages, plant extracts, vaccines, medium chain fatty acids and probiotics, have been evaluated with the aim of reducing Campylobacter prevalence in poultry flocks (Solis de los Santos et al., 2008a,b; Arsi et al., 2015a).
Unfortunately, none of them has been successful in completely eliminating Campylobacter from poultry. Application of probiotic bacteria is one strategy that may potentially inhibit or reduce Campylobacter colonization in poultry. Probiotics are "live microorganisms which when administered in adequate amounts can confer beneficial effects on host health". Probiotics have effectively reduced food-borne pathogens such as Salmonella, E. coli, Listeria and Clostridium (Hume et al., 1998a,b). However, administration of probiotics can produce inconsistent reductions in Campylobacter colonization in broiler chickens (Robyn et al., 2013; Arsi et al., 2015a). Such inconsistent results against Campylobacter colonization suggest the need for better methods of screening probiotic bacteria. It has been observed that supplementation of broth media with porcine intestinal mucin induces cell surface proteins in Lactobacillus reuteri strains and improves their mucus-binding properties in vitro (Jonsson et al., 2001). Since Campylobacter colonizes intestinal mucus and uses mucin as a source of carbon and energy (Hugdahl et al., 1988), selection of probiotic isolates that utilize intestinal mucin could be an effective approach to competitively inhibit enteric colonization by Campylobacter.
The objective of this research was to screen for probiotic isolates that can eliminate or reduce cecal Campylobacter counts in poultry. In this study we used selected bacterial isolates that are generally regarded as safe (GRAS) and possess in vitro efficacy against Campylobacter. These isolates were further screened for their ability to utilize mucin or to inhibit Campylobacter in the presence of mucin. Isolates that demonstrated increased growth or anti-Campylobacter activity in vitro in the presence of mucin were selected and tested in vivo.
MATERIALS AND METHODS
Probiotic isolates
In this study, we used selected GRAS bacterial isolates (Bacillus and Lactobacillus spp.) with efficacy against Campylobacter in vitro, as determined using a soft agar overlay technique. The selected bacteria were isolated and identified from the cecal contents of healthy birds during earlier studies from our laboratory (Arsi et al., 2015a,b).
Screening for bacteria with the ability to reduce Campylobacter counts when co-incubated with mucin.
Each bacterial isolate was co-cultured with a four-strain mixture of wild-type C. jejuni in 5 mL of tryptic soy broth (TSB) without mucin and, separately, in 5 mL of TSB containing 3% porcine gastric mucin. The tubes were incubated microaerophilically at 42°C for 24 h. Each co-culture was then serially diluted in Butterfield's phosphate diluent (BPD) and plated on Campy Line agar (CLA; Line, 2001) for enumeration. Campylobacter colonies were counted, and each isolate was evaluated for its efficacy in reducing Campylobacter when co-cultured in the presence or absence of mucin in the growth media. The five isolates that demonstrated the greatest reduction of Campylobacter counts in vitro in the presence of mucin, compared with mucin-free media, were selected and further evaluated in vivo.
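The enumeration and ranking step above is simple back-calculation from plate counts. The sketch below is illustrative only; the colony numbers, plated volume, and dilution levels are invented, not taken from the study:

```python
import math

def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml=0.1):
    """Back-calculate CFU/mL from a countable plate.

    colonies: colonies counted on the plate
    dilution_exponent: e.g. 5 for the 10^-5 tube of the dilution series
    plated_volume_ml: volume spread on the plate
    """
    return colonies * (10 ** dilution_exponent) / plated_volume_ml

def log_reduction(control_cfu, treated_cfu):
    """Log10 reduction of a co-culture relative to its control."""
    return math.log10(control_cfu) - math.log10(treated_cfu)

# Invented example: 120 colonies on the 10^-5 plate of the mucin-free
# co-culture vs. 80 colonies on the 10^-3 plate of the mucin co-culture.
control = cfu_per_ml(120, 5)  # 1.2e8 CFU/mL
treated = cfu_per_ml(80, 3)   # 8.0e5 CFU/mL
print(f"log10 reduction with mucin: {log_reduction(control, treated):.2f}")
```

Isolates would then be ranked by this log reduction, with the top performers carried forward into the bird trials.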
In vivo studies
Experimental animals and housing. For all in vivo trials, day-of-hatch male broiler chicks were procured from a local commercial hatchery. Chicks were weighed at the beginning and at the end of each trial. Birds were raised in floor pens with pine shavings, with ad libitum access to feed and water throughout the 14-day trial period.
Experimental design.
A total of 4 bird trials were conducted at the poultry farm facility of the University of Arkansas. Four probiotic isolates that had shown higher growth in the presence of mucin in the broth media were selected for in vivo studies. Two replicate trials were conducted (trials 1 and 2), and in each trial a total of 90 male chicks were randomly divided into 9 treatment groups (n=10 chicks/treatment). The treatment groups included a Campylobacter control (Campylobacter, no isolate) and 8 treatment groups, each receiving a separate bacterial isolate grown in the presence or absence of mucin prior to oral administration.
For trials 3 and 4, we selected isolates that reduced Campylobacter in vitro in the presence of mucin, rather than isolates selected for increased growth in the presence of mucin. Five isolates that inhibited Campylobacter in vitro in the presence of mucin in the broth media were selected and tested in replicate trials 3 and 4. In each trial, 60 male chicks were randomly divided into 6 treatment groups (n=10 chicks/treatment): a control (Campylobacter, no isolate) and 5 treatment groups, each receiving a separate isolate grown in the presence of mucin prior to oral administration.
Bacterial dosing in chicks.
In each trial, at day of hatch, chicks from all treatment groups except the Campylobacter control were individually gavaged orally with 0.25 mL of a specific probiotic isolate containing approximately 10^6-10^8 CFU/mL, as previously described (Arsi et al., 2015a). On day 7, all chicks were orally gavaged with a cocktail of 4 strains of wild-type Campylobacter containing approximately 10^8 CFU/mL, as previously described (Farnell et al., 2005). On day 14, the birds' ceca were aseptically collected for Campylobacter enumeration. Cecal contents were serially diluted 10-fold in BPD and plated on CLA for direct enumeration. Plates were incubated at 42°C under microaerophilic conditions for 48 h, and Campylobacter colonies were enumerated and expressed as CFU/g.
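For orientation, the probiotic dose actually delivered per chick follows from simple arithmetic on these figures (the concentration range is from the text; the per-bird totals are our calculation):

\[
0.25\ \text{mL} \times 10^{6}\ \text{CFU/mL} = 2.5 \times 10^{5}\ \text{CFU}, \qquad 0.25\ \text{mL} \times 10^{8}\ \text{CFU/mL} = 2.5 \times 10^{7}\ \text{CFU},
\]

so each treated chick received roughly 2.5 x 10^5 to 2.5 x 10^7 CFU of its assigned isolate.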
Statistical analysis
To achieve homogeneity of variance, cecal Campylobacter jejuni counts were logarithmically transformed (log10 CFU/g) before analysis (Byrd et al., 2003). Data were analyzed using the PROC GLM procedure of SAS (SAS, 2011). Treatment means were partitioned by least squares means (LSMEANS) analysis, and a probability of P < 0.05 was required for statistical significance.
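The analysis itself was run in SAS; for readers without SAS, a rough open-source analog of the same workflow (log10 transformation followed by a one-way comparison of treatment means) might look like the sketch below. All data values are invented for illustration:

```python
import numpy as np
from scipy import stats

# Invented cecal counts (CFU/g) for a control group and two treated groups
# (the actual trials used n = 10 birds per treatment).
groups = {
    "control":  [2.1e7, 8.4e6, 3.3e7, 1.9e7, 5.6e6],
    "isolate1": [4.2e5, 9.8e4, 2.7e5, 6.1e5, 1.3e5],
    "isolate4": [7.5e4, 2.2e5, 5.9e4, 1.8e5, 3.4e5],
}

# Log10-transform to stabilize variance, mirroring the SAS workflow.
logged = {name: np.log10(vals) for name, vals in groups.items()}

# One-way ANOVA across treatment groups (a simple stand-in for PROC GLM
# in this balanced one-factor layout).
f_stat, p_value = stats.f_oneway(*logged.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Group means on the log scale, analogous to inspecting LSMEANS output.
for name, vals in logged.items():
    print(f"{name}: mean log10 CFU/g = {vals.mean():.2f}")
```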
RESULTS
A total of 68 GRAS isolates were tested in vitro in this study. The four isolates (designated isolates 1, 2, 3 and 4) that showed the greatest increase in counts when grown in mucin-supplemented media compared with unsupplemented media (data not shown) were selected for the in vivo studies in trials 1 and 2 (Table 1). In addition, the five isolates demonstrating the largest reduction in Campylobacter counts when co-incubated with mucin (data not shown) were selected for the in vivo studies in trials 3 and 4 (Table 2).
In trial 1, isolates 1 and 4 grown without mucin prior to inoculation reduced cecal Campylobacter counts (approximately 2-3 log CFU/g), whereas isolates 2, 3 and 4 incubated with mucin prior to inoculation reduced Campylobacter counts (approximately 2-3 log CFU/g; Table 1) when compared with the controls. In trial 2, isolates 1, 2 and 3 grown without mucin reduced cecal Campylobacter counts by approximately 1.5 to 4 log CFU/g, whereas only isolate 4 incubated with mucin reduced Campylobacter counts (Table 1) compared to controls. When compared across trials, isolate 1 grown without mucin and isolate 4 incubated with mucin consistently reduced Campylobacter counts in two separate trials (Table 1). When isolates were selected for their ability to reduce Campylobacter counts when co-incubated with mucin in vitro, isolates 5, 7 and 8 (trial 3) and isolates 5 and 6 (trial 4) reduced Campylobacter counts in chicks when compared with controls (Table 2). None of these isolates adversely affected body weight gains at 14 days of age when compared with controls (Tables 3 and 4).
DISCUSSION
Campylobacter is a flagellated, highly motile, microaerophilic bacterium able to colonize heavily in cecal crypt mucus (Hugdahl et al., 1988). One theory of why probiotics are ineffective against enteric Campylobacter colonization is that Campylobacters are sequestered in the mucus-laden intestinal crypts, where the probiotic bacteria are unable to penetrate and inhibit their colonization (Aguiar et al., 2013). In an effort to overcome this potential issue, four bacterial isolates demonstrating the ability to inhibit Campylobacter growth and which grew better in mucin in vitro were evaluated against Campylobacter colonization in chickens. These isolates were also grown in mucin media prior to inoculation to determine whether this would enhance efficacy, possibly due to changes in gene expression associated with mucin co-incubation (Naughton et al., 2014). In the first bird trial, two out of four isolates grown without mucin prior to inoculation reduced cecal Campylobacter counts (approximately 2-3 log CFU/g), whereas three out of four of these isolates incubated with mucin prior to inoculation reduced Campylobacter counts (approximately 2-3 log CFU/g; Table 1). In trial 2, many of these isolates also reduced cecal Campylobacter counts by approximately 1.5 to 4 log CFU/g. When compared across trials, two isolates consistently reduced Campylobacter counts in two separate trials (Table 1). Isolate 4 was more efficacious when grown in mucin prior to inoculation, with an approximate 1.5 to 2.5 log reduction in Campylobacter counts, whereas isolate 1 produced a greater reduction when not incubated with mucin prior to inoculation, with an approximate 2-4 log reduction in Campylobacter counts.
None of these isolates adversely affected body weight gains at 14 days of age when compared with controls (Tables 3 and 4).
In an effort to select isolates with even greater efficacy, follow-up trials were conducted with isolates selected for their ability to directly reduce Campylobacter counts when co-incubated with mucin in vitro. The five most efficacious isolates in vitro were evaluated in two separate bird trials. In these trials, one isolate consistently reduced cecal Campylobacter counts in two separate trials (Table 2), by approximately 1.5 log CFU/g in the ceca. Results from these trials support the preselection of probiotic isolates for increased growth rates in the presence of mucin or for the ability to inhibit Campylobacter when co-incubated with it in the presence of mucin in vitro. It remains unclear whether incubating these preselected isolates in the presence of mucin prior to inoculation enhances their efficacy against Campylobacter in poultry.
Although cecal Campylobacter counts were consistently reduced by three isolates in the current study, these isolates were not able to eliminate Campylobacter colonization in chickens.
It is unknown why these isolates are effective in liquid culture yet do not eliminate Campylobacter colonization in vivo. Even though these isolates did not eliminate Campylobacter colonization, they did reduce Campylobacter counts by 1.5 to 4 logs. Risk assessment studies conducted by Rosenquist and colleagues (2003) predicted that a 2 log reduction of Campylobacter on chicken carcasses could reduce the human incidence approximately 30-fold. Therefore, bacterial isolates demonstrating the reductions in counts produced in the current study could significantly reduce the incidence of this disease in humans. All animal procedures in this study were conducted under Institutional Animal Care and Use Committee (IACUC) protocol 14030, "Testing the efficacy of probiotic cultures against Campylobacter colonization in chickens."
CONCLUSION
Over the last few decades, Campylobacter spp. have been identified as a leading cause of foodborne illness in the United States, and epidemiological evidence indicates that consumption and/or mishandling of contaminated poultry products is often associated with Campylobacter infection in humans. Probiotic use in poultry has been an effective strategy for reducing other enteric foodborne pathogens, but not consistently for Campylobacter. As Campylobacter resides in intestinal mucus and utilizes mucin for growth, selecting isolates on the basis of mucin utilization might be a strategy for screening efficacious probiotic bacteria. In this study, bacterial isolates demonstrating increased growth rates or anti-Campylobacter activity in the presence of mucin in broth were tested in a total of four bird trials. In both trials 1 and 2, ninety day-of-hatch chicks were randomly divided into 9 treatment groups (n=10/treatment) and treated individually with one of four bacterial isolates (Bacillus sp.) grown in media with or without mucin prior to inoculation, or assigned to a Campylobacter positive control (no probiotic). In trials 3 and 4, sixty day-of-hatch chicks were divided into 6 treatment groups (n=10/treatment) and were dosed with one of five individual isolates (Lactobacillus sp.), all grown in mucin prior to inoculation, or assigned to a Campylobacter positive control (no probiotic). All birds were gavaged with individual isolates at day of hatch and orally challenged with a four-strain mixture of C. jejuni on day 7. Ceca were collected at day 14 for Campylobacter enumeration. Campylobacter counts were logarithmically transformed (log10 CFU/g) and treatment means were partitioned by LSMEANS analysis (P < 0.05). Results from these trials demonstrated that three individual isolates grown in mucin prior to inoculation consistently reduced cecal Campylobacter counts (1.5-4 log reduction). These results support the potential use of preselection and growth of isolates in mucin in evaluating bacterial isolates for the reduction of Campylobacter colonization in poultry. | 2018-12-05T18:12:52.826Z | 2017-01-15T00:00:00.000 | {
"year": 2017,
"sha1": "db835ff52ff7d93f8e63a54f571c059b8a6b7f48",
"oa_license": null,
"oa_url": "https://doi.org/10.3923/ijps.2017.37.42",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "db835ff52ff7d93f8e63a54f571c059b8a6b7f48",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
73694039 | pes2o/s2orc | v3-fos-license | Towards a Kantian Phenomenology of Hope
The aim of this paper is to examine the extent to which Kant's Critique of the Power of Judgment (CPoJ) can be, or otherwise ought to be, regarded as a transcendental phenomenology of hope. Kant states repeatedly that CPoJ mediates between the first two Critiques, or between the theoretical knowledge we arrive at on the basis of understanding and reason's foundational role for practical philosophy. In other words, exercising the power of judgment is implicated whenever we try to bring together the ethical issue of strictly determining our actions on the one hand and the necessity to act in the physical world on the other. We will argue that this mediating function is properly understood only if the ideations produced by self-understanding are characterized as objects of rationally required hope or fear.
Every event has a cause (the law of nature). So, unless what finally ought to happen can happen in accordance with the law of nature, pure theoretical reason and pure practical reason are not in harmony; and such harmony can only exist if the final end is the purpose of nature. Since pure reason cannot be thought to be in contradiction with itself, harmony between pure theoretical reason and pure practical reason must be supposed. But the concept of a purposiveness of nature is not a concept of pure theoretical reason, nor is it an idea of pure practical reason. Instead, it is provided by the power of judgment, which is independent of practical reason. As such, in mediating between the theoretical and the practical, it enables, indeed requires, belief (faith) that the final end is possible, and not merely that it ought to be possible.
So read, CPoJ differs from the earlier Critiques in identifying and designating the power of judgment as having a role autonomous from the powers of understanding and reason in the way in which 'Understanding' or 'Reason' (terms Kant often uses interchangeably to designate the collective powers of the mind by which humans are capable of self-awareness, as well as to designate specific sub-faculties of the mind) grounds theoretical, aesthetic and moral claims. By so doing, and by assessing the powers and limits of the faculty of judgment, which requires an examination of the beautiful, the sublime and teleology, Kant claims to have better explained, and thus secured, his faith in the existence of God.
Self-understanding might well require us to accept the moral law, the summum bonum might well be a postulate of the moral law, and God's existence might well be necessary for the summum bonum to be achieved. But will it be achieved, or is it even possible for it to be achieved? What Kant thinks understanding the power of (reflective) judgment shows is that it is a requirement of self-understanding for an agent to represent nature as if it were organized in accordance with the purposes of an intelligent cause (God) and that this cannot be questioned intelligibly (see 5:397-404).
We claim in this paper that the function of the power of judgment is crucially important for getting Kant's ethical ideas to work, and requires all the ideations of Reason to be characterized as referring to objects of rationally required hope (whereas Kant confines this status to happiness under the summum bonum), where to be rationally required is to be a necessary condition of human self-understanding, and hope is defined in the following way: 'A hopes that Q is the case' means 'A desires Q to be the case and (with the idea of Q being the case in mind) considers it to be possible for Q to be the case, in the sense that A neither believes that Q is the case nor that Q is not the case'. We claim no more for this concept than that it is phenomenologically adequate in describing a way in which persons do and can relate to objects that they have a positive attitude towards (see Beyleveld 2012), and that it resonates with how Kant characterizes hope.

This paper has three parts. In Part 1, we explain why the concept of hope is central to understanding the thesis of CPoJ that it is the power of judgment that, through mediating between the understanding and reason, renders the moral law apodictically certain and, consequently, belief in God rationally necessary. In Part 2, we examine Kant's concepts of faith and hope and various other concepts that he introduces to elucidate the epistemic status of various ideations of Reason, and argue that it is the ideation of 'hope' that is best suited to perform the role Kant assigns to the power of judgment. In Part 3, we focus on problems with the idea that 'faith' best performs this role, and suggest that CPoJ should be reconstructed as a transcendental phenomenology of hope, which is to say that all the ideations connected with the moral law (free will, God, immortality, the summum bonum) should be seen as objects of rationally required hope.

2 CPoJ and Establishing the Moral Law

Kant does not characterise the ideations of Reason in terms of hope. His explicit view, presented in the Critique of Practical Reason (CPrR) and followed and expanded in CPoJ, is that free will is proved to exist (shown to be real) in being the condition (ratio essendi) of the moral law, which is an apodictic law of practical reason. As such, free will is

the keystone of the whole structure of a system of pure reason, even of speculative reason; and all other concepts (those of God and immortality), which as mere ideas remain without support in the latter [i.e., in speculative reason], now attach themselves to this concept. (5:3-4)

Unlike free will, the existence of God and of immortality are not conditions of the moral law but conditions of applying the moral law to the summum bonum, which is the necessary object (final end) of the moral law:

Though we cannot affirm that we cognize and have insight into […] [the ideas of God and immortality,] […]; by means of the concept of freedom objective reality is given to the ideas of God and immortality and a warrant, indeed a subjective necessity (a need of pure reason) is provided to assume them, although reason is not thereby extended in theoretical cognition. (5:4-5)

In other words, the existence of (e.g.) God (and henceforth we will concentrate on God) is a postulate of pure practical reason (see 5:124-134), meaning that 'it is morally necessary to assume the existence of God' (5:125); or that the moral law is the 'ground of a maxim of assent [to the existence of God] for moral purposes' (5:146).
In still other words, God is an object of pure practical rational belief (see, e.g., 5:144-146) or 'faith', as Kant puts it in CPoJ (see especially 5:471-473). But it must not be thought that we have a moral obligation to believe in God, for 'a belief that is commanded is an absurdity' (5:144). Faith in God is

not an affirmation […] to which we hold ourselves to be obligated, but one which we assume for the sake of an aim in accordance with the laws of freedom […] as adequately grounded in reason (although only in regard to its practical use) for that aim. (5:472)

These statements need further explanation, particularly with regard to their systematic foundation in Kant's philosophy. Clearly, theoretical philosophy is insufficient to support them. And they are far from clear, for Kant definitely holds that the apodictic certainty of the moral law provides grounds for belief in God that outweigh any considerations (or the lack of them) derivable from theoretical reason (see 5:472). At the same time, Kant cannot simply mean that practical reason takes over where theoretical reason fails: it is far from clear that there is any kind of smooth transition between these faculties that would make such an operation of replacement possible. It is at just this juncture that CPoJ commands our attention.
At the end of his Introduction to CPoJ, Kant says that the human mind has three faculties: those of cognition, feeling (of pleasure and displeasure), and desire. The faculty of cognition itself has three sub-faculties or capabilities: understanding, power of judgment, and reason, each of which 'contains' a 'constitutive' distinctive type of a priori principle with a distinctive sphere of application corresponding to a specific faculty of the mind. Thus, understanding characterizes the faculty of cognition as such and contains a principle of lawfulness applied to nature; the power of judgment ('independent of concepts and sensations that are related to the determination of the faculty of desire' [5:196-197]) characterizes the faculty of feeling and contains a principle of purposiveness applied to the experience of beauty and to the special laws dealing with natural things and events; while reason constitutes the faculty of desire and contains a principle of the final end applied to freedom (see 5:196-198).
If CPoJ may be said to have an overarching thesis, it is that the power of judgment

provides the mediating concept between the concepts of nature and the concept of freedom, which makes possible the transition from the purely theoretical to the purely practical, from lawfulness in accordance with the former to the final end in accordance with the latter, in the concept of a purposiveness of nature; for thereby is the possibility of the final end, which can become actual only in nature and in accord with its [nature's] laws, cognized. (5:196)

How does this mediation occur? What Kant says about 'the common human understanding', which consists of those cognitive powers that are 'the least that can be expected from anyone who lays claim to the name of a human being' (5:293), so (in effect) those cognitive powers required for one to be aware of one's own existence, is illuminating. The common human understanding is governed by three maxims of reasoning:

1. To think for oneself;
2. To think in the position of everyone else;
3. Always to think in accord with oneself.
[…] The third maxim, namely that of the consistent way of thinking […] can only be achieved through the combination of the first two […]. One can say that the first of these maxims is that [sic] maxim of the understanding, the second that of the power of judgment, the third that of reason. (5:294-295)

As far as thinking in the position of everyone else is concerned, it consists of setting oneself 'apart from the subjective private conditions' of one's own judgment, and reflecting on this judgment 'from a universal standpoint' by putting oneself 'into the standpoint of others' (5:295).
In his Logic (9:57), Kant refers to these maxims as 'general rules and conditions for avoiding error', so they apply to theoretical, aesthetic and practical reasoning or thinking. Since he holds that the moral law is a fact of pure reason and God is an idea of reason postulated by the moral law, it is particularly pertinent to look at these maxims in application to practical reasoning. We suggest that, so viewed, the first requires agents to adopt a broadly internalist viewpoint. I, an agent, must only accept practical precepts (maxims) if they are acceptable to me on the basis of what I personally (subjectively) value. The second maxim requires me to have values that any agent must espouse. This is because the exercise of the reflective power of judgment requires me to recognize that I cannot be the particular agent that I am (constituted, at least in part, by my personal values and choices) unless I am an agent, i.e., unless I have the capacities of the common human understanding, the cognitive faculties of the human mind that are necessarily shared by all agents. So, I cannot obey the first maxim unless my personal values are acceptable to me as an agent per se. This requires me to reason from the universal viewpoint of all agents, the viewpoint of an agent qua being an agent per se. Consequently, pure reason requires maxims in accord with the first maxim to be consistent with values that every agent must choose. However, the maxim of reason generated by rendering the maxims of the understanding and the power of judgment consistent is, surely, none other than the categorical imperative as expressed by Kant's Formula of Universal Law:

[A]ct only in accordance with that maxim through which you can at the same time will that it become a universal law. (4:421)

In CPrR, Kant tells us that consciousness of pure practical laws (moral laws), which is a fact of reason, is made possible for us

by [our] attending to the necessity with which reason prescribes them to us and to the setting aside of all empirical conditions to which reason directs us. The concept of a pure will arises from the first, as consciousness of a pure understanding arises from the latter. (5:30)

If we read what reason prescribes to us as the generation of the maxim of reason, and the setting aside of all empirical conditions as the operation of the reflective power of judgment, then the implication is that the moral law is given to us as the fact of reason in the processes of self-reflection that underpin Kant's three maxims of the common human understanding. Indeed, on this basis, despite the popularity of views to the contrary, Kant's argument for the moral law in Groundwork of the Metaphysics of Morals (GMM) is identical to his argument in CPrR. For, in GMM, Kant says that 'a human being' inherently

finds in himself a capacity by which he distinguishes himself from all other things, even from himself insofar as he is affected by objects, and that is reason (4:452)

and that

the rightful claim to freedom of will made even by common human reason is based on the consciousness and the granted presupposition of the independence of reason from merely subjectively determining causes (4:457)

on the basis of which agents must think of themselves as belonging to the world of understanding as well as to the world of sense; and though Kant here refers to 'reason', this duality is forced upon agents by what in CPoJ is designated as the reflective power of judgment.
In our opinion, when Kant says that the moral law/the categorical imperative is a synthetic a priori proposition, he means that it is one that an agent must accept on pain of failing to understand what it is to be an agent (see 4:420 coupled with 4:426). His argument that the moral law is given to us as the fact of pure reason (see 5:29-31) is simply that, given that understanding, the power of judgment and reason are the capacities that enable self-understanding, exercising these capacities consistently with each other requires acceptance of the moral law as the Formula of Universal Law. Now, supposing all this is so, what makes the question of hope central to CPoJ? Well, in Logic, Kant says that the field of philosophy may be summed up in the following questions:

1) What can I know?
2) What ought I to do?
3) What may I hope?
4) What is man?
Metaphysics answers the first question, morals the second, religion the third, and anthropology the fourth. Fundamentally, however, we could reckon all of this as anthropology, because the first three questions relate to the last one. (9:25; cf. CPuR A804-805/B832-833)

It is clear that CPuR deals with the first question and CPrR deals with the second. In a letter to Stäudlin in 1793, Kant says that it is in Religion Within the Boundaries of Mere Reason that his question 'What may I hope?' is finally worked out, and that this question amounts to, as he adds between brackets, a 'philosophy of religion' (quoted in Peters 1993, 15). While this is true if we depict the answer to the first question as finally worked out in The Metaphysical Foundations of Natural Science, and the answer to the second question as ultimately worked out in The Metaphysics of Morals, the position of hope within Kant's philosophy differs from that of the objects of knowledge and the imperatives of reason. Kant not only addresses the third question in all three of his Critiques; his answer to it concludes all three Critiques. In all three works, what I may hope for is addressed in the context of Kant's moral argument for God; issues concerning hope, therefore, may be said to integrate the three Critiques, which integration Kant claims is the systematic function of CPoJ. The reason Kant consistently gives as to why I must assume that God exists is that God must exist for it to be possible for the summum bonum (the state of affairs in which all will receive their just deserts in relation to their moral virtue), which is postulated by the moral law, to be brought about. In a nutshell, Kant claims that I must assume that God exists because the apodictic necessity of the moral law entails that I must hope that it is possible that I (all) will receive happiness in proportion to my (their) virtue, and God's existence is necessary for this hope to be realistic.
3 Faith and Hope Between Theoretical and Practical Philosophy
'Hope', therefore, needs to be related both to issues concerning knowledge and to practical ideas. When we do so, we discover a distinctive epistemic signature of 'hope' that distinguishes it from both knowledge and faith. While faith in God is certain,

no one will be able to boast that he knows that there is a God and a future life; for if he knows that, then he is precisely the man I have long sought (A 828-9/B 856-7)

(but whom, or so one would have to supplement this passage, one cannot, and should not be able, to find). Kant discusses this difference at length in the section on 'opinion, knowing, and believing' (where 'believing' stands for the German 'Glaube', which is better translated as 'faith') in the methodological chapter at the end of CPuR. The basic epistemological framework of this discussion is the same for faith and knowledge. In both cases, we deal with specific forms of 'taking something to be true', and in both cases it is towards propositional contents such as the existence of God and immortality that we adopt the epistemic attitude of 'taking to be true' (A 820/B 848). As such, faith is a direct replacement for knowledge in cases where knowledge itself cannot be obtained. But hope is not, for it presupposes that I am ignorant, or at least undecided, as to whether what I hope for is real or not, and knowledge-replacement operations cannot overcome this type of ignorance or uncertainty (for an analysis of forms of ignorance in the context of Kant's philosophy of hope, see Axinn (1994), 165-181, 248). What, then, are the important differences that make Kant ask 'What may I hope?' in place of 'What can/may I have faith in?'?
Kant introduces the notion of hope in the context of our prospects of achieving happiness in the future; more precisely, of achieving happiness earned by acting morally. This suggests that arguments based upon hope fall under what Kant calls the 'primacy of practical reasoning' (5:119-121), and are employed where our theoretical resources are insufficient. What one may call the official doctrine in these matters is pretty straightforward: all operations going beyond theoretical knowledge (the key topic of the methodologies of CPuR and CPoJ alike 5) can bring about an extension of our knowledge only 'with respect to practical reason' (e.g., 5:133). 6 However, Kant's notion of hope does not seem to fit the picture of purely practical transgressions of the critical bounds of knowledge in at least two ways: hope does not always lead to fully specified or characterized outcomes (see section 2.2), yet hope remains importantly related to issues in theoretical philosophy.
In the very same sentence in which Kant defines hope as being directed towards happiness, he declares that the practical necessity to assume moral principles can be equated with the theoretical necessity that requires us to hope that we can earn happiness by morally good behaviour (A 809/B 837). In this equation, the question 'What may I hope?' is not only both practical and theoretical; it also has a clear direction towards the realm of the theoretical, so that

the practical leads like a clue [in the German original, it reads 'nur als ein Leitfaden', as 'nothing but a clue'/'only as a clue'; note also that a 'Leitfaden' is more than just a hint: it guides one to a discovery] to a reply to the theoretical question, and, in its highest form, the speculative question. (A 805/B 833)

This connection is confirmed in a number of passages. Two examples will suffice. In CPuR (A 809/B 837), Kant distinguishes between reason in its practical usage and a theoretical usage of reason, and it is in the latter context that we must assume that 'everyone has cause to hope for happiness in the same measure as he has made himself worthy of it in his conduct'. In Religion, hope is explicitly related to the discovery, following the guideline of 'natural miracles', of 'new laws of nature', while genuine miracles destroy our confidence in our theories of natural phenomena (6:88; 'hope' being emphasized in this passage by Kant himself).
This provides a genuine challenge. It is important to emphasize again that, though referring to issues in theoretical philosophy, these passages do not provide a strategy for smoothly extending the domain of possible knowledge beyond its theoretical limitations. 7 Hope's functioning like knowledge does not imply that we indeed have knowledge when we have hope. Hope is more clearly theoretically directed than postulates and hypotheses, but on the other hand, hope only allows weaker conclusions to be drawn than those possible on the basis of these other notions. If I am required to postulate the existence of God, I must act as if He exists; if I must hope that happiness can be achieved, this only implies acting as if happiness could possibly be brought about. 8

5 The status of the methodological considerations in CPrR is, at least at first sight, different from that in the other two Critiques: in moral philosophy, methodology is concerned with the question of how practical reason can be made subjectively effective (5:151). It would be worthwhile to explore how this question relates to the more traditional methodological discussions on subjective vs. objective forms of cognition in the other two Critiques.
6 One has, therefore, to conclude (together with the editors of the Academy edition) that the Second Critique (5:132) contains a Freudian typo when Kant omits a 'nicht' in the phrase 'erweitern also zwar das spekulative Erkenntnis' (otherwise, the 'aber' in the next clause would be incomprehensible). Freudian typos are frequent in the Third Critique, with its frequent switches between 'Teleology' and 'Theology'.
7 When Wiesche (2012, e.g. 59) emphasizes that the 'intermediary epistemic status [epistemische Zwischenstellung]' of hope results from the primacy of practical reason, this, in the light of the passages just quoted, precisely misses the intermediary status of hope between the theoretical and practical dimensions of human reason. Cf. also Wimmer (1990, e.g. 75-6) on the necessity to distinguish (more clearly than in Kant's oeuvre itself) between the moral and religious tenets in Kant.
8 Cf. Beyleveld (2012) on the necessity of sharpening Kant's phrase in A 806/B 834: the important point about hope is not that it implies an argument leading from 'something should happen' to 'something is', but only to 'something should be'.
Hope and Epistemic Openness
Hope's epistemic status is remarkable. While clearly being a key concept in Kant's systematic enterprise, hope requires us to adopt an open epistemic attitude. While hypotheses, for instance, can play a role in explanations, no such explanatory power, according to Kant, can be derived from hope; this weakness, however, is compensated for by hope's ability to establish systematic unity. This may be put in modal terms. I can only hope for what is not impossible, but being so directed is not sufficient for being in a state of hoping. We need to attach a positive value to what our hope is directed towards, as well as presupposing a form of indeterminacy on the epistemological level: hope requires dubiety, a questioning, as against an affirmative, stance; 9 we cannot know, in the sense of having determined concepts, what we are going to get when we pursue what hope is directed towards. This sets arguments involving hope apart from the forms of an 'extension of pure reason' as discussed in CPrR. The key problem there is how concepts that go beyond what can be given in intuition can attain 'objective reality', i.e., how these concepts can be treated just like ordinary concepts that arise in theoretical cognition. Kant argues in a way that rather smoothly extends our ordinary forms of knowledge. A concept such as 'God', when considered empirically, remains a 'not precisely determined concept' (5:139), but when considered practically, it can be thus determined. But this is not the case with Kant's prototypical objects of hope: hoping that we can achieve future happiness does not tell us more about what this happiness will consist in (besides the bare fact that is the object of hope, namely that it will be a form of happiness that is deserved on moral grounds), and the same holds with regard to the systematic unity of the special laws of nature. While having faith in the existence of God, or accepting immortality as a matter of postulates, extends our knowledge in the sense that we both accept the content of these claims and also have a conceptually fixed picture as to what it is that we thus accept, this does not seem to be the case with the typical objects of hope.
Hope needs to be triangulated very carefully here. While I can only hope for what is not in itself, and that on theoretical grounds, impossible, I may wish for the impossible, or accept an article of faith despite its absurdity. On the other hand, hoping is different from assuming that something is the case or will be the case (i.e., from expecting it): while hypotheses claim the status of being (unproved, yet possible) propositions from which further theoretical argument can depart, this is impossible for both faith and hope. This holds, as it were, both upwards and downwards: the claims of hope cannot be derived via theoretical deduction, and they provide no basis for further such deductions.
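The triangulation just sketched can be summarized schematically. The operators below (Des for the agent's desire, Bel for the agent's belief) are shorthand we introduce for exposition only, not Kant's notation:

\[
\begin{aligned}
\mathrm{Wish}_A(Q) &\equiv \mathrm{Des}_A(Q) && \text{(no commitment to possibility)} \\
\mathrm{Hope}_A(Q) &\equiv \mathrm{Des}_A(Q) \wedge \neg\mathrm{Bel}_A(Q) \wedge \neg\mathrm{Bel}_A(\neg Q) && \text{(desire plus epistemic openness)} \\
\mathrm{Expect}_A(Q) &\equiv \mathrm{Bel}_A(Q) && \text{(belief, desired or not)}
\end{aligned}
\]

On this rendering, the two negative conjuncts of hope spell out the sense in which the agent 'considers it to be possible' that Q, in line with the definition given in the introduction.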
Let's work out the difference between faith and hope, within a Kantian framework, in some more detail. 10 First, as regards their respective objects, Kant distinguishes, though without much argument, between them. The prototypical object of hope in Kant's Critiques, future happiness, is not a matter of faith/postulates, while immortality is (which raises the question as to how immortality, as an object of faith/postulates, and future happiness relate to each other). The important difference on this level is that faith, but not hope, is directed towards objects or certainties that already hold here and now; they lie, as Kant frequently states, 'in the world'. 11 Closely related is the fact that hope is tensed in a way that is different from both faith and postulates (cf. Beyleveld 2012; Stratton-Lake 1993). Hope is 'subjectively future', while faith is 'subjectively present'. A last point here: typical forms of knowledge-transgressing reasoning, such as the framing of hypotheses or analogical reasoning, are inadequate for dealing with the supersensible. In §90 of CPoJ, Kant maintains that the framing of hypotheses fails here because any hypothesis requires confirmation or falsification via sense-experience. The same argument is pressed against analogical reasoning (5:464-5). 12 While there are other ways of going beyond our established knowledge that do not pronounce mere possibilities (both postulates and faith are explicitly about the supersensible, but are pronounced as certain), hope is directed to what is viewed as merely possible while still being able to be directed towards the supersensible. The insufficiency of theoretical reasoning to provide access to the supersensible has further implications. A successful way of accessing the supersensible cannot be a matter of probabilities (see 5:465, and most explicitly 5:400: 'Probabilities count for nothing here'; see also A822/B850). This, then, will also hold for hope regarding the supersensible.

9 Stratton-Lake (1993, 67) analyses Kant's notion of hope in terms of an 'infinite striving', an approximation that never exhausts what it is directed towards. This does not seem adequate for the positive connotations that hoping conveys. Better suited is Wiesche's (2012, 57) characterization of hope in terms of a 'kontrafaktisches Gelingen'.
10 Also interesting are some passages in Struggle of the Faculties (7:43) where Kant states that hope becomes alive in us through faith. This is a clear two-step account that gives hope a function that is different from that of faith; cf. also 7:42 on 'hope and fear'.
11 In the relevant passages in CPoJ, this phrase is regularly attached to the summum bonum (see, e.g., 5:435, 447).
Secondly, there is a difference in the degree to which the objects of faith and hope, respectively, come to be conceptually determined by taking the attitudes of faith and hope towards them. To be sure, it is possible to state that I hope that a very specific event is going to happen, but this does not seem to capture the essence of hope in the Kantian picture. When I state that I hope that the sun is going to shine tomorrow, this is not essentially different from wishing (when it has been raining all day today) that this is going to happen. But Kant clearly distinguishes hope from wishing (see 6:117). If we can only rely on our own human resources, we can only wish for happiness. But hoping is closely related (though not identical) to expecting a future event. Only when a superhuman power is introduced (and that is what Kant's arguments concerning hope are intended to establish) can we go beyond wishing and achieve a state of expectation (6:482). 13 Take another example (discussed in detail in Beyleveld 2012): when I express the hope that my friend has caught her train on time, I am not only stating that I am ignorant as to whether she did or not, and at the same time expressing my desire that she did; I am also engaging in a consideration or exploration of alternative scenarios. Without compromising my ignorance about the actual course of events, I consider alternative options for what might have happened. Even where hope is about a concrete event or object, and very clearly so in those cases where the object of hope is not conceptually determined, hope is fundamentally directed towards what we may call a framework 14: we need not know what a happy life feels like or what this implies in terms of concrete features; it is sufficient to be directed, in the attitude of hope, towards a framework that inherently leaves open that/how it will be filled in. Put simply: expecting can be seen as a state of believing that something will happen whether or not it is desired; hoping is a state of considering that something that will satisfy one's desire might happen; and wishing involves no commitment to something actually or possibly happening but merely desiring it.

12 Maly (2012, e.g. 108-113) strongly emphasizes the 'otherness' involved in Kant's discussion of symbolic modes of cognition.
13 See also 5:452: what the 'righteous man' (Spinoza is the stock example) cannot have is an 'expectation' that nature and purposes are going to harmonize. Still, hope does not become identical to expectation; and another interesting comparison can be made with 'Trost', 'comfort', which Kant explicitly links to hope (6:76), but to which he also denies certainty.
14 See also Axinn (1994), who needs to further differentiate his original statement that in order to hope we need to be able to mention a 'description of a certain situation' (193), in the sense that being able to 'construct a schema' is also sufficient for hope (195). This is because the methodological terms 'construction' and 'schema' do not exactly fit the epistemic constraints under which hope operates.
All of this can be given a more nuanced analysis in the epistemological terms used by Kant himself, even though he never devotes an explicit discussion to these terms. Hypotheses are introduced in order to serve as a basis for an explanation (5:466; 5:126). Explanations are (see 5:411) deductions of conclusions from given presuppositions. So, what the hypothesis assumes must be understood clearly and distinctly. Since matters of faith, 'Glaubenssachen', cannot be clarified via sensory intuitions, they cannot form the basis for an explanation. Faith is, as Kant states, a basis for 'comprehensibility' (5:126, see also 6:118). This term, which is not given a clear technical meaning in Kant's critical philosophy, 15 is also used in the methodological sections of CPoJ where Kant discusses alternatives for explanatory modes of reasoning. In this respect, too, hope and faith are closely related: what I can only hope for is also unable to serve as the basis for deductions, and consequently needs to be analyzed in terms of 'rendering something comprehensible' or 'intelligible' rather than 'allowing the derivation of knowledge claims'. Hope, in this context, has a clearer systematic position than faith. While matters of faith are indeed, to some extent at least, conceptually determined, this is not the case for hope. The ambiguity inherent in faith (determining its concepts, to some extent at least, but still not allowing for deductions) is avoided in the notion of hope. Hope is unable to function as a premise in deductions not only because of its being related to moral ideals, but also because of its epistemic openness.
New Epistemic Options
It is striking to see how deeply this openness is built into Kant's epistemic vocabulary. One example: in almost all the key passages where he deploys the notion of hope, Kant also employs the less spectacular, but epistemologically equally interesting, term 'Leitfaden' (which may be translated literally as a 'guideline' or 'guiding thread', Ariadne's thread providing secure guidance in a maze). 16 Take some examples: in CPoJ §70, where Kant discusses the 'antinomy' of the power of judgment, it is precisely the reflective, and not constitutive/determining, character of the power of judgment that requires us to accept a mere 'Leitfaden' as sufficient, and Kant translates this immediately into a statement to the effect that we can 'only hope' that nature forms a unity under empirical laws (5:386). The same terminology is used in §89 (5:460); the passage has already been quoted: when we relate hope not to the practical realm, but to reason in its theoretical usage, we again get a 'Leitfaden'.
Kant offers a number of possible equivalents for this term. In CPoJ §72, he distinguishes forms of reasoning along a 'Leitfaden' from investigations into a first origin (5:389). This said, there remain two options for understanding what a 'Leitfaden' might be: it can be subjectively valid, which makes it 'nothing but a maxim of the power of judgment', or it can be an objective principle of nature. This immediately gives us two alternative renderings: maxim and principle. In neither case does a 'Leitfaden' itself amount to genuine knowledge. A 'Leitfaden' is not aimed at producing theoretical knowledge: what it gives us is, as Kant states in the same §72 (5:390) of CPoJ in surprisingly open terms, an 'Ahnung'. This term is notoriously difficult to translate. The Cambridge translation offers 'presentiment', but 'intimation' might be an alternative translation, both capturing the fact that 'Ahnung' goes beyond traditional conceptions of rational explanation. 'Ahnung' comes coupled with a 'Wink' ('hint'/'clue') that it might be possible to go beyond a purely natural/naturalistic study of nature in terms of (mechanical) causality. A 'Leitfaden' is a method, but not a method geared towards deriving specific results; it is a method that only claims that we can get a certain type of result (such as, in the context of CPoJ, nature's being ordered as a realm of purposes), but it does not point towards what we get in the process of investigating nature along this 'Leitfaden'.

15 The notions of 'Deutlichkeit' and 'Verständlichkeit' are dealt with in Kant's lectures and writings on logic (see Breitenbach 2009, 171-2). Kant distinguishes between a 'logical' clarity, which is brought about by determining concepts, and an 'aesthetic' clarity, based upon intuitions and conveyed by examples, and termed 'understandability'/'Verständlichkeit' (9:62; the Cambridge translation renders the German 'verständlich' differently in different texts). Logical and aesthetic clarity do not coincide. Breitenbach emphasizes the necessity to combine both forms of clarity in coming to judgments about nature. Both conceptual modes of understanding and 'symbolic representation' (6:171) are described by Kant as ways of making statements comprehensible.
16 CPuR regularly proceeds along the 'Leitfaden der Kategorien'. Grimm's (Grimm 1885) dictionary views 'Leitfaden' as a term of the 18th century that, however, is an insignificant addition to the German language, since this word, given its evident mythological background, was readily understandable from the very first moment when it was adopted and required no explanatory efforts. It is noteworthy, however, that it was related to methodological ideas on more than one level; 'Leitfaden' was, as one of the first occurrences in the writings of Lessing (quoted in Grimm) reminds us, broadly used for introductory guidebooks. Förster (2002, 335) discusses the problem that, within the framework of CPoJ, the 'idea' of an organism cannot guide the investigation, but that a grasp of the complete complexity of what an organism can be needs to precede all further investigation. He relates this argument to yet another dimension of the semantic field of 'Leiten', by referring to Goethe's usage of the concept of a 'Ladder'/'Leiter'.
Kant further explicates this notion (5:398) by stating what a 'Leitfaden' is aimed at, namely the conducting of research ('nachforschen'), i.e., the mere possibility of investigating nature in its organic objects ('if we would even merely conduct research among its organized products by means of continued observation'). The 'Leitfaden' has (again just like hope) a twofold object: the compatibility of two forms of causality, teleological and mechanical, and the possibility of transcending theoretical reasoning. Although a 'Leitfaden' is a 'mere Leitfaden', not constituting knowledge in itself, it remains within the domain of pure reason, and is not affected by issues concerning probabilities.
Kant consistently opposes these operations to 'explaining'. What 'explanation' means is sufficiently clear: logical deduction from first principles that we know clearly and distinctly. The other terms comprise a large variety of options and operations; some are directed more towards future discovery ('indicate', 'presentiment', 'clue'), some are concerned more with modes of organizing knowledge we already have, or of organizing data in a way that renders them comprehensible (making 'intelligible'). The latter term is particularly interesting because Kant opposes it explicitly to 'explaining' (5:412, with an added difficulty because Kant uses two different terminological alternatives for 'explanation', namely 'deduction' and 'explication'). The passage on 'elucidation'/'Erörterung' follows the pattern that has emerged repeatedly already: where no deduction from clearly conceived principles is possible, we need recourse to other epistemological strategies such as 'elucidating'. This, however, also changes the subject matter we are discussing. This passage is not so much about an alternative mode of epistemic access as about the compatibility of two different modes or two different explanatory maxims. If they are to be in harmony, this must necessarily be by reference to a higher, supersensible principle; and this must no longer be argued within the framework of explanatory deduction. Hope, as a theoretical operation, is a way of discovering. This characterization of hope manages to combine the theoretical impossibility of having knowledge with the positive optimism of being able to arrive at insight.

17 In his discussion of the connection between a system of nature and a system of freedom in Kant, Guyer uses notions from the semantic field of 'hope' (such as 'encouragement', Guyer 2005, 22) together with epistemic notions such as 'assume' or 'suppose' (20) that are in need of further clarification.
18 Maly (2012), an enormously precise and detailed analysis of the role of symbolic cognition in Kant, does not really compare this type of cognition with other cognitive modes. Breitenbach's (2009) book on the analogy between reason and nature, likewise, does not give a comprehensive typology of forms of knowledge extensions; but see her remarks on the 'Deutlichkeit und Verständlichkeit der Natur' (171-2).

4 Towards a Transcendental Phenomenology of Hope?
If our analysis thus far is sound, then it indicates that it is in terms of 'hope' and the related novel epistemic concepts Kant introduces that we can combine theoretical and practical reason in a way that yields a strategy for positively dealing with the limitations of knowledge Kant identifies, while at the same time compensating for these limitations by appealing to results derived from practical reason. In short, the status to be accorded to transcendent objects (free will, God, immortality, the summum bonum) is that their existence is something that we rationally ought to hope for.
But, as we have seen, this is not what Kant does. Of these objects, it is only the summum bonum that we rationally ought to hope for. The other objects are portrayed as objects of rationally required faith. According to Kant, the moral law is given to us as the undeniable fact of pure reason, on the basis of which the existence of free will (being the ratio essendi for the moral law) must be equally undeniable as must be hope for the summum bonum. Then, because the intelligibility of hope for the summum bonum requires the possibility of the summum bonum, which requires the existence of God and immortality (i.e., God and immortality are ratio essendi for the summum bonum), the propositions that God exists and that we are immortal are rendered as undeniable as the moral law.
But giving a different epistemic status to the summum bonum from that given to the existence of God and immortality is extremely problematic. For one thing, if God (for Kant an omnipotent wholly just being) exists, then we rationally ought not merely to be able to hope for the summum bonum; we rationally ought to believe that it will be brought about. The idea that it might possibly not be brought about contradicts the idea that an omnipotent wholly just being exists. So, if understanding that we are categorically bound by the moral law requires us to have faith in God, then it also requires us to have faith, not merely hope, in the summum bonum. Conversely, if all we can derive from the rational necessity of acceptance of the moral law is that we can/ought to hope for the summum bonum, then all we can derive is that we ought to hope that God exists and that we are immortal. This point can be put slightly differently: in reasoning that we can/ought to hope for the summum bonum, Kant seems to reason that the unity of theoretical and practical reason requires us to hold that the summum bonum ought to exist (i.e., it would be an absolutely good thing for it to exist). But, on this basis, it follows only that God ought to exist and that we ought not to believe that it is impossible that God exists. Also, if (as Kant has it in CPrR, 5:4-5) the presupposition of free will (FW) grounds (is sufficient reason for) faith in the existence of God and immortality (G&I), and free will is the necessary condition for the moral law (ML), then must not God and immortality be necessary conditions for the moral law, too? This is because if 'ML entails FW' (which is equivalent to 'not-FW entails not-ML') and 'FW entails G&I', then 'not-G&I entails not-ML'.
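Rendered schematically, this closing inference is a two-step propositional argument (the abbreviations follow the text; the notation is ours, not Kant's):

\[
(\mathrm{ML} \rightarrow \mathrm{FW}) \wedge (\mathrm{FW} \rightarrow \mathrm{G\&I}) \;\vdash\; \mathrm{ML} \rightarrow \mathrm{G\&I} \quad \text{(hypothetical syllogism)}
\]
\[
\mathrm{ML} \rightarrow \mathrm{G\&I} \;\vdash\; \neg\mathrm{G\&I} \rightarrow \neg\mathrm{ML} \quad \text{(contraposition)}
\]

So, on these premises, denying God and immortality commits one to denying the moral law.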
Matters are not helped by maintaining that faith (not mere hope) in the existence of the summum bonum is required. If I believe that the summum bonum will necessarily be brought about then will this not alter my motivation to obey the moral law? There will no longer be any virtue in my obeying the law. I will not obey it because it is the law, but because I believe that punishment is inevitable if I disobey it. And should not Kant concur, for he holds that knowledge that God exists would damage moral motivation in this way (see 5:147)? To be sure, Kant would disagree, because he holds that faith in God does not negate virtue, and it is surely his belief that this is so that explains his claim in CPuR that he has denied knowledge of God in order to create room for faith (see B xxx). But Kant is wrong, simply because what distinguishes faith from theoretical knowledge for him is not the certain truth of the belief that God exists but only the grounds we have for affirming this certain truth, and it is not the certain truth of the belief or the grounds for its acceptance, but the acceptance of it as being certainly true that does the motivational damage (see Beyleveld 2012, 31).
The problematic nature of Kant's reasoning can also be shown by reflection on the architectonic of his three maxims of the common human understanding. Kant, here, views the moral law as a categorical imperative, as generated by the rational requirement to render observance of the maxim of the understanding consistent with observance of the maxim of the power of judgment. The way in which he interprets this has the following results: that aspect of myself in which I am the same as all other agents, which I must recognize on the basis of the power of judgment, revealed by abstracting from all my particular individuating characteristics, is thought of as giving me access to my noumenal or 'proper' self (see, e.g., 4:457), which he refers to as 'homo noumenon' in The Metaphysics of Morals, and homo noumenon is thought of as giving the moral law to homo phaenomenon (myself as the particular agent that I am) (6:239, 335). This accords with the fact that Kant thinks of the moral law as a law of nature for a being possessing the characteristics of homo noumenon, which is a being with free will unaffected by heteronomous incentives, and with his claim that the moral law only appears as a categorical imperative to homo phaenomenon (4:444-445). This is odd for at least two reasons. First, architectonically, it makes the moral law the law of the power of judgment, and free will the object of the power of judgment rather than of reason as Kant consistently has it (see, e.g., 5:198). Second, since free will is ascertained by abstraction from the contingencies that constitute homo phaenomenon, it makes it difficult to see how a law governing homo noumenon can govern homo phaenomenon. The response that I cannot be the particular agent that I am without being an agent is not sufficient, unless coupled with the recognition that I cannot be an agent without being the particular agent that I am (see Beyleveld 2013a), simply because, as Bernard Williams has pointed out, I cannot be a rational agent and no more (see Williams 1985, 63). Related to this, to see the categorical imperative as an application of the moral law that is the law of the nature of such an abstraction, especially when faith in my immortality is to be seen to follow from recognition of my freedom, surely implies that there is no need for a law to protect my value.
This second reason, we suggest, goes to the heart of the matter. The maxim of reason needs to be interpreted in such a way as to render the other two maxims consistent with each other, not by subordinating the maxim of understanding to the maxim of the power of judgment, and this can only be done by according the ideations of understanding and the power of judgment the same epistemic status.
A strategy that follows this up has a precise place in the history of post-Kantian philosophy: the account that Fichte (1978) gives of how we acquire knowledge about the world integrates will and cognition, activity and receptivity, and gives this integration an anthropologically plausible foundation. My idea that there is a world that exists apart from my awareness of such a world arises from my awareness that I cannot infallibly predict what the content of that awareness will be, and especially that I cannot infallibly determine what my experience will be simply by willing it. At the same time, I have a sense that I can control some aspects of my awareness by my will, that I am, at least in part, subject to laws of pure reason that are not explicable in terms of the causal laws of nature. Kant's transcendental philosophy may be looked at, indeed, should be viewed as an exercise in critical self-reflection on the idea of having a self in relation to the world, and to other selves. The idea of a self is, inherently, the idea of a being with finite powers of willing. The idea I have of myself as a self is inextricably linked with the idea of there being something that is not myself; while, at the same time, awareness of something that is not myself only arises when I am aware of, have the sense that, I am distinct from and exist in some way independently of that which limits the power of my willing. This is precisely the epistemic predicament in which hope functions; hope captures, as has been analysed in more detail in Part 2, this experience of having a positive attitude towards an open framework of possibilities that defy my attempts to grasp and control them. Now, if this is correct, Kant's transcendental philosophy should be taken, as a whole, to be an analysis of the dialectic between my phenomenological sense that there is a world independent of myself, of which I am at least a part, which I must suppose is governed by universal natural causes, and my phenomenological sense that I am able to act according to laws of pure practical reason (see also Lutz 2010, 2012). In short, it is a phenomenology of hope. But, as such, it cannot purport to tell us (to establish) whether or not there is a world that exists independently of my phenomenological awareness of it, nor whether that world in itself is wholly determined, governed by pure chance, or in accordance with the laws of freedom or some combination of these. Nevertheless, rational reflection on how finite beings become self-conscious cannot get away from the fact that reason produces the idea that we are bound by the moral law, and that the world ought to be ordered in ways consistent with what reason prescribes. This cannot be denied for the simple reason that to do so involves the use of the very thing the efficacy of which would be denied. Our suggestion is that reason, in demanding consistency, demands a view of the status of reason that simultaneously recognizes these ultimate limitations of reason, yet, at the same time, opens up a space for action (both theoretical and practical) to be meaningful for us, for it to have a point.
The things that Kant himself has to say about what transcendental philosophy can establish are ambivalent, and, remarkably, he employs precisely the terms he developed in discussing hope to address large-scale issues referring to transcendental philosophy as a whole. The Conclusion that Kant gives to CPrR is, perhaps, one of the clearest indications of his intentions in this regard. According to Kant, the idea that the world is governed by universal natural causes threatens to negate the idea that I have any value in myself, which is most dramatically presented to me when I contemplate the vastness of space and time and think that I might just be a speck of matter in the cosmos. On the other hand, my awareness of the moral law, the fact that reason in its practical use requires me to assent to a categorical imperative, requires me to assent to the idea that the cosmos is inherently meaningful (5:161-162). Kant here uses the epistemic notions that we encountered in his discussion of hope: these experiences can stimulate a 'Nachforschung'/'inquiry', even though, strictly speaking, an inquiry is impossible; the idea of a consistent determination of my existence under purposes makes it possible to 'infer' (which is a rather too strongly logical translation of the far more open German 'abnehmen') a life-form that transcends the realm of sense experiences.
The conclusion of CPrR combines a threat and a promise. The problem is that these are not equally forceful for Kant. He maintains that the fact that I can contemplate the vastness of the cosmos at all and perceive the threat it poses to the cosmological meaningfulness of my existence (and, he might have opined, that I have the idea of a wholly material world only because I experience limitations to my will), means that primacy must be accorded to the promise generated by pure practical reason. Because Kant holds that pure practical reason requires me to suppose that I have a free will, etc., the consequence is that I am required to have faith that these things are real. However, if what we are suggesting is sound, then, to put it bluntly, the idea of the moral law ought not to be separated from that of the categorical imperative, and must be seen to be grounded (have its ratio essendi) in the fear that my being has no value, coupled with the hope that it does have value. 20 If we view the moral law as the law of this hope and fear, this integrates the finiteness of man with his supersensible dimensions, but within one phenomenon (see also Stratton-Lake 1993). 21 Consequently, the essence of being human is not simply that one has the capacity for autonomy, in one's capacity for agency. It resides in the phenomenology of being a vulnerable agent. And this is a being who hopes and fears. It is the capacity for hope (for the efficacy of reason in cosmological matters) and fear (that reason might have no such efficacy) that is the ultimate source of meaning, and not merely meaning that can motivate action. But in the latter regard, human beings are to be seen as having dignity, a moral worth, precisely because they are hoping and fearing beings.
Concluding Remarks
The position presented in Part 3 is no more than a sketch. To justify it fully requires much more work. Parts of the argument have been developed fully elsewhere (see especially Beyleveld 2012 and 2013a), but we admit that more needs to be done. We therefore put forward our thesis as a suggestion or hypothesis for further investigation: the power of judgment does not provide us with reason to believe that the cosmos is ordered as a unified system compatible with the categorical imperative and its postulates, but requires us to hope that it is, and no more.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
"year": 2015,
"sha1": "1ae0406bb0fea585d36f6e0fecb07537836dcef7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10677-015-9564-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e422363f65cf497f71bd8304c39df446e941894c",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
MR On-SeT: A Mixed Reality Occupational Health and Safety Training for World-Wide Distribution
The current generation of dedicated Mixed Reality (MR) devices can be considered the first generation that is truly mobile while also being capable of sufficient tracking and rendering. These improvements offer new opportunities for the on-set use of MR devices, enabling new ways of using MR. However, these new use cases raise challenges for the design and orchestration of MR applications, as well as questions about how these new technologies influence their field of application. In this paper, we present MR On-SeT, an MR occupational health and safety training application, which is based on the experiences of an operational division of a world-wide operating German company. The intended purpose of MR On-SeT is to increase employees' awareness of potential hazards at industrial workplaces by using it in occupational health and safety training sessions. Since the application is used at various locations throughout the company's world-wide subsidiaries, we were able to evaluate it through an expert survey with the occupational health and safety managers of seven plants in France, Germany, Japan, and Romania. They reported the condensed experience of around 540 training sessions collected within three months. The purpose of the evaluation was twofold: 1. to understand their perceived attitudes towards the application-in-use, and 2. to collect feedback they received from respondents in training sessions. The results suggest that MR On-SeT can be used to extend current, predominantly theoretical, methods of teaching occupational health and safety at work, and that it also motivates existing employees to actively engage in the training sessions. Based on the findings, several further design implications are proposed.
Introduction
Research on Mixed Reality (MR; including Augmented Reality, AR) and Virtual Reality (VR) in education and training has been growing exponentially over the past 20 years [1]. These technologies are now available for the mainstream and consumer market. However, it is important to differentiate between them: whereas MR consists of the combination of real and virtual content (e.g., the integration of virtual objects to enhance the user's real physical environment), VR provides the ability to cut out the physical world and immerse users in a completely computer-generated environment [2]. According to a survey by the World Economic Forum [3], 53% of the companies in the professional services are likely to adopt both technologies before 2022. The benefits of using MR and VR for training and education, such as increased motivation to actively engage in the training or the availability of new modes of interaction, have been validated in different studies over the past years (see e.g., [4]). Also, both technologies have been used in various training use cases, e.g., in health training [5], at universities for hybrid education [6], in a training approach for OHS [7], or in teacher training in an MR integrated learning environment [8].
According to the World Health Organization, occupational health and safety (OHS) training aims to develop, acquire, and extend the knowledge and skills required to perform work in a safe manner, while having a positive impact on the sensed safety climate, i.e., the perceived value of safety in a work environment [9], of the organisation [10]. To date, many factors have been identified as important components of a safety climate, including management and organisational practices, such as the adequacy of training and the provision of safety equipment [9]. Furthermore, establishing a safety climate is positively perceived by the employees, because the organisation values their safety and well-being [11].
Lecture-based OHS training (e.g., explaining risks supported by depictions of hazardous situations) is still widely adopted, but Lawson et al. [12] have shown that using VR might be more efficient. For example, they found higher knowledge retention and engagement levels for VR fire safety training compared to using PowerPoint.
The present study is based on the hypothesis that MR and VR may provide a novel, more engaging approach for training employees on the risks associated with their regular work tasks. Based on our experience, we expect that modern devices such as the Microsoft HoloLens (used in this study) can enable a mobile, hands-free MR experience.
In the next sections, we present related work, followed by an introduction of MR On-SeT, an MR-based OHS training application we developed. Afterwards, the results of an expert survey conducted directly after the roll-out of MR On-SeT in seven plants in four countries are presented. In the subsequent section, we discuss implications for designing an MR-technology-assisted approach to training occupational safety. Finally, conclusions and an outlook on future work are proposed.
Background
Mixed Reality (MR) describes a continuum between the real world and Virtual Reality (VR) [13]. It includes any kind of simulation in which virtual elements and the real world are blended into a semi-virtualised, mixed environment. Augmented Reality (AR, i.e., reality enhanced with virtual objects) and Augmented Virtuality (AV, i.e., VR enhanced with real objects) are part of MR.
The orchestration of MR applications can be defined as the activities necessary to prepare the virtual content and the user experience in such mixed or virtual environments, and to adapt them during run-time [14]. Koller, Rauh, et al. [15] distinguish between two types of orchestration: pre-orchestration and live-orchestration. Pre-orchestration refers to the preparative measures for a user experience, such as planning the course of action of a specific training session beforehand. Many MR and VR training systems are completely pre-orchestrated (e.g., [7]), since the program flow is fixed by design. Live-orchestration describes all measures available to re-adjust or control the user experience during the immersion, including changing digital content during run-time or asking provocative questions. For example, the temporary take-over of a virtual avatar is exemplified as live-orchestration by Koller et al. [16].
OHS training is restrained by the requirement not to expose trainees and co-workers to health and safety risks [17]. Some risks can be reason enough not to conduct training sessions in real-life settings, such as, in the context of OHS training sessions, working at great heights [18]. Besides, on-site OHS training can interfere with production targets [17].
One approach to safely expose trainees to hazardous situations in their context of work is the use of VR learning environments, as was done by Ke et al. [8]. In this context, scholars have already shown the benefits of using MR and VR technologies for OHS training. Eiris et al. [19] demonstrated how to use 360° images augmented with traditional two-dimensional user interface elements to design a training game on computer screens for construction workers. Trainees have to explore the 360° sphere and identify potentially hazardous situations. By selecting the right fix from multiple possible fixes, they can solve each situation. The authors found that users value the idea of the software, but also stressed potential for improvement, e.g., using a more advanced display technology or applying the approach to other work areas [19]. Also aimed at training construction workers, Yabuki et al. [20] proposed a system in which workers on-site can discuss OHS issues with their office-bound supervisors. They employed AR to increase awareness of OHS and minimise unsafe conditions. Others [17] demonstrated the use of a Head Mounted Display (HMD) with an advanced head tracking system for training work at great heights on construction sites. They found that experiencing heights in a game reflecting the work context already allows trainees to accustom themselves to the working conditions and therefore increases their comfort working under these circumstances after the training. In contrast to the close-to-reality virtual environment described by [17], Shamsudin et al. [7] developed a comic-like environment. They report that, with their VR solution, trainees struggle to transfer OHS knowledge learned in VR to real-world situations. Van Wyk and De Villiers [21] offered VR for training miners on occupational safety using gaming mechanisms. The authors describe how they modelled a mining environment containing several hazards for trainees or virtual co-workers, based on contextual requirements and constraints of the setting.
All the aforementioned use cases typically extend OHS training by adding some gamification elements (integrated into software) to facilitate users' learning. Furthermore, the presented occupational training simulations either use computer screens or VR-HMDs. The former offers only limited levels of engagement and presence, i.e., the sense of being in the virtual environment [22]. The latter takes the user completely out of the real environment, which raises the problem of incorporating the trainer into the virtual environment and requires an extended hardware setup. Especially in highly immersive systems, simulation sickness [23], i.e., malaise during and after exposure to a simulation (e.g., in VR), occurs. In general, many current VR solutions require a rather complex hardware set-up, since they often come with tracking hardware to be installed in addition to the HMD.
There is a broad body of research on digital games to enhance learner motivation [24], mainly by providing extrinsic rewards for quantifiable accomplishments [25]. According to Whitton [26], Learn through Play or Playful Learning for adults is a still emerging field. While there are many similarities to children's play, adults bring assumptions and values to the practice of play. Whitton specifies that there is a need for establishing trust and relationships between group members for playful learning, since the availability of games (and playful activities) alone is not enough [26].
Burke et al. [27] defined three types of engagement in OHS training, ranging from least engaging (e.g., lectures) through moderately engaging (e.g., programmed instruction) to most engaging (e.g., hands-on experiences). The authors state that the more engaging a training is designed, the better the knowledge acquisition and the greater the reduction in lost-time injuries.
Implementation
In this section, we present our approach for implementing MR On-SeT and provide some contextual information. Furthermore, the application is characterised using user flows from the perspective of the OHS managers and the perspective of the trainees.
Approach and context
MR On-SeT is an approach to simulate an informal learning experience in a formal learning setting. It attempts to model 'learning in situations' and systematises 'learning by experience' to allow trainees to connect learnings on OHS to their specific work context, as suggested by Burke et al. [27]. MR On-SeT aims to reduce the threshold to approach the topic of OHS. To address issues experienced with a VR safety training prototype, e.g., complexity of setup, higher risk of injury, or simulation sickness, we decided to use MR for this solution.
MR On-SeT consists of 55 critical situations throughout six different scenarios, which need to be identified and solved by the users. These hazards have been gathered with an experienced manager of the department of Occupational Safety and Environmental Protection in the Chassis Systems Control division of Robert Bosch GmbH, Abstatt, Germany. This manager collects data on incidents reported by the local OHS managers of the world-wide distributed facilities. OHS managers collect work accident reports and oversee the planning and conduct of OHS training according to the needs of the employees in their plant, past incidents, the company's OHS guidelines, and legal regulations. Furthermore, they instruct OHS officers, who are in charge of putting the concept in place in their specific departments and who serve as contact persons for OHS concerns of their colleagues. Basing the content creation on the manager's experience enabled us to create a solution tailored to today's typical work accidents, such as stumbling over an open drawer of an office container or reaching into running machines after bypassing safety shutdown mechanisms. MR On-SeT provides hazardous situations in the following contexts: Electrical Safety, Maintenance, Assembly, Logistics, Manufacturing, and Office. All these scenes have two rooms depicting diverse safety hazards. In the centre of Figure 1, one of the two rooms of the maintenance scene is shown, containing three hazardous situations. Furthermore, the initial scene, called Lobby (Figure 2), functions as a three-dimensional main menu, tutorial, and entrance space.
In this study, we used the Microsoft HoloLens 1. It is a binocular MR-HMD equipped with a set of different environment-sensing cameras. It weighs 579 grams and has an estimated battery life of two to three hours. The MR application was developed in Unity3D, a multiplatform game engine and development environment, for which Microsoft provides an MR framework called the Mixed Reality Toolkit (MRTK). To select objects on the HoloLens, Microsoft defined the 'Air-Tap' gesture, which relies on the mental model of using the left mouse button (see [28]). To execute this gesture, users put the spread-out thumb and index finger together, as if pinching something, and then release.
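As a rough illustration of how such a selection can be handled in Unity, the following C# sketch reacts to an air-tap on an object. It assumes MRTK v2's pointer event interface (Microsoft.MixedReality.Toolkit.Input); the class and field names are our own illustrative choices, not code from MR On-SeT itself.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// Reacts to an air-tap ("select") on the object this script is attached to.
// Requires a Collider on the object and a configured MRTK input system.
public class TapSolvableHazard : MonoBehaviour, IMixedRealityPointerHandler
{
    [SerializeField] private GameObject hazardVisual; // e.g., the open fuse box lid
    [SerializeField] private GameObject solvedVisual; // e.g., the closed lid

    public bool IsSolved { get; private set; }

    // Called by MRTK when the user air-taps while targeting this object.
    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        if (IsSolved) return;
        IsSolved = true;
        hazardVisual.SetActive(false); // swap the hazardous state ...
        solvedVisual.SetActive(true);  // ... for the safe state
    }

    // The interface requires these members; a simple tap does not use them.
    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}
```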
User Flow
In this section, we describe how the user flow is orchestrated from the perspective of the OHS managers and the trainees. Both are primary users of MR On-SeT at different times.
Setting Up MR On-SeT: The set-up of MR On-SeT is performed by the OHS managers. After switching on the device, they have to decide whether they want to use the streaming option, which enables others (e.g., the OHS managers or co-trainees) to follow the immersed trainee (i.e., the trainee who is wearing the MR-HMD) by streaming the field of view to an additional screen. If they choose to enable streaming, the OHS managers have to set up a WiFi network using their computer or an additional router. Afterwards (or if they choose not to enable streaming), the OHS managers start MR On-SeT by clicking on the tile (icon) in the HoloLens' start menu. The scanning wizard starts, supporting the OHS manager in anchoring the mixed environment in one corner of the physical room. This replaces the HoloLens' standard way of placing virtual objects, i.e., dragging, rotating, and scaling them oneself until they are at the right position, in the right orientation, and of the right size. The OHS managers are assisted in scanning the room in the area of the corner where they want to place the mixed environment, and can then place the room by using the pre-defined 'Air-Tap' gesture. After successfully anchoring MR On-SeT in the physical room, the Lobby is displayed. OHS managers can reset the overall score, displayed behind the front desk (see Figure 2, yellow screen on the right). MR On-SeT is then set up and ready to hand over to trainees.
Figure 2. The Lobby. It serves as a three-dimensional main menu and an area to become accustomed to the interaction concept, and welcomes the trainee into the application, like the real lobby of a building. Behind the front desk, a display shows the current score (yellow display on the right). Next to it, the user can reset the score and all changes made. On the left side of the elevator, the user can watch short clips on the monitors explaining how to use the Air-Tap gesture to interact within MR On-SeT, while on the elevator's right side, the general concept of navigating through the mixed environment is explained on two posters.
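For illustration, the following Unity/C# sketch shows one way such corner-based placement can be realised, assuming MRTK v2 pointer events and Unity's legacy WorldAnchor component for HoloLens 1; it is a sketch of the general technique, not MR On-SeT's actual scanning wizard, and the orientation handling is deliberately simplified.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;
using UnityEngine.XR.WSA; // legacy HoloLens (WSA) APIs

// Places the room prefab at the air-tapped point on the scanned spatial mesh
// and pins it there with a WorldAnchor so it stays registered to the physical
// corner as the user moves.
public class RoomPlacementWizard : MonoBehaviour, IMixedRealityPointerHandler
{
    [SerializeField] private GameObject roomPrefab; // the mixed environment to place

    public void OnPointerClicked(MixedRealityPointerEventData eventData)
    {
        // The focus details carry the hit point and surface normal on the mesh.
        var details = eventData.Pointer.Result.Details;

        // Simplified orientation: face away from the tapped surface. A real
        // wizard would additionally align the room with the corner's walls.
        var room = Instantiate(roomPrefab, details.Point,
            Quaternion.LookRotation(details.Normal, Vector3.up));

        // Anchoring locks the placed room to this physical location.
        room.AddComponent<WorldAnchor>();
    }

    public void OnPointerDown(MixedRealityPointerEventData eventData) { }
    public void OnPointerDragged(MixedRealityPointerEventData eventData) { }
    public void OnPointerUp(MixedRealityPointerEventData eventData) { }
}
```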
Experiencing and solving safety hazards: MR On-SeT is designed to let trainees orchestrate their workflow (i.e., the order in which rooms are visited and hazardous situations are solved) by themselves, but also to let others (e.g., the OHS managers) intervene.
To access the mixed environment, the trainees have to put on the HMD. In the Lobby, trainees can familiarise themselves with being immersed in MR in general, but also with the 'Air-Tap' gesture and how to use it to drag and drop objects. Short video clips illustrating these two interaction modes of MR On-SeT are displayed on two screens (see Figure 2, the two screens on the left). Both interaction modes are based on the 'Air-Tap' gesture. Furthermore, trainees can learn how to navigate through the MR On-SeT mixed environment from two posters, depicting the use of the elevator to change between scenes, such as Lobby or Logistics, and the use of the portal (see Figure 1, the blue striped cylinder in the front right) to change rooms within one scene. This concept aims to offer the experience of being in a seven-storey building containing different work areas.
Whenever the trainees feel confident to change rooms, or are prompted by spectators (co-trainees or the trainer), they click on one of the buttons in the elevator to move to another storey. All storeys (except the Lobby) display several hazardous situations in one of six work contexts. Trainees can look for these situations by moving through the mixed environment. To solve OHS hazards, they either click on an object (e.g., closing an open fuse box lid) or remove the hazard-causing object by holding and dragging it to a safe location (e.g., moving a ladder which is blocking an emergency stop). While dragging the objects, users are aided by a hint in the form of a hologram depicting the proper location of the object. If streaming is available, trainees in MR might be guided by the OHS managers or even their spectating co-trainees. Trainees can change the room by using the portal and travel to another scene by using the elevator at any time. The number of solved and total hazards of the current room is displayed next to the elevator (Figure 1, robot to the left of the elevator). This allows the users to keep track of their progress at any time.
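The drag-and-drop mechanic described above can be sketched along the same lines. Again, this is a simplified illustration assuming MRTK v2 pointer events; the hint hologram, snap distance, and solving logic are our own assumptions rather than MR On-SeT's actual implementation.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using UnityEngine;

// A hazard solved by dragging its object (e.g., a ladder blocking an emergency
// stop) to a safe location. A ghost hologram hints at the target while dragging.
public class DragSolvableHazard : MonoBehaviour, IMixedRealityPointerHandler
{
    [SerializeField] private Transform safeLocation;    // proper place for the object
    [SerializeField] private GameObject hintHologram;   // ghost shown while dragging
    [SerializeField] private float snapDistance = 0.3f; // metres (tuning assumption)

    public bool IsSolved { get; private set; }

    public void OnPointerDown(MixedRealityPointerEventData eventData)
    {
        if (!IsSolved) hintHologram.SetActive(true); // show where the object belongs
    }

    public void OnPointerDragged(MixedRealityPointerEventData eventData)
    {
        // Follow the pointer while the user holds the gesture (simplified: a
        // production version would keep the grab offset and constrain movement).
        if (!IsSolved) transform.position = eventData.Pointer.Position;
    }

    public void OnPointerUp(MixedRealityPointerEventData eventData)
    {
        hintHologram.SetActive(false);
        if (!IsSolved &&
            Vector3.Distance(transform.position, safeLocation.position) < snapDistance)
        {
            transform.SetPositionAndRotation(safeLocation.position, safeLocation.rotation);
            IsSolved = true; // a room-level progress counter could subscribe here
        }
    }

    public void OnPointerClicked(MixedRealityPointerEventData eventData) { }
}
```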
Expert Survey
In order to understand OHS managers' perceived attitudes towards the usage of MR On-SeT, we handed out questionnaires in March 2020 to explore the impact MR On-SeT, as a mobile MR learning environment, has on its context of use. Participation in the study was voluntary. We received feedback from seven (out of eleven) OHS managers in charge of the OHS of plants located in Japan, Romania, France, and Germany. In their answers, they condensed their own experience but also the experience of about 540 employees who have used MR On-SeT since December 2019.
Questionnaire
Expert surveys have the power to quickly produce results [29]. Beyond this, we decided to hand out questionnaires to collect data for several reasons: First, we wanted to motivate as many OHS managers as possible to participate. Hence, we aimed to reduce the time needed to participate. Second, this study aims to initially understand the use of MR On-SeT and how the application and the current tools can coexist and benefit from each other. Therefore, we wanted to explore the impact as soon as possible after the roll-out of MR On-SeT to collect data on the changes it caused.
We asked the participating OHS managers to report on their experiences and on what they would like to see improved. Also, we asked them to specify the occasions on which they used MR On-SeT. We explicitly asked the responding OHS managers not just to reflect on their own experience, but also to include the feedback they received from trainees during the use of MR On-SeT.
In addition, to better understand the target group of MR On-SeT we collected demographic data about the trainees in the participating plants.
Data analysis
To identify patterns in the transcribed survey data, a reflexive thematic analysis [30] was performed. We chose to approach the thematic analysis in an inductive way, i.e., we developed themes based on the content of the data. This approach best acknowledges the explorative character of our survey and, therefore, can help to better understand the use of MR On-SeT. Furthermore, we decided to analyse the data using the inductive approach because it allows us to inspect and evaluate the data in an unbiased way. All statements related to one question were clustered. First, after general familiarisation with the data, we clustered the particular statements into preliminary themes (e.g., Orchestration), according to emerging codes. Second, we merged these clusters into overarching themes (e.g., Learn through Play) and reviewed them. The final themes, the respondents who contributed statements related to a theme, and some descriptive statements are presented in Table 1 for the occasions on which MR On-SeT has been used. In Table 2, the themes, subthemes, and a short description of statements related to questions regarding user experience and potential improvements are presented.
Based on the themes, we suggest implications for designing an MR training system and discuss the needs of trainers and trainees and how MR On-SeT is used in the next section.
Results
The survey findings revealed that from December 2019 to March 2020 the participating OHS managers conducted OHS training sessions in which they utilised MR On-SeT with around 540 employees (aged 16 to 68 years; 22% female and 78% male) during regular OHS instructions for existing and new employees with different knowledge levels. Also, the application has been presented and tested at management meetings, public events, etc. (Table 1). We received feedback on how the local OHS managers experience the application and on the feedback they got from workers of their plants during and after OHS training sessions. In Table 2, we present the identified and clustered themes and subthemes.
From the data, two end-user perspectives emerge: 1. the OHS manager as orchestrator, teacher and facilitator of the training, and 2. the trainee as learner in MR. Thus, we report our results based on these perspectives.
The occupational safety manager: Teacher, facilitator, and orchestrator
Because of the multiple roles OHS managers have to perform, several themes directly contribute to the understanding of their needs and experience using MR On-SeT (see Table 2).
Values: The use of shared learning. Shared Learning for training on occupational safety differs from the current practice, which can rather be described as a formal learning setting. Also, when using MR, a computer-mediated communication channel needs to be established to let people share their experiences and experience those of others. Several respondents (R1, R3, R4, R7) described how the use of MR On-SeT in their training sessions enables Shared Learning. R4, for example, mentions that streaming the image to an additional screen can be employed to allow others to participate in the trainee's experience. Reflection on the shared experience then takes place in the group after the recent session. Using MR On-SeT in this way was confirmed by R3, who stated that the application "provides the basis to start a discussion on the content (safety)". Some respondents (R1, R3) described how they use MR On-SeT as a safe space, where trainees can experience hazardous situations without being exposed to a real threat. They use the experiences from MR On-SeT to let the trainees reflect on OHS. This can also affect the way OHS managers will plan training sessions and requires means to carefully mediate the interaction between trainer and the immersed trainee.
Table 2. Themes and Subthemes based on the Survey Results.
Content / Variety: Respondents noted positively that MR On-SeT has a high variety of scenarios, but also suggested further scenarios and other OHS-related use cases for MR On-SeT.
Content / Tailorability: Respondents requested to be able to adapt the system to local needs, but also to the capabilities of individual trainees, and to maintain the content based on incidents related to OHS.
Shared Learning / Technological Enabler: Respondents reported positively on streaming the trainees' field of view to an additional screen, but also mentioned issues while using this option.
Shared Learning / Reflection: Respondents described how they use MR On-SeT to start a discussion on OHS.
Simulation Sickness / Symptoms: Symptoms which can be attributed to simulation sickness were reported by the responding OHS managers.
Simulation Sickness / System Behaviour: Some OHS managers reported system behaviour of MR On-SeT and the HoloLens that can cause symptoms attributable to simulation sickness.
Realism: We got positive feedback on the visual and auditory realism MR On-SeT provides.
Orchestration: Respondents stated the need to adapt and extend the scenarios to be able to fit them to their training sessions.
Gamification: Respondents highlighted the playful character, the interactive manner of MR On-SeT, and its overall attractiveness.
Enter Interaction / Gestural Interaction: Some trainees had problems with the gesture-based interaction mode of the Microsoft HoloLens 1. Respondents suggested improving or even replacing it.
Enter Interaction / Scanning: Respondents reported that aligning the mixed environment with the real world is complex and sometimes failed.
There seems to be the desire of the OHS managers to actively fill the role of a mediator. Therefore, we offer a hybrid computer-mediated communication channel, serving as Technological Enabler (streaming option), which needs to be established. R5 reported that the streaming option, i.e., streaming what the immersed trainee is currently experiencing in MR to an additional screen so that others (OHS managers and trainees) can follow the immersed trainee, is a positive feature. Three others (R3, R4, R6) felt restrained in actively participating by issues related to streaming. While R3 reported a too large delay in transmission, which inhibits following what the trainees in MR are doing in order to directly raise questions (as a Teacher) or guide them through MR On-SeT (as a Facilitator and Motivator), R4 requested a streaming option. The latter might be because the respondent did not find the relevant information in the user manual we handed out with the application. Streaming, and therefore Shared Learning, is not just inhibited by the delay of transmission. R6 mentioned that "the operation of AR glasses and [the company's] IT cannot be combined", which relates to the required connection of both devices to a wireless network. Since the local IT infrastructure does not allow connecting uncertified devices, OHS managers have to administer their own.
The work to make the training work: Enter interaction in MR to set up the stage: Preparing the training session, and interrupting as well as resuming the MR training, is a requirement for conducting OHS training with MR On-SeT. As a stand-alone solution (R6), in contrast to former VR and MR solutions which rely on additional hardware to be set up (e.g., to enable user tracking), MR On-SeT takes MR trainings out of the lab, where technical experts run the hardware and software, and transfers this task to non-expert users. These non-expert users consequently have to Enter Interaction with the system. Since they are new to the use of MR and its prerequisites, the necessary steps needed to run the application, and the ways to trigger them (by interacting with the system), have to be reduced. The Scanning wizard of MR On-SeT, a tool that allows the OHS managers to align the virtual content with the real environment, exemplifies this need for simplification, since we attempted not to hide the process's complexity. This unhidden complexity directly affects the OHS managers in the execution of their role as orchestrator: MR On-SeT requires the OHS managers to prepare the application while pre-orchestrating the training session. In this role, they have to scan the room where the training shall take place. Three respondents (R1, R2, R4) experienced related issues. For example, R1 stated that "Measuring the room is sometimes not easy", while R2 highlighted the insufficient description, "especially for the calibration", in the user manual we handed out.
Some respondents (R1, R3, R4) reported issues related to resetting the application and its content. Also related to the task of orchestrating a training session, two participating OHS managers (R1, R4) recognised that as soon as the application is sent to the background (described as closing the application), the tracking is lost and the application must be reopened. In addition, R3 reported issues when resetting solved hazards.
Orchestration of trainings: Variety in MR scenes: To be able to use MR Content (see Table 2) for different training goals, OHS managers need a Variety of different scenarios in which OHS hazards can be experienced by trainees. Variety can be achieved in a fully pre-orchestrated system such as MR On-SeT to some degree (R4, R5, R7). Still, some participants saw opportunities to increase scene Variety within the application (R2, R3-R5). R3 suggested supporting more training scenarios, while R2 and R4 suggested specific scenarios. R4 pointed out the potential "Integration of further scenarios such as a room to hazardous substances…".
The OHS managers' remarks regarding their empowerment to orchestrate Content by themselves (e.g., intentionally varying arrangements) are grouped under our Tailorability subtheme. Three respondents (R5, R6, R7) saw the need to tailor MR On-SeT to their own needs by themselves. This was expressed by, e.g., requests to "adapt the risks to the plant" (R5). Being able to tailor the Content to the specific needs of one training session automatically increases the Variety of experiences trainees can have in MR. Also, the request to customise MR On-SeT by translating it into the local native language (R2) is related to Tailorability.
Variety at a higher level is represented by the suggestions (R3, R5) of different ways to use MR On-SeT. R3 suggested offering "More opportunities for interaction (possibly also with the environment)". R5 suggested allowing trainees to take the perspective of someone working in an environment in which they cause risks to others (e.g., driving a forklift) or are exposed to increased risk (e.g., working at great heights).
The respondents expressed a need for higher flexibility in orchestrating the experience, which highlights that, even though MR On-SeT is used within one company only, there are still local differences, e.g., the need for text-based instruction in the users' language.
The trainee: opportunities for learners in MR
In contrast to the OHS managers, the trainees only have the role of a learner in MR On-SeT. Since they are the target user group of MR On-SeT, their role is related to all themes. We condensed the connection to the themes using the statements of the responding OHS managers. We are aware that what OHS managers say about their own experiences has more validity than what they report about the trainees' experiences. Still, since they supervise trainees in MR, they are capable of sketching the trainees' perspective and needs.
Mediation: Support and experiences in shared learning: Shared Learning is not only important from the OHS managers' perspective, but also for trainees. Being able to see what the immersed trainee is seeing (using the streaming option) allows other trainees to learn in groups. Afterwards, they can start a discussion on the topic of occupational safety within the group or with the OHS manager. This Reflection on the recent session contributes to Shared Learning. It therefore might help to foster a Community of Practice, since employees are encouraged to start a dialogue on the topic of OHS and might define a certain safety climate, as described by Zohar [31]. Furthermore, trainees in MR might need help when they are lost, indicating the need to rely on a computer-mediated communication channel as Technological Enabler. Here, the OHS managers can help by following the trainees on the screen. R3 confirmed the need for streaming as a mediated communication channel. However, s/he stated that the current delay in transmission makes following the trainees in MR time-consuming. This delay in the mediated communication forces the trainees and the OHS managers to enter a dialogue on navigating through MR. This dialogue mainly consists of instructions from the OHS managers (such as simple directional navigation, but also requests to show a certain object), who are trying to anticipate the mixed environment, and reactions and call-backs from the trainees trying to align the instructions with the actual mixed environment. Instead of reflecting on the topic of OHS, the trainees hence try to figure out how to interpret the OHS managers' instructions on actions which they executed around three seconds ago. The experiences users can have in a mediated reality with a delayed transmission were illustrated in a commercial spot by Umeå Energi AB [32].
Mitigating simulation sickness: Simulation Sickness needs to be considered when designing Mixed Learning Environments, but also when pre-orchestrating individual training sessions. MR (and VR) experiences are still influenced by this phenomenon. Since trainees are the users of MR On-SeT who are exposed to the mixed environment, System Behaviour causing Simulation Sickness and Symptoms of Simulation Sickness form an important, but not exclusive, theme for the trainees. One responding OHS manager (R5) reported that tracking issues during movement can cause Simulation Sickness, while R3 and R4 specified image flickering while moving. Another responding OHS manager (R7) reported how the selected hardware can become a factor with regard to Simulation Sickness: the high weight of the HMD puts too much load on the bridge of the nose (via the nose pad of the HMD), resulting in a painful experience. Dizziness or feeling bad after use was reported by R6.
Even though we decided to use MR technology to address the persisting issue of Simulation Sickness, respondents reported on this phenomenon. We selected a less immersive display technology than VR, expecting that constantly perceiving the real environment might reduce the effects of visual causes. Still, the problem of Simulation Sickness is highly complex and relates to more than one cause. With current hardware, we recommend that users are exposed to MR for less than 30 minutes (based on our own experience, but also reported by R3).
To improve tracking, which relies not just on the registered room dimensions but also on colour information, unicoloured walls and floors should be avoided. Offering pre-designed posters, which can be printed right before use, can help to address this issue. Furthermore, OHS managers can reduce the time each trainee is exposed to MR (and consequently to potential system behaviour which causes simulation sickness) when training groups by letting each of them solve only a certain number of hazards and then hand over the device to the next trainee. This might also support Shared Learning. While Simulation Sickness is mostly an unwanted phenomenon, it could also be employed to intensify experiences such as operating heavy machinery [33]. Issues with streaming the immersed trainee's field of view to an additional screen need to be addressed properly.
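A simple way to support the 30-minute guideline and the suggested hand-over practice would be a session timer. The following Unity/C# sketch is purely illustrative; the warning label, the threshold, and the reset hook are assumptions, not features of MR On-SeT.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Tracks how long the current trainee has been immersed and shows a hand-over
// prompt once the exposure guideline is exceeded.
public class ExposureTimer : MonoBehaviour
{
    [SerializeField] private Text warningLabel;              // e.g., shown in the Lobby
    [SerializeField] private float maxExposureMinutes = 30f; // guideline from the text

    private float elapsedSeconds;

    private void Update()
    {
        elapsedSeconds += Time.deltaTime;
        if (elapsedSeconds > maxExposureMinutes * 60f)
            warningLabel.text = "Please hand the device over to the next trainee.";
    }

    // Called when the device changes hands (e.g., from a hand-over button).
    public void ResetForNextTrainee()
    {
        elapsedSeconds = 0f;
        warningLabel.text = string.Empty;
    }
}
```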
First steps in MR: Enter interaction: Two OHS managers (R1, R2) stated that MR On-SeT is "very interactive". However, many trainees are novice users of MR and Gestural Interaction, who need a low threshold to Enter Interaction. Based on the feedback of respondents, there is a need to better design the Gestural Interaction of MR On-SeT, since, as four responding OHS managers (R3, R5, R6, R7) reported, interacting with the device caused problems. All of them noted that the gesture-based interaction poses a higher level of complexity for some trainees, mainly specified as difficulty executing the drag-and-drop gesture. R6, for example, specified that drag and drop "is extremely difficult for some [trainees]". Two OHS managers (R1, R5) proposed to improve the Gestural Interaction by simplifying it ("Making it easier to click on the danger points", R1) or by offering an alternative interaction mode ("For beginners use controllers", R5).
Motivation in formal learning settings: Learn through play in MR: Learn through Play contributes to occluding the formal learning character of MR On-SeT and increases the trainees' motivation to engage in the training. MR On-SeT can be placed in the Most Engaging OHS training category defined by Burke et al. [27].
Three responding OHS managers (R1, R2, R3) reported directly on its attractiveness, while R4, in addition, mentioned that "Working with the app was more interesting for everyone than other types of instruction". Furthermore, R1, R4 and R7 mentioned the playful character of MR On-SeT. Playfulness in combination with attractiveness underlines the potential of Gamification for MR On-SeT. Gamifying formal learning approaches might lead to a more informal way of learning, which then reflects the most common way of acquiring knowledge in the work environment.
Responding OHS managers requested to Orchestrate the scenarios. Being able to adapt and create scenarios based on local requirements or a special focus of a training session can be used to keep up the element of surprise for the trainees, since the application will never be exactly the same. Besides increasing the Variety, it can also be used to increase the degree of Realism, since virtual copies of the real workplaces the trainees are actually working in can be created. Two responding OHS managers (R3, R4) reported that MR On-SeT has a high degree of Realism, while one (R3) positively mentioned the use of scenario-typical sounds, such as passing floor-borne vehicles in the manufacturing scenario, which we recorded on-site. A high degree of Realism may enable trainees to more easily transfer what they just experienced to real work-life problems.
Discover new perspectives on safety: Content shapes experiences: The Content is important to provoke trainees to actively participate in training sessions. The reported Variety of MR On-SeT's scenarios (by R4, R5, R7) and the scenarios suggested by responding OHS managers (R2, R3, R4, R5) allow trainees to explore different areas representing typical work environments. Hence, MR On-SeT can be used to awaken the human urge to discover unknown things. Besides increasing motivation and arousing the trainees' curiosity, there is also a serious element: Many workers spend little time in other departments, but often use the walkways through them (e.g., to avoid going outside in winter). In these cases, they are exposed to the specific hazards of these areas. If trained properly, they can also contribute to the safety of others by identifying and reporting OHS hazards.
Discussion
Addressing hardware limitations to increase the user experience. R5 requested to be able to use the application in "any environment (not specially in a gloomy room)". This is a problem related to the see-through displays used in the HoloLens. In a too brightly illuminated environment, the maximum display brightness might not be sufficient to enable users in MR to fully see the virtual objects. Therefore, we suggested slightly dimming the lights in the room where the training takes place. Another approach to address this problem could be using brighter displays, which then might cause eyestrain (potentially causing Simulation Sickness) or even damage the eyes. Vasilevska et al. [34] discuss how to reduce that eyestrain by offering design suggestions to reduce screen brightness for VR-HMDs. They found that reducing display brightness and using so-called night-modes (where white interface backgrounds are inverted) is accepted by users and can help reduce eyestrain. Transferring the concept of night-mode to mixed environments, darker (passive) background objects might already reduce the overall brightness. Also, to further increase visibility, additional shades (e.g., similar to the shades of the Epson Moverio BT-350 HMD) could be designed which can be put on top of the see-through display in bright environments, making it possible to reduce display brightness.
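One hypothetical way to realise such a night-mode in Unity is to darken the passive scenery at run-time; the "Background" tag, the dim factor, and the assumption that the scenery materials expose a main colour are our own illustrative choices, not part of MR On-SeT.

```csharp
using UnityEngine;

// Darkens all passive background holograms to reduce the light emitted by an
// additive see-through display; interactable hazards keep their full brightness.
public class NightModeDimmer : MonoBehaviour
{
    [SerializeField, Range(0f, 1f)] private float dimFactor = 0.5f;

    private Renderer[] backgroundRenderers;
    private Color[] originalColors;

    private void Awake()
    {
        // Assumes non-interactable scenery is tagged "Background" and carries a
        // Renderer whose material exposes a main colour.
        var objects = GameObject.FindGameObjectsWithTag("Background");
        backgroundRenderers = new Renderer[objects.Length];
        originalColors = new Color[objects.Length];
        for (int i = 0; i < objects.Length; i++)
        {
            backgroundRenderers[i] = objects[i].GetComponent<Renderer>();
            originalColors[i] = backgroundRenderers[i].material.color;
        }
    }

    public void SetNightMode(bool enabled)
    {
        for (int i = 0; i < backgroundRenderers.Length; i++)
        {
            backgroundRenderers[i].material.color =
                enabled ? originalColors[i] * dimFactor : originalColors[i];
        }
    }
}
```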
While designing the application, we were aware of the small field of view and tried to address this in our design. We reduced the space where interactable objects (hazards) are placed to two of the four walls of the room. Still, respondents reported the small field of view to be problematic. One approach to reduce this might be distorting the image close to the outer borders of the field of view, similar to the side mirrors of some heavy goods vehicles or urban buses. Designers need to trade off the size of the distortion area and whether the degree of distortion follows a polynomial, which then smoothly fades from no distortion to increased distortion, or whether one pre-defined degree of distortion is applied in this area. As this small field of view especially limits peripheral visual perception, simple hardware solutions, such as the one suggested by Xiao and Benko [32], where a low-resolution LED display is placed around the lenses of VR glasses, could be applied as well.
Trainer empowerment: According to Bowers [35], management overheads, such as the ones OHS managers experience when preparing MR On-SeT, which result in extra work, might "be reason enough for abandoning technologies" [35]. While Bowers, in 1994, argued for increasing the available support, for MR technologies, which represent a completely different technology compared to the network of stationary computers Bowers investigated, the maxim today should rather be: decrease the necessity of support wherever possible. Computers, laptops, and even smartphones and tablet-PCs still do not require active interaction with the physical environment and, therefore, run within relatively stable frame conditions, where support can be supplied, even over long distances. MR, in contrast, can be understood as the opposite: the unconditional turn to employing the environment for the purpose of providing a (semi-)virtual environment. The mobile character of the Microsoft HoloLens 1 and MR On-SeT therefore represents a new way technology is used at the workplace. Since every physical room MR On-SeT can be used in differs, the number of uncontrollable frame conditions multiplies.
Bowers' demand that those who are involved in the maintenance of a technical system (here, the OHS managers) should become "heterogeneous engineers" could also help to overcome the reported problems, which are related to scanning, resetting, and streaming. This, however, seems to be problematic for a mobile solution, which adds to a set of training tools, such as MR On-SeT. First, the ubiquity of computing with today's mobile technology moves complex systems from professional spaces, where experts operate them, to less experienced end-users (who want to use the technology as a tool, rather than experimenting with it). Second, the responding OHS managers seem to expect MR On-SeT to work out-of-the-box, rather than needing to be educated to make it work. There might be a large discrepancy in the skillset different OHS managers bring along, making it impossible to assume a common level of technological skills.
The need for a Technological Enabler is exemplified in the issue of connecting MR On-SeT to a wireless network to use the streaming option and offer Shared Learning. There is the need to empower the OHS managers to establish this streaming option without engaging with complex network administration tasks. The streaming option set-up should reflect the technical skills of the OHS managers. It might be fruitful to allow MR On-SeT to open a wireless network, to which other devices can be connected. This reflects the usual way of connecting a laptop, tablet-PC or smartphone to a network, which might reduce complexity. To allow controlling the live-streaming option and to empower the OHS managers, an interface should be added to MR On-SeT, in which the users can activate the wireless network and are guided through the process of connecting another device they want to stream to.
Offering variety by live-orchestration and new learning experiences: The provided Variety of scenes enabled OHS managers to understand the potential of MR On-SeT and inspired them to request more scenes (see theme Content). Still, pre-orchestrated systems might quickly reach boundaries, which then might affect the perceived benefit, since the Mixed Learning Environment rapidly becomes boring for the trainees. In contrast, pre-orchestrated systems might better ensure that company guidelines are respected. One promising approach to address this wearing-off effect is offering live-orchestration of selected virtual objects (or avatars), as Koller et al. [16] propose.
Addressing the potentials and risks of Mixed Learning Environments might already affect how systems are orchestrated, since the pre-orchestrated elements of a mixed environment can drastically influence how people perform tasks in MR. The combination of real and virtual environments can lead to a diminished distinction between when MR and when reality is experienced [36]. According to Brey [37], designers have the responsibility to consider and prevent ethically questionable actions (e.g., active violence against a non-playable character) and the general depiction of ethically questionable content in MR (e.g., as proposed by Van Wyk and De Villiers [21], or letting trainees experience accidents themselves). Live-orchestration can amplify this responsibility, since live-orchestrators can alter the perceived environment of others. They might be able to annoy users in MR, e.g., by sudden changes in the setup, but can also have an extensive influence on what the users in MR can and cannot see. Live-orchestrators might even perform violent actions against users in MR while perceiving them as virtual characters. While live-orchestration, therefore, can be a powerful tool to influence MR users' acting and thinking, it is one potential solution to keep up the element of novelty. Currently, MR On-SeT is perceived as novel, since both HMDs and MR are not widely spread, neither at work nor in private life. Constantly introducing new hardware or hardware add-ons in the field of serious applications will probably be too ambitious. Still, to reach high levels of motivation and even pleasant anticipation, keeping up novelty in MR On-SeT in the long run appears to be important. Besides live-orchestration, designers might want to include Easter Eggs (i.e., hidden features, mainly entertaining) or dynamically change arrangements in the mixed environment.
Besides requesting live-orchestration-like features, respondents recognised new (related) possibilities to make use of the potential of MR On-SeT by adding Variety at a higher level. The proposed increase in (also non-hazardous) interactable virtual objects can be interpreted in two ways. First, currently only the virtual objects that are part of a hazardous situation can be manipulated. It might add to the perceived Realism if trainees could also, for example, grab a pencil lying on a desk, as they could in the real world. Their behaviour might even create new hazardous situations (e.g., when dropping a pencil). Second, allowing trainees to manipulate more objects in their environment might occlude the hazards and therefore increase the overall complexity of the training, which can be a means to create new experiences without live-orchestration. This might introduce a design tension described by the flow theory of Csikszentmihalyi [38]: more complex systems might further raise the threshold to initially engage, since inexperienced and cautious trainees might feel overwhelmed by the complexity of the system, while systems that are too simple might bore trainees.
Experiencing work in highly risky environments has also been suggested. Bosche et al. [17] and Yabuki et al. [20], for example, already describe how working at great heights can be simulated. Another example of experiencing risky work environments has been demonstrated by Van Wyk and De Villiers [21], where trainees can observe virtual colleagues (i.e., non-playable characters) who ignore safety rules and experience very severe consequences (e.g., being pulled into a running machine after trying to remove an object from it). Some consequences of accidents might not only be serious but could even be experienced by the trainees themselves. Since these experiences would then be intentionally caused, users might perceive this as interference with their physical integrity (as described by Brey [37]). The same applies to intensifying experiences by employing Simulation Sickness [33]. It has to be kept in mind that some users respond very strongly to system behaviour that causes Simulation Sickness. To address this ethical dilemma, designers could offer an option where users decide on their own how much intensifying Simulation Sickness they want to experience. Another solution, proposed by Freiwald et al. [39], would be to assess each user's susceptibility to Simulation Sickness using the Cybersickness Susceptibility Questionnaire (CSSQ), which aims to predict the likelihood that a user is affected by Simulation Sickness. The CSSQ could be included in VR or MR applications that intentionally cause Simulation Sickness.
Furthermore, mobile systems are mainly administered by users on site, which directly influences what the orchestration of mobile applications can look like. Any orchestration of MR systems needs to take into account that non-expert users (in terms of MR) need to set up and control the behaviour of the mixed environment by themselves. This poses the challenge of simplifying the set-up and maintenance of a system.
Introducing altered environments for learning at work can heavily affect not only the way of working but also work life in general. Some replies of the respondents indicate that MR On-SeT has the potential to be a disruption, since it can change how OHS training takes place in the future. While disruptions can mean an interesting new way of working or learning (as was reported for MR On-SeT), they always include the risk of causing affected workers to reject changes [35], since the technology might expose their way of doing things to their supervisor, which could make them feel observed all the time.
Translucency of new interaction modes and fallback solutions:
The theme Enter Interaction reveals that individual users had difficulties interacting with the system. This could be due to missing translucency of the gestural input, which possibly led to requests to change the interaction paradigm. As described by Ebling et al. [40], translucent (socio-technical) systems need to "expose critical aspects […] while hiding noncritical details to preserve usability".
Problems with the input modes and the suggestions to improve interaction indicate the gap between the potential of MR and its understanding by users, and therefore missing translucency. In the new version of the Microsoft HoloLens, the HoloLens 2, a concept to increase the translucency of the input gestures has been established: as soon as the HoloLens 2 detects a gesture, it shows a 3D model of a hand which demonstrates how to execute the gesture properly.
It is an interesting fact that companies and researchers working with HMDs include gesture-based interaction and name this mode of interaction "natural interaction". In serious applications, established (hence, known) interaction modes are understood to simplify the interaction (such as the controllers of video game consoles). They might therefore be perceived as a means to address the need for a translucent system. Even gestures that are understood as known by a large part of the population, such as dragging objects on a touchscreen, are not guaranteed to be understood by all users, as shown for elderly users by Mihajlov et al. [41]. The particular request for controllers might not just be related to problems with executing gestures correctly but also to effects like the haptic uncanny valley [42], i.e., that the haptic sense is often still excluded from the experience, while being important for human perception.
Translucency to some extent overlaps with the change of user tasks due to the lack of direct expert support: respondents mentioned that they sometimes had problems running the scanning wizard successfully, which is a basic requirement to execute MR On-SeT. A more translucent set-up wizard, which, for example, enables the user to better understand why the application needs to scan the environment and which measures can be taken to increase the probability of success, can reduce the complexity of setting up the mixed environment.
Design Implications
In this section, implications for designing MR training systems are presented, based on the themes identified above.
Awakening the discoverer: Designers should reflect on how to keep up the element of novelty in their mixed learning environment (Theme Learn through Play, subtheme Gamification). If trainees have the feeling that there is still something left to discover, even though they have explored the mixed environment before, they might be more motivated to actively participate in the training out of curiosity. Besides serious content that is hard to find, Easter Eggs or other entertaining features could be included. Furthermore, if possible, it is advisable to work with the element of novelty of a technology to promote the Mixed Learning Environment.
Support playfulness in learning - 'Informalise' formal learning settings: We suggest designing an application like MR On-SeT towards user engagement, but in a realistic way. Based on our experience, and as reflected in Theme Learn through Play (subthemes Realism and Gamification), enabling users to explore and try things out in a playful way already adds to the entertaining character. This might change how trainees perceive the character of OHS training from formal towards informal learning (which reflects how learning often takes place at work). Furthermore, designers should try to create a training climate in which playfulness is not inhibited by observing trainees or the trainer (as described by Shared Learning), to keep the immersed trainee experimenting and reflecting.
Support non-IT users to orchestrate and create variety: The design of a training application that is used on a world-wide scale in a formal setting, such as MR On-SeT, requires adaptations to local specialities. Therefore, we argue for designing a (semi-)flexible orchestration concept (Theme Content, subtheme Tailorability, and Theme Learn through Play, subtheme Orchestration) that would allow trainers to address these local needs and adapt the application to their particular workflow. Carefully offering flexibility in the design of the mixed environment should include pre-orchestrated content while providing trainers with the possibility to orchestrate for local specialities. A special focus when analysing the context of use should be on the need for flexible orchestration. Furthermore, the analysis should ensure that the context of use is fully understood and that relevant stakeholders (enablers) can verbalise their requirements adequately. Finally, designers need to be aware of the potential ethical issues related to live-orchestration and prevent misuse by design.
Translucency lowers the threshold to engage: To allow users of MR training systems to understand complex system behaviour (Theme Enter Interaction, subtheme Gestural Interaction), the system needs to (e.g., visually) reflect the users' activity to a certain extent. While trainees are in the Mixed Learning Environment, they should not be forced to understand how to interact with the system and why the system reacts in a certain way. They should rather focus on understanding the topic, in our case OHS, by engaging with the mixed environment. It is advisable to carefully include feedback mechanisms which inform the user how to correctly interact with the system. Designers should keep in mind that translucency is not just about informing users of wrong or unclear input but is a powerful tool to introduce users to an interaction concept in an informal 'learning by doing' way.
Design for the itinerant trainer: Technological progress allows designing fully mobile HMDs, such as the Microsoft HoloLens, which can therefore operate without additional infrastructure (e.g., for rendering). While this has many advantages (e.g., no cables), it also transfers responsibility to the user (e.g., to prepare the physical environment without assistance from an expert), which raises the threshold to Enter Interaction and to pre-orchestrate the MR learning environment. Designers must account for that in their design solutions, for example by including wizards or similar concepts to guide the user through setting up the system.
Disrupt with mixed learning environments: Introducing MR into learning practice can be disruptive, as it not just provides a new tool to use but changes the way employees perceive the work environment and also how they work. Designers should reflect this in their concepts, for example by mainly working with realistic 3D models (in contrast to Shamsudin et al. [7]) and by designing for self-descriptiveness and learnability. They need to place their design into the current understanding of digital environments and be aware that they might shape this understanding with their design.
Conclusion
In this paper, we present MR On-SeT, an MR-based OHS training system enabling trainees to explore hazardous situations without being exposed to real-life threats. MR On-SeT offers a space in which playful learning takes place but also allows interaction with the immersed trainee.
We conducted an expert survey to collect early experiences directly after the roll-out of the system. Respondents of the presented survey recognised the high potential of the solution by requesting the possibility to orchestrate the scenes themselves to integrate current topics, which would allow a training tailored to local needs. The respondents reported high levels of engagement and perceived MR On-SeT as an extension of the tools they can use for OHS training. They also highlighted the need for adaptable scenarios, such as tailoring hazardous situations to their specific (local) training program. Some users (trainers as well as trainees) experienced technical issues. Furthermore, some OHS managers reported that especially inexperienced trainees have problems with the interaction concept. Additionally, the OHS managers already include the solution in their regular OHS training sessions.
Based on the study results, we propose design implications crucial to the design of an MR learning application.
In future work, the themes identified with the inductive approach of our thematic analysis can be used to apply a deductive approach in a follow-up study. We want to further investigate the potential of MR On-SeT to enable OHS managers to understand the trainees' experiences while they are exposed to MR. This includes the problems trainees have while working in MR, but also the challenge of assessing the trainees' level of knowledge in the field of OHS. To better support the communication between the trainer and the trainee, we want to investigate how to offer out-of-the-box solutions that would allow the trainer to understand the trainee's current situation, and to offer software tools to actively support the immersed trainee.
"year": 2021,
"sha1": "ccbd7e4b4b71b80eb9c3e61ddd263eb96a95540e",
"oa_license": "CCBY",
"oa_url": "https://online-journals.org/index.php/i-jet/article/download/19661/8877",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "54f7fcd5a4257f0084b83d83a21b346190e50c1a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
250643794 | pes2o/s2orc | v3-fos-license | Breast Plasmacytoma as Extra-Medullary Lesion of Multiple Myeloma: A Case Report
Breast plasmacytoma is relatively uncommon, and most of the recorded cases have been related to disseminated multiple myeloma. However, many of these cases tend to be misdiagnosed as other breast lesions, such as breast carcinoma. This article presents a case study of a Libyan female patient around the age of 55 with a single breast lump, which was first diagnosed as a malignant lesion. The results of immunostaining for cytokeratins, GATA3, estrogen receptor, progesterone receptor, HER2, and E-cadherin were all negative; hence, the possibility of a breast carcinoma was excluded. However, a plasma cell tumor was indicated by the presence of CD138, MUM1, and kappa-light chain markers. In addition, the patient had multiple osteolytic bone lesions, plasma cell infiltration, a monoclonal gammopathy, and signs of renal failure, which were considered indicative of an extra-medullary breast plasmacytoma secondary to advanced multiple myeloma. This case study emphasizes the necessity of complete histopathological and imaging evaluation for the proper diagnosis of breast plasmacytoma.
Introduction
Solitary plasmacytoma (SP) is a rare cancer that affects approximately 0.191/100,000 men and 0.090/100,000 women worldwide. 1 It is characterized by the lack, or minimal presence, of diffuse bone marrow involvement. 1 Solitary bone plasmacytoma is defined by the presence of a single lytic bone lesion with or without soft-tissue extension, whereas extra-medullary plasmacytoma (EMP) is defined by the presence of a soft-tissue mass without bone involvement. 2 Multiple myeloma (MM), however, is described as a lesion of the bone marrow that is associated with a variety of clinical, radiographic, and laboratory characteristics. MM can also affect several extra-osseous locations and manifest as a single EMP or as a relapse of MM. 3 EMPs account for only 4% of all plasma cell tumors and are usually seen in the respiratory tract, oral cavity, and soft tissues. 3 Breast plasmacytoma (BP) is a rare form of plasma cell tumor, as most of the reported cases occurred as a result of disseminated MM. According to the literature, BP accounts for roughly 1.5% of all plasmacytomas; 15% of these instances were classified as primary BP, whereas the rest were secondary to MM. 4 Clinically, cases of BP mostly present with unilateral or bilateral, solitary or multiple breast lumps, which resemble various lumpy lesions of the breast. 4 As a result, BP can easily be confused with other breast cancers, especially when their clinical and imaging features are similar. Consequently, patients may be subjected to unnecessary surgeries such as mastectomy. Furthermore, determining whether the BP lesion is primary or part of advanced MM can influence the outcomes and the treatment management. Therefore, employing advanced imaging and histopathology techniques for a precise diagnosis of BP is absolutely essential.
In this article, a case of BP related to disseminated MM is presented, which emphasizes the importance of accurately documenting the histological and imaging evaluation of any breast lump in order to distinguish unusual lesions (i.e., BP) from other breast lesions.
Case Report
A Libyan woman around the age of 55 was examined at the National Cancer Institute-Sabratha in August 2021. She had a painless unilateral mass in her left breast. Over the course of 3 months, the mass grew in size without any nipple secretion or skin retraction. One year earlier, the patient had complained of general weakness and weight loss. On examination, the patient was cachectic, underweight (45 kg), and had a single mass in the left breast at the 1 o'clock position. The mass was mobile and firm, measured 1 × 2 cm in diameter, without changes to the surrounding skin or palpable corresponding lymph nodes. Mammographic evaluation of the breasts and axilla suggested a high suspicion of cancer, with a BI-RADS score of V. Under ultrasound (US) guidance, a core needle biopsy was taken for histopathological evaluation.
The obtained sections demonstrated extensive tumor necrosis, with a few minor areas of malignant epithelial cell infiltration (see Figure 1). The tumor cells were small to medium in size, with sparse pale to clear cytoplasm, vesicular nuclei with indistinct nucleoli, and minor pleomorphism. The cells lacked cohesion and spread out randomly through the fibrous stroma, creating loose cellular groupings with a few single-file linear cords and without tubular formation. Mitoses were sparse, with only 8 mitoses seen per 10 high-power fields. The tumor was graded with a score of 5, with no evidence of ductal carcinoma in situ (DCIS), lobular carcinoma in situ (LCIS), lymphovascular, or perineural invasion. As a result, a primary diagnosis of invasive mammary carcinoma with lobular features, grade I, was applied.
Further immunohistochemistry staining of tumor sections excluded the possibility of invasive breast carcinoma as a diagnosis and instead suggested a plasma cell tumor. The results for breast carcinoma-related markers such as cytokeratins (CK and CK7), the transcription factor GATA3, estrogen receptor (ER), progesterone receptor (PR), HER2, and E-cadherin were negative, as shown in Figure 2. In contrast, staining was positive for CD138, MUM1, EMA (focally), P53, and kappa-light chain, as shown in Figure 3, which resulted in a proposal of plasma cell neoplasia. Other markers, such as CD68, LCA, CD79a, and lambda-light chain, were also negative (see Figure 3). The proliferation index Ki67 was positive in 5% of tumor cells. In our case, the final diagnosis was extra-medullary breast plasmacytoma (a kappa-restricted form) secondary to advanced multiple myeloma.
Further examination of the breast for tumor staging using post-contrast CT scanning and magnetic resonance imaging (MRI) revealed a calcified lesion embedded in the breast parenchyma with no involvement of the surrounding skin, underlying muscle, or ipsilateral lymph nodes. In addition, diffuse osteolytic bone lesions were discovered in the ribs and pelvic bone, primarily at the left iliac bone, as illustrated in Figure 4.
Moreover, the MRI results showed multiple destructive osteolytic lesions of the vertebral bodies at the lumbar vertebrae, as well as a parenchymal focal lesion in the left lobe of the liver, which was an indication of liver metastasis (see Figures 5 and 6). The hepatic lesion was a tiny, hypoechoic, ill-defined focal lesion, measuring about 1.2 × 0.7 cm on US examination. Pancytopenia, with about 29% plasma cell infiltration, and a monoclonal gammopathy were also discovered in the patient's bone marrow and protein electrophoresis, respectively. These findings suggested that the multiple myeloma was of a secretory type with extra-medullary metastases.
Due to severe renal failure, the patient was on renal dialysis at the time of her presentation, which demonstrated systemic involvement. The patient's most recent biomedical tests revealed hypocalcaemia and a slight increase in creatinine levels. Other tests, such as blood sugar, urea, electrolytes (e.g., phosphorus, uric acid, Cl-, Na+, and K+), and the complete liver function tests (LFT), were within normal limits. The patient is currently undergoing treatment for multiple myeloma, and a regular monitoring plan has been established for her.
Discussion
According to the International Myeloma Working Group (IMWG), active MM is defined as a bone marrow plasma cell proliferation of more than 10%, or a biopsy-proven plasmacytoma (i.e., bony or extra-medullary), with evidence of 1 or more MM-defining events (MDE). 5 Such events can be indicated by (1) end-organ damage (e.g., bone osteolytic lesions, renal failure, hypercalcemia, or anemia), (2) more than 60% clonal bone marrow plasma cells, (3) an involved/uninvolved serum free light chain (FLC) ratio of more than 100, or (4) the appearance of more than 1 focal lesion with a diameter of at least 5 mm on MRI investigations. 5,6 In accordance with the above description, our index patient showed about 29% bone marrow plasma cell proliferation, pancytopenia, renal failure, and monoclonal gammopathy. In addition, there were multiple osteolytic lesions on the lumbar vertebrae, ribs, and sternum, with liver metastases. This case verifies the diagnosis of advanced MM with extra-medullary lesions.
MM is one of the plasma cell cancer types, accounting for around 1% of all cancers and about 10% of all hematologic cancers. 5,7 Extra-medullary disease (EMD) is a type of MM that occurs outside of the bone marrow. 3 In certain studies, the overall incidence of EMD reached 13%, divided into 2 categories: primary EMD (7%) and relapse/secondary EMD (6%-20%). 3,8 EMD has different subtypes, which are (1) solitary plasmacytoma (SP) with no marrow involvement, (2) SP with limited marrow involvement, (3) bone-associated EMD with MM (EMM), (4) organ-infiltrating EMD, and (5) plasma cell leukemia (PCL). 3 Within this context, the case presented in this article is an example of extra-medullary breast plasmacytoma secondary to disseminated MM.
Because the clinical and radiological features of BP are nonspecific, it can be difficult to distinguish it from other breast cancers. BP usually manifests as single or numerous breast lumps, well-defined or ill-defined lesions, with or without microcalcification. 8 Traditionally, BP is described as hyperechoic or hypoechoic solid mass lesions on US examination. MRI studies typically reveal hypointense lesions on T1WIs and hyperintense lesions on T2WIs. 8,9 The radiological evaluation of our case revealed similar findings, though it was inconclusive owing to the rarity of BP and the resemblance of its radiological features to other lymphoproliferative diseases or even benign breast lesions.
Our case was confirmed by histopathological investigation and immunohistochemical (IHC) staining. CD138, the multiple myeloma oncogene 1 (MUM1) marker, and kappa-light chain restriction were substantially positive in the examined sections, whereas CD68 (a selective marker for human monocytes and macrophages), CD79a, and leukocyte common antigen (LCA) were negative. These findings supported our diagnosis of extra-medullary breast plasmacytoma and enabled us to distinguish it from other lymphoproliferative neoplasms such as large B-cell lymphoma.
CD79a, CD19, CD20, and PAX5 are B-cell differentiation antigens that are frequently positive in large B-cell lymphomas, whereas CD138 and CD38 are plasma cell markers that are usually negative in them. Extra-medullary plasmacytoma, however, frequently expresses the plasma cell markers CD138 and/or CD38. 9 The expression of the CD79a antigen is almost exclusively seen in B-cell neoplasms, arising before immunoglobulin heavy-chain gene rearrangement during B-cell ontogeny. 10 In late phases of B-cell development and in plasma cell cancers, CD79a is downregulated or completely eliminated. 10 Hence, extra-medullary plasmacytoma (EMP) is frequently negative for CD79a expression. 10 Furthermore, in MM, a high Ki-67 index may be a prognostic predictor as well as a risk factor for early recurrence and high mortality.
Extra-medullary involvement in MM is frequently linked to high-risk cytogenetic alterations, resistance to therapy, and a poor prognosis. A few studies have identified that the translocations t(4;14) and t(14;16), deletion of (17p), and gain of (1q21) are associated with extra-medullary disease. In addition, deletion of (17p13) and (13q14) may be a potential prognostic marker for extra-medullary MM. 11,12 However, prognostic indicators for the development of extra-medullary disease are still undetermined.
Surgical excision with adjuvant radiation is the conventional treatment for localized plasmacytoma. However, because the disease was so widespread, systemic therapy (i.e., chemotherapy) was suggested for our patient. Chemotherapy is the treatment of choice for large tumors (i.e., greater than 5 cm in diameter), high-grade tumors, and relapsed or refractory disease. 13,14 Intensive multi-agent therapies and allogeneic stem cell transplantation have also been reported as other options. 13,14 However, because of the disease's rarity, there is little experience in treating breast plasmacytoma.
In conclusion, extra-medullary breast plasmacytoma is a rare MM presentation that should be investigated in cases of breast lumps with plasma cell tumor characteristics. To distinguish BP from other breast cancers and to determine a suitable therapeutic approach, a comprehensive examination involving radiological, histological, and cytogenetic tests is required.
"year": 2022,
"sha1": "25b71471f59f93c9ca8dc6799720c60724bb3b32",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/23247096221111773",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bb527f40ed23cbfa564551ca367cd0653173d468",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52860138 | pes2o/s2orc | v3-fos-license | A New E-Health Tool for Early Identification of Voice and Neurological Pathologies by Speech Processing
The objective of this study is to develop a non-invasive method for the early identification and classification of voice pathologies and neurological diseases by speech processing. We present a new automatic medical diagnosis tool which can assist specialists in their medical diagnosis. The developed strategy is based on speech acquisition from the patient, followed by audio feature extraction, training and recognition using the HTK toolkit. The computed parameters are compared to standard values from a codebook database. The experiments and tests are conducted using the MEEI pathological database of Kay Pentax. The obtained results give good discrimination, with a mean pathology recognition ratio of about 95%. Finally, this E-Health application is helpful for the prevention of specific diseases and for improving the quality of patient care as well as reducing healthcare costs. Keywords—E-Health; voice disorder; HMM classification; feature extraction; MFCC; pathology recognition rate
I. INTRODUCTION
Non-invasive methods for voice pathology diagnosis have been developed in order to assist medical staff and otolaryngologists in conducting objective and efficient diagnoses. At present, a number of classic diagnostic tools based on speech measurements and imaging analysis are available on the market.
Many studies using speech feature extraction have succeeded in obtaining acceptable discrimination ratios between normal and pathological speakers. Some of them have achieved classification accuracies between 70% and 90% [1]. In fact, acoustic analysis allows estimating a large number of long-term acoustic parameters such as pitch, formants, jitter, shimmer, Amplitude Perturbation Quotient, Harmonics-to-Noise Ratio and Normalized Noise Energy [2]. These features are very useful for characterizing speaker disorders, especially if they are associated with MFCC or RASTA-PLP coefficients.
In other references, advances in speech processing have contributed to the identification of some neurological diseases (Parkinson's, dyslexia, sclerosis) by voice parameter analysis. The developed method is based on the determination of the speech parameters of a speaker using a hardware interface and software for digital acquisition and processing of the speech signal [3]. In this research, we develop a speech processing tool for clinical observation and detection of pathologies. This interface is intended not only for patients but also for people who use their voice professionally (singers, teachers). The methodology is very simple and is based on a voice recording. Then, the extracted speech parameters are applied as inputs to the well-known HTK toolkit (HMM classifier). The results are compared with normal and pathological values for disease detection and classification using the well-known MEEI database [4].
II. RELATED WORKS
During the last decade, several digital methods of pathology identification from speech processing have been used for the classification and early identification of some diseases. These strategies can be classified into three categories. The first method is based on the extraction of acoustic parameters and the search for new descriptors and metrics of quality, distortion and voice irregularity, such as MFCC, RASTA and LPC coefficients, jitter, shimmer and harmonic ratio. The MFCC parameters were considered in numerous other studies, such as [5] and [6]. In [5], subjects with nodules, edema and unilateral vocal fold paralysis were analyzed, with unencouraging accuracy results (78%), while in [6] patients suffering from spasmodic dysphonia were selected.
The second method is based on machine learning techniques such as SVM and LDA. Among the several machine learning techniques existing in the literature, the Support Vector Machine (SVM) has been widely used in voice signal processing, such as in the work of L. Godino [7] and S.N. [8], with an accuracy ratio of 86%.
The third is a statistical method which uses Hidden Markov Models (HMM) or Gaussian Mixture Models (GMM).
It is based on learning and testing procedures for voice recognition and classification. The learning procedure builds the codebook (a database of speech models and parameters), while the testing procedure consists of real-time audio acquisition and the recognition step.
For example, Emary [9] uses the GMM algorithm on a very small subset of the SVD database, containing 38 pathological and 63 healthy voices, in order to identify neurological disorders.
A. The Studied Voice and Neurology Pathologies
Vocal fold pathologies can be classified as physical, neuromuscular, traumatic and psychogenic diseases. They affect voice quality. Several vocal, neurological, organic or genetic diseases are associated with speech disorders and dysfunction [10]. In fact, a voice disorder can generate a language disorder, causing degradation of the voice and its intelligibility. These disorders can be divided into the following classes: Dysphonia: can be considered an abnormality of speech production and quality, a paralysis, or a kind of laryngitis.
Dysarthria: a speech disorder related to paralysis or to poor coordination of the muscles involved in articulation. This disease has a neurological origin and leads to dyslexia in children.
Aphasia: a language disorder due to a lesion of the cerebral cortex. The patient no longer understands the meaning of words or can no longer express himself [10].
Sclerosis: a neurological disease that affects parts of the brain and spinal cord [11]. It causes muscle weakness and trouble with sensation, coordination and speaking [12].
Parkinson's and Alzheimer's: neurological diseases that affect the brain, which controls body mechanisms and articulation.
B. Speech Aspects
The speech signal is rich in physiological and acoustic parameters. It can inform us about the identity of the speaker, his health and even his emotional state.
Speech is characterized by its variability in amplitude and phase and by its non-stationary behaviour. It is the result of a convolution between a phonation source (the glottis) and an articulation filter (the vocal tract). The source is characterized by the pitch F0, while the vocal tract is characterized by a formant structure which reflects the resonances of the vocal tract, given by the formants (F1, F2, F3, ...) [12]. For example, Figs. 1 and 2 illustrate the waveform, spectrogram, pitch and formant parameters of speech signals; in particular, they represent the waveform, spectrogram and pitch profile of a female speech signal, "bientot.wav", sampled at 11025 Hz. According to Fig. 2, we can observe that the mean pitch frequency is about 245 Hz, with a silent zone between 0.25 and 0.4 seconds.
The wide-band spectrogram of Fig. 1 shows the formantic character of the speech, illustrated by the red curves.
III. MATERIALS AND METHODS
In this work, we used the statistical HMM method because it is well established for speech recognition and synthesis and gives high classification accuracy, especially for large databases and noisy environments. Other references used SVM, LDA and GMM classifiers [13,14].
A. Speech Pathology Database
We used the MEEI database of disordered voice (Kay Elemetrics Corporation), which was produced by Kay Pentax [4]. The database is composed of many recordings dealing with the assessment of voice pathologies and is considered the most widely used dataset for research in pathological voice classification. The MEEI database includes recordings of vowels pronounced by 53 normal subjects and 657 pathological voices resulting from several diseases. Technical sheets are provided with the recordings and data files, containing information on the subjects (age, sex, language, smoking status) and the results of the analysis computed by the MDVP software. This software is also exclusively produced by Kay Pentax and is widely used in the clinical field as a tool for the recording and analysis of patients' voices. The available pathologies are: dysphonia, nodules, paralysis, polypoid degeneration, and vocal cord disorders. These pathologies are recorded for up to 10 seconds by men and women. Table I gives more details about the content of this database.
B. Pathology Identification with HMM
We used the well-known HTK platform, based on Hidden Markov Models (HMM), in order to recognize pathological voices and perform disease classification. This tool is a set of libraries and programs in the C language developed at Cambridge University under the direction of Young in 1989 [15] in order to build high-performing Automatic Speech Recognition systems. This toolkit is composed of:
- a speech database;
- a training procedure using the Baum-Welch and K-means algorithms for speech modeling, applied to the speech database to build the reference codebook;
- a recognition procedure based on real-time acquisition and analysis, followed by a comparison with the trained words using the Viterbi algorithm.
This procedure is illustrated by Fig. 3, where we can observe the different steps of parameterization (feature extraction), training, recognition and classification. In this step, the test audio model is compared with the codebook in order to find any similarity or coincidence with pathological models.
C. Speech Features Extraction
The first step of speech analysis, before modeling and coding, is the parameterization of the speech frames into MFCC, LPC, PLP or RASTA coefficients. The Mel Frequency Cepstral Coefficients (MFCC) constitute the most popular method in speech processing, recognition and synthesis, and the most widely used for speech feature extraction and parameterization. The MFCC algorithm, whose principle is illustrated by Fig. 4, can be expressed as [16]:

$$c_n = \sum_{k=1}^{N} \log(E_k)\,\cos\left[n\left(k - \frac{1}{2}\right)\frac{\pi}{N}\right]$$

where E_k is the energy at the output of the k-th filter and N is the number of band-pass filters.

Two other parameters are very useful in voice disorder analysis: jitter and shimmer. These indicators represent the irregularities and perturbations in frequency and intensity, respectively. Their expressions are given by the following equations [16,17]:

$$\mathrm{Jitter} = \frac{\frac{1}{N-1}\sum_{i=1}^{N-1}\left|T_i - T_{i+1}\right|}{\frac{1}{N}\sum_{i=1}^{N} T_i}, \qquad \mathrm{Shimmer} = \frac{\frac{1}{N-1}\sum_{i=1}^{N-1}\left|A_i - A_{i+1}\right|}{\frac{1}{N}\sum_{i=1}^{N} A_i}$$

where T_i is the i-th pitch period, A_i the amplitude of the i-th glottal cycle, and N the number of cycles in the analysed frame.
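To make the parameterization step concrete, the following minimal Python sketch (not part of the original tool, which relies on HTK) computes 13 MFCCs with the librosa library and derives relative jitter and shimmer from arrays of pitch periods and cycle amplitudes according to the equations above; the input arrays are assumed to come from an external pitch-marking step.

import numpy as np
import librosa

def extract_mfcc(wav_path, n_mfcc=13):
    # Load the recording (librosa resamples to 22050 Hz by default)
    y, sr = librosa.load(wav_path)
    # Frame-wise MFCC matrix of shape (n_mfcc, n_frames)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def jitter(periods):
    # Relative jitter: mean absolute difference of consecutive pitch
    # periods T_i, normalized by the mean pitch period
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer(amplitudes):
    # Relative shimmer: the same measure applied to the peak
    # amplitudes A_i of consecutive glottal cycles
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical usage with pitch marks obtained elsewhere:
# mfcc = extract_mfcc("bientot.wav")
# j = jitter([0.0048, 0.0049, 0.0047, 0.0050])  # periods in seconds
# s = shimmer([0.81, 0.79, 0.83, 0.80])         # linear amplitudes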
Hidden Markov Models are useful for statistical data modeling and classification. The implementation of the HMM system requires three phases:
- describe a network whose topology reflects the sentences, vocabulary words or basic units;
- set the training model parameters: λ = (π, A, B);
- carry out the actual recognition of an occurrence by computing the maximum likelihood [14].
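As an illustration of this training/maximum-likelihood recognition scheme, the hedged Python sketch below trains one Gaussian HMM per voice class on MFCC frames and labels a test recording with the class whose model gives the highest log-likelihood; it uses the open-source hmmlearn package instead of HTK, and the class labels are hypothetical.

import numpy as np
from hmmlearn import hmm

def train_class_models(features_by_class, n_states=5):
    # features_by_class maps a class label (e.g. "normal", "dysphonia")
    # to a list of MFCC matrices of shape (n_mfcc, n_frames), one per recording
    models = {}
    for label, recordings in features_by_class.items():
        # hmmlearn expects frames as rows, so concatenate transposed matrices
        X = np.vstack([m.T for m in recordings])
        lengths = [m.shape[1] for m in recordings]
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)  # Baum-Welch re-estimation of lambda = (pi, A, B)
        models[label] = model
    return models

def classify(models, mfcc_test):
    # score() returns the log-likelihood of the observation sequence
    # (forward algorithm); the class with the highest likelihood wins
    scores = {label: m.score(mfcc_test.T) for label, m in models.items()}
    return max(scores, key=scores.get)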
IV. SIMULATION RESULTS
Several platforms and software tools are used in speech processing, such as Praat, Vocalab, MDVP, Speech Analyzer, Matlab and the HTK toolkit. These tools offer many parameters and indicators for speech evaluation, such as pitch, formants, jitter, shimmer and SNR.
A. Effect on the Pitch
Pitch is the first indicator of speech production, as it represents the period of the glottal signal. Fig. 6 shows the variations of the speech waveform, zero-crossing rate, pitch, spectrogram, spectrum and formants of a normal speaker (without any disease). We can observe that the pitch (Fig. 6) is characterized by a continuous and constant profile, with a value of F0 = 210 Hz (male speaker).
However, in the case of a pathological voice (of organic or neurological origin), the speech profile presents distortions and dynamic variations around the nominal pitch value, as illustrated in Figs. 7 to 10, which will be discussed later in detail.
B. Effect on the Jitter and Shimmer
Disturbances of the durations of glottal cycles (jitter) are irregularities in the period of the glottal signal. These disturbances are a basic phenomenon present in the voice and are therefore a feature of vocal timbre. They can be used to spectrally characterize hoarse, neurological, emotional or normal voices. The method is based on a study of the spectral effects of the glottal cycle variance and the evolution of the jitter values. On the other side, the study of amplitude perturbations (shimmer) shows that they are a consequence of duration and energy disturbances. These mechanisms are asymmetries in the movement of both vocal cords and in the acoustic propagation of the glottal signal through the vocal tract [17]. Fig. 6 demonstrates that the normal jitter value is 0.02 (2%), whereas the pathological value is over 0.8 (80%).
A. Multiple Sclerosis Disease
In this case of disease, the most common deficits affect recent memory, attention, processing speed, speech, visual-spatial abilities and executive function. Symptoms related to cognition include emotional instability and fatigue, including neurological fatigue [18]. The speech profiles and parameters of Fig. 8 (pitch, formants) demonstrate a correlation between the speech features and this pathology.
In fact, we can observe in Fig. 8 that, in this case, the pitch profile becomes very disturbed, with a high standard deviation, in contrast with the normal, healthy speaker of Fig. 7. This state indicates a dysfunction of the speech production system, which is monitored by the brain.
B. Dyslexia
It is considered a cognitive disorder, but it does not affect intelligence. Problems may include difficulties in spelling words, reading quickly, writing words, and "sounding out" words in the head [19], [20]. The examination of the speech in Fig. 9 shows disturbances of the pitch curve at the level of the coarticulations and the vowel-consonant changes in the pronounced word. These variations remain around 20% of the nominal value, and the standard deviation remains almost 12%. The wide-band and narrow-band spectrograms are also affected by this variation and disturbance.
C. Alzheimer
It is a chronic neurodegenerative disease that usually starts slowly and worsens over time. It is the cause of 60% to 70% of cases of dementia. The most common early symptoms are short-term memory loss, problems with language and speech, disorientation, mood swings, loss of motivation and behavioral issues [21]. Recent research demonstrates relations between speech production and Alzheimer's disease. Fig. 10 illustrates the speech parameters of a speaker suffering from Alzheimer's (age: 72).
D. Parkinson
This neurological disease is a long-term degenerative disorder of the central nervous system that mainly affects the motor system. The symptoms generally come on slowly over time. At first, the most obvious are shaking, rigidity, slowness of movement, and difficulty with walking. Thinking and behavioral problems may also occur, and dementia becomes common in the advanced stages of the disease. Depression and anxiety are also common, along with sensory, speech, sleep, and emotional problems [22]. The examination of the speech in Fig. 11 shows rather large deviations and variations of the pitch (glottal signal), of about 100%, which alters language understanding and degrades the recognizability of the speaker's speech.
E. Pathology Recognition Ratio
All the described procedures and steps (acquisition, training, feature extraction, recognition and pathology classification) are embedded in a smart interface illustrated in Fig. 12.
Our tests are performed on the HTK toolkit with the MEEI Kay Pentax speech database described in the previous paragraphs. According to Table II, we obtained pathology recognition ratios (RR) between 86% and 100%.
These values are very interesting because they can discriminate several diseases from normal voices, the latter being characterized by a 100% value. The obtained pathology identification ratios demonstrate high RR values for both hyper-function (dysarthria) and paralysis diseases (dysphonia), respectively 94% and 98%. Besides, we compared our results with other studies using similar and different databases.
Table III compares our proposed algorithm with previous significant works [5,6,7,8,23,24]. Although the databases used in these works are different, it can be observed that the proposed HMM-based algorithm is competitive and has high accuracy in identifying pathological and normal voices. In this first step of our work, we succeeded in identifying more than 8 kinds of organic and neurological diseases.
VII. CONCLUSION
In this paper, we developed a new tool dedicated to the identification and diagnosis of vocal and neurological diseases. The method is based on the analysis of the acoustic parameters of a patient after real-time speech acquisition and processing. The modeling and classification procedures are automated using HMM training and recognition procedures. The validation was carried out using the well-known MEEI pathological database of Kay Pentax. The obtained pathology recognition ratio is around 95%. The most significant indicators of pathological speech are disturbances in amplitude (shimmer), frequency distortions and irregularities of the pitch (jitter), and finally the loss of glottal control (a high standard deviation of the pitch). Besides, this application allows us to follow changes in the physiological state (heartbeat, blood pressure, ECG) and in the acoustic parameters (pitch, formants, timbre), and then to compare them with normal and standard values. This is very interesting because it helps us to follow the disease evolution, to predict and avoid patient complications, and to improve the patient's rehabilitation therapy.
The next step of this study is to extend this application to other critical diseases such as cancer and hepatitis C, and then to evaluate it on a large number of patients.
D. HMM Training and Recognition
Fig. 5 represents the principle of the training-recognition-classification procedure. The training step uses a database (codebook) of audio parameters and the Baum-Welch and K-means algorithms. The recognition procedure uses HMM modeling and classification with a Viterbi decoder [18].
TABLE I. CONTENT OF THE MEEI DATABASE | 2018-09-24T03:23:47.813Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "4b2b0c8ca55e7fef0f415568cf7c8cac6e6551d0",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume9No8/Paper_65-A_New_E_Health_Tool.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4b2b0c8ca55e7fef0f415568cf7c8cac6e6551d0",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
257300517 | pes2o/s2orc | v3-fos-license | Melanoma Brain Metastases: A Retrospective Analysis of Prognostic Factors and Efficacy of Multimodal Therapies
Simple Summary The treatment strategies for patients with melanoma brain metastases are continually evolving, although this remains a poor-prognosis subset. We report a real-life retrospective analysis of 105 patients with melanoma brain metastases, aiming to analyze the impact of clinical–pathological features, such as the occurrence of neurological symptoms, and of multimodal therapies on overall survival in the pre-combined-immunotherapy era. We observed a significant improvement in the survival of patients treated with encephalic radiotherapy (eRT), regardless of the type of systemic treatment performed. The only subset of patients that did not experience a survival improvement from eRT was identified by LDH levels higher than two times the upper limit of normal. In our opinion, our results, if confirmed by prospective analysis, may help to identify the correct therapeutic strategy for the worst prognostic subgroup of patients with melanoma brain metastases. Abstract Brain metastasis in cutaneous melanoma (CM) has historically been considered a dismal prognostic feature, although recent evidence has highlighted the intracranial activity of combined immunotherapy (IT). Herein, we completed a retrospective study to investigate the impact of clinical–pathological features and multimodal therapies on the overall survival (OS) of CM patients with brain metastases. A total of 105 patients were evaluated. Nearly half of the patients developed neurological symptoms, which carried a negative prognosis (p = 0.0374). Both symptomatic and asymptomatic patients benefited from encephalic radiotherapy (eRT) (p = 0.0234 and p = 0.011). Lactate dehydrogenase (LDH) levels two times higher than the upper limit of normal (ULN) at the time of brain metastasis onset were associated with poor prognosis (p = 0.0452) and identified those patients who did not benefit from eRT. Additionally, the poor prognostic role of LDH levels was confirmed in patients treated with targeted therapy (TT) (p = 0.0015) but not in those who received immunotherapy (IT) (p = 0.16). Based on these results, LDH levels higher than two times the ULN at the time of encephalic progression identify those patients with a poor prognosis who do not benefit from eRT. The negative prognostic role of LDH levels with respect to eRT observed in our study will require prospective evaluation.
Introduction
Brain metastases (BMs) occur in almost 50% of patients with cutaneous melanoma (CM), the brain being the third most common metastatic site [1][2][3][4]. These patients commonly experience a dismal prognosis, with a 4-month median overall survival (mOS) [5,6]. Immune checkpoint inhibitors (ICIs) and targeted therapies (TTs) have improved the outcome of unresectable and advanced CM but, until recently, patients with active BMs were routinely excluded from clinical trials. Therefore, the evaluation of the intracranial efficacy of systemic therapies was mostly based on retrospective analyses [7][8][9][10]. More recently, however, prospective studies have been designed to include patients with BMs [8,9,[11][12][13]. In particular, the COMBI-MB Phase 2 study explored the impact of Dabrafenib plus Trametinib in BRAF V600-mutant melanoma brain metastases and revealed an intracranial response rate of 58% in the entire cohort. However, the duration of response was at least 6.5 months, significantly lower than in patients without BMs, in whom it was at least 12 months [14][15][16]. Both anti-PD1 and anti-CTLA-4 monoclonal antibodies showed poor efficacy for the treatment of BMs when administered as single agents, with a 25% intracranial response rate [17][18][19], whereas the best response was obtained by their combination. The Australian anti-PD-1 brain collaboration (ABC) study showed a response rate of about 50%, with a median duration of response not reached after 34 months [20]. In addition, the CheckMate-204 trial demonstrated an OS rate of 72% after a 3-year follow-up in untreated and asymptomatic patients, while a similar benefit was not reached in symptomatic patients, who showed an intracranial response of 22% and a survival rate of 33% at 36 months of follow-up [21]. Nowadays, many critical issues emerge in the choice of the best treatment for patients with BMs, and the best strategy is still debated for those bearing the BRAF mutation. In this regard, preliminary data from the Phase 2 TRICOTEL study underlined the intracranial efficacy of combining TT with ICIs, with particular activity in patients receiving corticosteroids and/or in symptomatic ones [22]. Moreover, recent evidence has shown early acquired resistance to the combination of ICIs in patients pre-treated with TT [23]. Furthermore, growing interest has emerged in the combination of ICIs with encephalic radiotherapy (eRT) to maximize the antitumor response [24][25][26][27][28][29]. Thus, trials evaluating the putative additive effect of eRT have been designed, but results are not yet available (NCT03340129, NCT03430947 and NCT02097732). Apart from neurological symptoms, other factors are suggested to have a prognostic role in brain-metastatic melanoma, such as the presence of more than three BMs, a poor performance status, the concomitant presence of extracranial metastases in specific sites, and elevated lactate dehydrogenase (LDH) levels at the time of encephalic progression. However, the role of these parameters needs to be further confirmed [30][31][32][33][34]. Herein, we performed a retrospective, real-life, multicentric analysis to explore the prognostic role of clinical, demographic and pathological features and the efficacy of multimodal therapeutic strategies in our cohort of CM patients with BMs.
Study Population
This is a retrospective observational study that enrolled 105 patients with histologically confirmed CM and a radiological diagnosis of BMs, treated in two oncological centers in Bari (Medical Oncology Unit, Policlinico Hospital, and Rare Tumors and Melanoma Unit, IRCCS Istituto Tumori Giovanni Paolo II). This study was designed and performed by medical oncologists. Patients received standard treatments according to good clinical practice. Demographic data included histopathological parameters according to the AJCC Eighth Edition (e.g., histotype, Breslow depth, ulceration, number of mitoses and lymphocyte infiltration), clinical features (such as neurological symptoms), access to and type of systemic or local treatments (single-agent IT, TT with anti-BRAF plus anti-MEK drugs, chemotherapy, and eRT performed as stereotactic radiosurgery or whole-brain radiotherapy), and the times of intra- and extra-cranial metastatic diagnosis and death. Other parameters were age, sex, melanoma primary site, nodal involvement, mutational status of both BRAF and NRAS analyzed on primary and metastatic specimens, LDH levels, sites of extracranial metastases, and leptomeningeal neoplastic infiltration. Moreover, detailed information regarding systemic treatments and sequences, as well as the times of extra-cranial and intra-cranial progression, was assessed. Written informed consent for clinical data collection was obtained from all patients.
Statistical Analysis
A comparison of cumulative survival was performed using Kaplan-Meier curves for each variable (symptoms, radiotherapy, type of systemic therapy and LDH levels). Median overall survival (mOS) and its interquartile range (IQR) were determined in each subgroup, measured from the time of BM diagnosis. Median progression-free survival (mPFS) and its IQR were determined in each subgroup, from the date of BM diagnosis until the first intracranial progression according to the response assessment in neuro-oncology (RANO) criteria. However, due to the retrospective nature of our study and the lack of a centralized radiological evaluation, we decided to consider only mOS for the final evaluation of treatment efficacy. The difference in survival was evaluated via the log-rank test, run for each variable and in three further analyses: for eRT adjusted for symptoms, for eRT adjusted for systemic therapy, and for LDH levels adjusted for eRT. Furthermore, the Cox regression model was applied to evaluate the effect of each variable on the risk of death. All risk factors were evaluated to assess the proportional hazards assumption using a multivariable Cox model with OS as the dependent variable. The model-independent variables were as follows: treatment (TT, chemotherapy and IT), eRT (yes/no), site of the primary melanoma (head and neck, trunk, limbs and other sites), neurological symptoms (presence/absence), Breslow depth (≤1 mm, 1-2 mm, 2.1-4 mm and >4 mm), age class (lower than 56 years, 56-65 years, 66-75 years and more than 75 years) and sex (male = 1, female = 0). All of the analyses were performed using SAS 9.4 for personal computers via PROC LIFETEST and PROC PHREG. Statistical significance was set at p < 0.05.
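The original analyses were run in SAS 9.4 (PROC LIFETEST and PROC PHREG); an equivalent open-source workflow can be sketched in Python with the lifelines package, as below. The column names (time, event, ldh_high, ert) are illustrative assumptions, not the study's actual dataset.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

def survival_analysis(df: pd.DataFrame):
    # df holds one row per patient: follow-up in months ('time'),
    # death indicator ('event') and binary covariates
    high = df[df["ldh_high"] == 1]
    low = df[df["ldh_high"] == 0]

    # Kaplan-Meier estimate and median OS for one stratum
    km = KaplanMeierFitter()
    km.fit(high["time"], event_observed=high["event"], label="LDH > 2x ULN")
    print("median OS:", km.median_survival_time_)

    # Log-rank test between the two LDH strata
    lr = logrank_test(high["time"], low["time"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
    print("log-rank p:", lr.p_value)

    # Multivariable Cox proportional hazards model (hazard ratios)
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", "ldh_high", "ert"]],
            duration_col="time", event_col="event")
    cph.print_summary()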
Ethics Approval and Consent to Participate
Ethical review and approval were not required for this study due to its retrospective and observational nature, and because it was conducted in accordance with national regulations, which strictly impose ethical review and approval only for observational studies designed as prospective pharmacological observational studies (Official Gazette of the Italian Republic n. 42, 19 February 2022, decree 30 November 2021, art. 6 subparagraph 2). The clinical data collection from human participants was in strict compliance with the ethical standards of the Declaration of Helsinki. All patients provided their written informed consent to participate in the study and to the publication of their data in an anonymous form.
Baseline Demographic Features
Cutaneous melanoma patients with BMs were enrolled from 2017 to 2021. The clinical and pathological features are described in Table 1. About 90% of patients (95/105) also had extracranial metastases, and 39% of them were diagnosed with more than two extracranial metastatic sites. The onset of BMs was metachronous to the melanoma diagnosis in 97% of patients, and in 61% it followed an extracranial progression. Therefore, the spread of melanoma to the brain mostly occurred in a late phase of the disease. The brain metastatic location was supratentorial in 51% of cases, supra- and infra-tentorial in 24% of patients, and infra-tentorial in 6% of patients. About 59% (62/105) of patients showed fewer than four MRI-confirmed BMs. At the onset of BMs, 18% of patients had LDH levels higher than 2-fold the upper limit of normal (ULN), whereas 45% (47/105) suffered from neurological symptoms, and all of them received steroids. Neurosurgical excision was performed in 13 patients (12%) for emergency reasons or as a therapeutic strategy. Encephalic RT was completed in 71% of patients; stereotactic radiosurgery (SRS) was performed in 65% of these and whole-brain radiotherapy (WBRT) in 35%. With regard to first-line systemic treatments after BM diagnosis, 15 patients did not receive any therapy due to their poor performance status. Among the others (n = 90), 48% received TT and 48% received IT, whereas only 4% underwent chemotherapy. These data are detailed in Table 2.
Neurological Symptoms and Factors Associated with Overall Survival
At the time of analysis, death had occurred in 79 patients and 26 were still alive. The mOS from BM diagnosis was 6.6 months (IQR: 5.1-9.2). The next step of the study explored the features of CM patients with symptomatic BMs (Table 3). In our cohort, 45% of patients developed neurological symptoms; their median age was 62 years and 57% were male. Most symptomatic patients showed supratentorial BMs (57%) as well as LDH levels ≤ 2 times the ULN (67%) at the time of their onset. Notably, only 47% of patients showed more than three BMs. The presence of neurological symptoms was a negative prognostic factor, with a hazard ratio (HR) of 1.6 (95% CI 1.03-2.5). As shown in Figure 1A, mOS was 5.1 months for symptomatic patients versus 9.2 months for asymptomatic ones (p = 0.0354). Therefore, we questioned whether eRT could have a role in the control of neurological symptoms, delaying clinical-neurological deterioration and thus improving prognosis in symptomatic patients. In our analysis, patients with neurological symptoms showed a significant survival benefit from radiation therapy, achieving an mOS from BM diagnosis of 6.9 months versus 2.9 months for those who did not receive eRT (p = 0.0234; Figure 1B). However, eRT also improved outcomes in asymptomatic patients (mOS: 11.8 vs. 2.7 months, p = 0.011; Figure 1C).
In our cohort, 45% of patients developed neurological symptoms. Their median ag was 62 years and 57% were male. Many symptomatic patients showed supratentorial BM (57%) as well as LDH levels ≤ 2 times the ULN (67%) at the time of their onset. Notably only 47% of patients showed more than three BMs. The presence of neurologica symptoms was a negative prognostic factor with a hazard ratio (HR) of 1.6 (95% CI 1.03 2.5). As shown in Figure 1A, mOS was 5.1 months for symptomatic patients versus 9 months for asymptomatic ones (p = 0.0354). Therefore, we questioned whether the eR could have a role in the control of neurological symptoms, delaying clinical-neurologica deterioration and then improving prognosis in symptomatic patients. In our analysi patients with neurological symptoms showed a significant survival benefit from radiatio therapy, achieving an mOS from BM diagnosis of 6.9 months with 2.9 months for thos that did not receive eRT (p = 0.0234; Figure 1B). However, eRT also improved outcomes i asymptomatic patients (mOS: 11.8 vs. 2.7 months, p = 0.011; Figure 1C). Other analyses demonstrated the poor prognostic role of LDH levels > 2 times th ULN at the time of encephalic progression. As shown in Figure 2A, by grouping patien by LDH levels we noticed a significant difference in terms of mOS (3.5 versus 9.2 month p = 0.0014). Moreover, limited to the low number of patients, doubled LDH levels seem t Other analyses demonstrated the poor prognostic role of LDH levels > 2 times the ULN at the time of encephalic progression. As shown in Figure 2A, by grouping patients by LDH levels we noticed a significant difference in terms of mOS (3.5 versus 9.2 months, p = 0.0014). Moreover, limited to the low number of patients, doubled LDH levels seem to select patients who do not benefit from eRT. Only patients with LDH levels ≤ 2 times the ULN levels showed an improvement in OS due to eRT. As shown in Figure 2B, OS was 2.5 months in the eRT untreated group with respect to 11.8 months observed in the group that received eRT (p < 0.0001). On the contrary, no survival difference was evidenced in patients showing LDH levels > 2 times the ULN after grouping them according to eRT (3.8 versus 3.5 months, p = 0.998; Figure 2C). The Cox model applied to evaluate factors involved in survival showed that worse outcome was associated with the presence of more than three BMs (in the multivariable model: HR 1.78, 95% CI 1.04-3.06) and with LDH levels higher than 2-fold the ULN (HR 1.85, 95% CI 0.96-3.56). In addition, Table 4 shows that eRT has a protective effect (in the multivariable model: HR 0.37, 95% CI 0.21-0.67). The Cox model applied to evaluate factors involved in survival showed that worse outcome was associated with the presence of more than three BMs (in the multivariable model: HR 1.78, 95% CI 1.04-3.06) and with LDH levels higher than 2-fold the ULN (HR 1.85, 95% CI 0.96-3.56). In addition, Table 4 shows that eRT has a protective effect (in the multivariable model: HR 0.37, 95% CI 0.21-0.67).
Impact of Treatments on Overall Survival
The next set of exploratory analyses investigated the additive effect of systemic therapies with eRT. As shown in Figure 3A,B, patients treated with TT or IT benefited from the concomitant use of eRT (mOS 2.2 vs. 9.5 months, p = 0.0062 and 2.7 vs. 9.9 months, p = 0.001, respectively). Moreover, the poor prognostic role of LDH levels was confirmed in patients treated with TT, who experienced a worse outcome when LDH was >2 times the ULN (mOS 3.5 vs. 12.5 months, p = 0.0015; Figure 3C). On the contrary, patients who underwent IT did not show a significant difference in mOS when stratified according to LDH levels (mOS 4.17 vs. 8.43 months, p = 0.16; Figure 3D).
Discussion
The present study retrospectively explored the features of CM patients bearing BMs in a real-life cohort and examined the most used and effective multimodal therapeutic strategies in the pre-combined IT era or when clinical contraindications limit its practice.
In our population, BMs from CM occurred most frequently in males, with a prevalent involvement of the supratentorial region. Most patients (90%) also presented extracranial metastases, which in 63% of cases were diagnosed at least three months before the central nervous system (CNS) metastatic involvement. Thus, intracranial progression apparently represents, at least in the vast majority of our patients, a delayed event in the natural history of CM. The aforementioned phenomenon has already been described, although a recent retrospective real-world study found that BMs and extracranial metastases occur synchronously in nearly 70% of cases, potentially as a consequence of intensive brain surveillance [34,35]. The incidence of BRAF mutations (55%) was similar to that reported in the general CM population. Nearly half of the patients (45%) suffered from neurological symptoms and, among them, only 46% showed more than three BMs and only 24% were characterized by LDH levels > 2-fold the ULN. Therefore, neurological symptoms were not related to multifocal metastatic brain disease or elevated LDH values.
The median survival after BM onset was 6.6 months, lower than the OS revealed by the recent Phase 3 randomized clinical trials using either TT or IT. However, our study reflects real-life data of patients usually excluded from clinical trials, such as those with poor performance status, previously treated with systemic therapies before the onset of BMs or those treated in the pre-combined IT era. In this setting, the prognosis is guided by factors mainly evidenced in retrospective cohorts of patients, including elevated LDH levels, neurological symptoms, three or more BMs and three or more extracranial metastatic sites [36]. In our population, the multivariate model was in line with these results.
Of note, neurological symptoms, occurring in 45% of our population, were the most common complication of BMs and negatively influenced both quality of life and survival. The percentage of symptomatic patients detected in our study differs from that evidenced in recent, widely cited trials, which highlight a high frequency of asymptomatic brain lesions (nearly 80%) [20,21]. In this regard, we have to underline that our study enrolled "real-life" patients who underwent radiological evaluation according to clinical practice with CT scans, which have low specificity for BMs. Conversely, the gold standard for BM diagnosis is the encephalic MRI, usually performed only after the evidence of neurological symptoms. Indeed, a retrospective study performed at the Memorial Sloan Kettering Cancer Center (MSKCC) extracted data from 355 real-life CM patients bearing BMs and found that 67% of them had neurological symptoms at BM onset [37]. Therefore, a universally accepted screening program, eventually involving encephalic MRI for high-risk patients, should be designed and diffused in clinical practice in order to detect BMs while patients are still asymptomatic and improve the percentage of asymptomatic patients, as already happens in RCTs.
Furthermore, we evaluated the eRT efficacy in improving outcomes across different subgroups of patients: symptomatic versus asymptomatic, IT-treated versus TT-treated, and LDH levels above versus at or below two times the ULN. The number of patients did not allow us to perform a subgroup analysis based on the different types of eRT. In any case, this evaluation would not have significantly influenced the results, considering that our aim was to explore the additive effect of eRT with systemic treatments and its role in controlling neurological symptoms, improving quality of life and allowing access to further lines of systemic treatment, thus improving mOS. Our data showed that eRT improved outcomes both in the symptomatic and asymptomatic groups as well as in patients treated with either TT or single-agent IT. Thus, eRT might play a key role in prolonging OS by stabilizing BM growth and delaying clinical deterioration. Based on these results, while waiting for prospective data concerning the efficacy of eRT with combined IT, eRT should play a part in the therapeutic strategy at least for symptomatic brain metastatic CM patients or those who are asymptomatic and excluded from combined IT due to clinical contraindications.
The last step of our work focused on the negative prognostic role of LDH levels higher than 2-fold the ULN at the time of CNS metastatic spread, a finding that parallels previously published data [38,39]. However, a relevant result of our study concerns the inefficacy of eRT in improving survival in patients with LDH > 2 times the ULN at BM onset. Thus, LDH values at or above double the ULN could represent the sign of an aggressive and active brain metastatic disease that does not benefit from eRT despite systemic treatments. On the other hand, the negative prognostic role of elevated LDH levels was confirmed only in patients treated with TT. Conversely, patients receiving single-agent IT did not experience a significant difference in terms of OS when stratified according to LDH levels. A putative explanation of this finding could rely on a greater efficacy of IT in CM patients with BMs and LDH levels > 2-fold the ULN, confirming recently published data regarding CM patients developing extracranial metastases [40]. The biological explanation of these findings relies on the role of LDH in downregulating the immune system through the production of elevated levels of its oncometabolite, lactate, which has been found to be associated with an increased number of metastatic sites and lower survival. Elevated levels of lactate induce an immune-suppressed microenvironment that sustains CM growth by promoting the expression of programmed cell death protein-1 (PD-1) and its ligand (PD-L1) on tumor cells [41,42]. Once these data are confirmed in larger, prospective cohorts of CM patients bearing BMs, our results could indicate that elevated LDH levels identify an aggressive subset of brain metastatic CM that should be oriented to IT-based therapy regardless of BRAF mutational status.
In conclusion, exploring the activity of the combined use of anti-CTLA4 and anti-PD1 agents in brain metastatic CM patients with LDH levels > 2-fold the ULN could represent an interesting strategy for the control of a severe complication that restrains survival in the majority of patients.
Conclusions
The present study demonstrated that neurological symptoms and high LDH values are negative prognostic factors in patients with CM developing BMs. In addition, preliminary results underlined the survival benefit due to eRT in all subgroups, although patients with LDH levels ≥ 2-fold the ULN did not benefit from eRT. These preliminary observations, therefore, suggest that IT may also have an active role in BMs showing elevated LDH levels. This may be at least partly explained by the immune-suppressive microenvironment sustained by lactate, the LDH oncometabolite. However, further prospective studies are needed to understand the effective role of eRT and IT in melanoma patients characterized by BMs and elevated LDH.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived for this study due to its retrospective and observational nature and in accordance with national regulations that strictly impose ethical review and approval only for those observational studies designed as prospective pharmacological observational ones (Official Gazette of the Italian Republic n. 42, 19 February 2022, decree 30 November 2021, art. 6 subparagraph 2).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request. | 2023-03-03T16:04:41.688Z | 2023-02-28T00:00:00.000 | {
"year": 2023,
"sha1": "a3c3006e4f293b685ca6e65574ce62eb7ae3a8ef",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/5/1542/pdf?version=1677745546",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9d1f4c1ade59ef6f407cc593635b77afcac0763d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4980943 | pes2o/s2orc | v3-fos-license | ISSR Fingerprinting to Ascertain the Genetic Relationship of Curcuma sp. of Tripura
Molecular fingerprints of four different species of Curcuma, viz., C. amada, C. caesia, C. longa and C. zedoaria, found in Tripura were developed using Inter Simple Sequence Repeats. Twenty ISSR primers generated 116 loci amplified in the range of 200-5000 bp, with an average of 5.8 alleles and 1.6 effective alleles per locus. The percentage of polymorphic bands was found to be 86.29, with an average of 5.15 per primer. Based on the UPGMA algorithm, these four species are placed in two different clusters, which validates the classification based on external and internal morphological characters. The polymorphic ISSR markers generated from this study will be useful for understanding the genetic relationship of different species of the genus Curcuma.
Introduction
The genus Curcuma, belonging to the family Zingiberaceae, comprises ca. 80 species and shows a widespread distribution from tropical Asia to Australia and the South Pacific region [1]. The highest diversity of Curcuma has been found in India and Thailand, and about 40 species are indigenous to India [2]. Different species of Curcuma have immense medicinal value and have been extensively used in indigenous systems of medicine [3]-[6]. It is now well documented that the position of the spike, presence of coma bract and the color of bract in Curcuma are the major distinctive traits for delineation of species [7]. However, variation in the position of the spikes and bract color has also been noted in some species of Curcuma [8]. The state of Tripura, situated in the sub-Himalayan region of North East India, is one of the hotspots of the Indo-Burma biodiversity region of the world [9] [10]. Previously, three species of Curcuma, viz., Curcuma amada, C. longa (= C. domestica) and C. zedoaria, were reported from Tripura [11]. In our recent survey, we have identified another species, C. caesia, from West Tripura. To study the evolutionary history of a species, knowledge of genetic variation is a prerequisite, and it is essential to characterize the plants genetically in order to have a sustainable conservation programme [12]. DNA-based molecular markers show differences in the nucleotide sequence of DNA and are now used as powerful tools in the fields of plant breeding, taxonomy, physiology and genetic engineering [13]. ISSR markers are technically simpler than other markers. These markers are mostly dominant, except a few which are codominant in nature. In this technique, an unlimited number of primers can be synthesized, and the advantage lies in the long primer length and stringent annealing temperature [14]. In higher plants, Inter Simple Sequence Repeat or ISSR markers are therefore frequently used because they are known to be abundant, very reproducible and highly polymorphic [14] [15]. Moreover, the ISSR-based molecular fingerprinting technique is a good alternative to AFLP when tested on Curcuma species [16] [17]. Till now, there is no report on the genetic relationship of Curcuma species grown in diverse habitats of Tripura. An attempt has, therefore, been undertaken for molecular characterization of four species of wild and cultivated Curcuma grown in the state of Tripura using ISSR markers.
Plant Material and DNA Extraction
Rhizomes of four species of Curcuma, viz., C. amada Roxb., C. longa L., C. zedoaria (Christm.) Roscoe and C. caesia Roxb., found in the wild state were collected from different geographical locations of Tripura (Table 1) and grown in the experimental garden of the Department of Botany, Tripura University, for experimental purposes. In addition to these, rhizomes of two populations of cultivated C. longa were also grown in the experimental garden for the present study. Total genomic DNA was extracted according to the manufacturer's protocol (DNeasy® Plant Mini Kit, Qiagen, part no. 69104). DNA concentration was determined using a Nanodrop 2000C spectrophotometer (Thermo Scientific, USA) and a qualitative check was performed on a 1.5% agarose gel.
ISSR Analysis
For the genetic diversity study of the four Curcuma species, 20 ISSR markers (Sigma Aldrich, India) were chosen (Table 2). PCR amplification was performed using a 25 µl mixture containing genomic DNA (30 ng/µl), 10 mM dNTPs (Qiagen), 25 mM MgCl2 (Sigma), 10× Taq buffer (Sigma), 10 µM primer and 2.5 units of Taq polymerase (Sigma). PCR amplification was carried out in a thermal cycler (Applied Biosystems, GeneAmp® PCR System 9700). PCR was performed with an initial denaturation at 94˚C for 4 minutes, followed by 44 cycles of 94˚C for 1 minute (denaturation), 50˚C for 1.30 minutes (primer annealing) and 72˚C (primer extension), with a final extension at 72˚C for 10 minutes. All amplification reactions were repeated at least two times for confirmation. The amplified products were visualized using 2% agarose gel electrophoresis and scanned through a gel documentation system.
Data Analysis
The amplified fragments obtained from the ISSR profiles were scored as binary data (1/0 for presence or absence) of each fragment. Only clear and reproducible bands were taken into account; the intensity of the bands was not considered. The numbers of polymorphic and monomorphic bands were determined for each primer in all species studied. Polymorphic Information Content (PIC) was computed using the formula PIC = 1 − Σ pi^2, where pi is the frequency of the i-th allele at a given locus [18], and the Marker Index (MI) was calculated [19]. The number of observed alleles, mean number of effective alleles [20], Nei's [21] gene diversity index (H) and Shannon index [22] were calculated using the POPGENE software [23]. The level of similarity between the species was established using Dice's coefficient [24]. Similarity coefficients were used to construct the dendrogram using the SAHN subroutine through NTSYS pc (Numerical Taxonomy System, version 2.21q) [25]. Further, Principal Coordinate Analysis (PCA) was performed with the STAND, CORR and EIGEN modules of NTSYS pc using Euclidean distances, with the help of the NTSYS pc 2.21q software.
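To make these marker statistics concrete, here is a minimal Python sketch computing per-locus PIC, Dice similarity and a UPGMA tree from a toy 0/1 band matrix. The band data below are invented for illustration only; NTSYS pc and POPGENE remain the tools actually used in the study.

```python
# Minimal sketch, assuming a binary band matrix (rows: accessions,
# columns: ISSR loci). The matrix below is toy data for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

bands = np.array([            # e.g., C. amada, C. caesia, C. longa, C. zedoaria
    [1, 0, 1, 1, 0, 1],
    [0, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 1],
], dtype=bool)

# PIC = 1 - sum(pi^2) per locus; for a dominant marker the two "alleles"
# are band presence and band absence.
p = bands.mean(axis=0)
pic = 1.0 - (p**2 + (1.0 - p)**2)
print("PIC per locus:", np.round(pic, 3))

# Dice similarity between two profiles: 2a / (2a + b + c), where a = shared
# bands and b, c = bands unique to each profile.
def dice_similarity(x, y):
    a = np.sum(x & y)
    b = np.sum(x & ~y)
    c = np.sum(~x & y)
    return 2 * a / (2 * a + b + c)

print("Dice(accession 2, accession 4):", dice_similarity(bands[1], bands[3]))

# UPGMA dendrogram = average linkage on Dice distances (1 - similarity)
tree = linkage(pdist(bands, metric="dice"), method="average")
```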
Results
The twenty ISSR primers used to characterize the genetic diversity among the species yielded 116 fragments, with an average of 5.8 alleles and 1.6 effective alleles per locus (Table 3). In the present observation, it was found that out of the total amplified products, 13 bands were monomorphic and 103 were polymorphic, and these were amplified in the range of 200-5000 bp. The maximum numbers of bands were recorded for HB 12, 825, UBC 873 and 807. However, the average number of polymorphic bands obtained per primer was 5.15, and the percentage of polymorphic bands was 86.29 (Table 4).
In the present study, the genetic relationship among the four species of Curcuma shows two clusters, expressed as a UPGMA dendrogram using the SAHN neighbor-joining tree (Figure 1). The coefficients on the X axis represent the similarity indices of the different species chosen for the study. Based on UPGMA clustering, the genotypes of C. zedoaria and C. caesia belong to one cluster and those of C. amada and C. longa to a separate cluster. Dice's coefficient showed that C. zedoaria and C. caesia were related to each other with a similarity value of 0.6379, whereas the similarity value between C. longa and C. amada was found to be 0.5593 (Table 5). PCA performed on the basis of the ISSR data shows that the first three coordinate components accounted for 38.54%, 23.67% and 18.72% of the variation (Figure 2).
Discussion
In Tripura, so far we have recorded four species of Curcuma, and they differ in morphological and anatomical characters to a certain extent [26]. A priori, the key to species identification in Curcuma was based on external and internal morphological characters, but relying on morphological characters alone in species delineation has its limitations. While the majority of the morphological characters of C. caesia and C. zedoaria are more or less similar, the flower color and the internal anatomy of the rhizome differ, and the cortical zone of the rhizome of C. caesia shows a bluish green color. C. longa is used mostly as an important spice and so is extensively cultivated throughout Tripura. However, C. longa was also found in the wild state but remains restricted to the higher altitudes of the Jampui hills of Tripura. Morphologically, C. longa and C. amada are almost similar, but their rhizomes differ in color and odor [11]. The rhizome of C. longa is deep orange-yellow in color and that of C. amada is pale yellow, having the aroma of mango. ISSR cluster analyses reveal the presence of two distinct clusters in the wild and cultivated Curcuma species studied; cluster I represents C. caesia and C. zedoaria and cluster II includes C. longa and C. amada, and the results thus obtained are not in full agreement with previous findings [6] [16] [17]. The genetic diversity of different species of Curcuma from the North Eastern region of India was also assessed [6] using ISSR fingerprinting, but the formation of an independent cluster of C. caesia alone, as was reported, could not be ascertained in our present study even after repeated experimental trials. The presence of C. caesia and C. zedoaria in the same cluster and their similarity indices indicate that they might have arisen from a common ancestor in spite of their diverse ecological habitats. PCA depicts the variability among the species of Curcuma, and three principal components with eigenvalues greater than 1 extracted a cumulative 80.83% of the variation. In all the taxa studied, there are sequence-specific profiles (Figure 3), and the dendrogram shows that the genome of each species is not exactly the same. The somatic chromosome numbers of C. amada (=42) and C. longa (=63) differ (unpublished) due to the difference in their ploidy level, but the difference in somatic chromosome number does not affect the similarity indices between the two species, as is evident from the experimental data. Of the two populations of cultivated C. longa, C. longa3 (population-II) is genetically closer to C. longa1 found in the wild state. This resemblance suggests that C. longa1 found in the wild state probably escaped earlier from the cultivated form. The genetic distance between C. longa2 and C. longa3 may be attributed to varietal distinction. Taken together, our findings support the taxonomic key to the identification of the taxon at species level.
Conclusion
The molecular profiling of four species of Curcuma validates the conventional taxonomic interpretation. The interspecific and intraspecific variation observed with respect to degree of polymorphism, number of observed alleles, number of effective alleles, Nei's gene diversity and Shannon's information index are all indicators ascertaining the genetic diversity of Curcuma species in Tripura. Thus, ISSR fingerprinting can be used not only as an effective parameter to assess the genetic relationship between species of Curcuma but also provides additional support for establishing the taxonomic position of a species.
Figure 1. Dendrogram representing the genetic variability of Curcuma sp. using Dice similarity coefficient.
Figure 2. Principal coordinate analysis (PCA) map for the species of Curcuma.
Table 1. Different species of Curcuma collected from different locations of Tripura.
Table 2. Total number of amplified fragments generated by PCR using ISSR primers.
Table 3. Degree of polymorphism and polymorphic information content for ISSR primers in four species of Curcuma.
Table 4. Results of polymorphic primers screening in four species of Curcuma. na = observed number of alleles; ne = effective number of alleles; h = Nei's gene diversity; I = Shannon's information index.
Table 5. Dice similarity coefficient among the species of Curcuma. | 2018-04-20T17:44:58.048Z | 2016-02-03T00:00:00.000 | {
"year": 2016,
"sha1": "14bed70e173ca6efcba64a7817695c0a301e591e",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=63306",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "14bed70e173ca6efcba64a7817695c0a301e591e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
255815929 | pes2o/s2orc | v3-fos-license | Erratum to: the link between adjacent codon pairs and mRNA stability
Erratum: After publication of this article [1], the authors noticed two errors: In Table 3B and D, the column labels should be "Frame 0", "Frame 1", and "Frame 2" rather than "Frame 0", "Frame 0", and "Frame 0" (Table 3). A corrected version of Table 3 is included with this Erratum. In the "Calculation of partial correlation coefficients" section in Methods, the expression of the Pearson's covariance matrix is incorrect. The denominator should be n − 1 instead of n. In the corrected version, the authors clarify the computational implementation used. Also, the authors now follow the convention where random variables are expressed in uppercase letters. A corrected version of this follows below (references included in the revised portion refer to the original article):
Erratum to: The link between adjacent codon pairs and mRNA stability
Yuriko Harigaya* and Roy Parker

After publication of this article [1], the authors noticed two errors: In Table 3B and D, the column labels should be "Frame 0", "Frame 1", and "Frame 2" rather than "Frame 0", "Frame 0", and "Frame 0" (Table 3).
A corrected version of Table 3 is included with this Erratum.
In the "Calculation of partial correlation coefficients" section in Methods, the expression of the Pearson's covariance matrix is incorrect. The denominator should be n − 1 instead of n. In the corrected version, the authors clarify the computational implementation used. Also, the authors now follow the convention where random variables are expressed in uppercase letters.
A corrected version of this follows below (references included in the revised portion are referring to the original article):
Calculation of partial correlation coefficients
To examine associations of the content of inhibitory codon pairs with various gene expression variables controlling for covariates, we first attempted to use multiple linear regression models with exclusion of outliers and logarithmic transformation of skewed variables. However, we found that the models failed to satisfy the assumption of residual homogeneity (see below). We therefore chose to use non-parametric methods throughout the study.
We computed Spearman's and Kendall's partial correlation coefficients as described previously [16]. Briefly, we let X be a p-dimensional random vector (X = [X_1 X_2 ⋯ X_p]^T) and c_ij be the covariance between two random variables X_i and X_j (1 ≤ i, j ≤ p). We denote the covariance matrix of X as C_X, the inverse covariance matrix as D_X, and the (i, j) element of D_X as d_ij. We then let X_S be a vector that contains all elements of X except X_i and X_j.
The partial correlation of X_i and X_j given the vector X_S is then obtained from the inverse covariance matrix as −d_ij / √(d_ii d_jj). (The formula was dropped from the extracted text; this is the standard expression used in [16].) The Spearman's and Kendall's covariance matrices were constructed as implemented in the cov() function in the R base package [43].
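To make the two computations concrete (the partial correlation from the inverse covariance matrix above, and the permutation P value p = (B + 1)/(N + 1) described in the next paragraph), here is a minimal Python sketch. It mirrors, under our assumptions, what R's cov() and ppcor::pcor() compute; it is not the authors' code, and the two-sided use of absolute correlations in the permutation test is our choice.

```python
# Minimal sketch, assuming a complete (n, p) data matrix with no missing values.
import numpy as np
from scipy.stats import rankdata, spearmanr

def spearman_partial_corr(X):
    """Spearman partial correlation matrix via the inverse covariance matrix:
    rank-transform columns, form C_X with denominator n - 1 (as in R's cov()),
    invert to get D_X, and apply r_ij.S = -d_ij / sqrt(d_ii * d_jj)."""
    ranks = np.apply_along_axis(rankdata, 0, X)      # column-wise ranks
    D = np.linalg.inv(np.cov(ranks, rowvar=False))   # inverse covariance D_X
    d = np.sqrt(np.diag(D))
    P = -D / np.outer(d, d)                          # -d_ij / sqrt(d_ii * d_jj)
    np.fill_diagonal(P, 1.0)
    return P

def permutation_pvalue(x, y, n_perm=10_000, seed=0):
    """Permutation P value (B + 1) / (N + 1): permute the predictor, recompute
    the correlation, and count how often it exceeds the observed value."""
    rng = np.random.default_rng(seed)
    observed = abs(spearmanr(x, y)[0])               # observed coefficient
    B = sum(abs(spearmanr(rng.permutation(x), y)[0]) >= observed
            for _ in range(n_perm))
    return (B + 1) / (n_perm + 1)
```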
We computed P values by previously described methods as implemented in the pcor() function in the R ppcor package [16], as well as by permutation tests. To obtain permutation P values, we randomly permuted the predictor variables and computed correlation coefficients. We repeated the procedure 10,000 times and computed a permutation P value as (B + 1)/(N + 1), where N is the number of permutations and B represents the number of events where the permutation correlation coefficient exceeds the empirically observed value.

Table 3. (A) Spearman's partial correlation coefficients controlled for GC content, tAI, dipeptide content, and coding length to assess an association between the fraction of hexanucleotide sequences corresponding to the inhibitory codon pairs in the 0, +1, and +2 frames and various gene expression variables. P values obtained according to Kim [16] and those based on permutation tests are shown. (B) Same as (A) but for Kendall's partial correlation coefficients. (C) Same as (A) but for the presence/absence of the hexanucleotide sequences. (D) Same as (B) but for the presence/absence of the hexanucleotide sequences. | 2017-09-09T05:31:45.278Z | 2017-09-08T00:00:00.000 | {
"year": 2017,
"sha1": "545be8c0c34061b247ff42262bb76ae0bc9032ad",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-017-4088-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "545be8c0c34061b247ff42262bb76ae0bc9032ad",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
70459170 | pes2o/s2orc | v3-fos-license | 6 The "ROC" Model: Psychiatric Evaluation, Stabilization and Restoration of Competency in a Jail Setting
Despite its well-meaning intentions, the movement toward deinstitutionalization has shifted more and more people with serious mental illness and co-occurring disorders from state hospitals to jails and prisons (Lamb and Weinberger, 2005; Human Rights Watch, 2003). There are now more than three times more seriously mentally ill persons in jails and prisons than in hospitals (Torrey, Kennard, Eslinger, Lamb and Pavle, 2010). The trend has intensified in recent years as public mental health resources, both at the state hospital level and at the local community level, continue to shrink. Even before the national recession of 2010 hit government agencies and forced them into profound and drastic cost-saving measures, reductions in public mental health services were already causing high numbers of people with severe and persistent mental illness to land in the criminal justice system. As early as 2007, Wortzel, Binswanger, Martinez, Filley & Anderson (2007) asserted that the systemic decline of public mental health resources had created a national crisis for persons judged Incompetent To Proceed (ITP) who are “log-jammed” in jails and prisons across the country. Calling it “the ITP crisis,” the Wortzel group decried the practice of jailing persons with psychotic disorders, often for long periods of time, without adequate psychiatric treatment because there are not enough forensic beds available in state hospital systems. “Hundreds of patients with severe mental illness deemed incompetent to proceed are languishing in jails around the nation, unable to access meaningful psychiatric care and not moving forward in the legal process as they await admission to grossly undersized and understaffed state hospitals... The combination of inadequate psychiatric care, the stress of incarceration and the long waits involved have yielded nightmarish results...” (Wortzel, et al., 2007, p. 357).
Introduction
Despite its well-meaning intentions, the movement toward deinstitutionalization has shifted more and more people with serious mental illness and co-occurring disorders from state hospitals to jails and prisons (Lamb and Weinberger, 2005; Human Rights Watch, 2003). There are now more than three times as many seriously mentally ill persons in jails and prisons as in hospitals (Torrey, Kennard, Eslinger, Lamb and Pavle, 2010). The trend has intensified in recent years as public mental health resources, both at the state hospital level and at the local community level, continue to shrink. Even before the national recession of 2010 hit government agencies and forced them into profound and drastic cost-saving measures, reductions in public mental health services were already causing high numbers of people with severe and persistent mental illness to land in the criminal justice system. As early as 2007, Wortzel, Binswanger, Martinez, Filley & Anderson (2007) asserted that the systemic decline of public mental health resources had created a national crisis for persons judged Incompetent To Proceed (ITP) who are "log-jammed" in jails and prisons across the country. Calling it "the ITP crisis," the Wortzel group decried the practice of jailing persons with psychotic disorders, often for long periods of time, without adequate psychiatric treatment because there are not enough forensic beds available in state hospital systems. "Hundreds of patients with severe mental illness deemed incompetent to proceed are languishing in jails around the nation, unable to access meaningful psychiatric care and not moving forward in the legal process as they await admission to grossly undersized and understaffed state hospitals… The combination of inadequate psychiatric care, the stress of incarceration and the long waits involved have yielded nightmarish results…" (Wortzel, et al., 2007, p. 357).
Alternative approaches
Budget cuts to state hospital systems and deficient community-based mental health resources will continue to shift the cost and services burden to local emergency rooms, county jails and law enforcement agencies. Efforts to address the problems of the ITP crisis have been varied and have shown mixed results. Assertive Community Treatment has been shown to be effective for releasing high-risk forensic patients (Jennings, 2009; Smith, Jennings & Cimino, 2010), but it is expensive and rarely available in most community mental health systems. "Outpatient commitment" and "court-to-community programs," used in combination with intensive case management, have been tried with some success to divert mentally ill defendants charged with less serious crimes (Gilbert, Moser, Van Dorn, Swanson, Wilder, Robbins, Keator, Steadman & Swartz, 2010; Loveland & Boyle, 2007; Swartz, Swanson, Kim & Petrila, 2006), but they cannot be used for those charged with violent and dangerous crimes. Housing programs and long-term residential services can help prevent recurrent relapses and reoffending, especially for homeless persons with mental illness (Miller, 2003; Trudel and Lesage, 2006), but these strategies cannot be exercised immediately to avert hospitalizations or detention. In particular, "mental health courts" have multiplied across the country as a way to divert mentally ill defendants and substance abusers away from incarceration and toward appropriate treatment (Redlich, Steadman, Clark & Swanson, 2006; Grudzinskas & Clayfield, 2005). Mental health courts entail a variety of interventions, including non-adversarial process, training judges in mental health and collaborative inter-agency teams (Wortzel, et al., 2007), but there is no clear model that can be applied across jurisdictions and states.

As jails and prisons have been forced to take responsibility for greater numbers of persons with mental illness, they have had to increase and expand whatever mental health services they can offer. In fact, the largest facilities that house psychiatric patients in the United States are not hospitals, but jails and prisons (Rich, Wakeman & Dickman, 2011; Torrey, et al., 2010). Adding more psychiatry time or mental health clinic hours is not enough when the jail environment itself is highly stressful and can exacerbate symptoms of mental illness. Large state correctional systems may have more resources than local jails to offer emergency psychiatry, intensive stabilization, addictions treatment and even hospital-level inpatient mental health units, but these increased behavioral health services are proportionate to the ever-growing numbers of inmates with serious and persistent mental illness entering corrections. More importantly, this does not address the need to identify, evaluate, treat and stabilize persons with severe mental illness when they are first arrested and detained, well before they are convicted and incarcerated in long-term state and federal prisons.

The Restoration Of Competency ("ROC") model is a new approach to the ITP crisis that can intervene at the earliest point of arrest and detention by delivering forensic psychiatric evaluations and treatment, intensive stabilization and restoration of competency in a local jail setting. The ROC model evolved from a pilot project in Virginia in the late 1990's and has been further developed into a viable alternative to the ITP crisis. It can significantly accelerate needed treatment for mentally ill defendants, cut the demand for costly State Hospital forensic beds, and directly assist local jails and law enforcement in better managing this specialized high-risk population, yielding major cost savings and improved services for all.
Advantages of the ROC model
By diverting state hospital referrals to an alternative short-term restoration program in a local jail, the ROC model can help eliminate waiting lists for state hospital forensic beds, decrease the length of time to restore someone to competency, and relieve local jails from the responsibility of holding mentally ill defendants without adequate mental health resources. It can cost significantly more for a jail to hold an inmate with serious mental illness than a non-mentally ill inmate. This does not include the added liability, cost and personnel strain of managing individuals whose disabilities render them vulnerable to suicide, violence, medical emergencies and trauma in the non-therapeutic setting of a jail and who therefore require much more intensive supervision and intervention. The amount of time waiting in jail for a competency evaluation and/or a state hospital bed can be significant. There is the time from initial arrest to the defense counsel's recognition of competence as an issue; time from recognition until the competence evaluation can be done; time to complete the evaluation; and time from the receipt of the evaluation report until the court adjudicates the issue (Christy, Otto, Finch, Ringhoff & Kimonis, 2010). These critical delays in gaining needed psychiatric treatment can exacerbate clinical symptoms and problem behaviors. By accelerating access to skilled forensic psychiatric evaluation and treatment in the jail, the ROC model can make clinical interventions at the earliest onset of illness, which reduces risk and makes it easier to stabilize the individual and restore and maintain competency. Moreover, prompt forensic examinations can differentiate the cases that can be resolved more quickly and will not require full hospitalization (Zapf & Roesch, 2011). In addition, the ROC model has the major advantage of facilitating access to local attorneys, the courts and family support. Individuals with mental illness can be evaluated, stabilized and restored to competency in their home community, eliminating the high cost of transporting patients to and from state hospitals and the courts. In a large and/or rural state, the distances can be enormous and expensive. Finally, there are major cost advantages to performing competency evaluation and restoration in a local jail setting. The cost of a forensic hospital bed is far higher than that of a jail bed, even a jail bed designated for mental health. For example, currently in Virginia, where this ROC model was developed, the average cost for a patient bed in the state's maximum security forensic hospital is $776 per day, whereas the average cost to house an inmate in the local Regional Jail is $70 per day (Commonwealth of Virginia, 2010). The challenge, of course, is how to provide an equivalent level of humane and effective psychiatric treatment in a jail or prison space that is not designed, equipped or staffed to provide a therapeutic environment. The following Table 1 summarizes the multiple advantages of the ROC model for state hospital systems, local jails, law enforcement and the persons served.
Transforming a jail pod into a restoration of competency "ROC" unit
Overcoming the jail environment: The main disadvantage of the ROC jail-based restoration model, and it is a major one, is that jails and prisons are simply not designed as mental health units. They are built for security, surveillance and control, not therapeutic calm and comfort. Jail buildings and units are typically austere, grim, noisy, crowded and uncomfortable. Even the few classrooms and program areas that are designated for more positive activities of education, recreation, leisure, visitation, or even treatment, are understandably limited in a jail - in number, size, appearance and amenities. Given the harsh physical plant realities of correctional facilities, the success of the jail-based ROC treatment model therefore depends on how well the available program space can be modified into a therapeutic environment. This entails a creative combination of (1) physical renovations to create a more pleasant and practical space for behavioral health treatment while mitigating the environmental risks that mentally ill offenders may use in attempts to inflict injury upon themselves or others; (2) application of behavioral engineering principles to create a positive environment; (3) specialized behavioral health training and supervision for correctional officers and unit staff; and (4) consistent, well-coordinated interventions by an integrated interdisciplinary team in delivering therapeutic services within the secure setting.
Table 1. Challenges and benefits of the ROC model.

For State Hospital Systems

Challenges:
• Increasing proportion of admissions to state hospitals are forensic patients.
• State hospital systems have insufficient beds to meet demand.
• Large and lengthy "waiting lists" for admission to state hospitals delay needed treatment.
• The need to transport and escort forensic patients over long distances causes costly logistical problems.
• Increased court pressure and administrative costs due to complications and delays in processing, treating and restoring patients.
• Litigation from advocacy agencies.

Benefits:
• Reduces the number of individuals waiting for competency evaluation and restoration services.
• Reduces length of stay for restoration through early intervention and targeted treatment.
• Eliminates the incentive for inmates to malinger by seeking a "vacation" from jail or prison.
• More convenient access for local courts, defense attorneys, prosecutors and law enforcement, which saves time and money and improves outcomes.
• Seamless transition from the ROC program helps maintain competence to stand trial.

For Local Jails, Emergency Rooms and Law Enforcement

Challenges:
• High numbers of mentally ill patients must wait in the non-treatment jail setting.
• The jail setting is not designed for treatment and jail personnel are not trained to manage mental illness.
• It costs much more to house mentally ill inmates than regular inmates.
• Symptoms and severity of mental illness can exacerbate without prompt psychiatric intervention and can further complicate and extend the time needed to restore competency.
• Higher risk of suicide, aggression, injury, trauma and litigation in the non-therapeutic jail setting.
• High costs of escort staff and long-distance transportation to and from state hospitals, courts and jails.
• Increased use of costly hospital emergency room visits to manage mental health crises in the community.
• Negative cycle of competency restoration, relapse in jail while awaiting trial and re-hospitalization.

Benefits:
• The local county saves money by reducing the time spent in jail by mentally ill inmates.
• The county jail can gain new revenue to cover the expenses already incurred by holding mentally ill inmates.
• Eliminates the time and cost of transporting patients to and from state hospitals and jails.
• Reduces disruptions to jail operations caused by psychotic and disordered behavior.
• Reduces risk of suicide, violence, injury and litigation.
• Reduces costly emergency room visits.
• More convenient access for local courts, defense attorneys, prosecutors, law enforcement and family support.
• On-site clinical support can potentially be extended to support mental health crises for other inmates.

Choice of facility: The ideal site for the ROC model is a jail that has many Incompetent to Stand Trial (IST) or Incompetent to Proceed (ITP) defendants, who are either waiting for admission to the state hospital for evaluation and restoration and/or defendants who have been restored and returned from the state hospital to await court proceedings. Based on the available space in the jail, the ROC program requires about 20 beds to be cost effective, but it can be flexed to accommodate a larger capacity of up to 40 or more.
Program space requirements: The ROC provider must work collaboratively with the local Sheriff or jail authorities to assess and configure the pod, unit or area within the jail that can separately house the mentally ill inmates (forensic patients) and provide the primary program space for delivering the restoration of competency services. The main need is to separate the mentally ill inmates from the general population and establish an area that is sufficiently quiet, clean, orderly and safe to serve as the therapeutic environment. As illustrated in the case study below, many activities can be held in the common area of the jail pod, but other cells or multipurpose rooms in the unit or jail can be adapted, if available, into clinician offices, exam rooms and group rooms. For recreation, the ROC patients should have scheduled access to a gym, recreation room or exercise yard separate from the general inmate population.

Specially trained security staff: The ROC unit should have its own dedicated staff of specially trained security officers, who are separate from the traditional correctional officers working in the rest of the jail. The ROC provider must work closely with the jail leadership to select, train and coordinate the work of security officers who will be assigned to the ROC mental health unit. Candidates should be carefully interviewed and evaluated to determine if they are suited to using a very different approach to managing and interacting with inmates. They should demonstrate values, attitudes and behavior that will be congruent with the program's therapeutic orientation. They will be trained in the recovery model and the use of positive behavioral techniques and will continually interact with the inmate/patients and clinical staff alike. They are expected to play an active and meaningful role in maintaining the therapeutic milieu. A designated ROC Deputy is also recommended to supervise the other security officers on the ROC unit, serve as an intermediary with jail leadership, and directly participate as a member of the interdisciplinary treatment team.

Interdisciplinary treatment team: The ROC treatment team is interdisciplinary like that of a traditional forensic psychiatric unit, typically including a forensic psychiatrist, forensic psychologist, psychiatric nurse, social worker, rehabilitation therapist and a clerk to coordinate scheduling, court dates, transports and forensic reports. A larger ROC program would have a larger team of professionals. The direct care staff are security officers (see above), who are dually trained in security and treatment functions.

Approach to competency restoration: The ROC program uses a recovery model that focuses on individual strengths and targets abilities that are related to competency, including remediation of deficits and alleviation of acute symptoms. The primary goal for most IST patients is to resolve the psychosis, when present, to enable the patient to regain general thinking abilities. The second goal is to educate the patient in court process such that he is able to cooperate with his counsel in mounting a defense. If there is a failure to achieve either of these goals, the third goal is to compile documentation to credibly opine that the patient is unrestorable to competency. The ROC team combines the proactive use of psychiatric medications, motivation to participate in rehabilitative activities, and multimodal cognitive, social and physical activities that address competency in a holistic fashion. This includes the essential component of providing individual tutorials in competency issues by a psychologist. Some treatment modules/groups can be offered at two cognitive levels to better match higher and lower levels of functioning and understanding. The ROC model also avoids the problem of involuntary psychiatric medication by establishing and delivering incentives that result in voluntary agreement to medication.

Motivation using a milieu management system: One of the strongest ways to motivate treatment and medication compliance is the use of a milieu management system that rewards meaningful participation in treatment and positive behaviors with points or privileges, such as points to "buy" various canteen items. It is better to deliver such rewards frequently and at the time of the positive behavior rather than accumulating points over a full day. By breaking the day into short half-hour periods during which one or two points can be gained, patients are better able to comprehend expectations, consequences and progress toward desired goals. For example, if the patient is expected to attend a restoration group at 10 am, he gets one point if he attends and none if he doesn't. But he can earn two points if he exerts earnest efforts to learn the material.

Admission/assessment and treatment planning: Treatment begins with the intake assessment. The clinical team evaluates the person's psychological functioning, suicide and behavioral risk, current level of trial competency, and likelihood of malingering. A standard battery of psychological tests is used to evaluate cognitive abilities, social and psychological functioning, psychiatric symptoms and potential malingering. As needed, the ROC psychologist has other tests/screenings available for specific targeted areas of deficit. Assessment continues through the course of the admission to measure response to treatment and identify new problems to target for restoration of competency. A measure such as the self-developed Competence-related Abilities Rating Scale (CARS) can be used to monitor the individual's progress (Hazelwood & Rice, 2011). Based on the assessments, the treatment plan is individualized and geared toward one of two curriculums for lower and higher functioning patients. But treatment planning continues to be flexible and vigorous. It is common for the treatment team to discuss the treatment plan informally on a daily basis and to formally discuss treatment issues at least once a week.

Rehabilitative services and coordination of medical care: Individuals in the program typically meet with a treatment professional one-on-one about issues related to regaining their mental health, or competency issues, at least twice daily, and are engaged in 3.5 to 5.5 hours of group-based psychosocial rehabilitative activities each day depending on the individual's current capacities. (Experience showed that the lower functioning patients could not tolerate more than 3 to 4 hours of focused work per day.) For the most part, the clinical professionals can largely work during traditional weekday business hours, but evening and weekend programming is important for maintaining the therapeutic milieu. The clinical team can maintain on-call support after hours and, if necessary, come into the jail to evaluate and assist with a psychiatric crisis. Treatment activities are structured and delivered across four domains: restoration of competency, mental illness and medication management, mental/social stimulation, and physical/social stimulation. Basic residential and health care, including all medical care and medications, can be provided on-site through a service agreement with the Sheriff/jail to utilize its existing pharmacy, medical records and medical service delivery system.

Discharge planning: Discharge planning begins at the time of admission. The ROC establishes a link with the designated mental health professional at the referring jail to discuss the case and provide aftercare information that will assist the jail in managing the inmate/patient upon return. Information may include continuation of medications based on those available in the jail's formulary; use of resources at the jail to help with behavior management (e.g., available mental health cells, paraprofessional assistance, etc.); and recommended protocols for managing the individual, particularly someone who might use malingering for secondary gain (e.g., restrictions on personal property, defined triggers for acting-out behaviors, etc.).

Performance measures: The ROC model is organized to track multiple measures of efficiency, effectiveness, access to care, reduction in risk, and consumer satisfaction. Key performance measures can include the timeliness and results of evaluations, length of stay to achieve restoration, diagnostic and demographic data, hours of service by type and clinician, interventions, timeliness of court reports, customer satisfaction (including jail personnel, local law enforcement, courts, defense and prosecuting attorneys, state hospitals, patients and patient families, advocates and other stakeholders), recidivism and more.
ROC Case Study: The Liberty Forensic Unit at Riverside Jail
The pilot program: In 1997, Central State Hospital in Petersburg, Virginia needed to renovate its aging forensic units to accommodate a growing state-wide demand for forensic beds. The Department of Mental Health, Mental Retardation and Substance Abuse Services devised a bold plan to temporarily create a licensed forensic psychiatric hospital unit within the newly constructed Riverside Regional Jail in nearby Prince George County. A private company called Liberty Healthcare Corporation was selected to implement the pilot project. In just four weeks, the jail pod was transformed into an inpatient psychiatric unit with a complete staff of forensic clinicians, medical personnel, security and direct care staff and received initial state licensure as an inpatient behavioral health care facility and subsequent JCAHO certification as an inpatient psychiatric hospital unit. The unit then functioned as the acute, male admission unit for the state's maximum security forensic hospital.

Minimal renovation required: The first challenge was to modify the two-level jail pod into an acute inpatient psychiatric unit without impacting its correctional functionality. This was achieved with very minimal renovation. Of the 48 single-occupancy cells within the pod, 35 were simply converted into individual patient bedrooms using the original bed, toilet, sink and dresser/desk. Beds were removed from cells in one quadrant of the pod to create ten staff offices, one treatment team room and two behavior stabilization rooms (i.e., quiet/seclusion rooms). Brighter colored paint replaced the original institutional gray. A non-secure page-fence was added to the mezzanine walkway to prevent anyone from falling or jumping. The creative use of behavioral engineering averted the need for other renovations. As part of the behavior management system, the mezzanine-level bedrooms were designated for patients who had earned higher levels of responsibility and privilege in the treatment program. Also, patient movement from the floor to the mezzanine level was restricted to the central ramp, while the stairs on either side were restricted for staff use only. Otherwise patients were free to move about the unit. Boundary lines were marked on the floor using colored tape to delineate the few specific areas where patients were not allowed to travel without permission, such as the medical records room and the staff offices.

Use of space for treatment and activities: The pod included one small conference room that could be used for treatment groups, competency groups and other therapeutic activities. Certain subareas of the common area could also be used for community meetings, socialization and group activities at designated hours of the day, such as a "Current Events" discussion group or to watch a psychoeducational videotape or TV program. For recreational activities, the patients could use an enclosed patio/basketball court and enjoyed exclusive use of the prison's gymnasium at scheduled hours every day, separate from the general inmate population.

Restoration to competency: Efforts to restore a defendant to competency to stand trial primarily consist of medications to remediate active symptoms of mental illness, when present, and group and individual education about court and criminal justice processes, with correlative documentation of response to these efforts at education. Group-based education included mock court run-throughs in which every patient took a turn at playing the various roles in court. Individual tutorials in court procedure were provided by the unit psychologists to move the patient more quickly toward competency and a defensible opinion for the court (when possible), but also helped document the thorough efforts made by ROC for cases that concluded in an opinion of unrestorability. Individual forensic evaluations, psychological testing, clinical interviews and counseling could be conducted in one of the single rooms or the small group room. One-to-one sessions were frequent because the psychologist conducted individual competency tutorials with most patients and each patient would meet regularly with his designated primary therapist. The host jail provided housekeeping, food services, and laundry. The ROC unit provided its own primary medical care and pharmacy and would refer serious and emergent medical issues to the state hospital infirmary or local civil medical hospitals.
Team-based interventions and milieu management:
The ROC program was highly proactive and preventative. Great emphasis was placed on maintaining a therapeutic environment characterized by calm, quiet, safety, predictability and interpersonal respect. A vigorous schedule of therapeutic activities helped to prevent boredom and provided opportunities for positive interactions. The key, however, was the use of intensive team-based staff supervision. The security officers/direct care personnel were trained to be mobile, engaged observers who could promptly identify and respond to precursors of disruptive behavior on the unit. The goal was to intervene gently as a team at the earliest point of concern, well before the patient might escalate into a full-blown episode of disruption and/or violence that could quickly undermine the vital climate of calm and safety for the rest of the unit. When disturbances occurred, as expected with an inmate population that was acutely ill and volatile, the ROC staff were trained to quickly but quietly converge on the scene as a team. This was accomplished with subtle cues and nonverbal communication between staff, without the need for rushing movements, loud verbal commands or calls for emergency assistance. Effective prevention and early intervention had the tremendous advantages of reducing the need for seclusion/restraint as well as lowering the risk of trauma and injury to patients and staff alike (see outcomes below). As a team, the staff continually reviewed the therapeutic environment and monitored patient behavior. This teamwork extended across working shifts: problematic patient behaviors occurring on one shift were not allowed to carry over onto the next. When new risk factors were identified for patients, the team developed strategies to address individual needs. For example, patients themselves were taught and encouraged to use "time out" sessions on a voluntary basis; they understood they could go to a special area with close staff support if they were beginning to feel agitated or losing personal control. For all these reasons, the use of seclusion and restraint was minimal. When necessary, the team used the same calm efficiency in employing physical intervention techniques designed to preclude trauma to patients. In fact, the Local Human Rights Commission commended the unit for creating and implementing a Protocol for Recurrently Aggressive Patients because it introduced a less restrictive measure than seclusion and restraint while enhancing the general safety of the unit.
Forensic categories served:
The Liberty Forensic Unit at Riverside Regional Jail provided three basic categories of forensic psychiatric service:
• The "Evaluation" category comprised patients referred specifically for forensic evaluations, including pre-sentence evaluations, Competency to Stand Trial (CST) evaluations, Mental Status at the time of Offense (MSO) evaluations and combined CST/MSO evaluations.
• The "Incompetent to Stand Trial" (IST) category comprised patients admitted for the purpose of restoring them to competency so that they could proceed with the judicial process.
• The "Temporary Detention Order" (TDO) category comprised pre-sentence and pre-trial jail transfers in need of acute inpatient psychiatric treatment to stabilize them and enable them to be returned to and maintained in the jail setting. Note: The unit received acute referrals from dozens of jails across the Commonwealth.

Volume of forensic services provided by type: The following chart summarizes the volume of patients served by forensic category over the history of the program's operation. It also shows how the proportion of patients requiring IST, TDO and Evaluation services shifted from year to year. In particular, the primary focus of the program shifted from the provision of acute psychiatric stabilization (TDO) in the first two years to the restoration of competency in the last two years.

Seclusion and restraint rates: The LFU maintained very low rates of seclusion and restraint throughout its five years of operation. Seclusion was almost never used on the unit and was not employed at all in the final year of operation. Using data from the NASMHPD Research Institute for comparison, one study compared the number of restraint hours used in the LFU against the national average for forensic psychiatric units over the same period. Despite the high volatility and acuity of the forensic patients served, use of restraint on the LFU was typically less than half of the national average in the same year.
Number of Patients Served by Forensic Category
Restraint Hours per 1,000 Inpatient Hours: Liberty Forensic Unit vs. National Forensic Average

Customer satisfaction surveys were given to referring jails, community mental health centers (called CSBs), courts, attorneys and other entities being served. Results reflected the exceptional forensic services, high quality treatment and the collaborative responsiveness of the treatment team. 96% of the CSBs affirmed that LFU staff contacted them within one week of admission and provided regular clinical updates on the status of the patients. The clinical and treatment follow-up information provided by the LFU was also highly valued by both local jail staff and CSB staff: 90% and 87%, respectively, indicated that they were better able to manage their patients following treatment at the LFU. 96% of the CSB staff were better able to perform service linkages based on the information provided by the LFU. 87% of the referring jails affirmed that they were able to participate in both treatment and discharge planning for their patients, and 93% acknowledged that the LFU treatment had been helpful. 92% of the referring entities received the discharge plan in a timely fashion, 97% acknowledged that aftercare recommendations were helpful, and 97% received some kind of follow-up support from the LFU team. 92% also affirmed that the recommended medication regimens at discharge remained unchanged for the inmate/patients served. Commonwealth attorneys and defense attorneys were also satisfied with the quality of services received from the LFU. Whether on the side of the defense or the prosecution, the attorneys were nearly unanimous in their satisfaction with the clarity, utility and timeliness of the forensic reports received. Likewise, all but one attorney were satisfied that they could readily communicate with the ROC unit about their patients and that their patients had benefited from treatment at the LFU.
Conclusion
At a time when state hospital and community mental health resources are increasingly limited by critical financial realities, more and more people with severe and persistent mental illness and co-occurring disorders are becoming involved in the criminal justice system. In turn, the responsibility of caring for the mentally ill has shifted to the jails and prisons of America. One of the major problem areas is the IST crisis, in which inmates with mental illness are subjected to extended stays in jails awaiting competency evaluation and restoration. The ROC model is a cost-effective, clinically effective and more humane response to this common problem. It calls for the provision of intensive psychiatric stabilization, forensic evaluation, and restoration and maintenance of competency in the local jail. Despite the apparently aversive physical constraints of most jails and prisons, the ROC model shows that mental health providers can transform a jail pod into a true mental health facility with a remarkably therapeutic milieu. By combining an effective behavior management system, a lively treatment schedule, and some simple environmental modifications, such as marking "boundary lines" on the floor, a well-trained team of clinicians and direct care/security personnel can maintain a climate of safety, predictability and respect.

The ROC model can accelerate needed treatment and restoration: over a five-year period, the LFU discharged forensic evaluation cases in an average of 21 days and provided psychiatric stabilization to return inmate patients to their referring jails in an average of 32 days. The ROC program achieved an overall competency restoration rate of 83% while restoring full competency in an average of 77 days. Notably, in its final year and a half of operation, the ROC program was restoring competency in an average of just 69 days.

Customer satisfaction: The Liberty Forensic Unit at Riverside (LFU) was widely respected for the consistent delivery of excellent psychiatric and forensic services. It received formal commendations from the state chapter of the National Alliance of the Mentally Ill and the Local Human Rights Committee, and frequent unsolicited praise from patients, patient families, judges, state and defense attorneys, local jails, Community Service Boards and human rights advocates.
Table 1. Advantages of the ROC Model

Some of the key ingredients for setting up a ROC program in a local jail include the following: | 2017-08-28T02:06:47.292Z | 2012-01-13T00:00:00.000 | {
"year": 2012,
"sha1": "09f9e0ba3ebc7fa513cc1912273473e7a4078209",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/25947",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c3414c199bac5ff5c332ac216f45dffc7b9cdbcb",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234033951 | pes2o/s2orc | v3-fos-license | A review of MOFs and its derivatives for lithium ion battery anodes
With the development of electric vehicles and clean energy, the demand for lithium-ion batteries (LIBs) has increased significantly in recent decades. As traditional anode materials can only just meet the requirements of current consumer electronics, metal-organic framework (MOF) materials and their derivatives have attracted great attention as potential substitutes with advanced performance. Pristine MOF materials show considerable development potential as high-energy-density anodes. MOF-derived materials, including porous carbon materials, metal oxides and composite materials, can exhibit improved electrochemical behaviour and better stability. This article introduces pristine MOFs and MOF-derived materials as anodes in lithium-ion batteries, as well as the modification methods that have been widely studied at the current stage. Finally, we discuss the future development trends of the various MOF materials.
Introduction
The lithium-ion battery (LIB) is a very promising energy storage system, widely used in phones, computers, and other portable electronic devices. It is also an important power source for electric vehicles and for the emerging smart grids of the future. The anode, an indispensable part of the battery system, is responsible for energy storage by storing as many lithium ions as possible. However, the theoretical specific capacity of today's commercial graphite anode is only 372 mAh/g, which cannot satisfy the requirements of the next generation of high-energy LIBs [1]. Thus, it is urgent to develop new anode active materials with higher energy density, good rate performance and low price.
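As a sanity check on the 372 mAh/g figure, the theoretical gravimetric capacity of an intercalation host follows from Faraday's law, C = nF/(3.6 M), where n is the number of electrons transferred per formula unit and M is the molar mass in g/mol. The short Python sketch below is an illustrative aside (not from the original article); it reproduces the graphite value for the fully lithiated stoichiometry LiC6.

```python
# Theoretical specific capacity from Faraday's law: C = n * F / (3.6 * M), in mAh/g
F = 96485.0  # Faraday constant, C/mol

def specific_capacity(n_electrons: float, molar_mass: float) -> float:
    """Capacity in mAh/g; the factor 3.6 converts C/g to mAh/g."""
    return n_electrons * F / (3.6 * molar_mass)

# Graphite stores one Li (one electron) per C6 unit: LiC6
print(round(specific_capacity(1, 6 * 12.011)))  # -> 372
```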
The concept of MOFs (metal-organic frameworks) was first proposed by Yaghi [2] in 1995; a MOF is defined as a porous material with a periodic network formed by the self-assembly of inorganic metal centres and bridging organic ligands [3]. Unlike traditional graphite and alloy materials, MOFs have the remarkable characteristics of high specific surface area, high porosity, good thermal stability and an ordered crystal structure. As materials receiving tremendous attention, they have been widely used in gas storage and separation, catalysis, sensing, luminescence and drug delivery [4]. Recently, MOFs and their derivatives have gradually been applied to energy storage, and they are considered potential anode materials for high-performance LIBs due to their high specific capacity and long cycle life.
This article introduces pristine MOF anodes, MOF-derived anodes, and the common modification methods for both that have been widely studied at the current stage.
Pristine MOFs material
Unlike graphite, a commercial LIB anode material with a layered structure, pristine MOF materials have diverse structural frameworks composed of metal centres bridged by organic ligands. They therefore exhibit an adjustable porous structure and a large specific surface area that can provide more electrochemically active sites. There are two main mechanisms by which pristine MOF materials store lithium ions: intercalation/de-intercalation and conversion reactions [5].
Thanks to the regular and rich pore structure of MOFs, a large number of lithium ions can be quickly transferred and intercalated into, or de-intercalated from, the pores without causing structural changes of the framework; this is reversible intercalation/de-intercalation. MOFs thus show the potential for large reversible capacity and good rate performance. The conversion reaction mechanism, by contrast, involves changes of valence. Redox reactions occur during charging and discharging to transfer charge, and the metal ions or metal clusters in MOFs can act as the redox-active species in this process. Generally speaking, the larger the accessible valence range of the material, the more electrons can be transferred; this is one reason why MOFs can have a higher theoretical specific capacity than graphite [6]. Moreover, the elemental metal generated by the conversion reaction can store additional lithium ions through alloying reactions, which can further increase the specific capacity.
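To make the valence argument concrete, the same Faraday-law formula gives the theoretical conversion capacities of common transition metal oxides, assuming full conversion to the metal (e.g., Co3O4 + 8Li+ + 8e- -> 3Co + 4Li2O). The sketch below is ours rather than the review's; its outputs match the theoretical values quoted later in this article (890 mAh/g for Co3O4 and 718 mAh/g for NiO).

```python
F = 96485.0  # Faraday constant, C/mol

def capacity(n: float, M: float) -> float:
    # Theoretical capacity in mAh/g for n electrons per formula unit of molar mass M (g/mol)
    return n * F / (3.6 * M)

# Full conversion: Co3O4 + 8 Li+ + 8 e- -> 3 Co + 4 Li2O   (8 electrons)
print(round(capacity(8, 3 * 58.933 + 4 * 15.999)))  # -> 890
# Full conversion: NiO + 2 Li+ + 2 e- -> Ni + Li2O         (2 electrons)
print(round(capacity(2, 58.693 + 15.999)))          # -> 718
```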
In 2006, Li and his collaborators [7] first attempted to prepare Zn4O(1,3,5-benzenetribenzoate)2 (named MOF-177) by a solvothermal synthesis method and used it as the anode of a LIB. Although the specific capacity of the material was low and its cycling stability poor, it opened up many possibilities for MOFs as LIB anode materials. As expected, the following decades witnessed dramatic development of pristine MOFs: to date, more than 20,000 MOFs have been discovered, and more and more of them have been shown to perform outstandingly as anode materials [8]. Although MOF materials have advantages in pore structure and high specific capacity compared with traditional anode materials, their inherently low conductivity and high irreversible capacity loss pose great challenges for their application as high-performance anode active materials.
To address the shortcomings of pristine MOFs, various effective modification methods have been developed. Among them, the selection of suitable ligands, bifunctionalization [9], element doping, and compounding with conductive materials are favoured by researchers.
For MOF materials containing unsaturated bonds in their organic ligands, choosing a suitable ligand can increase the specific capacity through a reversible lithium storage process in which the unsaturated bonds open and then re-form during the charge-discharge cycle. For example, in 2016, Cheng et al. [10] chose an aromatic ligand to synthesize [Co1.5L(H2O)4]n with a one-dimensional chain structure and used it as an anode active material for LIBs. After 50 cycles at 50 mA/g, the reversible specific capacity was 431 mAh/g, and the corresponding coulombic efficiency remained constant at 98.3%. Further research on the electrochemical redox reaction revealed a mechanism in which both the metal centres and the organic ligands simultaneously participate in lithium storage, explaining the high specific capacity. Similarly, Teng et al. [11] synthesized four MOFs with different pillared-layer structures and rich Li-binding sites; three of them exhibited high specific capacities (about 600 mAh/g at 100 mA/g) with remarkable rate and cycling performance. It was found that these MOFs allowed high Li insertion only through their aromatic carboxylate ligands and pillared-layer structure, suggesting that the metal centres were not involved in the lithiation process and had no significant influence on lithium-ion storage in those materials.
In addition, under complex working conditions like the electrochemical environment of LIBs, bifunctional MOF materials (BMOFs) are favoured for their better stability and performance. Usually, the rich inner pores of MOFs can be functionalized by chemical grafting, and the modified frameworks can be obtained by post-synthetic methods. In 2015, Chen Liang and his collaborators [12] designed and synthesized a new BMOF, Zn(IM)1.5(abIM)0.5, with both hydrophobic and polar functionalities. The as-synthesized BMOF showed remarkable thermal and chemical stability in experimental tests, and it delivered a reversible Li storage capacity of 190 mAh/g at a current density of 100 mA/g after 200 cycles (Fig. 1). Although more and more pristine MOFs with different topologies or compositions have been reported as anode materials for LIBs, there are few reports on improving the conductivity of the material itself. Therefore, beyond specific capacity and cycling performance, exploring pristine MOF topologies with faster charge transfer will be one of the future research directions.
MOFs derived materials
MOF-derived materials are a type of porous nanostructured material generated by heat treatment or chemical treatment using MOFs as sacrificial templates [13]. The derived material can, to some extent, inherit the characteristics of the MOFs themselves, such as high specific surface area, large pore volume and a special framework structure. Because of this, MOF-derived materials have special properties and novel functions in many fields. To date, many MOF-derived anode materials have been reported, involving porous carbon materials, metal oxides, and their composites. All of them have demonstrated impressive electrochemical behaviour in LIBs.
Porous carbon material
Various traditional carbon materials have been successfully synthesized and used in anodes on account of their abundant reserves and good chemical stability. However, because their energy density and power density are difficult to improve, their electrochemical performance is not good enough for future demands. In recent years, porous carbon materials derived from MOFs have broken this impasse. When MOFs are pyrolyzed at high temperature in an inert atmosphere, the carbon materials generated are endowed with nano-cavities and open access for small molecules [14]. To a large extent, MOF-derived porous carbon materials retain the characteristics of the precursor: high specific surface area, uniform pore size distribution and rich active sites. These advantages give MOF-derived carbon materials the potential to achieve larger lithium-ion storage and faster charge transfer.
As study deepened, researchers found that porous carbon materials derived from MOFs by heat treatment also have some shortcomings and limitations, mainly due to the poor thermal stability of the MOFs themselves. Even if the heat-treatment process is controlled at low temperature, the MOF-derived carbon material will still collapse to a certain extent [15]. These collapses impede the insertion of lithium ions during charging and result in poor capacity.
Therefore, to obtain carbon materials that retain as much of the MOF morphology as possible, element doping has become one of the most effective means of enhancing the stability of the skeleton. In the preparation of MOF-derived porous carbon, heteroatoms such as boron, nitrogen and sulfur can be doped into the carbon skeleton by selecting appropriate organic ligands, which greatly affects the physical properties. Among these dopants, nitrogen is the best known. On the one hand, the introduction of nitrogen can increase the charge density of the material, which promotes charge and ion transfer and enhances the electrical conductivity [16]. On the other hand, nitrogen endows the product with plenty of nanopores, providing more space for the storage of lithium ions. Zheng et al. [17] reported nitrogen-doped graphene particle analogues obtained by direct pyrolysis of a nitrogen-containing MOF under a nitrogen atmosphere. Used as an electrode material in lithium-ion batteries, these particles had a specific capacity of 2132 mAh/g after 50 cycles at 100 mA/g (Fig. 2). This excellent electrochemical performance is attributed to nitrogen doping within the hexagonal lattice and at the edges, resulting in a high nitrogen content of 17.72% and a large specific surface area of 634.6 m2/g.

Figure 2. a) Cycling performance and corresponding Coulombic efficiency at 100 mA/g. b) Rate performance at various currents from 100 mA/g to 1600 mA/g.
It is worth mentioning that hybrid pores, obtained by adjusting treatment parameters, also affect the performance of lithium-ion batteries. Chang et al. [18] obtained MOF-525 crystals with different particle sizes by adjusting the concentration of the regulator, and then annealed them at 800 °C to obtain a series of nanoporous carbons with different ratios of micropores to mesopores. In voltammetric cycling tests, the nanoporous carbon with a particle size of 185 nm and an optimized micropore-to-mesopore ratio showed the best electrochemical performance. This remarkable performance resulted from the hierarchical micro/mesoporous structure, which reduces the ion diffusion resistance and increases the contact surface area, with a positive effect on lithium storage. Similarly, by vacuum carbonization of a zinc-based MOF at 1000 °C, Li et al. [19] synthesized a highly porous pure carbon material consisting of macropores, mesopores, subnanopores and closed pores, which showed a remarkable lithium storage capacity of 2458 mAh/g at a current of 74 mA/g and favourable high-rate performance.
Porous carbon materials made using MOFs as sacrificial templates show promising performance as anode materials in LIBs, but an improper carbonization temperature will affect the degree of graphitization of the carbon components and, in turn, the electrical conductivity of the obtained material. The key problem of accurately controlling the carbonization process is therefore bound to attract much more attention from researchers in the future.
MOFs-derived metal oxides
Transition metal oxides (Fe2O3, Co3O4, CuO, TiO2, etc.), which can store lithium ions through the conversion reaction mechanism, have been reported as environmentally friendly electrode materials with high theoretical specific capacities. However, they encounter a bottleneck in practical applications. During charging and discharging, transition metal oxides undergo huge volume changes and serious particle aggregation, which cause severe pulverization and cracking of the electrode, resulting in rapid capacity degradation and poor rate performance. With the use of MOFs as sacrificial templates in calcination, a new type of metal oxide with controlled structure and composition can be obtained to solve this problem. Compared with other, non-porous metal oxides, MOF-derived metal oxides retain the structural characteristics of the precursor and thereby offer the following advantages as electrode materials: (1) controllable particle size and morphology; (2) high porosity and a larger active surface, which improve electrolyte penetration and further improve the utilization of active sites [20]; (3) a continuous pore structure, which shortens the diffusion distance of ions and charges and benefits the reaction kinetics, achieving better rate performance.
To further strengthen the advantages of MOF-derived metal oxides as anodes, several modification strategies have been developed, including bimetallic oxides, the building of smart structures, and nitrogen doping.
Among them, a bimetallic oxide contains two kinds of metal cations in one oxide, which is different from a physical mixture of two metal oxides. The synergistic effect of the two active metals reduces the activation energy of electron transfer, thereby improving conductivity. Besides, a bimetallic oxide can also shorten the electron transmission path and ensure the stability of the electrode material under high current [21]. Wang and co-workers [22] prepared Co/Ni-MOF nanorods with uniform distribution and high structural integrity via a facile one-step microwave-assisted solvothermal method, and then obtained a Co-Ni-O bimetallic oxide through heat treatment. The initial discharge and charge capacities of this material were 1737 mAh/g and 1189 mAh/g, respectively. At a low current of 100 mA/g, it delivered a reversible capacity of 1410 mAh/g after 200 repetitive cycles, significantly larger than the theoretical capacities of the corresponding single metal oxides (890 mAh/g for Co3O4 and 718 mAh/g for NiO). Moreover, the porous ZnO/ZnCo2O4 nanosheets obtained by Xu et al. [23] in a single annealing step from a Zn-Co-MOF precursor also proved the advantages of bimetallic oxides: used as an anode in LIBs, they maintained a reversible capacity of 1016 mAh/g after 250 cycles at 2 A/g, and even at a high current density of 10 A/g the capacity exceeded 600 mAh/g (Fig. 3). These outstanding performances have been mainly attributed to the synergistic effect, which gives the electrode materials higher conductivity and better electrochemical reactivity.

Building a smart structure is another common strategy. Converting MOFs into metal oxides is often accompanied by a large volume shrinkage, which leads to the appearance of internal cavities; meanwhile, the gas generated during heat treatment creates pores in the shell, increasing the porosity. Reasonable use of this feature can produce hollow porous or other complex structures, helping to reduce the irreversible capacity loss. Guo et al. [24] used Co[Fe(CN)6]0.667 as a precursor in calcination and successfully fabricated CoFe2O4 nanocubes with a hollow porous structure. As an anode material, the specific capacity was retained at 1115 mAh/g after 200 cycles, with good rate performance as well. This is because the hollow structure promoted a uniform distribution of stress and improved the accommodation of volume change during lithium insertion/extraction; in addition, the pores of the outer shell provided extra charge storage sites for redox reactions while enhancing ion diffusion, so the fabricated material exhibited excellent cycling stability. Compared with a simple hollow structure, a complex structure can maximize the use of space. One example is MOF-derived Co3O4 with twin hemispherical and flower-like structures, obtained by Zhang and co-workers [25]. Tested as an anode in LIBs, this active material had an initial discharge capacity of up to 1325.5 mAh/g and still delivered a reversible capacity of 470 mAh/g after 90 cycles. In the authors' view, it possesses favourable features such as a special complex structure for fast charge transfer, a small particle size, and compact porous structures that promote electrolyte penetration into the electrode material.
At present, there are more and more reports on porous, hollow and complex structures of MOF-derived metal oxides. In addition to exploiting the application advantages brought by structural complexity, exploring simple and facile synthesis methods is another promising research direction.
Metal oxide / carbon composite materials
Owing to their inherent structure, MOFs serve as both a metal source and a carbon source. In an inert atmosphere, the organic ligand components of calcined MOFs usually decompose to form CO2, NOx and other gases, and carbon is generated when oxygen is insufficient [26]. Therefore, suitable control of the heating parameters can yield both components at the same time, namely metal oxide/carbon composite materials. This method makes full use of the particularity of the MOF skeleton, so that the metal oxide is uniformly dispersed in a porous nano-carbon matrix. As a result, it not only solves the problem of controlling particle size that plagues traditional synthesis, but also effectively alleviates the volume expansion of the metal oxide during cycling and improves the electrical conductivity to a certain extent. It is thus very promising for high-performance lithium battery electrode materials. Sun et al. [27] obtained Fe/Mn-MOF-74 by a one-step microwave method and further used it to synthesize hollow Fe-Mn-O/C microspheres. Endowed with the MOF-derived, carbon-coated, nanoparticle-assembled hollow structure, this metal oxide/carbon composite maintained 1294 mAh/g after 200 cycles at 100 mA/g and delivered a reversible capacity of 521 mAh/g at 1 A/g, demonstrating good cycling stability and rate performance.
Generally, metal oxide/carbon composite materials can also be obtained by compounding MOF derivatives with flexible, strong, and conductive carbon materials such as carbon nanotubes or graphene. Common strategies are surface coating and the construction of three-dimensional conductive skeletons. Zhang et al. [28] used flexible carbon cloth, with its high mechanical strength and strong conductivity, and grew MOF-derived ZnO@ZnO quantum dots/C core-shell nanorod arrays (NRAs) on it. The material obtained not only supports fast ion transfer but also has excellent mechanical strength and good corrosion resistance, so it showed excellent rate capability and cycling stability in laboratory tests: a reversible capacity of 1055 mAh/g at a current of 100 mA/g, with only 11% of the capacity lost after 100 cycles at a current of 500 mA/g. Similarly, Yin et al. [29] employed a coprecipitation method with ZIF-67 rhombic dodecahedra as the template and GO as the substrate, and successfully fabricated GO-MOF-derived rGO-coated/sandwiched Co3O4. Thanks to the rGO coating, the electron transfer speed was improved and the volume expansion was alleviated, giving the material excellent lithium storage performance (a reversible capacity of 974 mAh/g at a current of 0.1 mA/g after 100 cycles) and great cycling stability (a retention rate exceeding 95% after 100 cycles) (Fig. 4). All of this indicates that compounding MOF derivatives with carbon-based materials can play a synergistic role, an important development direction for the construction of new electrodes in the future.
Conclusion
MOF materials and their derivatives are considered two promising families in the field of energy storage. Their high porosity and controllable chemical compositions and structures offer immense possibilities in the search for suitable anode materials for LIBs. Although pristine MOF materials have surpassed the commercial graphite anode in terms of specific capacity, their practical application is so far hindered by high cost and low conductivity; exploring topologies with fast charge transfer is therefore considered the most urgent goal. Comparatively, MOF-derived materials, including porous carbon, metal oxides, and their composites, show much promise as anode active materials due to their better rate capability and improved stability. Moreover, a series of strategies has been developed to strengthen their practical performance to meet the requirements of next-generation batteries. For many of them, the electrochemical performance still has great room for development.
Overall, the use of MOF materials and their derivatives as anode materials in LIBs is an emerging research topic, facing great challenges and opportunities. Looking at the future development of MOFs as anode active materials, research is generally moving in the direction of better rate capability and lower cost on the basis of high energy density. In terms of practical application, researchers favour methods such as modification and compositing to address the stability and conductivity issues of LIB anodes and achieve the goal of commercialization. | 2021-05-10T00:03:32.763Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "5f90e5d6b02367a7a74569e7118d2e27ba6a5ad7",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/634/1/012042",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3bf7022900d2c19af99b563f3e0b9fb349bddc8c",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
255939494 | pes2o/s2orc | v3-fos-license | A case report of autoimmune GFAP astrocytopathy presenting with abnormal heart rate variability and blood pressure variability
Background Autonomic dysfunctions including bladder dysfunction, gastrointestinal dysfunction and orthostasis are common symptoms of autoimmune glial fibrillary acidic protein astrocytopathy (A-GFAP-A); however, cardiac autonomic dysfunction and an abnormal circadian rhythm of blood pressure, which can lead to poor prognosis and even sudden cardiac death, have never been reported in an A-GFAP-A patient. Case presentation A 68-year-old male Chinese patient presented to our hospital with headache, fever, progressive disturbance of consciousness, dysuria, and limb weakness. Abnormal heart rate variability and a non-dipper circadian rhythm of blood pressure gradually developed during hospitalization, which is rare in A-GFAP-A. He had positive GFAP IgG in cerebrospinal fluid (CSF). Enhanced brain MRI showed uneven enhancement and T2 hyperintense lesions of the medulla oblongata; cervical spine MRI showed T2 hyperintense lesions in the medulla oblongata and at the upper margin of the T2 vertebral body. A contrast-enhanced thoracic spine MRI showed uneven enhancement and T2 hyperintense lesions of the T1 to T6 vertebral segments. After treatment with intravenous immunoglobulin and corticosteroids, the patient's symptoms, including the autonomic dysfunction, improved dramatically. Finally, his heart rate variability and blood pressure variability became normal. Conclusions Our case broadens the spectrum of expected symptoms in A-GFAP-A syndromes, as it presented with abnormal heart rate variability and blood pressure variability.
autonomic dysfunction and an abnormal circadian rhythm of blood pressure have rarely been reported in A-GFAP-A.
Autoimmune encephalitis (AE) with dysautonomia, a category that of course includes A-GFAP-A, may cause greater morbidity and mortality [4], because autonomic dysfunction, especially cardiac autonomic dysfunction and an abnormal circadian rhythm of blood pressure (CRBP), can complicate intensive care, require mechanical ventilation [5], and even lead to hemodynamic shock and sudden cardiac death (SCD). Therefore, we should pay more attention to cardiac autonomic dysfunction and abnormal CRBP in A-GFAP-A patients.
Heart rate variability (HRV) refers to beat-to-beat changes in the heartbeat cycle [6] and is a sensitive indicator of the dynamic balance of cardiac autonomic regulation [7-9]. It has been widely used for decades to assess autonomic nervous function, especially the balance between the sympathetic and vagal nerves. Deceleration capacity of heart rate (DC) is an advanced HRV marker that can quantitatively assess vagal nerve function [10]. As a method implementing the concept of DC, deceleration runs of heart rate (DRs) can quantify heart rate deceleration and vagal regulation of the sinoatrial node [11].
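As a rough illustration of the DRs concept (a simplified reading for demonstration, not an implementation of the cited methodology papers), the toy function below scans an RR-interval series for runs of consecutive interval prolongations, i.e. consecutive heart-rate decelerations, and reports the percentage of intervals belonging to runs of each length. The sample data are invented.

```python
from collections import Counter

def deceleration_runs(rr_ms):
    """Percentage of RR intervals belonging to deceleration runs of each length."""
    runs, length = Counter(), 0
    for prev, cur in zip(rr_ms, rr_ms[1:]):
        if cur > prev:        # a longer RR interval means a slower beat: deceleration
            length += 1
        elif length:
            runs[length] += 1
            length = 0
    if length:                # close a run that reaches the end of the recording
        runs[length] += 1
    n = len(rr_ms) - 1
    return {k: 100.0 * k * v / n for k, v in sorted(runs.items())}

# Hypothetical RR series in ms (demonstration only)
print(deceleration_runs([800, 810, 825, 820, 830, 845, 850, 840]))
```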
Herein, we report a rare case of A-GFAP-A resembling infectious encephalitis with obvious autonomic dysfunction, in particular abnormal HRV and CRBP as clinical manifestations, to broaden the spectrum of autonomic dysfunction types in A-GFAP-A.
Case presentation
A 68-year-old male Chinese patient presented with headache and fever of up to 39 °C, accompanied by chills, paroxysmal cough with sputum, and systemic myalgia at the end of September 2021. The patient was treated with antiviral and antibacterial therapy, including moxifloxacin combined with piperacillin-tazobactam and oseltamivir, after which his body temperature declined and his headache disappeared. Lumbar puncture (LP) revealed an opening pressure of 150 mmH2O, a white blood cell (WBC) count of 149 × 10^6/L (96.6% lymphocytes), a decreased glucose level of 1.99 mmol/L, and a significantly increased protein level of 2356.3 mg/dL. Initial brain magnetic resonance imaging (MRI) and electroencephalography were unremarkable. However, 1 week later, the patient gradually developed disturbance of consciousness with dysuria and was transferred to our hospital. The patient had a medical history of hypertension, and he worked as a farmer with frequent contact with pigs and sheep. Neurological examination revealed a lethargic state, stiff neck, weakened tendon reflexes, a positive Kernig's sign, and positive bilateral pathological signs.
Differential diagnoses included infectious etiologies (e.g., neurobrucellosis and tuberculous meningoencephalitis) and autoimmune etiologies. As atypical bacterial meningitis or viral meningitis could not be ruled out, the patient received treatment with ceftriaxone sodium, rifampicin, doxycycline, and ganciclovir infusion. On admission, his serum inflammatory indicators were normal. Tests for antinuclear antibody, anti-neutrophil cytoplasmic antibody subtypes, lupus anticoagulant, rheumatoid factor, an autoantibody screen, anti-Hantavirus antibodies (IgG, IgM), and Brucella antibody (IgG) were negative. A serum Epstein-Barr virus (EBV) DNA test indicated prior infection.
After 3 days of treatment, the patient's consciousness gradually became clear and his neck stiffness improved, but he exhibited tremor in both upper limbs and weakened strength in both lower limbs, mainly the right, accompanied by paresthesia. Enhanced brain MRI showed uneven enhancement and T2 hyperintense lesions of the medulla oblongata (Fig. 1a, b); cervical spine MRI showed T2 hyperintense lesions in the medulla oblongata and at the upper margin of the T2 vertebral body (Fig. 1c). A contrast-enhanced thoracic spine MRI showed uneven enhancement and T2 hyperintense lesions from the T1 to T6 vertebral segments (Fig. 1d, e, f), reminiscent of autoimmune and demyelinating diseases. Electromyography was normal. LP showed a pressure of 80 mmH2O, a WBC count of 76 × 10^6/L (97.4% lymphocytes), a normal glucose level (2.73 mmol/L), and a protein level of 2356.3 mg/dL. Next-generation sequencing (NGS) of CSF showed 15 reads of EBV; Xpert, T-SPOT, and brucellosis antibody in the CSF and serum were negative. Autoimmune antibodies in the CSF and serum, including anti-N-methyl-D-aspartate receptor, anti-aquaporin 4, anti-myelin oligodendrocyte glycoprotein, anti-contactin-associated protein-like 2, anti-leucine-rich glioma-inactivated 1, anti-gamma-aminobutyric acid receptor, and anti-α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid 1/2 receptor antibodies, were negative. However, GFAP-IgG in CSF was positive in both cell- and tissue-based assays (Fig. 2). The patient was diagnosed with A-GFAP-A and was administered intravenous immunoglobulin (IVIg) and methylprednisolone.
The patient's lower limb weakness and fever gradually improved, but he developed dizziness, fatigue, and sweating after prolonged sitting (approximately 30 min). Ambulatory blood pressure monitoring showed a drop in systolic blood pressure of approximately 30 mmHg after 30 minutes of sitting, suggesting postural hypotension. The circadian rhythm of systolic blood pressure showed a non-dipper pattern. A 24-hour ambulatory electrocardiogram (Holter) showed a standard deviation of normal-to-normal intervals (SDNN) of 84 ms, a standard deviation of the averages of normal-to-normal intervals (SDANN) of 52 ms, and a percentage of successive R-R interval differences > 50 ms (pNN50) of 4.4% (Fig. 3). Additionally, DRs suggested a medium risk of SCD (DR4 = 0.0649%, DR2 = 5.6803%, DR8 = 0.0024%). Electrocardiogram and echocardiography were normal. No abnormalities were observed on the thoracolumbar MRI.
Figure 2 caption (fragment): Astrocyte fluorescence is widespread in the cortex (d) and hippocampus (e), and meningeal fluorescence is enhanced. Linear fluorescence is observed throughout the molecular layer in the cerebellum (f), and astrocyte fluorescence is observed in the granular layer and white matter.

After treatment with a standard IVIg regimen (0.4 g/kg body weight/day) for 5 days and methylprednisolone pulses of 500 mg/d for 3 days, 250 mg/d for 3 days, and 125 mg/d for 3 days, followed by prednisone at 60 mg/d with a dose reduction of 5 mg every 2 weeks, the patient's fever disappeared and his lower limb weakness improved. Three months later, his orthostatic hypotension had disappeared; SDNN, SDANN and pNN50 had increased to normal levels (Fig. 3); and his blood pressure variability (BPV) had become normal. The patient's condition gradually improved and he could mobilise independently.
Discussion
Clinically, the incidence of headache and fever in A-GFAP-A patients is 63.2% and 52.6%, respectively [12]. The initial clinical presentation of our case was fever, headache, and neck stiffness, accompanied by an increased WBC count, decreased glucose, and increased CSF protein, making it easy to misdiagnose as a central nervous system infection, especially tuberculous meningoencephalitis. In addition, because of our patient's history of exposure to pigs and sheep, neurobrucellosis was also considered. However, the results of the comprehensive etiological examinations (metagenomic NGS, Xpert, bacterial culture of CSF, and brucellosis antibody) were negative, there was minimal response to antimicrobial therapy, and extensive nervous system involvement appeared, suggesting the possibility of immune-mediated inflammation. At this point, AE should be considered. To date, the diagnosis of A-GFAP-A relies mainly on the detection of GFAP antibodies in CSF or serum, which is not widely available in clinical practice, so clinicians face many challenges in diagnosing A-GFAP-A prior to GFAP-IgG testing. Therefore, when the etiological work-up of a patient suspected of having a central nervous system infection is negative, autoimmune inflammation should be considered if multiple parts of the nervous system are involved alongside empirical resistance to anti-infective treatment, and GFAP-IgG should be tested to exclude A-GFAP-A.
Autonomic dysfunction is another common symptom of A-GFAP-A. AE patients with autonomic dysfunction are more likely to need intensive care unit hospitalization and mechanical ventilation, and are more likely to have a poor prognosis; possible reasons are dysregulation of blood pressure, arrhythmia and SCD [13]. By the same reasoning, in patients with A-GFAP-A, autonomic dysfunction should receive more attention so that patients obtain earlier therapy, improved quality of life and lower mortality.
In our case, the most prominent signs were abnormal HRV and CRBP, which are rarely reported in A-GFAP-A. HRV, an indicator of cardiac autonomic function, is an important marker for predicting malignant arrhythmia and SCD. Its common time-domain parameters include SDNN, SDANN, rMSSD and pNN50 [14]. In addition, DRs, an advanced HRV indicator, refer to episodes of consecutively decelerating heart rate, the specific manifestation of short-term vagal regulation of sinus rhythm. In the present case, the decreased SDNN and SDANN suggested enhanced sympathetic activity, while the abnormally low pNN50 suggested decreased vagal activity. The abnormal DRs indicated that vagal excitability was decreased, the protective effect on the heart was weakened, and the risk of SCD was increased in our patient. The mechanism by which A-GFAP-A affects cardiac autonomic function remains unclear. It has been reported that NMDA receptor antibodies may disrupt the sympathetic circuits by controlling brainstem vagal output and modulating sympathetic output in the hypothalamus and spinal cord [4], resulting in the cardiac sympathetic dysfunction of anti-NMDAR encephalitis. A-GFAP-A is one type of autoimmune encephalitis; whether GFAP antibodies can attack sympathetic circuits through cytotoxic T cell-mediated autoimmune responses, thus affecting the stability of HRV, needs further research. Since GFAP is also expressed in Schwann cells, peripheral nerves may also be a potential target of immune attack.
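To illustrate how the time-domain indices reported above are computed from a Holter recording, the sketch below derives SDNN and pNN50 from a series of normal-to-normal (NN) intervals. This is a generic textbook calculation, not code from the case report, and the sample intervals are invented for demonstration.

```python
import statistics

def sdnn(nn_ms):
    """Standard deviation of all NN intervals, in ms."""
    return statistics.stdev(nn_ms)

def pnn50(nn_ms):
    """Percentage of successive NN-interval differences exceeding 50 ms."""
    diffs = [abs(b - a) for a, b in zip(nn_ms, nn_ms[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

# Hypothetical NN intervals in milliseconds (demonstration only)
nn = [812, 790, 845, 830, 770, 905, 880, 860, 815, 795]
print(f"SDNN  = {sdnn(nn):.1f} ms")
print(f"pNN50 = {pnn50(nn):.1f} %")
```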
CRBP reveals circadian variations in blood pressure. According to the decrease of the mean nocturnal systolic blood pressure relative to the daytime mean (the nocturnal dip rate), CRBP is divided into dipper (nocturnal dip rate ≥ 10%) and non-dipper (nocturnal dip rate < 10%) patterns. Dipping is thought to be the normal physiological condition, whereas a non-dipper pattern can be produced by autonomic dysfunction and is associated with target organ damage and an increased risk of cardiovascular death [15]. The CRBP of our case was non-dipper. The co-occurrence of abnormal CRBP and abnormal HRV further confirmed the existence of autonomic dysfunction. A weakened or absent CRBP can increase the risk of target organ damage and of cardiovascular and cerebrovascular events. Early identification and treatment of abnormal HRV and BPV may improve the prognosis of A-GFAP-A.
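The dipper/non-dipper classification described above reduces to a one-line rule on the mean daytime and nocturnal systolic pressures. The snippet below is a minimal sketch of that rule; the function name and example values are ours for illustration, not data from this case.

```python
def dipping_status(day_sbp_mean: float, night_sbp_mean: float) -> str:
    """Classify the circadian blood pressure rhythm by the nocturnal dip rate."""
    dip_rate = (day_sbp_mean - night_sbp_mean) / day_sbp_mean
    return "dipper" if dip_rate >= 0.10 else "non-dipper"

# Example: nocturnal mean only ~3% below the daytime mean -> non-dipper pattern
print(dipping_status(day_sbp_mean=135.0, night_sbp_mean=131.0))  # non-dipper
```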
In addition, our patient presented with urinary retention and orthostatic hypotension. It is unclear why A-GFAP-A shows such prominent vegetative (autonomic) features. A limitation of this study was the lack of a sympathetic skin response test.
The etiology of A-GFAP-A is unknown; approximately 30-40% of patients have tumors or infections [1,2,16]. In this study, the patient developed fever and headache, and NGS of CSF suggested EBV infection, but the serum results suggested that this might have been a previous infection. It is unclear whether the A-GFAP-A in this patient was triggered by viral infection, and further clinical cases and studies are needed to confirm the correlation between EBV and A-GFAP-A. The possibility of other unknown microbial infections, an occult tumor (no evidence of a tumor was found on screening for common tumors in this case, and further follow-up is needed), or other causes of the immune response cannot be ruled out. The role of GFAP antibody, an antibody against an intracellular antigen, has not been unanimously recognized; it has been reported [17] that no astrocyte involvement was found in autopsies of GFAP-positive patients. GFAP encephalitis is sensitive to corticosteroids, but the effect of human immunoglobulin on GFAP encephalitis is unclear and needs to be studied further in depth.
Conclusion
Although A-GFAP-A has only recently been described, it is gradually being recognized. When fever, encephalitis, myelitis, and autonomic dysfunction occur together, A-GFAP-A should be considered. For patients with GFAP antibodies, HRV and ambulatory blood pressure monitoring should be performed, especially when they have vegetative neurological dysfunction such as dizziness and dysuria, to prevent the risk of SCD or of fainting and falling. Conversely, when patients have unexplained autonomic neuropathy, GFAP antibody testing should be performed to confirm the diagnosis.
| 2023-01-18T06:17:23.937Z | 2023-01-17T00:00:00.000 | {
"year": 2023,
"sha1": "a2d0d2f464fa5a406f7c9a11589c604ab2fe64af",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d32875427c1adee97abbc1f4a02bb4aee770732f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
120929740 | pes2o/s2orc | v3-fos-license | Paramagnetic Meissner effect at high fields in YCaBaCuO single crystal
We report on systematic magnetization experiments on a Y1-xCaxBa2Cu3O7−δ (x = 0.25 at%) single crystal. The magnetization experiments were made using a superconducting quantum interference device (SQUID) magnetometer. Magnetic moments were measured as functions of temperature according to the zero-field cooling (ZFC), field-cooled cooling (FCC), and field-cooled warming (FCW) prescriptions. The time dependence of the FC magnetization at fixed magnetic fields was studied. Magnetic fields up to 50 kOe were applied, and a paramagnetic response related to the superconducting state was observed when sufficiently strong fields were applied parallel to the c axis. The magnitude of the high-field paramagnetic moment (HFPME) increases as the field is augmented. The effect shows a strong and anomalous time dependence, such that the paramagnetic moment increases as a function of time. A YBa2Cu3O7−δ single crystal exhibiting the same effect was used for comparison. We discuss our results in terms of a flux compression scenario in the sample, modulated by the Ca concentration.
Introduction
The paramagnetic Meissner effect (PME) is characterized by a paramagnetic response of the field-cooled (FC) magnetization when a superconducting sample is cooled in a field through its critical temperature, Tc [1-7]. In the literature [1-7], two regimes of the PME are distinguished: the low-field PME (LFPME) [1-4] and the high-field PME (HFPME) [5-7]. The paramagnetic response of the LFPME usually increases gradually when fields H ≤ 10 Oe are applied; when H > 10 Oe is applied, the LFPME is rapidly suppressed and the conventional diamagnetic Meissner response is observed [1-4]. There are successful theoretical explanations for the LFPME, such as randomly oriented π junctions [8], flux compression [9] and the giant vortex state [10]. The HFPME shows some noticeable differences when compared to the LFPME. For instance, the magnitude of the HFPME moment increases when strong magnetic fields (H ≥ 1 kOe) are applied [5-7], and its FC magnetization shows a strong and anomalous time dependence. Moreover, the theoretical approaches that explain the LFPME fail to explain the HFPME [5-7]. According to some authors [5-7], a detailed description of the relation between the HFPME and pinning mechanisms is still lacking, since the existing flux-compression scenarios take into consideration neither the crucial role of the pinning mechanism nor the importance of vortex dynamics. Here we study Y1-xCaxBa2Cu3O7−δ single crystals with the aim of investigating the role of chemically introduced pinning mechanisms in the HFPME behavior, contrasting these results with an undoped sample.
Experimental techniques and method of analysis
The single-crystal samples of the studied systems were grown by the self-flux method [11]. The crystals were characterized by X-ray diffraction [11]. Polarized light microscopy showed that our single crystals are heavily twinned [11].
DC magnetization measurements were performed with a Quantum Design MPMS-XL SQUID magnetometer in DC mode. The SQUID signal and the sample centering were visually monitored during the experimental runs. Magnetic fields from 1 to 50 kOe were applied perpendicular to the c axis of the single crystals. Magnetic moments were measured as functions of temperature according to the zero-field cooling (ZFC), field-cooled cooling (FCC), and field-cooled warming (FCW) protocols. The time dependence of the FCC moment at a fixed temperature was studied for up to 50,000 s. All results were corrected for the corresponding demagnetization effects and for the sample holder signal contribution.

Results and discussion

In both samples, for H < 1 kOe (not shown in Figure 1), the LFPME is absent and the FCC and FCW magnetizations decrease in the whole temperature range below Tc, showing the usual Meissner behavior. However, when H ≥ 1 kOe, the FCC and FCW magnetizations exhibit a diamagnetic "dip" at a temperature Td that becomes more pronounced in the YBaCuO sample as H is increased. For T < Td, on the other hand, the FCC and FCW magnetic moments increase steadily as the temperature is lowered and, when H ≥ 10 kOe is applied, attain positive values that substantially exceed the normal-state magnetization. This behavior is the signature of the HFPME and was observed in YBa2Cu3O7−δ single crystals [6] and melt-textured samples [7]. It is important to note that the FCC and FCW magnetizations of our samples show a weak irreversibility, and that the HFPME magnitude of the Ca-doped sample shows no tendency to saturate as the temperature decreases toward zero; Ca doping also enhances the magnitude of the effect compared to the YBaCuO sample [6,7].
The behaviour of the FCC moment time-dependence, displayed in Figure 2, for our single crystals shows that as applied magnetic field is augmented more extra flux is allowed to penetrate the sample supporting an establishment of a flux compression scenario. A similar behaviour was observed in melted textured YBa 2 Cu 3 O 7-δ samples [7].
The weak irreversibility displayed by FCC and FCW magnetizations of our samples where stronger magnetic filed were applied as well as the enhancement of the HFPME by Ca doping suggest the establishment of a flux compressed state modulated by pinning as a probably responsible to the origin of the HFPME in our samples. In agreement with this scenario, we believe that in some regions of the samples, where pinning is strong, the vortex density can be depleted below that expected from equilibrium state. Consequently, this state opens a place for the admission of extra vortices into the sample, originating the HFPME. Thus, it may be possible that pinning plays an important role to explain the HFPME in our samples. More studies are necessary in order to understand the correlated mechanism in both samples. The time dependence of FCC magnetization and the crucial role of the pinning mechanism in the YCaBaCuO single crystal are under current investigation. | 2019-04-18T13:09:30.035Z | 2010-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "a0f970a23c5b009766948371ddc8e17a30021d10",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/391/1/012124",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "10c16b5bded01f6218672b637c7bd005a0ac4461",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
20692966 | pes2o/s2orc | v3-fos-license | Evaluating the effectiveness of antiviral treatment in models for influenza pandemic
Abstract We study the effectiveness of antiviral treatment in simple susceptible–exposed–infectious–removed models that are at the base of models used for influenza pandemic. The strategy is assessed in terms of the value of the reproductive ratio R0. We consider a general framework and analyse six different specific cases. The same antiviral strategy is simulated in all models, but they slightly differ in the compartmental structure. These differences correspond to different underlying assumptions concerning the timing of the intervention and the selection of individuals who receive treatment. It is shown that these details can have a strong influence on the predicted effectiveness of the strategy: for instance, with R0 = 1.8 in absence of treatment, different models predict that with treatment R0 can become as low as 0.4 or as high as 1.3; still, in all models 70% of infected individuals are treated and the infectiousness of treated individuals is reduced by 80%. A particular assumption that can be included when modelling influenza is time-varying infectivity. We consider a specific model to verify if the predicted effectiveness of antiviral treatment is influenced by the inclusion of this assumption. We compare the results obtained with constant and variable infectivity, in relation also to the time of intervention. It is likely that existing differences in the predictions of the effect of control measures depend on such modelling details. This finding stresses the need for carefully defining the structure of models in order to obtain results useful for policymakers in pandemic planning.
Introduction
The recent emergence of a highly pathogenic avian influenza virus and its subsequent transmission from infected poultry to humans have raised concern about a future pandemic risk. Intensive preparedness planning is under way in many countries and possible control measures are being evaluated, often with the help of mathematical models. Since a pandemic vaccine is unlikely to be promptly available, other control measures have been considered to contain the pandemic in its earliest phases, while waiting for vaccine production and distribution. In this context, antiviral drugs are expected to play a major role both in prevention and in treatment (Balicer et al., 2004; Monto, 2003). They can be 70-90% effective as prophylaxis and shorten the duration of the infectious period by 1-1.5 days when used in treatment (Cooper et al., 2003; Monto, 2003; Longini et al., 2004; Hayden, 2001; Regoes & Bonhoeffer, 2006; World Health Organization, 2004). Antiviral use has been widely investigated in mathematical modelling (Arino et al., 2006; Colizza et al., 2007; Cooper et al., 2006; Ferguson et al., 2005, 2006; Flahault et al., 2006; Gani et al., 2005; Germann et al., 2006; Longini et al., 2004, 2005; Wu et al., 2006). Some attention has also been given to the possible emergence of an antiviral-resistant influenza strain (Ferguson et al., 2003; Lipsitch et al., 2007; Regoes & Bonhoeffer, 2006), a real threat to the effectiveness of antiviral-based policies. But the results of different studies are often in disagreement: while some authors draw positive conclusions about the possibility of slowing the spread of the infection and reducing the attack rate (Barnes et al., 2007; Colizza et al., 2007; Gani et al., 2005; Germann et al., 2006; Longini et al., 2004, 2005; Roberts et al., 2007), even in circumstances in which a resistant strain spreads widely (Lipsitch et al., 2007), others are more cautious and suggest that a containment policy based on antivirals alone is unlikely to be successful (Ferguson et al., 2005, 2006; Flahault et al., 2006). These differences depend basically on the model considered and the assumptions used in the model regarding the intervention.
Since many countries plan to rely on antivirals to face the pandemic during the early months, and antivirals will probably be the only pharmaceutical intervention available in the initial phase, there is an evident need to clarify if and how the evaluation of antiviral efficacy is influenced by the model assumptions.
Instead of evaluating different strategies, we focus on one strategy and investigate its effectiveness in relation to the structure of the model and its underlying assumptions. We consider the simplest scheme that can be used to model influenza spread, i.e. a deterministic homogeneous susceptible-exposed-infectious-removed (SEIR) model. Alexander et al. (2008) have recently studied the optimal scheduling of antiviral treatment through the analysis of a homogeneous SEIR model with continuous age of infection. We extend the model in the direction of considering several different options about which infected individuals are treated and when; on the other hand, we subdivide the infectious period into discrete stages. We then analyse quantitatively specific subcases of the general model; some of the cases correspond to models used in previous studies. Each case derives from the general model by making precise assumptions on the time and way of intervention (in particular on the duration of infectivity before being testable and before the diagnosis), but always assuming the same fraction of infected being treated and the same efficacy of antivirals. All the models considered are easy to analyse mathematically, so that it is possible to quantify the effect of different modelling choices.
All these models have constant infectivity. Through comparison with a model that includes time-varying infectivity, we then investigate the effect of variable infectivity and how it influences the evaluation of antiviral efficacy.
Many authors have investigated the effect of different antiviral-based interventions using rather complex models, including social and spatial structures, stochastic fluctuations and other factors. Even if a homogeneous model may be inappropriate to simulate a realistic influenza pandemic, it constitutes the basis of most models considered in the literature. Its transparency allows one to evaluate how the structure of the model influences the conclusions about the effectiveness of antiviral treatment. Complex models are definitely more realistic and suitable to simulate a pandemic, but they may obscure the role of underlying assumptions.
Our results may be useful when structuring more complex models, such as microsimulation models, and highlight the attention that should be paid to details of the model. Since the SEIR framework is always the skeleton of more complex models, the comparison between the results found with these models may help to understand the role of model assumptions in the evaluation of the efficacy of antiviral-based policies in pandemic containment.
Methods
In compartmental models with an SEIR structure, the population is divided into four classes according to the disease state: susceptibles (S), containing all the individuals who can be infected; exposed (E), containing the individuals who have been infected but are not yet infectious and do not show symptoms; infectious (I), containing infected people who can transmit the infection; and immune or removed (R), containing all the individuals who have recovered or, in the worst cases, died. For influenza, the mean latent and infectious periods have been estimated at approximately 1 (Ferguson et al., 2005) and 4 (Cauchemez et al., 2004; Hyman & LaForce, 2003; Longini et al., 2004; Mills et al., 2004) days, respectively.
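For concreteness, the sketch below integrates the plain SEIR equations with the parameter values quoted here (a 1-day latent period and a 4-day infectious period) and an illustrative R0 of 1.8, so that β = R0γ for a fully susceptible population. It is only an illustration of the framework, not the authors' code.

```python
# Minimal SEIR integration with the influenza-like parameters quoted in the text:
# mean latent period 1 day (sigma = 1), mean infectious period 4 days (gamma = 1/4),
# and an illustrative R0 = 1.8, so beta = R0 * gamma for a fully susceptible population.
import numpy as np
from scipy.integrate import solve_ivp

sigma, gamma, R0 = 1.0, 0.25, 1.8
beta = R0 * gamma

def seir(t, y):
    S, E, I, R = y                   # fractions of the population
    new_inf = beta * S * I
    return [-new_inf, new_inf - sigma * E, sigma * E - gamma * I, gamma * I]

sol = solve_ivp(seir, (0, 300), [1 - 1e-5, 0, 1e-5, 0],
                t_eval=np.linspace(0, 300, 301))
print(f"final attack rate ~ {sol.y[3][-1]:.2f}")   # fraction ever infected
```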
When modelling influenza, many authors (Alexander et al., 2008; Arino et al., 2006; Chowell et al., 2007; Colizza et al., 2007; Ferguson et al., 2003; Nuno et al., 2007; Wu et al., 2006) divide the infectious period in phases to allow for asymptomatic stages and for differences in infectivity or in symptom severity. This also allows treatment to be structured as administered at certain phases of infection. An alternative, which introduces an element of complexity into the model, would be to explicitly use the time since infection as a variable (Alexander et al., 2008; Grais et al., 2003; Roberts et al., 2007).
We propose a model with a general structure in which the infectious period is divided in three phases. If no treatment is modelled, individuals progress through three infectious subclasses (I 1 , I 2 and I 3 ) and finally recover. We assume that infected individuals, during the second infectious phase, may be classified (with probability p) as individuals who can receive treatment and therefore enter class Y (individuals potentially selected for treatment) at the end of the period; individuals not classified for treatment enter the third infectious stage (class I 3 ). From class I 3 individuals recover spontaneously. Individuals in class Y (suitable for treatment) have the possibility to be treated, or may recover spontaneously, before actually receiving treatment. The compartmental representation of the model is shown in Fig. 1. Our model includes, we believe, a great variety of cases considered in the literature. Simpler models with fewer infectious stages can be obtained by formally setting equal to ∞ the exit rate from the missing stages.
When simulating antiviral treatment of infected individuals, we ignore preventive antiviral prophylaxis of their contacts, which is generally part of the recommended intervention strategies. Indeed, in compartmental models such as the ones we are considering (Colizza et al., 2007; Gani et al., 2005; Germann et al., 2006; Longini et al., 2004, 2005), individual contacts are not defined, so that such an intervention cannot be modelled exactly, although it can be approximated by an appropriate reduction of within-household transmission rates (Rizzo et al., 2008).
The classification between individuals who can receive treatment and those who cannot could depend on the severity of symptoms or on behavioural or social features (geographical isolation, limited access to medical resources, tolerance of disease symptoms). We suppose in what follows that there is no difference in infectiousness between the two groups. Several authors (Colizza et al., 2007; Alexander et al., 2008) have assumed that individuals not selected for treatment are asymptomatic infectives and that they have a lower infectiousness; on the other hand, we stress the relevance of the potential presence of infectives who are as infectious as the others, but cannot be reached by treatment. Asymptomatic infectives with low infectivity add little to the reproduction ratio of the infection, so that ignoring them does not affect strongly the results.
FIG. 1. Compartmental representation of the general model considered. Individuals are divided in classes according to the disease state: S (susceptibles), E (exposed), I1 (infectious during the first stage), I2 (infectious during the second stage), I3 (infectious during the third stage), Y (infectious who can receive treatment), T (treated) and R (removed).
The reproductive ratio of the model can be easily computed using the method of Diekmann & Heesterbeek (2000) and van den Driessche & Watmough (2002). From the compartmental structure in Fig. 1 it is given by

R0 = βS0 [ 1/γ1 + 1/γ2 + (1 − p)/γ3 + p ( 1/(α + γY) + (α/(α + γY)) (r/λ) ) ],    (1)

where S0 is the fraction of individuals initially susceptible, r represents the reduction in the transmission due to treatment (corresponding to AVE_I in Longini et al. (2004); 80% in the numerical example), β is the transmission rate, γ1, γ2, γ3 and γY are the exit rates from classes I1, I2, I3 and Y, respectively, α is the treatment rate of selected individuals and λ is the recovery rate of treated individuals. We also assume that antivirals shorten the infectious period of treated individuals (by 1 day in the numerical example).
The model can be viewed as an age-of-infection epidemic model of the type proposed by Kermack & McKendrick (1927) and analysed using the approach suggested by Brauer (2005). The analysis shows that R0 is a threshold value: if R0 ≤ 1, starting from any initial state S0, only a few new infections will occur, without a major epidemic; if R0 > 1, starting from a large enough susceptible fraction S0, a major epidemic will occur; during the outbreak the number of susceptibles can only decrease and, when the epidemic dies off, will finally settle to an equilibrium value that depends on the value of R0. The larger R0, the smaller the number of individuals who escape the infection (Diekmann & Heesterbeek, 2000).
It seems therefore adequate to judge the efficacy of antiviral treatment through the resulting reduction of R0, as computed from (1). An antiviral treatment is generally measured by the reduction in infectivity (r), the reduction of the period of infectivity (that will be parameterized later) and by the fraction P of the infected who are treated. A standard computation shows that this is given by

P = pα/(α + γY).    (2)

It is, however, clear from (1) that P, r and the length of the period of infectivity of treated individuals are not sufficient to obtain R0. In order to understand better which are the factors leading to larger or smaller reductions of R0, we have considered several submodels, most of which have been chosen by other authors to investigate the effect of antivirals. The compartmental representation of each model is shown in Fig. 2.
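A small numerical sketch of (1) and (2) may be helpful. The function below evaluates the general-model expressions, with a stage removed by setting its exit rate to infinity (1/∞ evaluates to 0 in floating point); the parameter values in the example are illustrative.

```python
# Reproductive ratio (1) and treated fraction (2) of the general model.
# Stages are removed by passing gamma = inf (1/inf evaluates to 0.0).
from math import inf

def r0_general(beta_S0, g1, g2, g3, gY, p, alpha, lam, r):
    y_term = 1 / (alpha + gY) + (alpha / (alpha + gY)) * (r / lam)
    return beta_S0 * (1 / g1 + 1 / g2 + (1 - p) / g3 + p * y_term)

def treated_fraction(p, alpha, gY):
    return p * alpha / (alpha + gY)

# No-treatment limit (p = 0): R0 reduces to beta*S0 times the mean infectious period.
print(r0_general(0.45, inf, inf, 0.25, 1.0, 0.0, 1.0, 1.0, 0.2))   # 0.45 * 4 = 1.8
```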
Although the formulae given above apply to the general model, all the cases we will consider in detail belong to one of two model structures: either p is equal to 1, so that all infected individuals enter the class Y and can be selected for treatment; or γ Y is equal to 0, so that all individuals entering class Y (those potentially selected for treatment) are actually treated. It will be seen later that choosing one structure or the other changes substantially the estimate of the efficacy of antiviral treatment.
In the first model, all individuals are potentially treatable: when they leave the latent class they go directly to class Y , the only infectious class. From then on, they either enter the class of treated individuals (at rate α) or recover (at rate γ Y ). This is a special case of the general model and may be obtained by letting γ 1 and γ 2 go to infinity and setting p = 1. According to (2), the overall probability of being treated is given by α/(α + γ Y ). A model with this structure has been previously used by Flahault et al. (2006) to simulate antiviral treatment of cases.
In the second model, we assume that as the individuals leave the latent class, they are immediately classified either (with probability p) as individuals who will be treated (subgroup Y) or (with probability 1 − p) as individuals who will not (subgroup I3). Individuals in subgroup Y will enter the group of treated individuals at rate α, while those in subgroup I3 will recover at rate γ3. This model may be obtained by letting γ1 and γ2 go to infinity and setting γY = 0 in the general model. It has the same structure as the models proposed by Alexander et al. (2008) and Chowell et al. (2006), even if in their models only a fraction of the individuals selected for treatment are actually treated (i.e. γY ≠ 0) and individuals in class I3 are considered asymptomatic with reduced infectivity.
FIG. 2. Compartmental representation of the six models considered. Individuals are divided in classes according to the disease state: S (susceptibles), E (exposed), I1, I2 and I3 (infectious in different stages), Y (selected for treatment), T (treated) and R (removed). The transmission rate β is assumed to be constant.
Influenza is characterized by a short incubation period, a high attack rate and a lack of disease-specific symptoms (Balicer et al., 2004). All these epidemiological characteristics can make it difficult to identify cases promptly when they enter the infectious class and may cause a delay in treatment. This aspect has been considered in several studies (Ferguson et al., 2005, 2006; Germann et al., 2006; Longini et al., 2005), in which intervention has been postponed to the second or third day after symptom onset. Therefore, all the following models include a delay in treatment, assuming that some time is needed to identify cases and organize treatment. This period might also be considered an infectious but asymptomatic stage.
In the third model, we assume that at the end of this phase, infectious individuals are either identified and treated or enter the class I 3 and will not be treated. A similar model has been proposed by Gani et al. (2005), and Ferguson et al. (2003) have introduced an analogous mild asymptomatic infection stage in their model. The model is obtained by letting γ 1 and α go to infinity.
Models 4 and 5 integrate the presence of the first phase of unrecognized infection with the treatment scheme used in Models 1 and 2, respectively. In Model 4, after a first infectious phase, all individuals are potentially treatable (class Y ). Then they either enter the treated class (at rate α) or recover (at rate γ Y ). To obtain this model, we have set p = 1 and let γ 2 go to infinity in the general model. In Model 5, after a first infectious phase, individuals are assigned either to subgroup I 3 or to subgroup Y . Individuals in subgroup Y will enter the group of treated individuals at rate α, while those in subgroup I 3 will recover at rate γ 3 . This is obtained by letting γ 2 go to infinity and setting γ Y = 0. A model with this structure has been considered by Wu et al. (2006) (with hospitalized instead of treated individuals) to include an initial asymptomatic phase of the infectious period, while similar infectious stages, but in a more complex model, have been proposed by Nuno et al. (2007).
As Models 4 and 5 correspond, respectively, to Models 1 and 2, Model 6 is comparable to Model 3. After the phase I 1 , individuals enter class I 2 during which they are either identified and treated or enter the third stage of the infectious period and then recover. This model is obtained from the general model by letting α go to infinity.
For each model, we have computed the average infectious period T I in absence of intervention, the average infectious period T AV for a treated individual and the average infectious period T noAV for a non-treated individual. We will use their mathematical expressions, given in Table 1, for parameter calibration. The reproductive ratio R 0 is computed directly from (1), using the assumptions on the parameters made for each model.
Parameter calibration
In order to compare the values of R0 found in different models that include different parameters, it is necessary to calibrate the parameters properly. We have estimated the parameter values in order to investigate the effect of intervention on the value of R0 and how this effect varies across the different models.
First of all, we require that, in absence of antiviral treatment, the mean infectious period has to be the same (4 days) in all models. This implies the condition T I = 4, where T I is the mean infectious period in absence of treatment and is given, for each model, in Table 1.
Secondly, the probability of receiving treatment, computed using (2), is the same (P = 0.7) in all models. In Models 1 and 4, this probability is given by α/(α + γ Y ), and thus determines the value of α, while in Models 2, 3 and 5 it is represented by p and so we are free to set α. In Models 1 and 2, individuals receive treatment, on average, 1/α days after leaving class E; therefore, in Model 2 we have kept α as in Model 1. Analogously, in Model 5 we have taken it as in Model 4 (individuals receive treatment, on average, 1/α days after leaving class I 1 ).
In Model 3, we assume that individuals have the possibility to be treated 1 day after becoming infectious. This could be due, e.g. to a first asymptomatic phase. Therefore, we set 1/γ 2 = 1. Then from the relation 1/γ 2 + 1/γ 3 = 4, we can estimate γ 3 .
In Models 4-6, we have introduced a 1-day delay in treatment administration, which gives 1/γ 1 = 1. Using the assumption of a 4-day natural infectious period, we can estimate γ Y and γ 3 .
As for the effect of antivirals, we assume that the infectiousness is reduced by 80% by antivirals, hence r = 0.2.
Finally, we need to establish the value of λ, reflecting the shortening of the infectiousness period of treated individuals. A commonly used assumption (see, e.g. Colizza et al., 2007) is that the infectious period of treated individuals T AV is 1 day shorter than T noAV , the infectious period of untreated individuals. However, Table 1 shows that, for Models 1 and 4, T AV > T noAV ; hence, it is not possible to require T AV = T noAV − 1. In other words, the time spent in the infectious class by a treated individual is on average longer than the infectious time of an individual who does not receive treatment. Nevertheless, if 1/λ = 1/γ Y , i.e. if treatment has no effect on the duration of the infectious period, on average the individuals will stay in the infectious class 1/γ Y days, as one would expect; however, those being treated stay there longer than the average, while those not being treated less than the average. This apparently bizarre fact comes from the assumption that being treated and recovering are two competing risks; hence, individuals who receive treatment are those who naturally would have a longer infectious period.
In Models 1 and 4, we actually treat individuals with an infectious period longer than the average T_I, so the assumption T_AV = T_I − 1 may be too optimistic. Another possibility would be to consider T*_AV, defined as the infectious period of treated individuals when treatment has no effect (i.e. λ = γY), and to require T_AV = T*_AV − 1, which corresponds to 1/λ = 1/γY − 1. Results do not differ significantly; the highest variation is found with Model 1: if we assume a reproductive ratio of 1.8 in absence of treatment, we obtain R0 = 0.65 with the hypothesis T_AV = T_I − 1 and R0 = 0.74 with the hypothesis T_AV = T*_AV − 1. Therefore, we present only the results obtained with the first hypothesis. The parameter values used are given in Table 2.
TABLE 1. Results found by analysing Models 1 to 6. T_I, T_AV and T_noAV are, respectively, the average infectious period in absence of intervention and of treated and untreated individuals with intervention. R0 is the reproductive ratio of the model, found from (1) as βS0 times the expression reported in the last column; β is the transmission rate of untreated individuals and S0 is the initial fraction of susceptible individuals; other parameters can be seen in Fig. 2.
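To illustrate the calibration just described, the sketch below reproduces Models 1-3 under the stated constraints (T_I = 4 days, P = 0.7, r = 0.2, T_AV = T_I − 1) and applies them to a hypothetical pre-treatment R0 of 1.8. The resulting values are close to those reported in the text (0.65, 1.17 and 0.99 for Models 1-3); the small differences are attributable to rounding.

```python
# Calibration of Models 1-3 under T_I = 4 days, P = 0.7, r = 0.2, T_AV = T_I - 1,
# then the treated R0 for a hypothetical pre-treatment R0 of 1.8 (beta*S0 = 1.8/4).
r, P, beta_S0 = 0.2, 0.7, 1.8 / 4

# Model 1: all infectives in Y; P = alpha/(alpha + gY) with 1/gY = 4.
gY = 1 / 4
alpha1 = P * gY / (1 - P)                  # ~0.583, i.e. treated after ~1.7 days
lam1 = 1 / (3 - 1 / (alpha1 + gY))         # T_AV = time in Y + 1/lam = 3
R0_m1 = beta_S0 * (1 / (alpha1 + gY) + P * r / lam1)

# Model 2: fraction p = P selected at infection; untreated recover with 1/g3 = 4.
lam2 = 1 / (3 - 1 / alpha1)                # T_AV = 1/alpha + 1/lam = 3
R0_m2 = beta_S0 * ((1 - P) * 4 + P * (1 / alpha1 + r / lam2))

# Model 3: 1-day pre-diagnosis stage (1/g2 = 1), instant treatment, 1/g3 = 3.
lam3 = 1 / (3 - 1)                         # T_AV = 1 + 1/lam = 3
R0_m3 = beta_S0 * (1 + (1 - P) * 3 + P * r / lam3)

print(f"Model 1: {R0_m1:.2f}, Model 2: {R0_m2:.2f}, Model 3: {R0_m3:.2f}")
# -> roughly 0.65, 1.16 and 0.98
```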
Reduction of R 0
The general model without intervention is characterized by an average infectious period 1/γ = 1/γ 1 + 1/γ 2 + 1/γ 3 , the sum of the average duration of each infectious phase. Its reproductive ratio is given by R 0 = β S 0 /γ (i.e. (1) with p = 0), where β is the transmission rate and S 0 the initial fraction of susceptible individuals.
Using the parameter values given in Table 2, we have evaluated the effectiveness of the same intervention strategy when implemented in different ways, as described by the models considered. As discussed above, this evaluation has been made in terms of R0: we have computed the ratio between the R0 of the model without intervention and the reproductive ratio of the model with intervention for each of the models investigated. This ratio tells how much the reproductive ratio is reduced under antiviral treatment. Results are given in Table 2 and show that the reduction is very sensitive to the assumptions made when modelling the intervention. In Models 1 and 2, individuals have the same probability of receiving treatment and, on average, they are treated after 1.7 days in both models. But in Model 1 the intervention seems to be much more effective. With a hypothetical R0 of 1.8, a value commonly used to simulate a future pandemic (Ferguson et al., 2005), using Model 1 we would conclude that antivirals are able to contain the pandemic, reducing the value of R0 below 1. The same conclusion is not reached using Model 2.
As expected, the introduction of a delay in antiviral administration reduces significantly the effectiveness of the control measure. This can be observed by comparing Models 1, 2 and 3 with Models 4, 5 and 6, which are their respective refinements. Assuming hypothetically R0 = 1.8, antiviral treatment would reduce it to 0.65, 1.17 and 0.99 if simulated with Model 1, 2 or 3, and to 0.9, 1.3 and 1.24 with Model 4, 5 or 6, where a 1-day delay has been included.
We have assumed a treatment delay of 1 day, but some authors (Ferguson et al., 2005) have considered a delay of 2 days. To investigate the effect of a longer delay, we can compare, for example, Model 3 and Model 6. With a 1-day delay (Model 3), we have R0 = 2.2βS0. A 2-day delay (Model 6) gives R0 = 2.76βS0. As expected, a longer delay in antiviral administration reduces significantly the effectiveness of the intervention.

A comparison between constant and varying infectivity
Isselbacher et al. (1994) have observed the natural course of influenza and have reported its clinical characteristics in an otherwise healthy 28-year-old male. According to them, viral shedding is maximal 2 days after the onset of illness and then decreases, reaching a minimum on day 5. Taking these results into consideration, it is reasonable to assume that the infectivity of an individual varies in time, determining a variability in the transmission rate. To assess the importance of considering different levels of infectivity, we have designed a specific model that allows us to compare the results obtained by assuming constant or varying infectivity. The model follows basically the structure of Model 6, but we assume that treated individuals follow an infection path similar to the untreated ones, with two phases characterized by different infectivity. Although other model structures are certainly possible, this allows us to understand the interaction of treatment timing with variable infectivity.
Precisely, we make the following assumptions: without treatment, after the latent period individuals go through three infectious stages, each characterized by a specific infectivity, and then recover. According to the results of Isselbacher et al. (1994), we assume a first low-infectivity stage lasting 1/γ1 = 1 day, followed by a second stage with high infectivity lasting 1/γ2 = 2 days and by a third stage again with low infectivity lasting 1/γ3 = 1 day. Varying infectivity is translated into non-constant transmission rates. Mimicking the results of Isselbacher et al. (1994), we have assumed β1 = β3 = (3/5)β2, thus representing lower infectivity during the first and third stages; β1, β2 and β3 are the transmission rates during the three infectious stages, respectively. Infected individuals can receive treatment at the end of the first stage (with probability p1), thus entering class T2, or after the second stage (with probability p2), entering class T3. The transmission rate of treated individuals is reduced by a factor r, as in previous models, and therefore it will be equal to rβ2 in class T2 and rβ3 in class T3. We further assume that individuals stay in class T2 for 1.5 days before advancing to class T3, while individuals treated after the end of the second infectious stage recover after 0.5 days. The compartmental representation of the model is given in Fig. 3.
FIG. 3. Compartmental representation of the model considered to include varying infectivity. Individuals are divided in classes according to the disease state: S (susceptibles), E (exposed), I1, I2 and I3 (infectious in different stages), T2 (treated at the end of the first infectious stage), T3 (treated at the end of the second infectious stage) and R (removed).
The reproductive ratio of the model is given by

R0 = S0 [ β1/γ1 + p1 r (β2/λ1 + β3/λ2) + (1 − p1) ( β2/γ2 + p2 r β3/λ2 + (1 − p2) β3/γ3 ) ],

where λ1 and λ2 are the recovery rates of treated individuals (the exit rates from classes T2 and T3, respectively). The probability of receiving treatment in the model considered is given by P = p1 + (1 − p1)p2 and we have set it equal to 0.7, coherently with the previous numerical examples. Defining Q = p1/P as the proportion of treated individuals who are treated after the first infectious phase, we have investigated the dependence of the effect of antiviral treatment on the timing of intervention. Namely, varying Q between 0 and 1 we change from a scenario where all the treated individuals receive treatment after the second infectious stage (very late) to a scenario where treatment is administered to all selected individuals after the first infectious stage (i.e. 2 days earlier). Further, for a given Q, we can compare results obtained with varying and with constant infectivity, the latter obtained by setting β1 = β2 = β3. Figure 4 shows that introducing variable infectivity can influence the results, although to a limited extent quantitatively, and makes the time of intervention even more crucial in the evaluation of the effectiveness of antiviral treatment. As expected, the higher the proportion of individuals treated after the first phase, the more effective the intervention is, both with variable and with constant infectivity. For example, with varying infectivity, assuming R0 = 1.8 in absence of treatment, we obtain R0 = 0.92 if we treat all the selected individuals after the first phase (Q = 1) and R0 = 1.6 if we treat all the selected individuals 2 days later (Q = 0). From Q = 0 to Q = 1, R0 decreases linearly. In the case of constant infectivity, the results are analogous, but R0 varies only from 0.97 to 1.51.
FIG. 4. Relative effectiveness of antiviral treatment, computed as the ratio between the reproductive rate with intervention (model in Fig. 3) and the reproductive rate of the plain SEIR model. In the model, individuals may be treated at the end of the first or second infectious phase. Q represents the proportion of treated individuals who receive treatment after the first phase. The graph shows results for varying and constant (i.e. β1 = β2 = β3 = βT = β) infectivity.
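The Q-sweep described here is straightforward to reproduce numerically. The sketch below evaluates the reproductive ratio of the variable-infectivity model for several values of Q, for both varying and constant infectivity, normalizing so that the no-treatment R0 is 1.8 in each case; it recovers the endpoints quoted above and a crossing point near Q ≈ 0.53.

```python
# Sweep of Q (proportion treated after the first infectious stage) for the
# variable-infectivity model: P = 0.7, r = 0.2, stage durations 1, 2, 1 days,
# treated stages lasting 1.5 and 0.5 days; varying case has beta1 = beta3 = 0.6*beta2.
P, r = 0.7, 0.2
d1, d2, d3, dT2, dT3 = 1.0, 2.0, 1.0, 1.5, 0.5

def r0_treated(Q, b1, b2, b3):
    p1 = Q * P
    p2 = (P - p1) / (1 - p1)                    # so that p1 + (1 - p1)*p2 = P
    return (b1 * d1
            + p1 * r * (b2 * dT2 + b3 * dT3)
            + (1 - p1) * (b2 * d2 + p2 * r * b3 * dT3 + (1 - p2) * b3 * d3))

def r0_scaled(Q, b1, b2, b3, R0_base=1.8):
    baseline = b1 * d1 + b2 * d2 + b3 * d3      # no-treatment reproductive ratio
    return R0_base * r0_treated(Q, b1, b2, b3) / baseline

for Q in (0.0, 0.53, 1.0):
    print(f"Q = {Q:.2f}: varying {r0_scaled(Q, 0.6, 1.0, 0.6):.2f}, "
          f"constant {r0_scaled(Q, 1.0, 1.0, 1.0):.2f}")
# Q = 0 gives ~1.59 vs ~1.52; Q = 1 gives ~0.92 vs ~0.98; curves cross near Q ~ 0.53
```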
Conclusions
We have considered different models for an epidemic with antiviral treatment. All models have an SEIR structure and derive from the same general model. We have shown that details in the model assumptions can strongly influence the evaluation of antiviral treatment as a containment measure for pandemic influenza. It must be remarked that although the compartmental structure of some models considered may appear unusual, they are all quite natural and suitable to simulate the intervention; some have indeed been used in previous studies.
As discussed in Section 2, there is an implicit difference between Models 1 and 4, on one side, and Models 2, 3, 5 and 6, on the other: in Model 1 (and 4), the individuals who do not get treatment are those who recover faster than they can be targeted for treatment; this has the consequence, already discussed, that the average infection period of untreated individuals is shorter than the average infectious period in the absence of intervention. In Model 2 (and 3, 5 and 6), it is assumed that infectives can in principle be distinguished between those who will be treated and those who will not; the average infection period of untreated individuals (as well as their infectivity) is exactly the same as the average infectious period in absence of interventions.
From the results shown in Table 2, it can be seen that there is indeed a corresponding difference in the reduction of R0 due to antiviral treatment between the two groups of models. This can also be seen in the formula for R0. In Models 1 and 4, the probability of receiving treatment is given by P = α/(α + γY) and the mathematical expression of R0 can be rewritten as

R0 = βS0 [ (1 − P)/γY + P r/λ ]    (plus βS0/γ1 in Model 4).

In Models 2 and 5, P = p and

R0 = βS0 [ (1 − P)/γ3 + P (1/α + r/λ) ]    (plus βS0/γ1 in Model 5).

Considering that γY = γ3 in Models 1 and 4, we can see that the difference between them is in the term PβS0/α, the force of infection of treated individuals during the period before treatment starts. In other words, the value of R0 in Models 1 and 4 looks as if we were ignoring the fact that treated individuals are infectious before receiving treatment.
These results show that the question of who is treated is decisive: it is very different if untreated individuals are those who, for one reason or another, are outside the reach of the health system, if they are asymptomatic with low infectivity, or if they are those who recover faster. These assumptions are often implicitly included in the structure of the model, which should therefore be chosen carefully.
A second factor strongly affecting the effectiveness of intervention is the timing of treatment. This can be seen by comparing Models 1, 2 and 3, on one side, with Models 4, 5 and 6, that are analogous, except that a first infectious period is added, where no treatment is possible. Clearly, the inclusion of a time delay in drug administration reduces significantly its impact on the dynamics of the epidemic.
Time-varying infectivity makes timing of intervention even more crucial. In fact, if infectivity is lower in the first and last stage of the infectious period and higher in the middle stage, a late intervention is even less effective than in the case of constant infectivity: at the end of the middle stage, an individual will have already infected almost all the individuals he would eventually infect. On the other hand, missing treatment in the first infectious stage is less crucial, since few individuals would be infected anyway during that stage. This can be seen from Fig. 4 that shows the effectiveness of intervention as a function of the proportion Q of individuals treated after the first stage: there exists a threshold value Q t (in the numerical example Q t ≈ 0.53) such that if Q < Q t , the intervention is more effective if infectivity is constant than if it is variable (most individuals treated after the second stage), while it is less effective if Q > Q t (most individuals treated after the first stage).
Our study shows that when studying the effectiveness of antiviral treatment, much attention should be paid to the assumptions (often implicit) about the timing of intervention and the individuals who get treatment: even if the same intervention is apparently being modelled, different models can lead to different conclusions. The detailed structure of the model is very relevant and should be carefully evaluated and specified when assessing the importance of the results.
Although the models considered are all SEIR-type models for a homogeneous population, the results immediately translate to more complex SEIR models used to simulate an influenza pandemic. In fact, R 0 for an epidemic in a metapopulation is strongly influenced by the value of R 0 in each population in isolation (Diekmann & Heesterbeek, 2000) and may even be the same under some special choices of the contact matrix (Colizza et al., 2007). Individual-based models (Ferguson et al., 2005) are more flexible and can incorporate detailed assumptions about the timing of infectiousness and antiviral use, as well as allowing for antiviral prophylaxis of case contacts. Still, the results of this paper stress the need of making consistent and realistic choices when building any kind of model, and especially of making them transparent. Different results on the evaluation of containment strategies may depend on hidden assumptions in the model structure. Hence, the structure of models has to be carefully defined in order to obtain results that can be useful for policymakers in pandemic planning. | 2018-04-03T01:45:54.525Z | 2008-09-17T00:00:00.000 | {
"year": 2008,
"sha1": "b778355cd1571ce47c0108f5fed9a747ddc10d90",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc7314048?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4816ff7b6b07da5f741e3cbd3f03021c05429962",
"s2fieldsofstudy": [
"Mathematics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220260536 | pes2o/s2orc | v3-fos-license | ADMA mediates gastric cancer cell migration and invasion via Wnt/β-catenin signaling pathway
Objective To explore the role of ADMA in gastric cancer. Methods The specimens of 115 gastric cancer patients were analyzed by ELISA and survival analysis. Functional assays were used to assess the effects of ADMA on gastric cancer cells. Experiments were conducted to detect the signaling pathway induced by ADMA in GC. Results Gastric cancer patients with high ADMA levels had poor prognosis and low survival rate. Furthermore, high level of ADMA did not affect the proliferation while promoted the migration and invasion of gastric cancer cell. Moreover, ADMA enhanced the epithelial–mesenchymal transition (EMT). Importantly, ADMA positively regulated β-catenin expression in GC and promoted GC migration and invasion via Wnt/β-catenin pathway. Conclusions ADMA regulates gastric cancer cell migration and invasion via Wnt/β-catenin signaling pathway and which may be applied to clinical practice as a diagnostic and prognostic biomarker. Electronic supplementary material The online version of this article (10.1007/s12094-020-02422-7) contains supplementary material, which is available to authorized users.
Introduction
Gastric cancer (GC) is one of the most common malignant tumors of the digestive tract. According to the National Central Cancer Registry of China, there were about 679,000 new cases of gastric cancer in China in 2015 and about 498,000 deaths, the mortality rate of GC being the second highest among all malignant tumors. The early detection rate of gastric cancer in China is low [1]. Some patients already have lymph node metastasis at first diagnosis, or even distant organ metastasis, which precludes surgical cure and affects the prognosis and survival rate of patients [2]. Despite the advancement of modern medical technology, the continuous development of endoscopic techniques and the remarkable progress of immunotherapy, current treatment methods have limited efficacy in gastric cancer, and the mortality rate of gastric cancer in China remains high [3][4][5][6]. Gastric cancer cases in China account for about 42.6% of those worldwide, and gastric cancer deaths for about 45.0% [7]. Therefore, identification of novel biomolecules and signaling pathways may provide potential therapeutic strategies for GC.
A healthy adult produces 300 µmol (~ 60 mg) of ADMA per day [8]. ADMA is mainly synthesized by protein arginine methyltransferases (PRMTs). PRMTs use S-adenosylmethionine as a methyl donor to transfer methyl groups to the nitrogen atom of the guanidinium group of arginine, catalyzing the methylation of arginine residues [8][9][10]. About 80% of ADMA is degraded into citrulline and dimethylamine by DDAH1, and about 50 µmol of ADMA is excreted unchanged by the kidney [11,12]. It has been reported that serum ADMA levels are elevated in patients with a variety of tumors, including lung cancer, hematopoietic tumors, breast cancer, gastric cancer, esophageal cancer and colon cancer, but its role in tumor development is still unclear [13,14]. Studies in colon cancer indicated that ADMA treatment attenuated cell death in LoVo cells induced by serum starvation (SS) and doxorubicin hydrochloride [15]. Studies in pheochromocytoma cells have suggested that ADMA can reduce glutamate-induced cytotoxicity, apoptosis and Caspase-3 activation and reverse the glutamate-induced down-regulation of bcl-2 expression [16]. It has also been reported that ADMA levels may be increased in non-oncologic processes such as radial artery spasm and coronary artery disease [17,18]. Although ADMA plays a significant role in promoting tumor progression in colon cancer and esophageal cancer, its role in gastric cancer has not been well investigated. Additionally, we have found that a DDAH1 inhibitor (PD404182) promoted epithelial-mesenchymal transition (EMT) as well as migration and invasion via the Wnt signaling pathway in GC cells [19]. ADMA is mainly degraded by DDAH1, and DDAH1 inhibitor treatment leads to accumulation of ADMA in vitro [20]. Therefore, to determine whether ADMA promotes GC migration and invasion via the Wnt signaling pathway as well, we performed a series of experiments with different concentrations of ADMA to test this hypothesis.
A body of evidence indicates that the Wnt/β-catenin pathway is one of the key factors inducing metastasis [21,22]. Wnt proteins constitute a large family of secreted, lipid-modified glycoproteins. The Wnt family is implicated in a variety of cellular processes, such as proliferation, apoptosis, differentiation and migration [23]. A large body of evidence suggests that the canonical Wnt/β-catenin pathway plays a critical role in inducing cancer stem cells to undergo EMT. EMT is a highly conserved and fundamental process that is critical for embryogenesis and other pathophysiological processes, particularly tumorigenesis and progression. Aberrant EMT activation in the stomach can endow gastric epithelial cells with increased mesenchymal characteristics and fewer epithelial features, and promote cancer cell stemness, initiation, invasion, metastasis and chemo-resistance through repression of cellular adhesion molecules, which allows tumor cells to disseminate and spread throughout the body. EMT is modulated by diverse micro-environmental, membrane and intracellular cues, and can be triggered by various overexpressed transcription factors, which act downstream of several vital cross-talking signaling pathways including Wnt/β-catenin [21,24,25]. Therefore, our work aimed to elucidate the relationship between ADMA and EMT in GC dissemination. The results may further provide insights into the potential role and mechanism of ADMA in promoting GC invasion and metastasis.
Cell lines and clinical samples
Six human GC cell lines (NCI-N87, MKN74, AGS, NUGC3, MGC803, HGC-27) were obtained from the Type Culture Collection of the Chinese Academy of Sciences (Shanghai, China). All cell lines were maintained in RPMI-1640 supplemented with 10% FBS, except AGS, which was maintained in DMEM/Ham's F12 medium, and incubated in an atmosphere containing 5% CO2 at 37 °C. Gastric adenocarcinoma patients (n = 115) were randomly enrolled between January 2013 and December 2014 at the Department of General Surgery of the Second Affiliated Hospital of Fujian Medical University. All patients were diagnosed pathologically according to the American Joint Committee on Cancer (AJCC) criteria [26]. No patient had received chemotherapy or radiotherapy before surgery. Tumor samples and the corresponding noncancerous mucosal tissue were collected from all patients immediately after resection and frozen in liquid nitrogen. Samples were stored in the Central Laboratory of the Second Affiliated Hospital of Fujian Medical University.
Evaluation of serum ADMA levels
The serum ADMA levels were measured using an enzyme-linked immunosorbent assay (ELISA) kit (Immunodiagnostik, Bensheim, Germany), following the manufacturer's instructions. In this study, we examined serum from 115 gastric adenocarcinoma patients, with 110 noncancerous cases used as controls. Differences in serum ADMA levels between cancer and noncancerous cases were tested with a two-tailed t-test. Relations between variables were investigated by Pearson's correlation test. ADMA levels in 2 × 10^8 cells of each of the six human GC cell lines were also measured.
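The two statistical comparisons named here are simple to reproduce; the sketch below applies them to hypothetical ADMA measurements (the numbers are illustrative and are not the study data).

```python
# Hypothetical serum ADMA values (µM); illustrates the two tests named above.
from scipy import stats

adma_gc = [0.61, 0.55, 0.72, 0.48, 0.66, 0.59, 0.70, 0.52]       # GC patients
adma_control = [0.41, 0.45, 0.39, 0.50, 0.44, 0.47, 0.42, 0.46]  # controls

t_stat, p_val = stats.ttest_ind(adma_gc, adma_control)           # two-tailed by default
print(f"t = {t_stat:.2f}, P = {p_val:.4f}")

# Pearson correlation, e.g. between serum ADMA and another continuous variable
tumor_depth = [3, 2, 4, 1, 3, 2, 4, 2]                           # illustrative scores
r, p = stats.pearsonr(adma_gc, tumor_depth)
print(f"Pearson r = {r:.2f}, P = {p:.4f}")
```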
Cell proliferation assay
Cell proliferation was assessed using the Cell Counting Kit-8 (CCK-8; Dojindo, Kumamoto, Japan). The GC cells were seeded in 96-well plates at a density of 1000 cells per well and incubated at 37 °C, 5% CO2, for 1, 2, 3, 4 or 5 days. 10 µl of CCK-8 solution was added to each well and incubated at 37 °C for 1 h. The absorbance at 450 nm was measured using a microplate reader.
Cell migration and invasion assay
For the migration assay, 6 × 10^4 cells in serum-free medium were placed into the upper chamber of an insert (8-µm pore size; BD Bioscience). For the invasion assay, the transwell insert was coated with Matrigel (BD Bioscience) and 6 × 10^4 cells were plated onto the top of the coated filters. Medium containing 10% fetal bovine serum was placed in the lower chamber. After 24 h of incubation, the cells that did not migrate or invade through the transwell insert were removed with cotton swabs, and the insert was then stained with 0.1% crystal violet, imaged and counted using a QImaging MicroPublisher 5.0 RTV microscope camera (Olympus).
Wound-healing assay
8 × 10^5 GC cells were plated into 6-well plates and grown to 100% confluence. Wounds were scratched into the cell monolayer with a 10-µl pipette tip. The cells were then cultured at 37 °C in 5% CO2 and images were captured at 0 and 36 h.
Western blot assay
Western blot was performed to analyze protein expression. The following antibodies were used for this analysis: E-cadherin antibody, Vimentin antibody, β-catenin antibody, Snail antibody, Slug antibody and Twist antibody (Cell Signaling Technology, Beverly, MA). Protein expression was quantified by densitometric analysis, and the expression levels were normalized against that of β-actin (Sigma Aldrich).
RNA extraction and real-time quantitative PCR
Total RNA was isolated from cell lines or frozen tissues with the Qiagen RNeasy kit as described by the manufacturer, and 1 µg RNA was reverse transcribed using the miScript Reverse Transcription Kit (Qiagen, Hilden, Germany) for first-strand complementary DNA synthesis. Quantitative PCR was performed using the SYBR Premix Ex Taq kit (Takara, Shiga, Japan). The specific primers used were designed to detect the mRNA expression of E-cadherin, Vimentin, Twist, Slug and Snail. β-actin was included as an internal control. The comparative threshold cycle (Ct) method was used to determine the relative level of gene expression. Primers used for qRT-PCR analysis of EMT-related markers and β-actin were listed in the following:
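As a brief illustration of the comparative Ct method mentioned above, the sketch below computes relative expression as 2^(−ΔΔCt); the Ct values are hypothetical and serve only to show the arithmetic.

```python
# Hypothetical Ct values; the arithmetic of the comparative Ct (2^-ddCt) method.

def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Return fold change of the target gene relative to the control condition."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to β-actin (sample)
    d_ct_control = ct_target_control - ct_ref_control   # normalize to β-actin (control)
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: Vimentin in ADMA-treated vs untreated cells (illustrative numbers only)
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=16.0,
                           ct_target_control=25.6, ct_ref_control=16.1)
print(f"Relative Vimentin expression: {fold:.2f}-fold")
```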
Immunohistochemistry and scoring methods
We constructed tissue microarrays including GC tissues and corresponding non-cancerous tissues from the 115 GC patients. Each tissue microarray chip included two cores of 1 mm diameter per sample. Immunohistochemical staining was performed with a β-catenin antibody (1:100, CST). Nuclear/cytoplasmic β-catenin staining was considered positive if > 30% of cells showed yellow or brown staining.
Dual-luciferase reporter assay
The TCF-responsive luciferase construct Top-Flash and its mutant Fop-Flash (Addgene, USA) were used to study β-catenin transcriptional activity. The GC cells were seeded into 24-well plates. After 12 h of incubation, target cells were co-transfected with Top-Flash and the pRL-TK reporter vector, or with Fop-Flash and the pRL-TK reporter vector. The relative luciferase activity was determined using a dual-luciferase reporter assay kit (Promega, USA). The pRL-TK reporter vector was used as an internal control.
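The readout of such an assay is usually expressed as firefly luciferase activity normalized to the Renilla (pRL-TK) internal control, often followed by a TOP/FOP ratio; a minimal sketch of this normalization, with purely illustrative luminescence values, is shown below.

```python
# Illustrative luminescence readings; shows the usual TOP/FOP normalization only.

def normalized_activity(firefly, renilla):
    """Firefly signal normalized to the Renilla (pRL-TK) internal control."""
    return firefly / renilla

top = normalized_activity(firefly=52_000, renilla=8_100)   # Top-Flash well
fop = normalized_activity(firefly=6_300, renilla=7_900)    # Fop-Flash well

# TOP/FOP ratio: a common measure of TCF/β-catenin-specific transcription
print(f"TOP/FOP ratio = {top / fop:.2f}")
```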
Statistical analysis
Statistical analysis was performed using SPSS 22.0 for Windows. Student's t-test was used to analyze results expressed as mean ± SD from three independent assays. The associations between the level of ADMA and the clinicopathological parameters of the GC patients were analyzed using the χ2 test or Fisher's exact test. Survival curves were plotted using Kaplan-Meier analysis. Differences were considered significant when P < 0.05.
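As a sketch of the survival comparison between the high- and low-ADMA groups, the snippet below uses the lifelines package; the follow-up times and event indicators are invented for illustration and are not the actual patient data.

```python
# Hypothetical follow-up data for two ADMA groups; illustrates the Kaplan-Meier
# curves and log-rank test used in the paper, not the actual patient data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

t_high = [6, 11, 14, 20, 27, 33, 40, 48]   # months of follow-up, high-ADMA group
e_high = [1, 1, 1, 1, 1, 0, 1, 0]          # 1 = death observed, 0 = censored
t_low = [18, 25, 31, 39, 45, 52, 58, 60]   # low-ADMA group
e_low = [1, 0, 1, 0, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="high ADMA")
ax = kmf.plot_survival_function()
kmf.fit(t_low, event_observed=e_low, label="low ADMA")
kmf.plot_survival_function(ax=ax)

result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank P = {result.p_value:.3f}")
```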
Results
The serum ADMA levels were higher in gastric carcinoma and inversely correlated with prognosis
We first measured serum ADMA level in 115 GC patients and 110 normal subjects using ELISA. As shown in Fig. 1a, the serum ADMA levels in patients with GC were significantly higher than in normal subjects (P < 0.05). According to our results, the average serum ADMA level in the normal subjects was 0.447 μM (μM meaning μmol/L), so we defined GC patients with ADMA above 0.447 μM as the high-ADMA group and the remainder as the low-ADMA group.
Correlation analysis of serum ADMA levels with patients' clinicopathological characteristics (Table 1) demonstrated that the ADMA level was positively associated with the depth of tumor invasion (P = 0.004) and clinical stage (P < 0.001), and negatively with differentiation status (P < 0.001). Serum ADMA level showed no significant association with gender, age, tumor size or lymph node metastasis. Kaplan-Meier survival analysis showed that GC patients with high ADMA levels had significantly shorter survival than the low-ADMA group (Fig. 1b). In summary, these results indicate that ADMA may function as a tumor activator and may promote the development and progression of GC.
ADMA did not affect the proliferation, but promoted migration and invasion potential of GC cell
Based on the finding that ADMA level was associated with prognosis of GC, we examined the functional role of ADMA in GC malignant behaviors in vitro. First, we measured ADMA levels in six cell lines (Fig. 1c). According to the level of ADMA, we chose AGS and MGC803 for the following experiments. As shown in Fig. 2a and b, AGS and MGC803 cells were treated with different concentrations of ADMA (0, 1, 2, 5, 10, 20, 40, 80 μM), and we found that ADMA did not affect the proliferation rate of AGS or MGC803 cells as evaluated by the CCK-8 assay. As shown in Fig. 2c, d and Supplementary Fig. 1A, in the migration assay AGS and MGC803 cells were treated with different concentrations of ADMA (0, 1, 2, 5, 10, 20 μM), and ADMA enhanced the cell migratory ability. At the same time, a wound-healing assay was used to confirm the changes in cell migration; ADMA remarkably promoted wound closure compared with the control cells (Fig. 2e, f and Supplementary Fig. 1B). Next, the invasive potential of the GC cells was assessed by a modified Boyden chamber invasion assay. Likewise, ADMA increased the number of cells that invaded through Matrigel compared with the control cells (Fig. 2g, h and Supplementary Fig. 1C). Taken together, these results clearly suggest an important role of ADMA in GC metastasis. It is worth mentioning that ADMA (10 μM) was the optimal concentration.
In the above-mentioned experiments, as the concentration of ADMA increased from 0 to 10 μM, the effects became more pronounced, whereas at ADMA concentrations above 10 μM the results were similar. Therefore, we chose ADMA (10 μM) as the experimental condition in the following assays.
ADMA enhanced epithelial-mesenchymal transition (EMT)
Many studies have shown that EMT is a key event in the initial invasion step of cancer metastasis, which allows polarized epithelial cells to become mesenchymal cells. After a series of biochemical changes that induce a morphological transformation, epithelial cells show reduced intercellular adhesion and enhanced migratory and invasive capabilities. As shown in Fig. 3a, bright-field images of the morphology of GC cells demonstrated that ADMA (10 μM) endowed the GC cells with more fibroblast-like morphological features. Western blot analysis and qRT-PCR were then used to quantify the effect of ADMA on the protein and mRNA expression of EMT-related markers in GC cell lines. ADMA (10 μM) down-regulated the expression of the epithelial marker E-cadherin, while it up-regulated the expression of the mesenchymal marker Vimentin and the EMT regulators Snail, Slug and Twist at the protein and mRNA levels (Fig. 3b-d).
ADMA positively regulated β-catenin expression and promoted migration and invasion via Wnt/β-catenin pathway in GC
The Wnt/β-catenin pathway is well known to play an important role in inducing EMT and promoting migration and invasion of GC. We first investigated whether ADMA affected β-catenin expression in GC: as shown in Fig. 4a, the ADMA level was positively correlated with the expression of β-catenin in AGS and MGC803 cells. Then, to investigate whether ADMA affected β-catenin transcriptional activity in GC, Top-Flash reporter assays were used to assess the effect of ADMA on β-catenin activity. As shown in Fig. 4b, ADMA significantly increased the transcriptional activity of β-catenin in AGS and MGC803 cells. To further confirm the positive correlation between ADMA and β-catenin expression, an IHC assay was performed; as shown in Fig. 4c, the ADMA level was closely correlated with the expression of β-catenin. Given this evidence that ADMA positively regulated β-catenin expression in AGS and MGC803, we speculated that one route by which a high ADMA level promotes GC migration and invasion may be through activation of β-catenin. To explore this, we treated the GC cells with a WNT inhibitor and measured the effect of pharmacological inhibition on AGS migration and invasion. As shown in Fig. 4d, treatment of GC cells with the WNT inhibitor XAV939 abrogated the ADMA-enhanced cell migration and invasion. Given these data, we conclude that the effect of ADMA on GC migration and invasion is mediated primarily by enhanced β-catenin.
Fig. 1 The serum ADMA levels were higher in gastric carcinoma and positively correlated with poor prognosis. a The serum ADMA levels in 115 GC patients before surgery and 110 normal subjects. An ELISA test was carried out to determine the level of serum ADMA. The graph shows the concentrations of serum ADMA in GC patients and normal subjects. An unpaired t-test was used to compute the P value (*P < 0.05). b Kaplan-Meier survival analysis of 115 GC patients with high or low serum ADMA levels (P < 0.05, log-rank test). c The serum ADMA levels in six GC cell lines.
Discussion
The central observation of this report is that the level of serum ADMA is higher in GC samples and that a high level is closely related to depth of tumor invasion, clinical stage, poor differentiation and poor clinical outcome. In our study, the serum ADMA level was elevated in patients with GC, and a high concentration of ADMA enhanced GC cell migration and invasion in vitro via the Wnt/β-catenin pathway. Given this evidence, we speculate that ADMA functions as a tumor activator in GC and may be useful in clinical practice. We found that ADMA had no effect on the rate of proliferation in vitro. Li et al. also showed that ADMA treatment did not impact the proliferation rate of LoVo cells, which is similar to our findings [15]. According to the clinical data, the serum ADMA level is positively correlated with depth of tumor invasion and clinical stage. Given the evidence mentioned above, we speculated that ADMA may be associated with the migration and invasion of GC cells. We therefore analyzed cell migration and invasion using the Boyden two-chamber assay and the wound-healing/scratch assay, and found that ADMA increased the motility and invasion of GC cells. To our knowledge, this is the first report that ADMA increases GC cell motility and invasion. In the present study, we also found that ADMA induced morphological changes in GC cells: ADMA treatment caused GC cells to lose their epithelial cobblestone-like morphology and acquire a more elongated, fibroblast-like shape. This result fits the theory of epithelial-mesenchymal transition (EMT). EMT in GC, a process by which epithelial cells lose their orientation and their cell-cell and basement membrane contacts and acquire mesenchymal features, contributes to invasion and cancer progression [21,27,28]. Therefore, it is tempting to speculate that ADMA enhances migration and invasion in GC by mediating EMT. Furthermore, a series of experiments showed that the expression of E-cadherin was significantly reduced in GC cells treated with ADMA, whereas the expression of the mesenchymal marker Vimentin and of the transcription factors Snail, Slug and Twist was up-regulated. According to previous studies, EMT is activated by a number of transcription factors, including Snail, Slug and Twist, and also by the repression of E-cadherin expression [29,30]. One of the most common features of EMT is the loss of E-cadherin expression. Snail and Slug have been reported to be associated with tumor cell migration and invasion. As a key regulator of EMT, Snail was first discovered in Drosophila as a zinc-finger transcription factor and represses E-cadherin transcription by binding to the E-box site in the E-cadherin promoter [31,32].
Fig. 4 b ADMA (10 μM) in GC cells positively correlated with β-catenin activity in the TOP-Flash reporter assay. Expression was normalized to Renilla luciferase activity. The experiments were performed three times independently (*P < 0.05). c IHC assay showing the relationship between serum ADMA level and β-catenin in the 115 GC samples. Scale bar, 50 μm. In the GC samples with high ADMA level, the percentage of β-catenin-positive expression was 64%, significantly higher than in those with low ADMA level (22.9%) (*P < 0.05). d The stimulatory effect of ADMA on GC cell migration and invasion was blocked by the WNT inhibitor XAV939, as shown in the representative images (*P < 0.05; NS, no statistical significance).
Slug belongs to the Snail family of zinc-finger transcription factors and plays a major role in EMT during embryonic development and in the metastasis of various cancers by inhibiting E-cadherin [33]. Thus, our findings indicate a possible role of Snail/Slug-associated EMT, linked to ADMA, in the pathogenesis and development of GC. Twist is a basic helix-loop-helix domain-containing transcription factor and a highly conserved protein whose functions include inducing EMT, enhancing the migration and invasion of tumor cells, inhibiting cell apoptosis, promoting tumor angiogenesis, and causing chromosome instability. Several studies have reported the mechanisms underlying gene transcriptional activation by Twist [34-36]. Likewise, the results of Cao's study suggested that Snail and Twist work synergistically to induce EMT [34]. In conclusion, ADMA induces GC cell migration and invasion in vitro, most likely by activating an EMT process.
We next aimed to identify the signaling mechanism by which ADMA mediates migration and invasion in GC cells. It has been reported that loss of DDAH1 in GC promotes EMT progression as well as migration and invasion via the Wnt signaling pathway, and that the DDAH1 inhibitor PD404182 yields the same result [19]. ADMA is mainly hydrolyzed by DDAH1 [37], and PD404182 treatment results in the accumulation of ADMA in vitro. We therefore speculated that ADMA might also promote migration and invasion of GC cells via the Wnt signaling pathway, and performed a series of assays to test this hypothesis. In our studies, western blot analysis and a dual-luciferase reporter assay showed that ADMA significantly increased both the level and the transcriptional activity of β-catenin in GC cells. To further confirm the positive regulation of β-catenin expression by ADMA, immunohistochemical staining was performed on the GC tissues; the result indicates a close relationship between ADMA and the expression of β-catenin. Given the evidence that ADMA positively regulated β-catenin expression in GC, we speculated that one route by which ADMA promotes migration and invasion of GC may be through activation of β-catenin. Consistently, we found that treatment with a specific Wnt inhibitor (XAV939) abrogated the stimulatory effect of ADMA on migration and invasion in GC cells. Together, these results support the concept that ADMA mediates migration and invasion of GC cells via the Wnt/β-catenin signaling pathway.
In summary, this is the first study to explore the effect of ADMA in GC, and it demonstrates for the first time that ADMA is likely to act as a tumor activator in GC. A high serum ADMA level in GC patients is strongly correlated with tumor progression and poor clinical prognosis. With the development of medical technology, serum ADMA may be applied in clinical practice as a diagnostic and prognostic biomarker. Although it remains difficult to regulate the level of ADMA in tumors, our findings offer insights into the therapy of GC. | 2020-06-30T15:30:26.695Z | 2020-06-30T00:00:00.000 | {
"year": 2020,
"sha1": "c3ba6e675828c1b251d489fe7be15072d8030667",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12094-020-02422-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3ba6e675828c1b251d489fe7be15072d8030667",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247782809 | pes2o/s2orc | v3-fos-license | Editorial: Therapeutic Effects of Herbal Medicines: How Can We Best Investigate Bioactive Metabolites?
Department of Pharmaceutical Analysis, Xuzhou Medical University, Xuzhou, China; Tianjin State Key Laboratory of Modern Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin, China; Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, China; College of Medicine, Howard University, Washington, DC, United States; Peking Union Medical College Hospital (CAMS), Beijing, China; University of Studies G. d'Annunzio Chieti and Pescara, Chieti, Italy
Since ancient times, natural products have always been used as remedies for more or less serious pathologies. The great advantage of traditional medicine lies above all in the wealth of experience obtained "in the field" by experimenting with different natural products and different preparations to deal with specific diseases.
Only recently, starting from the knowledge accumulated in traditional medicine, has a more "scientific" approach been attempted, one that seeks to evaluate which molecules present in a natural preparation actually exert the therapeutic effect.
The awareness of being able to "take a cue" from the natural world in the process of developing new drugs has also grown out of this approach, especially since these compounds are generally well tolerated and have reduced (or no) side effects.
In this context, therefore, traditional medicine plays a predominant role in the discovery of new drugs based on natural products. This creates a continuous need to study new herbal matrices for pharmaceutical and nutraceutical purposes, coupled with continuous progress in the techniques applied to the characterization of natural matrices and to the evaluation of the observed biological activities, in order to better identify the bioactive compounds.
Herbal medicines contain hundreds or even thousands of primary and secondary metabolites, and it is a vital task for pharmacologists to determine which components contribute to the therapeutic effects of herbal medicines and which do not. The complexity and low abundance of these chemical constituents in herbal medicines pose considerable challenges. To date, the active components of most herbal medicines remain obscure, which hinders further pharmacological study and development. In this scenario, the ability to evaluate and characterize herbal medicines is of great importance in order to obtain a product that is safe for human health, standardized, and whose effects have been studied and evaluated from all points of view.
In general, the absorption of these metabolites needs to be understood in order to evaluate their potential therapeutic effects. Up to now, pharmacologists have tried many methods and techniques to explore the pharmacodynamics of herbal medicines. This includes the in vivo characterization of metabolites by pharmaco-metabonomics techniques or ex vivo models focusing on the delivery, for example, in the gastrointestinal tract.
The main goal of this Research Topic is to attract innovative original contributions in this interdisciplinary area, in order to understand the relative impact of different compounds and compound classes on reported pharmacological effects, and also to highlight the state of the art in profiling metabolites' pharmacokinetics in vivo, in identifying new, unreported biological activities or biological targets, and in discovering new bioactive compounds as leads for the pharmaceutical industry.
This result can be achieved through a multidisciplinary approach that involves not only pharmacology and botany, but also disciplines such as analytical chemistry (which guarantees the quality and reproducibility of data), pharmaceutical chemistry, physiology, and biochemistry.
Through an integration of these disciplines and knowledge, it is possible to describe and characterize most of the observed effects.
The papers accepted after peer review in this Research Topic also highlight that, in recent years, the search for products of natural origin, usable as such or as lead compounds for the pharmaceutical development of new drugs, has increasingly become a central element of scientific research.
What we have seen so far has its foundations in the traditional use of many products of plant origin, as highlighted by Zeng
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The reviewer AM declared a past co-authorship with the author ML to the handling editor.
Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Copyright © 2022 Ji, Yang, Chen, Lin, Song and Locatelli. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | 2022-03-30T14:02:12.055Z | 2022-03-29T00:00:00.000 | {
"year": 2022,
"sha1": "fbf0cf13927686f04bdbd20641661db249e2995f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "fbf0cf13927686f04bdbd20641661db249e2995f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139378851 | pes2o/s2orc | v3-fos-license | Compressive Strength and Dimensional Stability of Palm Oil Empty Fruit Bunch Fibre Reinforced Foamed Concrete
Rapid drying shrinkage is an important factor in the cracking of concrete. This research was aimed at studying the effects of Palm Oil Empty Fruit Bunch (POEFB) fibre on the drying shrinkage behaviour and compressive strength of foamed concrete (FC) under two different curing conditions: air curing and tropical natural weather curing. Two volume fractions of POEFB fibre were used, 0.25% and 0.50% of the dry mix weight, with fibres 1-2 cm in length. The dimensional stability of the control specimen and of the POEFB fibre reinforced FCs was obtained by cumulating the measured linear shrinkage or expansion under the different curing conditions, and the results for the different specimens were compared. The results showed that specimens reinforced with POEFB fibre and cured under the tropical natural weather condition attained smaller variations in dimensional stability and a higher 90-day strength performance index than the reference mix without POEFB fibre. This improvement was attributed to the ability of POEFB fibre to bridge the cement matrix, and to the irregular wetting process under the tropical natural weather curing condition, which enabled greater production of Calcium Silicate Hydrate gels that gradually blocked the penetration of water into the specimens and increased the compressive strength. Improvements of 11.43% and 4.46% in the 90-day strength performance index were observed for the natural weather cured 0.5% POEFB fibre reinforced specimen, relative to the reference mix and the 0.25% POEFB fibre reinforced specimens, respectively.
Introduction
It is commonly known that foamed concrete (FC) offers a lighter dead load, higher thermal insulation and better acoustic properties than normal weight concrete. However, while maintaining its lightness, FC possesses weaknesses that limit its extensive use in buildings, such as limited compressive, flexural and tensile strengths and considerable dimensional instability. Several researchers [1][2][3][4][5][6][7][8] have provided a viable solution by adding fibres as a reinforcing agent to enhance the mechanical properties and dimensional stability of lightweight concrete. Among the types of fibres, metallic and synthetic fibres are most commonly used in such research. According to Brandt [9], the opening and propagation of micro-cracks in concrete can be effectively controlled by introducing short dispersed fibres into the cement matrix, whereas long fibres (50-80 mm) are better able to control larger cracks and therefore contribute to a higher final strength. However, the optimum quantity of fibres has to be investigated before adding them to the cement matrix: a high fibre content easily leads to conglomeration of the fibres during composite mixing, while too little fibre is unable to provide a sufficient reinforcing mechanism in the cement matrix.
Instead of using metallic and synthetic fibres in lightweight concrete, some researchers have used natural fibres as alternatives [10][11][12][13]. These natural fibres include date palm, equisetum, coconut and jute fibres. Their use as a construction material improves the properties of the composites without adding much to the overall composite cost. Besides, the flexibility of POEFB makes it easier to mix with the cement matrix than harder steel fibre [11]. However, the reinforcing effects of natural fibres in lightweight concrete vary owing to the differing characteristics of the fibres.
Currently, Malaysia is one of the largest palm oil producers in the world, contributing about 57.6% of the total world supply of palm oil. According to Roslan et al. [14], the Malaysian palm oil industry produced approximately 19 million tonnes (wet weight basis) of Oil Palm Empty Fruit Bunch (OPEFB) in 2010. Approximately 65% of OPEFB is incinerated and the bunch ash is recycled as fertilizer [15]. It would therefore be more sustainable and environmentally friendly if the fibre extracted from OPEFB could be further explored and used as a construction material. Although some studies on POEFB fibre reinforced concrete have been reported, research on POEFB fibre reinforced FC is extremely limited. For that reason, this research was initiated, focusing on the compressive strength and dimensional stability of FC incorporating different contents of POEFB fibre.
Materials
Ordinary Portland cement (OPC), quarry sand, water, a synthetic foaming agent and POEFB fibre were used to prepare the FCs. The OPC used in this experiment is produced by YTL Sdn. Bhd. and complies with Type I Portland cement as per ASTM C150 (2007) [16]. Quarry sand passing a 600 μm sieve was used as the fine aggregate in this study. The sand was oven-dried at 105 °C for 24 hours before sieving in order to avoid inconsistency in its moisture content. Normal tap water and a locally available synthetic foaming agent were used for production of the FCs. POEFB fibre was torn and cut to 1-2 cm lengths and oven-dried at 105 °C for an hour to eliminate any moisture content contributing to its mass and engineering properties. The mechanical properties of POEFB fibre are shown in Table 1.
Mix proportions and preparation
The details of the mixtures for this experiment are tabulated in Table 2. Series 1 was a laboratory trial series in which a total of thirteen mixes were prepared using water-cement (w/c) ratios ranging from 0.54 to 0.60 at 0.02 intervals. POEFB fibre was not added to the control mix specimens, whereas 0.25% and 0.50% of POEFB fibre were added to the PF25 and PF50 mixes respectively, in order to study its effect on the 28-day FC compressive strength under water curing. The mixes with the optimum strength to density ratio (per 1000 kg/m³ of density) [17] were selected for further investigation in Series 2, which focused on the dimensional stability and 90-day compressive strength of the FC specimens under two different curing conditions (air curing and tropical natural weather curing). Before exposure to the different curing types, all specimens in Series 2 underwent 7 days of initial water curing; curing then continued with either air curing or tropical natural weather curing for the remaining days until Day 90.
Water curing was carried out at temperatures in the range of 25-28 °C. For air curing, the specimens were placed in the laboratory at ambient room temperature (29-32 °C) with 65% average relative humidity. For tropical natural weather curing, the specimens were cured under the Malaysian tropical climate, with temperatures ranging from 29 to 35 °C and 50-90% relative humidity. (Table 2 note: the last two digits of each mix label denote hundredths of the respective w/c ratio.) For all specimens, the cement to sand ratio was fixed at unity (by weight) and the foaming agent was diluted with water at a ratio of 1:30 (by volume). The design density of the FCs in this study was fixed at 1300 ± 50 kg/m³; therefore, the required amount of stable foam, produced by the dry pre-foamed method [18], was added to the slurry cement mortar mix in order to obtain the required density. 100 mm × 100 mm × 100 mm cubic moulds and 100 mm × 200 mm × 400 mm prismatic moulds were used to produce specimens for the compression test and the dimensional stability test. All specimens were demoulded 24 hours after casting.
Testing methods
Before casting, the fresh cement mortar and foamed concrete were tested for flowability and consistency using the flow table spread test and the inverted slump test, in accordance with ASTM C1437 (2007) [19] and ASTM C1611 (2007) [20], respectively. The spread diameters were measured at four angles and the average reading was recorded. The compression test was conducted using an Instron 5582 testing machine in accordance with BS EN 12390-3 (2002) [21]. Dimensional stability was assessed by measuring the linear shrinkage and expansion of the concrete blocks according to RILEM CPC9 (1994) [22]. Four strain measuring discs were affixed to the concrete surface (400 mm × 200 mm) using epoxy adhesive: two discs parallel to the vertical side of the specimen (100 mm × 200 mm) and two parallel to the horizontal side (100 mm × 400 mm). A strain gauge meter was used to measure dimensional changes of the specimens from the first day of air curing or tropical natural weather curing until 90 days of age. The cumulative dimensional changes exhibited in this experiment show trends of contraction/expansion corresponding to variations in the surrounding conditions and temperature. The degree of deformation of a specimen can be calculated using Equation (1):

Δ = ε × L    (1)

where Δ is the deformation [mm], ε is the strain (equal to the measured scale reading, δ × 10⁻⁵), and L is the original length of the specific surface [mm].
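As a worked illustration of Equation (1), the short Python sketch below converts hypothetical strain gauge readings into deformations; the gauge length and the readings are invented for illustration only, not data from this study.

GAUGE_LENGTH_MM = 200.0   # assumed original length L of the measured surface [mm]

scale_readings = [0.0, -3.2, -5.1, -6.0]   # illustrative readings (delta) over successive weeks

for week, delta in enumerate(scale_readings):
    strain = delta * 1e-5                        # epsilon = delta x 10^-5, per the scale above
    deformation_mm = strain * GAUGE_LENGTH_MM    # Equation (1): deformation = strain x length
    print(f"week {week}: strain = {strain:.2e}, deformation = {deformation_mm:+.5f} mm")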
Series 1

As shown in Table 3, the fluidity of the FCs was significantly affected by the w/c ratio and by the percentage of POEFB fibre added to the mixture. A higher w/c ratio increased the inverted slump cone spread value in each category of the trial mix, whereas incorporating a higher content of palm oil fibre decreased the spread value. This is due to the hydrophilic nature of the dried palm oil fibre, which absorbed a portion of the water required for cement hydration [12]. Since the required amount of water was not available for cement hydration, the overall fluidity of PF50 and PF25 was lower than that of the control mix. Referring to Table 3, the best performance ratios for the Ctrl, PF25 and PF50 specimens were obtained at w/c ratios of 0.56, 0.56 and 0.58, respectively. Theoretically, as the density of concrete increases, its compressive strength increases correspondingly. However, the hardened densities of the FC specimens differed slightly, although the targeted density was within 1300 ± 50 kg/m³. Therefore, mixes with the optimum strength to density ratio, without compromised stability or consistency, were selected for further investigation [17].
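Since mix selection hinges on the strength to density ratio, the screening step can be sketched as below. The strength and density figures are invented placeholders, and the performance index is assumed here to be compressive strength normalized per 1000 kg/m³ of hardened density, which may differ from the exact definition used in [17].

mixes = {
    # label: (28-day compressive strength [MPa], hardened density [kg/m^3]) - placeholders only
    "Ctrl-0.56": (4.8, 1310),
    "PF25-0.56": (5.1, 1295),
    "PF50-0.58": (5.4, 1322),
}

def performance_index(strength_mpa, density_kg_m3):
    # Assumed index: strength normalized per 1000 kg/m^3 of hardened density.
    return strength_mpa / (density_kg_m3 / 1000.0)

for label, (s, d) in mixes.items():
    print(f"{label}: index = {performance_index(s, d):.2f} MPa per 1000 kg/m^3")

best = max(mixes, key=lambda m: performance_index(*mixes[m]))
print(f"selected for Series 2: {best}")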
Series 2
In Series 2, further investigation concentrated on the dimensional stability and 90-day compressive strength of the specimens with the highest strength performance index obtained in Series 1, namely Ctrl-0.56, PF25-0.56 and PF50-0.58. All tests were performed in triplicate, and only the average values are reported.
Dimensional stability
The dimensional stability results of the POEFB fibre reinforced FCs cured under the air curing and tropical natural weather curing conditions are shown in Figures 1 and 2, respectively. Under the air curing condition, the specimens shrank at a decreasing rate with advancing age. Loss of water from the concrete via evaporation caused the specimens to shrink at an early age; subsequently, less water was available at later ages owing to the discontinuous water supply under air curing, and the shrinkage of the air cured specimens therefore increased gradually.
On the other hand, for specimens cured under the natural weather condition, the dimensional stability varied with outdoor humidity. Rain water was absorbed into the FC specimens through the capillary pores during the rainy season, causing the specimens to expand. Conversely, internal water evaporated when the specimens were exposed to the hot scorching sun, causing the specimens to shrink. This irregular wetting process during rainy days under the tropical natural weather curing condition allowed the hydration reaction in the specimens to continue and produced more Calcium Silicate Hydrate gels, which gradually blocked the penetration of water into the specimens. As a result, the tropical natural weather cured specimens showed smaller volume changes at later ages than the air cured specimens, as shown in Figure 2. In addition, the specimens incorporating POEFB fibre underwent smaller dimensional changes than the control mix under both curing conditions. This result indicates that POEFB fibre has a bridging ability inside the cement matrix, reducing the capillary pores and therefore the dimensional changes of the specimens.
Compressive strength
The compressive strength and strength performance index for Ctrl-56, PF25-56 and PF50-58 are shown in Figures 3 and 4, respectively. Table 4 compares the strength performance indices of the FCs with and without POEFB fibre under both the air curing and tropical natural weather curing conditions. Based on the results, specimens with POEFB fibre attained a higher 90-day compressive strength than the reference specimen, regardless of the curing condition adopted. This is because the fibre inside the FC functioned as a reinforcing agent, bridging the cement matrix more firmly than in the plain concrete [12].
It was found that the tropical natural weather cured specimens achieved a higher compressive strength and strength performance index than the air cured specimens, regardless of the presence of POEFB fibre. This could be because the availability of water under the tropical natural weather condition enables the hydration process to continue and more Calcium Silicate Hydrate gels to be produced, reducing the porosity of the hydrated cement paste. These findings are supported by the results shown in Figure 3, Figure 4 and Table 4.

Conclusions

Based on these experimental investigations, the following conclusions were drawn:
(a) Foamed concrete (FC) reinforced with 0.50% of Palm Oil Empty Fruit Bunch (POEFB) fibre (1-2 cm in length) shows better enhancement in compressive strength and dimensional stability under tropical natural weather curing than the reference mix without POEFB fibre and the 0.25% POEFB fibre reinforced FC.
(b) The inclusion of dried POEFB fibre in FC requires a higher water content in order to achieve a mix consistency comparable to that of the reference mix.
(c) Tropical natural weather cured POEFB fibre reinforced FCs achieved a higher compressive strength than air cured POEFB fibre reinforced FCs. This could be because the availability of water under the tropical natural weather condition enables the hydration process to continue and more Calcium Silicate Hydrate gels to be produced, reducing the porosity of the hydrated cement paste.
The efforts and contributions of Ms Hew Yi Wen and Mr Li Siew Wu to these experimental investigations are highly appreciated. | 2019-04-30T13:08:41.094Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "6cd2bb3ee920d968f86f70b92dc9352158b8830e",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/40/e3sconf_iccee2018_02001.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f3275527b47ff9ddd42d9b77408e300cdc4c5ab4",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
221121415 | pes2o/s2orc | v3-fos-license | Diagnostic utility of the amyotrophic lateral sclerosis Functional Rating Scale—Revised to detect pharyngeal dysphagia in individuals with amyotrophic lateral sclerosis
Objective The ALS Functional Rating Scale–Revised (ALSFRS-R) is the most commonly utilized instrument to index bulbar function in both clinical and research settings. We therefore aimed to evaluate the diagnostic utility of the ALSFRS-R bulbar subscale and swallowing item to detect radiographically confirmed impairments in swallowing safety (penetration or aspiration) and global pharyngeal swallowing function in individuals with ALS. Methods Two-hundred and one individuals with ALS completed the ALSFRS-R and the gold standard videofluoroscopic swallowing exam (VFSE). Validated outcomes including the Penetration-Aspiration Scale (PAS) and Dynamic Imaging Grade of Swallowing Toxicity (DIGEST) were assessed in duplicate by independent and blinded raters. Receiver operator characteristic curve analyses were performed to assess accuracy of the ALSFRS-R bulbar subscale and swallowing item to detect radiographically confirmed unsafe swallowing (PAS > 3) and global pharyngeal dysphagia (DIGEST > 1). Results Although below acceptable screening tool criterion, a score of ≤ 3 on the ALSFRS-R swallowing item optimized classification accuracy to detect global pharyngeal dysphagia (sensitivity: 68%, specificity: 64%, AUC: 0.68) and penetration/aspiration (sensitivity: 79%, specificity: 60%, AUC: 0.72). Depending on score selection, sensitivity and specificity of the ALSFRS-R bulbar subscale ranged between 34-94%. A score of < 9 optimized classification accuracy to detect global pharyngeal dysphagia (sensitivity: 68%, specificity: 68%, AUC: 0.76) and unsafe swallowing (sensitivity: 78%, specificity: 62%, AUC: 0.73). Conclusions Neither the ALSFRS-R bulbar subscale nor the swallowing item demonstrated adequate diagnostic accuracy to detect radiographically confirmed swallowing impairment. These results suggest the need for alternate screens for dysphagia in ALS.
Introduction
Amyotrophic lateral sclerosis (ALS) is a progressive and fatal neurodegenerative disease affecting both upper and lower motor neurons within the cortex, brainstem and spinal cord [1]. Dysphagia, or swallowing impairment, occurs in a reported 85% of patients with ALS at some point during the disease process and is associated with malnutrition, weight loss, reduced quality of life, aspiration pneumonia and death [2][3][4][5][6]. Early detection and consistent monitoring of dysphagia provides the opportunity to mitigate associated risks and improve survival with timely interventions [7].
A universally accepted and validated clinical test battery to accurately assess and monitor bulbar disease progression is currently lacking [8]. A 2017 survey of Northeast ALS (NEALS) centers in the United States revealed highly variable practice patterns for the evaluation of bulbar function in patients with ALS [9]. Both clinical and instrumental swallow evaluations were found to be underutilized in multidisciplinary ALS clinics with less than 60% of respondents utilizing clinical swallow assessments and only 27% referring for the gold standard videofluoroscopic swallowing evaluation. Importantly, this survey revealed that the only clinical test routinely performed to evaluate bulbar function (>90% of sites) was the revised ALS Functional Rating Scale (ALSFRS-R).
The ALSFRS-R is a 12-item questionnaire, with each question rated on a 5-point ordinal scale, used to monitor progression of disability in patients with ALS. The scale currently represents the most widely used ALS outcome measure in Phase II and III clinical trials and longitudinal studies [10]. More recently, the psychometric properties of the ALSFRS-R have been evaluated, with evidence suggesting multidimensionality, and the utilization of individual subscale scores rather than a total score has been recommended [11][12][13]. These bulbar, motor and respiratory subscores are intended to provide more precise prognostic information, as an individual's domain scores have been demonstrated to be more clinically robust when reported as subscores rather than a combined score [14]. Specifically, individuals with bulbar-onset disease demonstrated a slower rate of decline on the motor subscore and hastened decline on the bulbar subscore compared to those with spinal-onset disease [14]. One study of 18 individuals with motor neuron disease (MND) investigated the relationship between the ALSFRS-R bulbar subscore and radiographically confirmed airway invasion; however, individuals who aspirated during swallowing were not included in the study cohort, limiting the generalization of the results [15]. The discriminant ability of the bulbar subscore and swallowing item score to detect radiographically confirmed pharyngeal dysphagia in ALS has not yet been determined. We therefore sought to evaluate the discriminant ability of the ALSFRS-R bulbar subscale and swallowing item scores to classify early radiographically confirmed pharyngeal dysphagia in patients with ALS. Given that the scale is a five-point ordinal scale lacking linearity, we hypothesized that the ALSFRS-R would not demonstrate adequate sensitivity to detect mild changes in pharyngeal swallowing function in individuals with ALS.
Participants
All eligible ALS patients who attended the University ALS clinic were informed of the study and invited to participate, representing a convenience sample. Two-hundred and one individuals were enrolled in this study. Inclusion criteria were: 1) confirmed diagnosis of ALS (Revised El Escorial criteria) by a neuromuscular neurology specialist; 2) not pregnant, 3) no allergies to barium, and 4) still consuming some form of foods and liquids by mouth. This study was approved by the University of Florida Institutional Review Board and conducted in accordance with the Declaration of Helsinki. All participants provided informed written consent. Participants attended a single testing session which included completion of the ALSFRS-R (index test) and a standardized videofluoroscopic swallowing examination (VFSE, gold standard reference test).
Index test. The ALSFRS-R is a 12-item questionnaire validated to monitor functional disease progression across four subscales of activities of daily living: bulbar, fine motor, gross motor and respiratory domains [10]. Three items assessing speech, salivation, and swallowing comprise the bulbar subscale, with each item scored on a five-point ordinal scale (0-4) for a total of 12 points; higher scores indicate better self-reported function (0 = total loss of function, 12 = normal functioning). The single question on swallowing is scored as follows: 4 = normal eating habits; 3 = early eating problems, occasional choking; 2 = dietary consistency changes; 1 = needs supplemental tube feeding; 0 = NPO (exclusively parenteral or enteral feeding) [10]. Participants completed the ALSFRS-R in interview fashion with an opportunity for input by their caregivers. All research personnel conducting these interviews completed training in the administration and scoring of the ALSFRS-R.
Reference standard. VFSE was completed by a trained research speech-language pathologist (SLP) with participants comfortably seated in an upright lateral viewing plane using a properly collimated Phillips BV Endura fluoroscopic C-arm unit (GE 9900 OEC Elite Digital Mobile C-Arm system type 718074). Fluoroscopic images and synced audio were digitally recorded at 30 frames per second using a high resolution (1024 x 1024) TIMS DICOM system (Version 3.2, TIMS Medical, TM, Chelmsford, MA) for subsequent analysis. A standardized bolus presentation was administered utilizing a cued instruction to swallow and included: three 5 mL thin liquid barium, one comfortable cup sip of thin liquid barium, three 5 mL thin honey barium, two 5 mL pudding consistency barium, and a ¼ graham cracker square coated with pudding consistency barium (Varibar1, Bracco Diagnostics, Inc., Monroe Township, NJ). If the patient was unable to self-feed due to upper extremity weakness, clinician assistance or alternative methods (i.e., straw) were employed, consistent with the individual's feeding methods routinely utilized at home. SLPs enforced standardized bailout criteria requiring administration of thicker consistencies following two instances of aspiration and discontinuation if an additional aspiration event occurred during the exam. VFSE recordings were saved to a secure server and blinded for subsequent analysis.
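The bolus presentation order and the bailout rule described above amount to a small state machine. The following Python sketch is a hypothetical illustration of that logic, not the study's clinical software; the trial list mirrors the protocol text, while the aspiration check is a stand-in for the clinician's real-time judgement.

BOLUS_PROTOCOL = [
    ("thin liquid", "5 mL"), ("thin liquid", "5 mL"), ("thin liquid", "5 mL"),
    ("thin liquid", "comfortable cup sip"),
    ("thin honey", "5 mL"), ("thin honey", "5 mL"), ("thin honey", "5 mL"),
    ("pudding", "5 mL"), ("pudding", "5 mL"),
    ("graham cracker with pudding coating", "1/4 square"),
]

def run_exam(aspirated):
    # 'aspirated' is a callable(consistency, volume) -> bool standing in for the
    # clinician's real-time identification of an aspiration event.
    events = 0
    for consistency, volume in BOLUS_PROTOCOL:
        if events >= 2 and consistency == "thin liquid":
            consistency = "thin honey"   # bailout: administer a thicker consistency
        if aspirated(consistency, volume):
            events += 1
            if events >= 3:
                return "exam discontinued after third aspiration event"
    return f"exam completed with {events} aspiration event(s)"

print(run_exam(lambda consistency, volume: False))   # uneventful exam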
VFSE outcome measures
Each VFSE was rated in duplicate by two trained, blinded and independent raters. Complete agreement (100%) was required for all ratings, with a discrepancy meeting utilized to finalize any inconsistent ratings between raters.
Swallowing safety. The Penetration-Aspiration Scale (PAS) was utilized to evaluate swallowing safety (Fig 1) [16]. This validated eight-point ordinal scale indexes the depth of contrast material entering the airway during swallowing events, the presence of a protective response, and if aspirate material was ejected from the airway [16]. All elicited swallows within a given bolus trial were ascribed a PAS score and the worst PAS score utilized for statistical analysis. Global pharyngeal swallowing. The dynamic imaging grade of swallowing toxicity (DIGEST) is a validated five-point ordinal scale created to assess both efficiency and safety of bolus flow [17] and recently utilized in ALS [18]. The DIGEST (Fig 2) yields a global grade of pharyngeal dysphagia evaluated on bolus transport during the entirety of the videofluoroscopic swallow study to determine clinically relevant categories of overall pharyngeal dysphagia severity levels. DIGEST total scores are a composite of two subscores (scored 0-4) addressing: (i) swallowing efficiency based on degree of bolus clearance, and (ii) airway safety based on severity and frequency of PAS scores. DIGEST scores of zero indicate normal swallowing while total and subscore grades of 4 indicate life-threatening dysphagia.
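Because every elicited swallow in a trial receives a PAS score but only the worst score per participant enters the analysis (dichotomized at PAS > 3 in the ROC analysis below), the reduction step can be illustrated with a short, hypothetical Python sketch; the example ratings are invented.

def worst_pas(scores_per_trial):
    # scores_per_trial: list of lists of PAS scores (1-8), one inner list per bolus trial
    return max(score for trial in scores_per_trial for score in trial)

def is_unsafe(scores_per_trial):
    return worst_pas(scores_per_trial) > 3   # threshold used in the ROC analysis

example = [[1, 1], [2, 1, 3], [5, 2]]          # illustrative per-swallow ratings only
print(worst_pas(example), is_unsafe(example))  # -> 5 True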
Statistical analysis. Descriptive statistics were computed to summarize participant demographics and outcomes of interest. A receiver operating characteristic (ROC) curve analysis was then performed on the index tests (ALSFRS-R bulbar subscale and swallowing item) to identify unsafe swallowing (PAS > 3) and global dysphagia (DIGEST > 1). Area under the curve (AUC) with bootstrapped 95% confidence intervals, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated using JMP Pro Version 14.1.0 (SAS Institute Inc., Cary, NC). Optimal classification cutoffs for the index tests were determined as the values that maximized both sensitivity and specificity.
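For readers who wish to reproduce this style of analysis, the following sketch shows one common way to compute the AUC and a cutoff maximizing sensitivity and specificity (Youden's J) using scikit-learn in Python. The study itself used JMP Pro, so this is only an illustrative analogue, and the score arrays are fabricated placeholders.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])       # placeholder outcomes: 1 = dysphagia
bulbar = np.array([7, 8, 11, 6, 12, 10, 9, 11, 5, 12])  # placeholder ALSFRS-R bulbar scores

# Lower bulbar scores indicate worse function, so negate them to obtain a "risk" score.
risk = -bulbar
auc = roc_auc_score(y_true, risk)
fpr, tpr, thresholds = roc_curve(y_true, risk)

# Youden's J = sensitivity + specificity - 1; pick the cutoff maximizing J.
j = tpr - fpr
best = np.argmax(j)
cutoff = -thresholds[best]   # undo the negation to express the cutoff as a bulbar score
print(f"AUC = {auc:.2f}; flag dysphagia when bulbar subscale <= {cutoff:.0f} "
      f"(sens {tpr[best]:.0%}, spec {1 - fpr[best]:.0%})")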
Participant demographics
Complete ALSFRS-R and VFSE data were missing in four participants resulting in 197 patients in the final analysis. Mean age was 62.9 (SD = 10.3) and average ALS disease duration was 26.6 months from symptom onset (SD = 23.6). Fifty-three percent were male (n = 106) and 58.1% presented with a spinal onset (n = 111). Mean ALSFRS-R score was 35.3 (SD = 7.4). Frequency data for the ALSFRS-R bulbar subscale and swallowing item scores are presented in histogram plots in Fig 3A and 3B respectively. Mean ALSFRS-R bulbar subscale score was 9.1 (SD = 2.4) and mean swallowing item score was 3.05 (SD = 0.79). Radiographically confirmed unsafe swallowing was identified in 38.9% of patients (n = 76, Fig 3C) and prevalence of global pharyngeal dysphagia was 58.9% (n = 116, Fig 3D).
Discriminant ability of the ALSFRS-R to detect swallowing impairment
Scatterplots depicting relationships between the ALSFRS-R and swallowing outcomes of interest are shown in Fig 4. ROC curve results for the ALSFRS-R bulbar subscale and ALSFRS-R swallowing items to detect radiographically confirmed penetrators/aspirators are presented in Table 1 and Fig 5A and 5B. Classification accuracy for both ALSFRS-R outcomes to detect global pharyngeal dysphagia are presented in Table 2 and Fig 5C and 5D. Optimized classification cutoff of ALSFRS-R swallowing score of � 3 and ALSFRS-R bulbar score of � 9 were found for both outcomes.
Discussion
To our knowledge, this represents the first investigation to compare ALSFRS-R bulbar outcomes to those of the gold standard reference test for swallowing (VFSE). The ALSFRS-R bulbar subscale and swallowing item demonstrated poor to fair diagnostic accuracy to detect radiographically confirmed pharyngeal swallowing impairment in the 197 ALS patients examined (AUC: 0.68-0.76). No cut score emerged for ALSFRS-R outcome with an acceptable level of classification accuracy to distinguish normal versus disordered swallowing. Thus, the ALSFRS-R did not demonstrate adequate clinical utility as a screening tool to detect early pharyngeal dysphagia and demonstrated insufficient sensitivity as a marker of change in pharyngeal swallowing function for research clinical trials. These findings highlight the need for the development of sensitive tools to adequately screen relative risk of swallowing impairment for use in multidisciplinary ALS clinics and research settings alike.
Bulbar subscale
Classification accuracy of the ALSFRS-R bulbar subscale to detect global pharyngeal dysphagia was considered poor to fair when comparing our results to accepted screening tool criterion levels [19]. No clear score or threshold emerged that yielded an acceptable balance between sensitivity and specificity when examining ROC outcomes across bulbar subscale scores. An effective screening tool should accurately and quickly identify at-risk individuals to triage for further comprehensive evaluation, ideally minimizing false negatives (i.e., missing individuals with impairment) while avoiding over-identification of individuals without the disorder being screened (i.e., false positives). While specificity is generally sacrificed at the cost of increased sensitivity, a screening tool with high sensitivity but very low specificity will create undue strain on health care workers, lead to overutilization of resources and unnecessary testing, and increase patient and caregiver burden. To this end, an ALSFRS-R bulbar subscale score of ≤ 11 correctly identified 87% of ALS patients with global pharyngeal dysphagia; however, it misclassified as dysphagic two-thirds of patients who demonstrated normal swallowing on VFSE. Use of a lower cut-point of ≤ 10 decreased sensitivity to an unacceptable level without significant improvements in specificity, PPV or NPV; this cut point would miss one-quarter of patients with global dysphagia (i.e., false negatives) and would over-refer 52% of patients without dysphagia for additional testing. Finally, a bulbar subscale score of < 9 derived the most balanced sensitivity and specificity of 68%; however, it would still misclassify one-third of individuals with global pharyngeal dysphagia (false negatives).
Similarly, no cut score emerged for the ALSFRS-R bulbar subscale to detect penetration or aspiration. Although sensitivity of the bulbar subscale to detect unsafe swallowing was good to excellent for the higher cut scores of 10 and 12 (> 86%), these were associated with low specificity (30% and 45%), high false positive rates, and low PPVs. These findings agree with observations that the bulbar subscale is not sensitive enough to detect early speech impairment in ALS patients when compared with objective physiologic speech metrics [20].
Swallowing item
The ALSFRS-R swallowing item demonstrated poor overall screening accuracy for classifying both global swallowing and safety status. Unlike the bulbar subscale, however, a clear score emerged that optimized sensitivity and specificity. A swallowing item score of ≤ 3 accurately classified only 68% of individuals with confirmed global dysphagia, missing approximately one-third of impaired individuals and yielding a PPV that is not acceptable. Further, this score
misclassified 36% of patients with confirmed normal swallowing as dysphagic. Clearly this 'optimal' score is not acceptable for distinguishing global swallowing status in ALS. The swallowing item's classification accuracy in differentiating safe versus unsafe ALS swallowers was higher, with a cut score of ≤ 3 yielding a sensitivity of 79% and a specificity of 60%. Although improved, diagnostic utility at this optimal score threshold remained suboptimal for a useful screening tool, given that it would miss one in every five penetrators/aspirators and would over-refer 40% of patients without impairment for further evaluation. An important consideration when interpreting these data is that individuals with ALS may not be fully aware of subtle dietary adaptations or modifications they implement to compensate for a progressive decline in function [21][22][23]. This is highly relevant given that the ALSFRS-R is a patient-reported outcome that asks patients to select the descriptor for a function being queried.
Given that the ALSFRS-R is commonly used in research as a baseline stratification tool or outcome to measure change in function over time; researchers are advised to consider these findings for future clinical trials. Further, clinical adoption of these scores as a dysphagia screen could create unnecessary burden for patients and their caregivers and facilitate inappropriately timed referrals for instrumental swallowing evaluations.
Although no published screening tool exists for dysphagia in ALS, two reports have examined the clinical utility of another patient-reported outcome measure (PROM) and of voluntary cough testing to detect aspiration. The Eating Assessment Tool (EAT-10) is a validated 10-item swallowing specific PROM that is available in 13 languages [24]. A cut score of >8 on the EAT-10 demonstrated a sensitivity, specificity and likelihood ratio of 86%, 72% and 3.1, respectively to detect radiographically confirmed aspiration [25]. However, the discriminant ability of the EAT-10 to detect global pharyngeal dysphagia in ALS, has not yet been established. In addition to PROMs, voluntary cough function is noted to significantly differ in individuals with ALS compared to healthy age and gender matched controls, contributing to the impaired ability to effectively expel tracheal aspirate and manage secretions in this population [26]. Given that peak expiratory flow is noted to be reduced by 50% in ALS patients with unsafe swallowing [27], voluntary cough peak expiratory flow (commonly known as peak cough flow testing) has been suggested as a screen to index one's physiologic airway defense capacity [28,29]. Future work is needed to identify additional sensitive clinical markers in order to develop and validate a pragmatic and accurate dysphagia screening tool for use in ALS clinics [8,9,29].
While this work represents the first attempt to examine the discriminant ability and clinical utility of the ALSFRS-R for detecting radiographically confirmed dysphagia, limitations need to be acknowledged. First, following typical analytic methods used in dysphagia research [30,31], the worst PAS score was utilized to determine swallowing safety status, which may have skewed outcomes towards impairment [26]. Given that we were interested in catching early impairment however, we feel that any potential bias was warranted. Further, the global pharyngeal dysphagia metric (DIGEST scale) incorporates both the frequency and severity of penetration and aspiration and therefore mitigated potential bias for this specific outcome [17]. Second, the global dysphagia outcome only examines pharyngeal phase swallowing impairments. Therefore, our exam was specific to pharyngeal phase deficits. It is possible that a patient may have rated the ALSFRS-R swallowing item to reflect or communicate perceived impairment in the oral phase that were not detected in this study with use of the DIGEST or PAS scales. Third, given practical and ethical considerations and constraints, our sample represented individuals with mild-moderate ALS severity and bulbar dysfunction with only one patient 100% dependent on non-oral nutrition. Therefore, this cohort may not represent the complete spectrum of ALS swallowing severities. Fourth, other important non-physiologic aspects related to dysphagia such as mealtime enjoyment, mealtime duration, caregiver burden and fatigue were not indexed in this study. Finally, although these data represent the largest VFSE dataset presented to date, further work in additional patients is warranted to validate these findings.
Conclusion
Early detection of dysphagia is paramount to guide timely clinical management decisions that mitigate or delay the development of known sequelae. Given the widespread use of the ALSFRS-R to index bulbar and pharyngeal swallowing function, we aimed to determine the discriminant ability of the bulbar subscale and swallowing item to detect radiographically confirmed impairments in swallowing safety and global pharyngeal swallowing function using the gold standard VFSE. Overall accuracy of the ALSFRS-R was poor, and diagnostic accuracy for swallowing safety and global pharyngeal swallow function did not meet acceptable standards across any score criteria. We therefore do not recommend use of the ALSFRS-R in isolation to screen for pharyngeal swallowing function and encourage the development of a disease-specific screening tool that can accurately triage high-risk individuals for instrumental swallowing evaluation. | 2020-08-15T13:05:44.495Z | 2020-08-13T00:00:00.000 | {
"year": 2020,
"sha1": "2b5a32f2eee0525848a3980d1ed831639c0be714",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0236804&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee77a8429e93701256ce73a04febe1560f6ade0d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248823199 | pes2o/s2orc | v3-fos-license | An open-label, single-arm trial of cryoneurolysis for improvements in pain, activities of daily living and quality of life in patients with symptomatic ankle osteoarthritis
Objective Cryoneurolysis, cold-induced reversible conduction block of peripheral nerves, is an effective treatment for reducing knee osteoarthritis (OA) symptoms and opioid use following knee arthroplasty. There are, however, limited data concerning its use for ankle OA. Our aim was to assess clinically significant long-term symptomatic relief of ankle OA with cryoneurolysis. Method This single-center, open-label trial included participants aged >18 years with radiographic tibiotalar OA, unilateral ankle pain ≥5/10 on the Numerical Rating Scale (NRS), and no ankle surgery within 6 months of screening. Following ultrasound-guided cryoneurolysis of nerves in the participant's pain distribution (sural, saphenous, superficial and/or deep fibular nerves), outcomes were assessed at clinic visits (6, 12 and 24 weeks) and by telephone interview (3, 9 and 18 weeks). The primary endpoint was change in the Foot and Ankle Outcome Score (FAOS) pain subscale at 12 weeks. Changes in quality of life (FAOS-QoL), activities of daily living (FAOS-ADL), NRS-pain, and physical performance measures were also assessed. Longitudinal mixed models were constructed to evaluate changes from baseline at 6, 12 and 24 weeks post-treatment. Results Forty participants enrolled (50% female, mean ± SD age 63.0 ± 12.8 years). At 12 weeks post-treatment, FAOS-pain (20.8, p < 0.0001), ADL (18.1, p = 0.0003), QoL (19.9, p = 0.0003) and NRS-pain (−2.6, p < 0.0001) were significantly improved from baseline. No difference in the 40-m fast-paced walking test was detected at 12 weeks post-treatment (−1.2 sec, p = 0.59). For all outcomes, similar findings were observed at the 6- and 24-week visits. Conclusion Cryoneurolysis resulted in statistically significant improvements in ankle pain, physical function and QoL for up to 24 weeks in participants with unilateral, symptomatic ankle OA.
Introduction
Osteoarthritis (OA) is a global health concern, a leading contributor to loss of function, disability and painful symptoms [1,2], and affects approximately a third of adults [3]. The prevalence of symptomatic, radiographic ankle OA in community-dwelling adults is estimated at 3.4% [4]. Further, the prevalence of ankle pain has been reported to exceed 7% in general practices among adults in the UK [5]. However, despite a moderate prevalence combined with the disabling nature of ankle OA [6], the development of treatments to manage ankle OA symptoms has been slow. Finding a safe treatment that could alleviate symptoms and improve activities of daily living in people with ankle OA, while avoiding the side effects of pharmacological and surgical therapies, is a key objective.
Osteoarthritis represents the biggest unmet medical need of all musculoskeletal conditions [7] and, despite efforts, there are currently no licensed disease-modifying osteoarthritic drugs (DMOADs) [8]. Subsequently, attention has turned to identifying treatments that best manage symptoms and/or improve function and quality of life. Typically, management of OA involves pharmacologic and non-pharmacologic interventions [9,10].
For ankle OA, rates of pharmacological management have been reported to be much higher than those of non-pharmacological strategies (e.g. lifestyle advice and allied health referral) [11], with treatment algorithms mirroring procedures for the management of other lower-limb joint sites [12]. There are, however, well recognized adverse side-effects and limitations to both pharmacological and surgical treatments. For instance, while NSAIDs have shown moderate effects on OA symptoms [13], with data from a recent meta-analysis of 36 randomised clinical trials reporting an effect size of −0.30 (95% CI −0.40 to −0.20) for pain relief and −0.35 (95% CI −0.45 to −0.24) for functional improvement following 2-12 weeks of treatment [14], NSAIDs have been shown to carry an increased risk of gastrointestinal [15], cardiovascular [16], and renal [17] adverse events. Furthermore, while joint replacement has been shown to be an effective surgery for managing symptoms, it may not be suitable for all patients [18,19]. Therefore, there is a need to identify therapeutic options that alleviate pain and improve physical function while also reducing opioid use and outpatient healthcare costs.
Cryoneurolysis has demonstrated promise as a novel [20], effective therapeutic technique for providing long-term analgesia [21], including for cervicogenic headache [22], neuropathic pain [23,24] and phantom-limb pain [25]. The mechanism of action is well understood and has been described previously [26]. In short, cryoneurolysis creates a reversible conduction block of pain signals by the formation of precise, cold-zone lesions that cause Wallerian degeneration [27] while the nerve bundle remains intact, allowing for complete regeneration and functional recovery [26]. The duration of the pain relief depends on the individual and the distance of the treatment from the terminal axon.
Cryoneurolysis was found to significantly reduce knee OA pain in a recent randomized, double-blind, sham-controlled, multicentre trial of 180 patients with mild-to-moderate knee OA. In that study, a single cryoneurolysis treatment of the infrapatellar branches of the saphenous nerve resulted in a reduction in pain and symptoms for at least 90 days [28]. Further, in a single-center, randomised controlled trial of pre-operative cryoneurolysis in patients scheduled to undergo primary unilateral total knee arthroplasty for OA, per-protocol analyses showed that compared to standard care, cryoneurolysis significantly reduced opioid consumption post-operatively [29]. Using cryoneurolysis in participants with ankle OA could provide immediate symptomatic relief with a greater duration of analgesia, and with a lower rate of side-effects than other non-operative treatments. Given the promising results for alleviating knee OA pain [28,29], combined with the limited therapeutic options for ankle OA, our aim was to evaluate the effectiveness of cryoneurolysis for improving ankle symptoms and function in participants with unilateral, symptomatic ankle OA.
Study design and recruitment
We conducted a single-center, single-arm open-label clinical trial with up to 24-weeks follow-up to examine the effectiveness of cryoneurolysis for reducing pain, improving activities of daily living (ADL) and quality of life (QoL) in patients with evidence of unilateral symptomatic ankle OA. We hypothesised that treatment of the superficial nerves and/or deep fibular nerve would result in significant improvements in function and pain at 24-weeks follow-up. This study was conducted at the University of Kansas Medical Center (Kansas City, KS, USA) from August 13, 2018 to January 26, 2021. Participants were assigned to undergo cryoneurolysis of either the Superficial Fibular Nerve (SFN), Sural Nerve (SN), and/or Saphenous Nerve, or the Deep Fibular Nerve (DFN) using ultrasound-guided cryoneurolysis. This study was performed in accordance with the provisions of the Declaration of Helsinki, and the protocol was approved by the University of Kansas Institutional Review Board (STUDY00142298). Written informed consent was obtained from all participants before inclusion. The trial was registered at clinicaltrials.gov (NCT03567187). Participants were identified through hospital records, physician referrals, mass mailings, and advertisements.
Study participants
Eligible participants were men and women aged >18 years who had radiographic evidence of ankle OA (Kellgren-Lawrence (KL) Grade ≥ 2, measured on weight-bearing mortise views with 20° internal rotation), were limited by unilateral ankle pain rated on a Numerical Rating Scale (NRS) as ≥ 5/10 on most days over the last month, had a Foot and Ankle Outcome Score (FAOS) [30] of <75 (0-100, 100 = worst possible pain) in at least 1 domain, had a body mass index (BMI) ≤ 50 kg/m², were ambulatory and able to comply with study procedures, had undergone at least one prior conservative OA treatment (e.g. physical therapy, analgesics, ankle brace), and were willing to abstain from the use of protocol-restricted medications during the trial and from analgesics, other than acetaminophen, for 1 week prior to the beginning of the trial.
Participants were excluded if they had another functional impairment that limited their walking ability to a greater extent than their ankle, clinical signs/symptoms of active or recurrent infection in the index ankle joint (or overlying skin), intra-articular (IA) corticosteroids within 3 months of screening, oral corticosteroids within 2 weeks of screening (unless on a chronic stable dose for ≥ 3 months prior to enrolment), ankle pain due to a condition other than OA, arthroscopy or open surgery of the ankle joint within 6 months of screening, planned/anticipated surgery of the index ankle during the 6-month trial period, or a diffuse pain condition (e.g. diffuse pain involving the bilateral upper and lower limbs, confounding pain such as knee, hip or back pain, or fibromyalgia); women who were pregnant were also excluded. A full list of the exclusion criteria is included in Supplementary Table 1.
Study duration
All participants were followed for up to 24-weeks following cryoneurolysis. Demographic and clinical characteristics for all participants were captured at baseline, 3, 6, 9, 12, 18 and 24-weeks follow-up. Fig. 1 depicts the flow of how participants were treated and offered an alternative treatment if their initial treatment failed to provide durable benefit (i.e. were identified as a 'non-responder').
'Responder' status was assessed using the NRS for pain at clinic visits (6-, 12-and 24-weeks post treatment) and via web-form or telephone interviews (as per the participants preference) at 3-, 9-and 18-week post treatment. Non-responders were defined as reporting <20% pain relief with respect to their baseline NRS pain score. Participants who were identified as non-responders to initial treatment were eligible to receive cryoneurolysis of the other nerve group at the next clinic visit. For example, if a participant was a non-responder 9 weeks after superficial nerves were treated, then at the 12-week clinical follow-up the participant would be offered treatment of the deep fibular nerve and if received, the 12-week visit would then become the new baseline visit for assessing outcomes of the new treatment.
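The responder rule reduces to a simple percentage calculation on the NRS scores. A hypothetical Python sketch (the 20% threshold comes from the definition above; the example scores are invented):

def percent_pain_relief(baseline_nrs, followup_nrs):
    return 100.0 * (baseline_nrs - followup_nrs) / baseline_nrs

def is_responder(baseline_nrs, followup_nrs):
    # Non-responder = < 20% pain relief relative to the baseline NRS score.
    return percent_pain_relief(baseline_nrs, followup_nrs) >= 20.0

# Illustrative participant: baseline pain 7/10, 9-week pain 6/10 -> ~14% relief
print(is_responder(7, 6))   # False: eligible for cryoneurolysis of the other nerve group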
Treatment & intervention
Depending on the location of the participant's ankle pain, either the superficial nerves (sural nerve for lateral ankle pain, superficial fibular nerve for anterior ankle pain, and/or saphenous nerve for medial ankle pain) or the deep fibular nerve (for deep ankle pain) was treated with cryoneurolysis. A sonographically-guided diagnostic nerve block using 1% lidocaine with 1:100,000 epinephrine was performed for each nerve to confirm analgesia prior to treatment. Cutaneous sensation in the distribution of each nerve treated was assessed with a standardized nylon monofilament before and after cryoneurolysis. The anatomical locations of the respective nerves of interest and the cutaneous distributions tested are described below.
i) Superficial Fibular Nerve
The superficial fibular nerve (SFN) was sonographically located in the distal third of the leg at a location approximately 12 cm above the tip of the lateral malleolus, where it exited the fascia of the lateral compartment of the leg. Sensory function was tested on the dorsum of the foot.
ii) Sural Nerve
The sural nerve (SN) was sonographically located approximately 12 cm proximal to the posterior border of the lateral malleolus, and lateral to the Achilles' tendon. Sensory function was tested on the lateral side of the ankle and fifth ray of the foot.
iii) Saphenous Nerve
The saphenous nerve was sonographically located approximately 7 cm proximal to the medial malleolus, and adjacent to the greater saphenous vein. Sensory function was tested just distal to the medial malleolus.
iv) Deep Fibular Nerve
The deep fibular nerve and anterior tibial artery were sonographically located by following the anterior surface of the tibia distally in transverse view. Generally, the deep fibular nerve was located lateral to the anterior tibial artery approximately 6 cm proximal to the ankle joint line. Sensory function was tested over the dorsum of the webspace between the first and second toes.
Cryoneurolysis device
The iovera cryoneurolysis device (Pacira CryoTech, Inc, Fremont, CA, USA) is approved by the United States Food and Drug Administration (510(k) clearances K133453 and K161835) and is used in surgical procedures to alter nerve function by forming precisely controlled, subdermal cold zones. Exposure to localised temperatures of −20 °C to −80 °C temporarily disrupts peripheral nerve function through axonal and myelin degeneration, known as Wallerian degeneration [26,27]. When sensory nerves are treated, their ability to convey sensory signals, such as pain, is immediately interrupted, thereby producing an analgesic effect; this is followed by restoration of nerve function. Cryoanalgesia is an established principle and has been described previously [28,31]. The iovera system uses liquid nitrous oxide (N2O) that is contained within the handpiece; no gas is injected into the body [26].
Assessments
At the screening/treatment visit, body mass index (BMI, kg/m²) was calculated as body mass (kilograms) divided by the square of the participant's height in meters (stadiometer, Holtain, Wales, UK), as measured by a trained member of the research team.
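As a worked example of that calculation (a plain formula; the function below is only an illustration):

```python
def bmi(mass_kg: float, height_m: float) -> float:
    """Body mass index: body mass (kg) divided by the square of height (m)."""
    return mass_kg / height_m ** 2


# e.g. 95 kg at 1.73 m gives ~31.7 kg/m^2, matching the cohort's mean BMI.
print(round(bmi(95.0, 1.73), 1))  # 31.7
```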
Outcome measures
The primary endpoint was change in ankle pain, assessed using the Foot and Ankle Outcome Score (FAOS), from baseline to 6, 12, and 24 weeks post-treatment, with the 12-week outcome being the a priori primary study outcome. Based on animal studies, the rate of axonal regeneration following cryoneurolysis is estimated at approximately 1.0-3.0 mm/day [32,33]. Based on previously observed, although highly variable, rates of recovery, we hypothesised a duration of pain relief in the ankle of approximately 90 days. Secondary outcomes included improvement in quality of life (FAOS-QoL subscale), activities of daily living (FAOS-ADL subscale) and the Numerical Rating Scale (NRS) for pain. The tertiary outcome was change in physical performance on the 40-m fast-paced walk test.
I) Joint Symptoms, Activities of Daily Living and Quality of Life
Pain, ADL and QoL were assessed at baseline and at 12 and 24 weeks post-treatment using the FAOS [30], a self-reported, region-specific questionnaire used for the assessment of foot and ankle joint health. The FAOS questionnaire comprises 42 items across 5 subscales of pain, other symptoms, ADL, sport and recreational function, and foot and ankle-related QoL [30], with a worst possible score of 0 and best possible score of 100. Pain severity was also reported using the NRS at baseline and at 12 and 24 weeks post-treatment. Participants scored their ankle pain over the past 7 days from 0 to 10 (0 = no pain, 10 = worst pain imaginable).
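The FAOS, like the related KOOS from which it derives, scores each item from 0 to 4 and linearly rescales each subscale to the 0-100 range described above. A sketch of that standard transformation, assuming 0-4 item coding (this is not code from the study):

```python
from statistics import mean

def faos_subscale(item_scores):
    """Rescale FAOS items (0 = no problems, 4 = extreme problems) to a
    0-100 subscale score where 0 is worst and 100 is best."""
    return 100.0 - mean(item_scores) * 100.0 / 4.0


assert faos_subscale([0, 0, 0]) == 100.0  # no problems -> best score
assert faos_subscale([4, 4, 4]) == 0.0    # extreme problems -> worst score
```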
Physical performance
Gait speed was measured in meters per second (m/sec) using the 40-m fast-paced walk test (40m-FPWT). The 40m-FPWT is routinely used in clinical practice for the assessment of physical function and is one of the Osteoarthritis Research Society International (OARSI) recommended functional performance-based outcome measures for OA research [34].
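The 40m-FPWT outcome is simply distance over time; a one-line illustration (names assumed):

```python
def gait_speed(walk_time_sec: float, distance_m: float = 40.0) -> float:
    """Fast-paced gait speed (m/sec) from the 40-m walk test."""
    return distance_m / walk_time_sec


print(round(gait_speed(25.0), 2))  # completing 40 m in 25 s -> 1.6 m/sec
```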
Adverse events
Adverse events (AE) were systematically assessed throughout the study by phone and at each visit. In addition, participants were encouraged to contact the study team with information about AEs that occurred between each respective visit. When an AE was reported to the study staff, data including the type of event, onset/end dates, duration, severity, and outcome were collected and reported to the Principal Investigator (PI). The PI determined the severity of the event using the CTCAE version 5.0 guidelines [35]. Following treatment with cryoneurolysis, expected symptoms included numbness of the skin (ankle and/or foot) with possible skin redness, swelling, bruising or pain at the site of insertion.
Statistical analysis
To assess the primary hypothesis that cryoneurolysis reduced ankle pain for at least 12 weeks in people with unilateral, symptomatic ankle OA, a sample size was estimated based on published effect sizes of 1.06 for FAOS-pain and FAOS-QoL, and 0.65 for FAOS-ADL [36]. At an effect size of 0.65 and a one-sided alpha of 0.025, a sample size of 32 would provide a power of 90%. While 80% power is customary, this study was powered at 90% to reduce the probability of missing an effect if one was present, and to provide sufficient power for comparisons at two time points. Assuming up to 20% dropout, 40 participants were recruited. This sample size was considered more than sufficient since the effect sizes for the other 2 outcomes were reported to be larger, at 1.06 [36].
The a priori primary analysis examined the change in FAOS-Pain between baseline and 12 weeks following cryoneurolysis. The analyses of secondary (i.e. FAOS-ADL, FAOS-QoL, NRS pain) and tertiary (i.e. 40m-FPWT) outcomes involved longitudinal data collected at baseline, 6, 12, and 24 weeks. All response variables were modeled using longitudinal mixed models. Each model comprised a random subject effect and a fixed time effect consisting of four levels (baseline and three follow-up time points). The Akaike information criterion was used to determine an appropriate variance/covariance structure for each model. Each hypothesis was tested by examining appropriate contrasts and estimated linear forms in the overall mean and the main effects for time, using an alpha level of 0.05 to determine statistically significant change. All analyses were participant-based, with one ankle per participant, and were conducted using "PROC MIXED" in SAS Version 9.4 (SAS Institute, Cary, NC), with results presented as least-squares means (LS means) and 95% confidence intervals (95% CI).
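The models themselves were fitted in SAS PROC MIXED. For readers without SAS, a roughly analogous random-intercept sketch in Python's statsmodels is shown below; the column names and input file are assumptions, and statsmodels does not reproduce PROC MIXED's full menu of repeated-measures covariance structures:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data assumed: one row per participant per visit, with columns
# 'id', 'week' (0, 6, 12 or 24) and 'faos_pain'.
df = pd.read_csv("faos_long.csv")  # hypothetical file

# Random subject intercept plus a categorical fixed effect of time; the fixed
# effects estimate each follow-up week's contrast against baseline (week 0).
model = smf.mixedlm("faos_pain ~ C(week)", data=df, groups=df["id"])
result = model.fit(reml=True)
print(result.summary())
```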
Results
Forty-three potentially eligible participants were screened. Of these, two were found by radiographs or weight-bearing CT (WBCT) scan not to have tibiotalar OA (KL = 0) and were excluded. In addition, one participant was excluded due to an unstable ankle that required bracing and opioid therapy. Subsequently, 40 participants with unilateral, painful ankle OA were recruited. Participants (50% women) had a mean ± standard deviation (SD) age of 63.0 ± 12.8 (range 28-84) years and a BMI of 31.7 ± 6.8 (range 20.4-46.8) kg/m². Most study participants localized their pain to the superficial nerve group. Five participants reported pain deep in the ankle joint, for whom we performed deep fibular nerve treatment with cryoneurolysis. Thirty-one participants were initially treated with cryoneurolysis of superficial nerves (24 superficial fibular nerves, 29 sural nerves, 25 saphenous nerves) and 4 were initially treated at the deep fibular nerve. One participant who was initially treated at the deep fibular nerve was later treated in the superficial group, and four participants who were initially treated in the superficial group later received treatment of the deep fibular nerve. Baseline clinical and demographic characteristics of eligible participants are presented in Table 1. There were no obvious systematic differences in baseline characteristics between participants who completed 24 weeks of the study following initial treatment and those who either dropped out following the initial intervention or transitioned to the other treatment, except for a greater proportion of ankles graded as KL4 in those who dropped out vs. in all enrolled (12/18 vs. 19/40).
Twenty-two participants completed the full 24-week follow-up. Reasons for discontinuation are presented in Fig. 2. In brief, 7 participants discontinued due to increased pain after the intervention, 5 transitioned to the alternative treatment within the study per protocol, 2 sought treatment outside of the study (1 for repeat cryoneurolysis at Week 22 after study entry and 1 for an ankle corticosteroid injection at Week 18), 1 could not afford to travel for follow-up visits, and 3 did not answer calls or emails. The mean values for outcome measures at baseline and each follow-up visit are presented in Table 2.
Safety
Over the 24-week follow-up period following cryoneurolysis, 19 participants reported a total of 42 adverse events. Of these, 2 were deemed unrelated to the intervention, 4 unlikely related, 18 possibly related, 14 probably related (paraesthesia in 6, pain in the treated ankle in 2, muscle cramp in 1, pruritis in 1, lump or edema in 2, and 1st web space pain in 1), and 5 definitely related (ankle pain immediately following the procedure in 3, a blister over the saphenous nerve treatment site in 1, and a bruise in 1). A full list of the reported adverse events is presented in Table 3. The most frequently reported adverse events were index ankle arthralgia (N = 14) and paraesthesia (N = 11), and most were of mild or moderate severity (81%). The 8 adverse events that were severe enough to limit sleep or self-care were ankle pain (N = 6), foot muscle cramp (N = 1) and pain in the 1st web space of the foot (N = 1). There were no serious adverse events.
(Table 1 footnotes: abbreviations as defined in the text; following enrolment, one participant was found to have KL4 talonavicular OA and one subtalar joint OA, although both were enrolled with outside diagnoses of ankle OA; results are means with standard deviations or counts with percentages unless otherwise stated; NRS scores range 0-10 (0 = no pain, 10 = worst possible pain); FAOS scores range 0-100 (0 = worst possible symptoms, 100 = no foot/ankle symptoms).)
Discussion
This single-center, single-arm, open-label study is the first to evaluate the symptomatic and functional benefits of cryoneurolysis in participants with unilateral, symptomatic ankle OA. We observed statistically significant changes from baseline in the FAOS subscales for pain, ADL and QoL, and in NRS pain, at 6, 12 and 24 weeks post-treatment. These data suggest that cryoneurolysis is likely to be effective in relieving pain and improving self-reported function in patients with symptomatic ankle OA, although walk time was not found to improve. These findings warrant further investigation and validation in future randomised, placebo-controlled trials.
Cryoneurolysis has been shown to be an effective treatment for decreasing knee pain [28] and improving post-operative outcomes following knee arthroplasty [29,37], yet the application of cryoneurolysis for ankle OA had not been studied. Cryoneurolysis is not expected to permanently reduce symptoms, as sensory nerves regenerate [27]. Unlike pharmacological treatments (e.g. NSAIDs, acetaminophen), which carry a risk of side-effects [38,39], patients may undergo repeated cryoneurolysis to extend symptomatic benefits. In the current study, and compared to baseline, statistically significant improvements in pain, ADL and QoL were observed at 6 weeks, 12 weeks (primary outcome) and 24 weeks following treatment. These effects are similar to a previous knee OA randomised trial in which, compared to sham, cryoneurolysis demonstrated statistically significant improvement in pain at days 30, 60 and 90 [28]. Further, WOMAC pain responders at day 120 continued to experience a statistically significant treatment effect at day 150 (~21.5 weeks) [28]. However, no statistically significant improvement in the 40m-FPWT was observed, despite a statistically significant improvement in ADL. These data suggest that, despite improvements in functional ability and pain relief, patients with symptomatic, unilateral ankle OA may continue to be cautious or limited in walking.
In the current study, cryoneurolysis was well tolerated by most participants, with a total of 42 adverse events (AEs) reported in 19 participants over the full 24-week study period. The most common AEs were arthralgia (N = 14) and paraesthesia (N = 11), which were mostly mild in severity and did not require clinical intervention. The higher occurrence of these AEs in the initial group of participants was attributed to the use of the 6.9 mm sharp tips and became very rare after changing to 55 mm dull tips, which allowed continuous sonographic guidance to avoid direct contact with the nerve. Even with sonographic guidance, the use of the sharp tips was thought to increase the risk of puncturing the nerve sheath, leading to the increased post-cryoneurolysis dysesthesias observed in the first three participants treated. After switching to the 55 mm tip, the procedure had substantially fewer AEs.
When compared to a previous study of cryoneurolysis in patients with mild-to-moderate knee OA [28], we reported similar counts of AEs. Whilst we did not have a sham group, in that study the incidence of device- or procedure-related AEs was similar across treatment and sham arms [28], giving confidence in the safety of the treatment. Our rates of AEs were also substantially lower than the rates reported for standard-of-care therapy (i.e. corticosteroid injection) [40]. These data suggest that cryoneurolysis may provide comparable or superior pain relief, with functional improvement, at relatively low risk compared with currently available pharmacological treatments for OA.
One limitation of this study was that, upon re-assessment of the baseline radiographs following treatment, one participant had KL grade 4 talonavicular OA and one had subtalar joint OA, rather than tibiotalar OA. Given that these participants were treated for ankle pain and returned for follow-up, they were not excluded. Further, we used an open-label design without a sham treatment as this was the first study of cryoneurolysis for ankle OA, and the funding source elected to determine the magnitude of effect of the novel protocol used prior to initiating a controlled study. Large placebo effects have been observed in OA clinical trials, particularly in studies of surgical intervention [41]. While it is possible that the independent treatment effects of cryoneurolysis could be better evaluated by comparison against a sham intervention, the robustness of the response for at least 24 weeks in participants following years of severe ankle pain lends credibility to the study findings. These findings, however, require validation in future randomised, placebo-controlled trials. Cryoneurolysis is a palliative treatment approach and does not target the underlying cause of pain generation. Lastly, whilst 40 participants were recruited, only 22 participants completed the full 24-week follow-up period. Strengths of this study included the frequency of follow-up assessments and the duration of follow-up, design features that permitted measurement of pain, function, and quality of life following this novel application of cryoneurolysis.
Conclusions
Cryoneurolysis of the superficial and/or deep nerves surrounding the ankle resulted in significant improvements in ankle pain, function and QoL for up to 24-weeks in participants with unilateral, symptomatic ankle OA. These data support the use of cryoneurolysis as an effective and safe non-pharmacological treatment of joint symptoms in patients with symptomatic ankle OA.
Role of funding source
Funding for this study, equipment and cryoprobes were provided by Pacira CryoTech, Inc. (previously Myoscience) through an Investigator-Initiated Research (IIR) unrestricted grant. The sponsor was not involved in the study design, data collection, data analysis, data interpretation or manuscript preparation. HERON is supported in part by funds from CTSA Award #UL1TR000001 and Patient-Centered Outcomes Research.
(Table 3 caption: Adverse events (AE) reported in eligible study participants. A total of 19 participants reported 42 adverse events. Adverse events were not mutually exclusive, and study participants could report the same adverse event at different time points.)
Author contributions
Conception, design and conduct of study: NAS. Analysis and interpretation of data: both authors. Drafting Article: both authors. Critical revision of article: both authors. Final Approval: both authors.
Availability of data and materials
All data generated and analysed in this study are available upon reasonable request. Requests for access to the data generated in this report should be sent to the corresponding author at thomas.perry@kennedy.ox.ac.uk.
Public and patient involvement (PPI) statement
PPI was not required nor involved with any aspect of the work presented.
Ethical approval
Ethical approval was granted by the University of Kansas Institutional Review Board (STUDY00142298).
Declaration of competing interest
Neither author declares conflicts of interest related to the current study. NAS has consulted for Flexion Therapeutics for unrelated work.
"year": 2022,
"sha1": "48daa694be6db537ed4470456a11b686d501a956",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ocarto.2022.100272",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "27c57307f8cecae27a3271c33af0e2bdd24ed28d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Self-controlled responses to COVID-19: Self-control and uncertainty predict responses to the COVID-19 pandemic
Two online studies (Total N = 331) tested the hypothesis that individual differences in self-control and responses to uncertainty would predict adherence to Centers for Disease Control and Prevention (CDC, 2020a) guidelines, reported stockpiling, and intentions to engage in hedonic behavior in response to the COVID-19 pandemic. Trait self-control (b = 0.27, p = .015), desire for self-control (Study 1: b = 0.28, p = .001; Study 2: b = 0.27, p = .005), and cognitive uncertainty (b = 0.73, p < .001) predicted more CDC adherence. State self-control (Study 1: b = −0.15, p = .012; Study 2: b = −0.26, p < .001) predicted less stockpiling, whereas emotional uncertainty (b = 0.56, p < .001) and cognitive uncertainty (b = 0.61, p < .001) predicted more stockpiling. State self-control (b = −0.18, p = .003) predicted less hedonic behavior, whereas desire for self-control (b = 0.42, p < .001) and emotional uncertainty (b = 0.26, p = .018) predicted more hedonic behavior. Study 2 (pre-registered) also found that emotional uncertainty predicted more stockpiling and hedonic behavior for participants low in state self-control (stockpiling: b = −0.31, p < .001; hedonic behavior: b = 0.28, p = .025), but not for participants high in state self-control (stockpiling: b = 0.03, p = .795; hedonic behavior: b = −0.24, p = .066). These findings provide evidence that some forms of self-control and uncertainty influenced compliance with behavioral recommendations during the COVID-19 pandemic. Supplementary Information The online version contains supplementary material available at 10.1007/s12144-021-02066-y.
In early 2020, people were urged to take drastic precautions to reduce the spread of a new virus. Soon after, the World Health Organization (WHO, 2020) announced that the outbreak of the Coronavirus disease 2019 would be characterized as a pandemic. People immediately began buying large amounts of food staples, personal items, and cleaning supplies, leaving some store shelves empty for weeks (Guynn, 2020; Jones & Tyko, 2020). The U.S. Centers for Disease Control and Prevention (CDC, 2020a) published guidelines to try to prevent the spread of the virus, including wearing a face covering and maintaining a distance of at least six feet from others. Containment of the spread of the virus depended in part on individual compliance. Individuals had to choose between the slight discomfort of wearing a face covering when in public, or taking a greater risk of becoming infected or infecting others. Grocery shoppers were faced with the decision to buy only what they needed, or to stock up on common goods, potentially creating shortages and leaving people in need. Several national holidays occurred in the U.S., enticing Americans to celebrate in large gatherings and ignore social distancing guidelines at the risk of creating a "superspreader" event.
These dilemmas created conflicts in which individuals must choose between their personal interest and the good of society, which can be resolved through the use of self-control. Self-control may make it easier for individuals to follow the guidelines surrounding COVID-19. We predicted that self-control and dispositional responses to uncertainty would be associated with adherence to CDC guidelines, stockpiling, and indulging in hedonic behaviors.
Self-Control and COVID-19 Response
Self-control is broadly defined as one's likelihood of prioritizing long-term goals when they conflict with immediate goals or desires (de Ridder et al., 2018; Fujita, 2011). Self-control is necessary to meet many goals, regardless of whether those goals were established by the self or by society (Vohs & Baumeister, 2017). Individuals with high self-control engage in healthier behaviors, such as less substance abuse and a higher likelihood of exercising, than individuals low in self-control (Crescioni et al., 2011; Hagger et al., 2009; Vohs & Baumeister, 2017). Individuals with high self-control are also more willing to help others, adhere to social norms, and make prosocial decisions than those with low self-control (DeWall et al., 2008; Tian et al., 2018; Vohs & Baumeister, 2017). An individual's overall tendency to choose long-term goals over short-term goals is considered trait self-control (Tangney et al., 2004). An individual's likelihood of choosing behaviors that support a long-term goal over behaviors that support a short-term goal at a point in time is considered state self-control (Twenge et al., 2004). Although state self-control is sometimes thought of as "in the moment" self-control, it is somewhat stable over short periods of time. Experience sampling data found that over the course of a week, state self-control varied relatively little within an individual (Zhang et al., 2018). Whereas trait self-control reflects one's general level of self-control, state self-control may be more sensitive to people's feelings of control during a recently-declared pandemic.
The COVID-19 pandemic presented individuals with many self-control conflicts; people were asked to stay at home and not gather at social events, not to buy food or household items in excess, and to partake in unpracticed behaviors previously acknowledged as atypical, such as keeping a social distance from people and wearing face coverings in public. Following these guidelines required self-control in that individuals had to override the desire to do something immediately rewarding or desirable (take off the uncomfortable mask, hug a friend) in order to pursue the long-term goal of preserving the health and well-being of themselves and those around them. Indeed, recent research has shown that adherence to pandemic-related health behaviors is self-control demanding, and is also associated with generally negative and aversive experiences (Brooks et al., 2020). One study found that high trait self-control was directly related to social distancing adherence, and that it also buffered the effect of perceived difficulty of following social distancing guidelines on adherence.
Due to a potential risk of food, health, and medical supply shortages, American residents were instructed not to buy more than they needed at the beginning of the pandemic (Executive Order No. 13910, 2020). Research on stockpiling behaviors during the pandemic has found that individuals characterized by the Dark Triad traits (i.e., psychopathy, Machiavellianism, and narcissism), as well as collective narcissism (the feeling that one's group is superior to other groups), engaged in more hoarding (Nowak et al., 2020). Not only was buying much more than one typically bought considered to be insensitive to other shoppers, but stockpiling medical supplies or personal protective equipment could potentially leave hospitals and treatment centers in dire need during this time. In line with prior work on self-control and COVID-19, we predicted that both trait and state self-control would be related to people's likelihood of following COVID-19 guidelines, which include adhering to CDC recommended health behaviors and avoiding stockpiling.
Pandemic-containment behaviors are simultaneously aversive and self-control demanding, which can further lead to a reduction in willingness to exert future efforts of self-control (Brooks et al., 2020). After using self-control to carry out these health behaviors, people may indulge in hedonic behaviors to make themselves feel better. Hedonic behaviors are activities that are inherently enjoyable or pleasurable. When self-control is low, people will tend to indulge in more inherently enjoyable behaviors (Baumeister & Heatherton, 1996). We predicted that low state self-control would be associated with more indulgence in hedonic behaviors during the COVID-19 pandemic.
In addition to the amount of self-control people have, research has shown that the degree to which people wish to have more self-control can also predict behavior (Uziel & Baumeister, 2017). Desire for self-control is a wish for more self-control which rests on the belief that one does not have enough self-control to meet necessary demands. Desire for self-control is theoretically distinct from the amount of self-control people believe they have (see Uziel & Baumeister, 2017, for review). Even participants who report moderate to high levels of self-control still report wanting more control. Counterintuitively, this desire then impairs future efforts of self-control as the person realizes they are not capable of meeting current demands. In previous research, wanting more self-control (measured or manipulated) in the face of difficult challenges led to a sense of reduced self-efficacy, which impaired performance on future efforts of self-control (Uziel & Baumeister, 2017). Given that the desire for self-control arises when one recognizes that they have a need for more self-control, a higher desire for self-control should appear during situations in which people feel they need more self-control to carry out or avoid certain actions. Due to this ironic effect of desire for self-control, we expected a high desire for self-control to be associated with more stockpiling and indulgence in hedonic behaviors, and fewer CDC-recommended behaviors.
Uncertainty and COVID-19 Response
Uncertainty is the awareness of a lack of knowledge (Anderson et al., 2019). Sources of uncertainty can stem from the randomness or unpredictability of future events, as well as the perceived ambiguity and complexity of that information.
The International Monetary Fund estimates that global economic and political uncertainty stemming from the COVID-19 pandemic is at an unprecedented high, three times as large as during the 2002-2003 severe acute respiratory syndrome (SARS) outbreak (International Monetary Fund; Ahir et al., 2020). Most of what was known about the virus was probabilistic (National Safety Council, 2020). Further, the National Safety Council reported being unable to quantify the odds of dying from COVID-19 due to rapidly changing mortality trends. Even though minimal amounts of uncertainty may not be as aversive, prolonged or chronic uncertainty is considered to be a threatening event (Anderson et al., 2019). Due to the uncertainty surrounding the pandemic, we predicted that individual differences in people's responses to uncertainty would interact with state self-control to predict their behavioral responses to COVID-19.
Early theorizations of coping with uncertainty posited that threatening or stressful situations are appraised, and that this appraisal in turn informs coping strategies to manage the stressor (Lazarus & Folkman, 1984). These strategies include emotion-focused coping, which involves regulating one's emotions and affective response to the situation, and problem-focused coping, which involves changing or managing the source of the stress or situation (Lazarus & Folkman, 1984). Building from this theory, Greco and Roger (2001) identified individual differences in the degree to which people respond to uncertainty emotionally and cognitively. These traits are measured on separate scales, and an individual can be high on one, both, or neither. Emotional responses to uncertainty involve experiencing uncertainty as emotionally threatening, which leads to anxiety and negative affect. Cognitive responses to uncertainty involve coping with uncertainty by planning ahead and taking action to reduce or avoid ambiguity (Greco & Roger, 2001). Intolerance of uncertainty is theoretically similar to an emotional response to uncertainty; both are driven by perceiving ambiguous and uncertain situations as threatening (Rosen et al., 2014). Recent research found a positive association between intolerance of uncertainty and COVID-19-related health anxiety (Tull et al., 2020). If information about a situation is perceived as ambiguous, people who respond with more emotional uncertainty will have a more difficult time coping and regulating their emotions. People who respond with more cognitive uncertainty will cope by taking action in order to reduce uncertainty.
Stress from the COVID-19 pandemic was relatively high on a global level (Xiong et al., 2020). As such, coping with stress was heavily emphasized in the U.S., through both conventional (e.g., therapy, helplines) and unconventional (e.g., 'treating yourself') methods (CDC, 2020b; Gran, 2020). Hedonic consumption has been viewed as a way of compensating for personal discomfort by buying hedonic or materialistic goods (Mandel et al., 2017). Indeed, recent research found that pandemic-related uncertainty increases consumers' tendencies to compensate by spending more money on things they want but do not need (Pomerance et al., 2020). Additionally, a perceived lack of control stemming from the pandemic increased consumers' materialistic and impulsive tendencies (Li et al., 2020b). Taken together, this research suggests that uncertainty surrounding the COVID-19 pandemic increases anxiety and may motivate hedonic consumption, although the exact mechanisms are unclear.
Given that an emotional response to uncertainty is characterized by negative affect, participants with a higher emotional response to uncertainty should engage in more stockpiling and hedonic activities to feel better. Participants who have a higher cognitive response to uncertainty should follow CDC guidelines and enact more preventative health behaviors, insofar as a cognitive response to uncertainty is marked by problem-focused actions that reduce uncertainty.
Self-Control and Emotional Responses to Uncertainty
Previous research has shown that the effect of individual differences is stronger when self-control is low. Low state self-control among those with high anxiety leads to increased worrying, impaired cognitive performance on academic tasks, and impaired attention regulation (Bertrams et al., 2013; Englert & Bertrams, 2015). Recent research has found that high self-control buffers the impact of a negative appraisal of the COVID-19 pandemic, such that the correlation between perceived severity of the pandemic and mental health problems decreased as participants' self-control ability increased (Li et al., 2020a). Global uncertainty was at an all-time high during the COVID-19 pandemic (International Monetary Fund; Ahir et al., 2020). How people cope with this uncertainty depends in part on their emotional or cognitive dispositional response to uncertainty. However, people's ability to control their behavior in a given time period (state self-control) may also be protective against dispositional traits. Thus, we expected the relationship between emotional responses to uncertainty, stockpiling behaviors, and hedonic behaviors to be stronger for participants lower in state self-control than for participants higher in state self-control.
Overview of the Present Research
The present research was designed to test the hypothesis that, in a sample of U.S. participants, high trait and state self-control would predict better adherence to the guidelines surrounding COVID-19. We also predicted that cognitive response to uncertainty would be associated with CDC adherence, and that emotional uncertainty would be associated with more stockpiling and indulgence in hedonic behaviors. Additionally, we tested the hypothesis that for individuals with high state self-control the association between emotional uncertainty and behavior would be weaker than for individuals with low state self-control. In both studies, we measured participants' compliance with CDC-recommended health behaviors and stockpiling. Health behaviors were measured by asking how much participants engaged in certain behaviors that, according to the CDC, helped to protect oneself and others (CDC, 2020a). We measured stockpiling by asking participants how much of various items they had bought since the outbreak of COVID-19 compared to how much they usually buy. In Study 2, we were interested in how self-control and responses to emotional uncertainty predict engagement in hedonic behaviors. We asked participants to self-report on how they would spend unexpected time as a measure of indulgence in hedonic behavior.
An instructional attention check was included in both studies (embedded in the state self-control measure), in which the question item instructed participants to select "a little not true" as their answer; those who did not select that response were excluded from all analyses in both studies. All main effects were analyzed using mean-centered scores of each predictor in individual linear regression analyses in both studies. The primary assumptions for all analyses were met. For exploratory purposes, we also assessed several other variables, including political orientation, personality trait type, and social desirability. Details regarding these exploratory analyses can be found in Supplemental Materials.
Data were collected at the front-end of the pandemic (mid-April), after the pandemic was officially declared (WHO, 2020). Studies 1 and 2 were run concurrently and designed by different researchers, which is the reason the studies do not use the exact same measures (e.g. measuring trait and state self-control in Study 1, but measuring just state self-control in Study 2). Both studies complied with ethical guidelines and were approved by the institutional review board. All participants provided informed consent before participating, and were debriefed, thanked, and compensated upon completion. Study 2 was pre-registered at https://aspredicted.org/blind.php?x=d2j24k; https://aspredicted.org/x4hi5.pdf.
Study 1
Study 1 tested the hypothesis that trait and state self-control would respectively predict more CDC adherence and less stockpiling. Study 1 also tested the hypothesis that high desire for self-control would predict less CDC adherence and more stockpiling.
Method
Participants
An a priori power analysis (G*Power; Erdfelder et al., 1996) was used to determine the number of participants needed to detect a small-to-medium effect size for self-control on pandemic-related responses. Using α = .05 (one-tailed), it was determined that a minimum of 138 participants were required to detect an effect size of r = .21 in a correlation test with 80% power. This effect size estimate was based on a meta-analysis of the effect of trait self-control on self-reported behaviors (de Ridder et al., 2012). We added 10% to this total (N = 152), following a lab standard, to account for anticipated omissions due to completion and attention check failures. U.S. residents were recruited for an online study via TurkPrime, a research platform associated with Amazon's Mechanical Turk (MTurk; cf. Buhrmester et al., 2011; TurkPrime; see Litman et al., 2017). Of the 152 participants recruited, 4 started the survey but did not complete it, and 10 were excluded for failing the attention check question embedded in the state self-control measure. Thus, analyses were performed on the final sample (N = 138; 80 men, 57 women, 1 agender; M age = 37.1, SD = 13.0).
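For readers who want to reproduce the sample-size arithmetic without G*Power, a standard Fisher-z approximation is sketched below; it lands within a couple of participants of the G*Power figures reported here and in Study 2, with the small differences due to G*Power's exact computation:

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80,
                      two_tailed: bool = False) -> int:
    """Approximate N needed to detect a correlation r (Fisher z method)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - (alpha / 2 if two_tailed else alpha))
    z_beta = z(power)
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)


print(n_for_correlation(0.21))                   # 140; G*Power gave 138 (Study 1)
print(n_for_correlation(0.20, two_tailed=True))  # 194; G*Power gave 191 (Study 2)
```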
Adherence to CDC Health Recommendations Questionnaire
Participants were asked the extent to which they adhered to each of nine behaviors recommended on the CDC website in response to the outbreak of COVID-19 (CDC, 2020a). Participants were asked to, "indicate how much you have done each of the following activities since, or in response to, the outbreak of COVID-19 (the novel coronavirus)." Anchors for the scale were 1 (Less than usual) to 4 (Usual amount) to 7 (More than usual) and an "N/A" option, which was coded as missing data. Sample items include "Social distancing" and "Using hand sanitizer" (see Appendix 1 for all items). Responses on the scale showed high internal consistency, Cronbach's α = .86.
Stockpiling Behaviors Questionnaire
Participants were asked the extent to which they bought each of 13 products since the outbreak of COVID-19. The products included were based on necessary grocery and household items, as well as items that were discussed in the media as being bought at high levels, such as toilet paper, firearms and ammunition, and cleaning or disinfectant supplies (Guynn, 2020; Jones & Tyko, 2020; Oppel Jr., 2020). Participants were asked to "indicate how much of each product you have bought since, or in response to, the outbreak of COVID-19 (the novel coronavirus)." Anchors for the scale were 1 (Less than usual) to 4 (Usual amount) to 7 (More than usual) and an "N/A" option, which was coded as missing data. Sample items include "Toilet paper" and "Medical masks" (see Appendix 2 for all items); Cronbach's α = .84.
Trait Self-Control Scale (TSCS; Tangney et al., 2004)
The brief version of the TSCS (Cronbach's α = .81) was used to assess self-reported levels of trait self-control. The TSCS is a 13-item, Likert-type scale with anchors of 1 (Not at all like me) to 5 (Very much like me). Sample items include, "I am good at resisting temptation" and "I refuse things that are bad for me."
State Self-Control Scale (SSCS; Twenge et al., 2004)
The SSCS was designed to assess how much self-control participants feel they have at that moment. The SSCS (Cronbach's α = .95) is a 25-item, Likert-type questionnaire with anchors of 1 (Not true) to 7 (Very true). Sample items include, "I feel discouraged" (reverse-coded) and "A new challenge would appeal to me right now."
Marlowe-Crowne 2(10) Social Desirability Scale (M-C 2(10); Strahan & Gerbasi, 1972)
The M-C 2(10) was designed to assess a person's level of social desirability, or wanting to enact socially acceptable behaviors. The M-C 2(10) is a 10-item, true (1)/false (0) questionnaire with higher scores indicating higher social desirability (Cronbach's α = .62). Example items include, "I never hesitate to go out of my way to help someone in trouble" and "I have never intensely disliked anyone."
Desire for Self-Control Scale (DSCS; Uziel & Baumeister, 2017)
The DSCS is designed to assess a person's motivation to have more self-control. The DSCS (Cronbach's α = .91) is an 8-item, Likert-type questionnaire with anchors of 1 (Strongly disagree) to 5 (Strongly agree). Sample items include, "I want to be more self-disciplined" and "I want to have more control over my feelings."
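The internal-consistency figures reported for these scales can be computed from an item-by-respondent score matrix; a standard Cronbach's alpha sketch (not the authors' code):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


# Toy data: 4 respondents x 3 Likert items.
scores = np.array([[5, 4, 5], [2, 2, 3], [4, 4, 4], [1, 2, 1]])
print(round(cronbach_alpha(scores), 2))  # ~0.96 for these highly consistent items
```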
Procedure
Participants completed the stockpiling behaviors and CDC adherence questionnaires in a random order. Next, the trait self-control scale, state self-control scale, social desirability scale, and desire for self-control scale were presented to participants in a random order, followed by a demographics questionnaire.
Analytic Approach
Simple regressions were used to test whether trait self-control, state self-control, and desire for self-control predict CDC adherence and stockpiling, respectively. To distinguish desire for self-control from trait and state self-control, the relationships between desire for self-control and the outcome variables were analyzed controlling for trait self-control and state self-control (Uziel & Baumeister, 2017). Additionally, because self-control is a socially desirable characteristic and people are often motivated to report that they have high selfcontrol, social desirability was controlled for across all analyses. Exploratory analyses for the main effect of social desirability on responses can be found in Supplemental Materials; all analyses were conducted using SPSS 26.0.
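A minimal sketch of that model structure in Python (the analyses were run in SPSS 26.0; the data file and column names here are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study1.csv")  # hypothetical file of participant scale scores

# Desire for self-control predicting CDC adherence while controlling for
# trait self-control, state self-control, and social desirability.
model = smf.ols(
    "cdc_adherence ~ desire_sc + trait_sc + state_sc + social_desirability",
    data=df,
).fit()
print(model.summary())
```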
Results
See Table 1 for means, standard deviations, and correlations for all variables.
Discussion
Study 1 found that trait and state self-control predicted more adherence to CDC recommendations, and state self-control also predicted less stockpiling. These results suggest that individuals with low self-control were less likely to engage in behaviors that protect themselves and others from adverse effects during the pandemic than individuals with high selfcontrol. Moreover, with all self-control variables in the model, state self-control and desire for self-control predicted more CDC adherence, but trait self-control did not predict CDC adherence.
This suggests that trait self-control may be less relevant for predicting responses to time-specific events than current feelings of self-control.
Study 2
Study 2 tested the hypothesis that state self-control and desire for self-control would predict more CDC adherence, less stockpiling, and less hedonic behavior. Study 2 also tested the hypothesis that emotional uncertainty would predict more stockpiling and more hedonic behavior, whereas cognitive uncertainty would predict more CDC adherence. We also predicted that emotional uncertainty would be related to more stockpiling and indulgence in hedonic behavior for people low in state self-control, but not for people high in state self-control. Specifically, Study 2 tested the hypothesis that state self-control would moderate the relationships between emotional uncertainty and stockpiling, and emotional uncertainty and hedonic behavior.
Method
Participants
An a priori power analysis (G*Power; Erdfelder et al., 1996) was used to determine the number of participants needed to detect a small-to-medium effect size for uncertainty on pandemic-related responses. Using α = .05 (two-tailed), it was determined that a minimum of 191 participants were required to detect an effect size of r = .20 in a correlation test with 80% power. We decided in advance to add 10% to this total to account for anticipated attrition due to completion and attention check failures (N = 210). U.S. residents were recruited for an online study via TurkPrime through Amazon's Mechanical Turk (MTurk; cf. Buhrmester et al., 2011; TurkPrime; Litman et al., 2017). Of the 210 participants recruited, 17 were excluded due to their failure on the attention check question embedded in the state self-control measure. Any participants with missing data for a certain measure were excluded (listwise) from the analyses for that measure. Analyses were performed on the final sample (N = 193; 66 women, 121 men; M age = 36.34, SD = 11.78).
Materials
Adherence to CDC health recommendations, stockpiling behavior (see Appendix 3 for the full measure), state self-control, and desire for self-control were assessed with the same measures as in Study 1 (Cronbach's α: .86, .91, .94, and .86, respectively). We also measured responses to uncertainty and indulgence in hedonic behaviors.
Uncertainty Response Scale (URS; Greco & Roger, 2001)
The URS was designed to assess individual differences in coping with uncertainty. We used the emotional uncertainty factor and the cognitive uncertainty factor from the URS to measure differences in orientation toward uncertainty. The emotional uncertainty factor of the URS (URS-EU) was measured using a 15-item questionnaire, with anchors of 1 (Never) to 4 (Always). Sample items include, "I get worried when a situation is uncertain," and "Sudden changes make me feel upset"; Cronbach's α = .93. The cognitive uncertainty factor of the URS (URS-CU) was measured using a 17-item questionnaire, with anchors of 1 (Never) to 4 (Always). Sample items include, "I like to plan ahead in detail rather than leaving things to chance," and, "I like to know exactly what I'm going to do next"; Cronbach's α = .89.
Indulgence in Hedonic Behaviors Questionnaire
Participants responded to a prompt that assessed activities they would engage in if they had extra free time: "Imagine you had a few extra hours of time after this study that you had not expected to have. How would you use that time?" Referencing their responses, participants then assessed how indulgent the activities were using a 6-item questionnaire on a 7-point scale, with anchors from 1 (Not very much) to 7 (Very much). Items include, "How satisfying are these activities?", "How desirable are these activities?", "How enjoyable are these activities?", "How rewarding are these activities?", "How indulgent are these activities?", and "How luxurious are these activities?"; Cronbach's α = .78.
Procedure
Following the same procedure as in Study 1, participants completed the stockpiling and CDC adherence questionnaires in a random order. Next, participants completed the state selfcontrol scale, desire for self-control scale, uncertainty response scale-emotional uncertainty, uncertainty response scale-cognitive uncertainty, and the indulgence measure in a random order, followed by the demographics questionnaire.
Analytic Approach
The Aiken and West (1991) method was used to test for interactions between emotional uncertainty and state self-control on stockpiling and hedonic behavior using mean-centered scores. Simple slopes were probed at one standard deviation above and below the mean of state self-control. Additional multiple regression analyses were conducted controlling for all three outcome measures: CDC adherence, stockpiling, and hedonic behavior. All analyses in Study 2 were conducted in R (R Core Team, 2020) and figures were produced using the package 'rockchalk' (Johnson, 2019). We did not make any predictions for the relationship between emotional uncertainty and CDC behaviors, nor for the relationships between cognitive uncertainty and stockpiling or hedonic behaviors, but we report these and all other exploratory results in Supplemental Materials.
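A sketch of the Aiken and West procedure in Python (the published analyses used R with 'rockchalk'; the data file and variable names below are assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2.csv")  # hypothetical file

# Mean-center the predictor and moderator before forming the interaction term.
df["eu_c"] = df["emotional_uncertainty"] - df["emotional_uncertainty"].mean()
df["ssc_c"] = df["state_self_control"] - df["state_self_control"].mean()

model = smf.ols("stockpiling ~ eu_c * ssc_c", data=df).fit()

# Simple slopes of emotional uncertainty probed at +/-1 SD of state self-control.
sd = df["ssc_c"].std()
for label, level in (("low SSC (-1 SD)", -sd), ("high SSC (+1 SD)", sd)):
    slope = model.params["eu_c"] + model.params["eu_c:ssc_c"] * level
    print(label, round(slope, 3))
```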
Discussion
Replicating Study 1, Study 2 found that state self-control predicted less stockpiling; Study 2 additionally found that state self-control predicted less hedonic behavior. Also replicating Study 1, Study 2 found that desire for self-control predicted more CDC adherence; Study 2 additionally found that desire for self-control predicted more stockpiling and hedonic behavior. Unlike Study 1, Study 2 found that state self-control predicted more CDC adherence only when controlling for the other outcome measures. This was not one of our pre-registered analyses and should be interpreted cautiously. Finally, as predicted, Study 2 found that cognitive uncertainty predicted more CDC adherence, whereas emotional uncertainty predicted more stockpiling and hedonic behavior. These results suggest that individual differences in self-control and cognitive responses to uncertainty are associated with differences in reported behavioral responses to the COVID-19 pandemic.
As predicted, Study 2 also found that state self-control moderated the effects of emotional uncertainty on stockpiling and on hedonic behavior, respectively. Specifically, Study 2 found that emotional uncertainty predicted more stockpiling and hedonic behavior among participants low in state self-control, but not among participants high in state self-control. These results suggest that state self-control plays a role in people's behavioral responses to the COVID-19 pandemic, but may interact with people's dispositional responses to uncertainty.
(Fig. 1 caption: CDC adherence between high and low state self-control. CDC = Centers for Disease Control and Prevention; low/high SSC = 1 standard deviation below/above the mean of state self-control.)
General Discussion
The present research found evidence that individual differences in self-control and cognitive response to uncertainty support compliance with guidelines during the COVID-19 global pandemic. Participants with high trait self-control were more likely to report following CDC behaviors, even when controlling for social desirability (Study 1). Participants with high state self-control were less likely to report stockpiling in response to the pandemic (Studies 1 and 2). As predicted and pre-registered, self-control was associated with weaker effects of emotional uncertainty on behavior. The effects of emotional uncertainty on behavior were weaker for people with high state self-control than for people with low state self-control. For people with low state self-control, emotional responses to uncertainty predicted more stockpiling and hedonic behaviors. However, for people with high state self-control, there was no relationship between emotional responses to uncertainty and stockpiling or hedonic behaviors.
The present research builds on previous research by showing that in addition to trait self-control, state levels of self-control are associated with better guideline adherence. Self-control not only plays a central role in these relationships, but it can also be associated with differences in the effect of traits on behavior. Our results contribute evidence to the theory that individual differences are more strongly associated with behaviors at low levels of self-control (Bertrams et al., 2013; Englert & Bertrams, 2015; Tangney et al., 2004). Our findings show that the effect of emotional responses to uncertainty on reported behaviors is lower for those high in state self-control.
Desire for self-control involves the perception that one does not have enough self-control to meet demands, and is conceptually and statistically distinct from the amount of self-control (Uziel & Baumeister, 2017). Desire for self-control was correlated with higher CDC adherence in both studies, as well as stockpiling and engagement in hedonic behaviors in Study 2. The finding that desire for self-control was related to CDC adherence was the opposite of our prediction; however, prior work has shown that desire for self-control is associated with a fear of failure, stronger prevention focus, and less emotional stability (Uziel et al., 2021). This suggests that those high in desire for self-control may be sensitive to threats, so threatening environments, such as a pandemic, may be a motivating factor to take action. Desire for self-control is also associated with increases in behavioral intentions, such as wanting to join a self-control training program (Uziel et al., 2021). Our findings suggest that desire for self-control may be associated with a greater likelihood of acting to reduce uncertainty, whether that be by engaging in preventative health behaviors, stockpiling goods, or distracting oneself with a pleasurable activity. These studies provide support for the growing literature on the impact of wanting more self-control on efforts of self-control. Although prior work has shown a somewhat ironic effect of desire for self-control, in that wanting more self-control impaired future efforts of self-control (Uziel & Baumeister, 2017), our results suggest that having a high desire for self-control in a threatening environment may be motivating enough to enact behaviors of self-control.
Although the uncertainty surrounding the pandemic is often presented as a negative, the present research suggests that greater awareness of and response to uncertainty can be associated with greater compliance with CDC recommendations. Participants with a greater cognitive response to uncertainty engaged in more CDC-recommended behaviors than participants who were less responsive to uncertainty. This fits with previous research showing that greater responsiveness to uncertainty was associated with more adaptive health behaviors, such as seeking out precautionary measures (Rosen & Knäuper, 2009). Our findings emphasize the importance of having clear, applicable guidelines as a means to reduce uncertainty. In both studies, CDC adherence and stockpiling were positively correlated. It is plausible that people high in emotional and cognitive uncertainty have different motivations for engaging in the same behaviors. For people with a strong emotional response to uncertainty, stockpiling and hedonic behaviors could be used to cope with the fear and uncertainty surrounding the pandemic. For people with a cognitive response to uncertainty, stockpiling and engagement in hedonic behaviors may have been a method of coping through planned actions and strategies to reduce uncertainty. This interpretation is particularly relevant to the finding that a cognitive response to uncertainty was correlated with CDC behavior adherence, whereas an emotional response to uncertainty was not. When faced with uncertainty, an individual with a cognitive response may be more likely to take effortful actions, such as following CDC-recommended health behaviors, to manage the uncertainty of a situation. Thus, our findings contribute to the body of literature showing that differences in responses to uncertainty can lead to specific coping behaviors, particularly related to health domains (Brouwers & Sorrentino, 1993; Greco & Roger, 2001; Hillen et al., 2017).
Limitations
Because our results are purely correlational, we cannot determine the causal direction of these relationships. As such, our results should not be interpreted to mean that self-control is necessarily the cause of decreased stockpiling behaviors and increased adherence to CDC recommendations. For example, it is possible that feeling like one is successfully enacting behaviors to deal with the pandemic may make a person feel more confident in their self-control abilities. Additionally, previous research shows that people who have more self-control have more extrinsically successful careers, marked by higher salaries and more occupational prestige, as well as higher relationship and parenting satisfaction (Converse et al., 2018). Additionally, those in higher income communities were more likely to follow CDC recommendations on social distancing and sheltering in place (Weill et al., 2020). These resources may have made it more possible for people with high self-control to adapt to recommendations to stay home. However, Study 1 found significant differences between people high and low in self-control in behaviors that seem unlikely to be affected by resources, such as covering coughs and sneezes. This suggests that self-control is associated with better adherence in the context of a pandemic.
The present findings are based on an MTurk sample of U.S. residents. Although there have been concerns over the quality of data obtained from online markets, research suggests that MTurk data are valid and comparable with data collected through traditional laboratory settings (Buhrmester et al., 2011). Including only American residents allowed us to limit (though admittedly not eliminate) the differences in external constraints, which could potentially change the influence of self-control on compliance behaviors. However, in limiting our sample to only American residents, we cannot generalize these results to other populations. For example, in countries with strongly enforced requirements (Think Global Health, 2020), there may be less variability in behavior that could be predicted by self-control. Another limitation is that the effect sizes are small to medium, suggesting that other factors certainly play a role in predicting these behaviors. A further limitation is that CDC behavior adherence was measured by self-report, rather than an objective behavioral measure. Although we controlled for social desirability in Study 1, self-reported behaviors may not be as accurate as observable measures (O'Boyle et al., 2001).
(Fig. 2 caption: Relationship between emotional uncertainty and stockpiling behaviors by state self-control. Fig. 3 caption: Relationship between emotional uncertainty and hedonic behaviors by state self-control. SSC = state self-control; low/high SSC = 1 standard deviation below/above the mean of state self-control.)
Implications and Future Directions
To the extent that self-control supports people's ability to adhere to recommendations, interventions that increase the likelihood of success at self-control may be used to increase compliance with these recommendations. Implementation intentions, for example, involve a plan to engage in certain behaviors in a given situation, which diminishes the need to use self-control (Webb & Sheeran, 2003). Implementation intentions have been shown to be effective in promoting self-control abilities in the initiation of health-protective, disease-preventive, and prosocial behaviors (Gollwitzer & Sheeran, 2006). An implementation intention plan links a behavior with a situational context, such as, "if I leave the house, I will wear a mask." In a similar vein, habits are a reliance on automatic behaviors that require less self-control than effortful behaviors. By making a habit of always keeping a mask or hand sanitizer in one's car or on one's person, one no longer has to think about bringing it along, relieving the need for effortful control on future occasions. Planning ahead and forming good habits serve both to reduce the need for self-control and to offset low self-control in the moment. Regularly practicing such good habits and plans can even improve self-control abilities in the long run (Baumeister et al., 2006; Tian et al., 2018), which could serve to benefit self-control during a time when it is most needed. Kokkoris and Stavrova (2021) found that in the context of goal pursuit during the pandemic, high trait self-control was associated with the development of new goal-directed behaviors, as well as those behaviors becoming habits. This suggests that high self-control is resilient to disruptive circumstances and can facilitate positive behavioral change even in adverse contexts.
In the context of the pandemic, the effects of a dispositional emotional response to uncertainty were stronger for those with low state self-control. Although prior research has examined the relationship between depleted self-control and the effects of select traits on behavior, future research should expand upon self-control's moderating effects on dispositional responses and coping mechanisms. Our hypothesis was based on the idea that an emotional response to uncertainty predicts maladaptive behaviors (Greco & Roger, 2001;Van den Bos et al., 2007). However, it is possible that, for people low in state self-control, an emotional response to uncertainty may predict greater likelihood of taking whatever action is available, including beneficial ones. This fits with research showing that people sometimes show a reactive approach response to uncertainty (McGregor et al., 2010). Nash et al. (2011) found that uncertainty increases people's motivation to enact approach behaviors that reduce distress. Additional research would be needed to determine if a reactive approach response to uncertainty is greater when state self-control is low.
Conclusion
The present research provides evidence that self-control and dispositional responses to uncertainty are related to engaging in stockpiling and in behaviors that reduce the spread of a virus during a time of global pandemic. Individuals low in trait self-control and low in cognitive responses to uncertainty are less likely to follow health guidelines. Individuals low in state self-control and individuals high in cognitive or emotional responses to uncertainty are more likely to stockpile goods, leaving shelves empty for future shoppers, than those high in state self-control or low in cognitive and emotional responses to uncertainty. Knowing that a cognitive response to uncertainty involves a plan to take action, guidance and recommendations from governing bodies regarding health behaviors should include clear, actionable steps one can take to protect oneself and minimize the spread of a virus. Following set guidelines might make people less susceptible to fluctuations in self-control or responses to uncertainty in uncertain circumstances. Immediately enacting well-defined public guidelines in ambiguous situations could also potentially eliminate the need for state self-control to act as a protective moderator on the effects of emotional uncertainty. Policy-makers and health experts should take into account individual differences in uncertainty responses and self-control that may hinder or enhance voluntary compliance with health and societal recommendations.
CDC Behavior Adherence Measure Items in Study 1
Instructions: Using the scale provided, indicate how much you have done each of the following activities since, or in response to, the outbreak of COVID-19 (the novel coronavirus).
Stockpiling Behaviors Measure Items in Study 1
Instructions: Using the sliders below, indicate how much of each product you have bought since, or in response to, the outbreak of COVID-19 (the novel coronavirus).
Less than usual
Usual amount
More than usual
Response of soil water content temporal stability to stand age of Haloxylon ammodendron plantation in Alxa Desert, China
Afforestation, as an effective measure for wind and sand control, has achieved remarkable results in northern China and has also greatly changed the land use and vegetation characteristics of the region. Studying the spatial and temporal dynamics of soil water content (SWC) across afforestation years and its temporal stability is important for understanding how SWC evolves during afforestation. In order to reveal the spatiotemporal dynamics of SWC in desert-area Haloxylon ammodendron (HA) plantations, five HA plantations of different restoration ages were selected and their SWC was measured layer by layer over the 0–400 cm soil profile; we also analyzed the spatiotemporal dynamics and temporal stability of the SWC. The results showed that the SWC of the HA plantations decreased with increasing planting age over the measurement period, and the SWC of the deep layers changed more with planting age than that of the shallow layers. Spearman's rank correlation coefficients for the 0–400 cm SWC in both the 5- and 11-year-old HA plantations were above 0.8 and highly significant, and the temporal stability of SWC tended to increase with soil depth. In contrast, the temporal stability of SWC in the deeper layers (200–400 cm) of the 22-, 34- and 46-year-old stands showed a decreasing trend with depth. Based on the relative difference analysis, representative sampling points can be selected to monitor the regional average SWC, but for older HA plantations, stand age should be considered as an uncertainty factor in regional moisture simulation. This study verified that it is feasible to estimate large-scale SWC from a small number of observations for HA plantations younger than 11 years, whereas large errors arise for older stands, especially in deeper soils. These findings will aid soil moisture management in HA plantations in arid desert areas.
Introduction
The study area is located in northwestern China, at the southern edge of the Badain Jaran Desert, a region that has long suffered from wind and sand hazards. In order to curb these hazards effectively and prevent the further expansion of sandy land, China has launched a number of major ecological construction projects, such as the "3-North Shelter Forest Program" and the "Grain for Green" program, with artificial vegetation construction as the main ecological restoration measure. The vegetation, represented by Haloxylon ammodendron (HA) plantations, is increasingly restricted by soil water content (SWC); in particular, large areas of HA plantations show obvious soil drying, which seriously affects the water cycle in the study area (Kang et al., 2021; Zhou et al., 2022). Therefore, understanding the hydrological effect of afforestation on SWC is important for the water cycle and eco-hydrological processes of terrestrial ecosystems. SWC is at the core of sandy ecosystem functioning in arid and semi-arid regions, driving the material cycle and energy flow in the soil-vegetation-atmosphere continuum, and its dynamics affect hydrological and ecological processes such as precipitation infiltration, vegetation transpiration and solute transport (Vereecken et al., 2015). SWC has strong spatial and temporal variability due to topography, elevation, soil texture and climate (Hu et al., 2010; Heathman et al., 2012). Related studies also indicate that vegetation affects the spatial distribution of SWC, enhancing or reducing its spatial heterogeneity to some extent (Li et al., 2008; Cho and Choi, 2014), and the variability of SWC also responds in varying degrees to vegetation growth, succession and spatial patterns (Van Pelt and Wierenga, 2001). Therefore, a quantitative study of the spatial variability of SWC is essential to grasp regional eco-hydrological dynamics.
Although soil water has strong spatial and temporal variability, previous studies have shown that SWC is characterized by temporal stability (Brocca et al., 2009; Brocca et al., 2010). Vachaud et al. (1985) found that the spatial structure of SWC is continuous in time when external factors such as soil structure and topography remain stable, and that certain sampling points can represent regional average SWC conditions; this phenomenon was subsequently defined as the temporal stability of SWC. Since then, temporal stability has been widely used to identify the average SWC condition in the field (Huang et al., 2020; Zhou et al., 2020; Quan et al., 2021) and to simulate hydrological variables (Nasta et al., 2018). The temporal stability of SWC is influenced by numerous factors, such as the length of the observation period (Zhang et al., 2018) and the type of land use (Liu and Shao, 2014). Liu and Shao (2014) reported a significant effect of different land use types on SWC in the 0–4 m profile and further validated the feasibility of using representative sampling points to estimate the average SWC. In a spring wheat-shelterbelt-maize agroforestry ecosystem, SWC relationships between adjacent land use types have been explored by determining the most time-stable locations of each soil profile for the different land use types. Jia and Shao (2013) reported that vegetation cover and aboveground biomass are the main factors affecting the temporal stability of SWC, and further concluded that, when the temporal stability theory is applied to slopes, sampling on the slopes themselves may produce better results.
The above findings indicate that studying the temporal stability of SWC is especially meaningful for non-homogeneous soils, and that vegetation factors profoundly affect temporal stability, with effects that differ between soil depths. As a key area for revegetation in the Alxa desert region, artificial revegetation has greatly changed the land use and vegetation characteristics of the region, and the effect of this huge disturbance on the temporal variability and temporal stability of SWC has yet to be studied. Specifically, this study aimed to: (1) explore the response of SWC to stand age in HA plantations; and (2) reveal the spatiotemporal dynamic characteristics of SWC in a desert area. The results provide a theoretical basis for water resource management and vegetation construction in the region.
Description of study site
The Alxa desert area is located in northwestern China (97°10′E–106°52′E, 37°21′N–42°47′N). The region has an arid climate. Sunshine is abundant, with 2993 to 3345 h of annual sunshine. The annual average temperature is 6.8–8.8 °C, with extreme minimum and maximum temperatures of −36.4 °C and 41.7 °C, respectively. There is a large diurnal temperature range and significant seasonal variation. The frost-free period is 130–165 d. The average annual precipitation is 39.3–85.6 mm, with precipitation from July to September accounting for about 90% of the annual total. The water table lies at 80–120 m, and there are no river confluences. Owing to the drought tolerance and high survival rate of HA, the Chinese government planted HA in large quantities in the study area around the 1970s for wind and sand control, and it has played a vital role in improving the ecological environment of the area.

(Figure 1: Location of the study area.)
Experimental design and measurements
Since 1975, afforestation projects have been carried out every year in the Alxa desert. In this study, HA stands planted in 1975, 1987, 1999, 2010 and 2016 were selected according to the principle of consistent soil texture, yielding five representative sample plots of 5-, 11-, 22-, 34- and 46-year-old plantations with an age gradient of about 10 years. A 50 m × 50 m sample plot was delineated in each plantation. From May to September 2021, the 0–400 cm soil profile at the center of each plot was sampled in 20 cm layers, with three replicate samples collected at each sampling site. The soil samples were sealed in aluminum boxes, taken back to the laboratory, and their SWC was determined by the oven-drying method. Three undisturbed soil samples were collected with a ring knife near each sampling point to determine soil hydraulic characteristics such as bulk density and field water capacity, and the sample plots were surveyed at the same time (Table 1). The selected plots were all planted in the standardized "two rows and one strip" planting mode, and there was no other vegetation around, so planting density and other vegetation did not interfere with the experimental results.
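As a small illustration of the oven-drying calculation, the sketch below converts gravimetric water content to volumetric water content using the ring-knife bulk density. The function name and the example masses are hypothetical, not measurements from this study.

```python
# Minimal sketch (assumed values): SWC via the oven-drying method.
# Gravimetric content = (wet mass - dry mass) / dry mass;
# volumetric content = gravimetric content * bulk density / water density.

def volumetric_swc(wet_g: float, dry_g: float, bulk_density: float,
                   water_density: float = 1.0) -> float:
    """Return volumetric SWC (cm3 cm-3) from oven-drying masses (g)
    and soil bulk density (g cm-3)."""
    gravimetric = (wet_g - dry_g) / dry_g
    return gravimetric * bulk_density / water_density

# Hypothetical sample: 35.2 g of moist soil drying to 33.8 g, bulk density 1.55.
print(round(volumetric_swc(35.2, 33.8, 1.55), 3))  # ~0.064 cm3 cm-3
```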
Statistical analysis
Soil water storage (SWS) is the amount of soil water stored in a given soil volume. Based on the observed 0–400 cm profile, the soil water storage at point i is given by

SWS_i = \sum_{k=1}^{20} SVWC_{ik} \cdot d_k,

where SWS_i is the soil water storage of the 0–400 cm profile at point i (i = 1, …, n) (mm), SVWC_{ik} is the soil volumetric water content of layer k at point i (cm³ cm⁻³), d_k is the thickness of layer k (mm), and 20 is the number of soil layers observed in this study. According to Vachaud et al. (1985), the relative difference (RD) of each observation and its standard deviation over time characterize the temporal stability of SWC. The relative difference at point i (i = 1, …, n) at time j (j = 1, …, m) is given by

RD_{ij} = (SVWC_{ij} - \overline{SVWC}_j) / \overline{SVWC}_j,

where SVWC_{ij} is the SVWC at point i at time j (cm³ cm⁻³), \overline{SVWC}_j = (1/n) \sum_{i=1}^{n} SVWC_{ij} is the average SVWC at time j (cm³ cm⁻³), and m is the number of measurement dates. The mean relative difference MRD_i and its corresponding standard deviation SDRD_i are given by

MRD_i = (1/m) \sum_{j=1}^{m} RD_{ij},
SDRD_i = \sqrt{ \sum_{j=1}^{m} (RD_{ij} - MRD_i)^2 / (m - 1) }.

Following Zhao et al. (2010), we compared the temporal stability of SWC at different soil depths in the HA plantations of different restoration years using the index of temporal stability at depth i (ITSD_i); the observation point with the smallest ITSD_i has the highest temporal stability and is representative of the average SWC condition:

ITSD_i = \sqrt{ MRD_i^2 + SDRD_i^2 }.

Spearman's rank correlation coefficient r_s was used to analyze the stability of the rank order between different observation dates during the growing season:

r_s = 1 - \frac{6 \sum_{i=1}^{n} (R_{ij} - R_{il})^2}{n(n^2 - 1)},

where R_{ij} is the rank of the SWC at point i at time j, R_{il} is the rank of the SWC at point i at time l, and n is the total number of observation points.
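The sketch below shows how these statistics might be computed for an n-points × m-dates matrix of volumetric water contents; the array values are invented for illustration and are not the study's data.

```python
import numpy as np

# Rows: sampling points; columns: measurement dates (hypothetical values).
swc = np.array([[0.06, 0.07, 0.05, 0.06],
                [0.04, 0.05, 0.04, 0.05],
                [0.08, 0.09, 0.07, 0.08]])

mean_per_date = swc.mean(axis=0)             # average SVWC_j at each date
rd = (swc - mean_per_date) / mean_per_date   # relative differences RD_ij
mrd = rd.mean(axis=1)                        # mean relative difference MRD_i
sdrd = rd.std(axis=1, ddof=1)                # its standard deviation SDRD_i
itsd = np.sqrt(mrd**2 + sdrd**2)             # index of temporal stability

# The point with the smallest ITSD best represents the field-average SWC.
print("most time-stable point:", itsd.argmin())
```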
Results and discussion

Distribution characteristics of SWC variability
Stand age influenced the spatial and temporal distribution of soil water to some extent, and the age factor gradually overshadowed the soil depth factor as stand age increased. Table 2 shows the spatial statistical characteristics of the mean SWC, standard deviation and coefficient of variation (CV) of the 0–400 cm soil profile in each HA plantation. The standard deviation and CV of SWC in each soil layer showed a general trend of increasing with soil depth. From Figure 2, it can be seen that the CV of SWC changed irregularly in the 5-, 11- and 22-year-old HA plantations, whereas in the 34- and 46-year-old HA plantations it tended to increase with soil depth, indicating that the spatial variability of SWC increases with depth in the 34- and 46-year-old plantations. Li et al. (2015) reached similar conclusions on the Loess Plateau: the spatial distribution of SWC depends largely on structural factors such as climate, topography and soil texture, while stochastic factors such as vegetation recovery and human activity increase the spatial variability of SWC (Zhao et al., 2018). The sampling area is located at the southern edge of the Badain Jaran Desert and is little disturbed by human and animal activities, yet the spatial variability of SWC still increased with soil depth in the 34- and 46-year-old HA plantations. Previous research on artificial Robinia pseudoacacia forests showed that soil water consumption proceeds in two stages: in the first stage, consumption gradually shifts from the shallow to the deep soil layers as stand age increases; in the second stage, once deep soil water consumption reaches a certain threshold, consumption shifts back from the deep to the shallow layers. This study found that soil water consumption with increasing HA plantation age resembled the first stage, but the pattern of the second stage did not appear. The reason for this difference may be that the sampling depth of this study is relatively shallow (400 cm) and has not yet reached the conversion threshold of soil water consumption. Zeng et al. (2011) found that the root system of aged HA can grow vertically down to 800 cm, which supports this explanation. Future studies should therefore examine deeper soil layers under aging HA; this would also help to reveal the relationship between artificial vegetation growth and soil water consumption.
The spatiotemporal behavior of SWC
The SWC showed a decreasing trend with increasing age of the HA plantations, and the variation of SWC with planting years was greater in the deep layers than in the shallow layers. Figure 3 shows the variation of SWC with depth in the HA plantation profile at each stand age. It can be seen that the SWC in the surface layer (0–100 cm) did not vary much with stand age during the study period and, relative to the deeper layers, remained at a high level, fluctuating in the range of 0.06–0.09 cm³ cm⁻³. The SWC in the 100–400 cm soil layer varied greatly with stand age, fluctuating between 0.02 and 0.09 cm³ cm⁻³; in the 46-year-old plantations it decreased with depth throughout this layer. The SWC in the 11-, 22- and 34-year-old plantations showed a decreasing trend with depth from 0 to 300 cm, but there was an "inflection point" between 300 and 400 cm where the SWC increased with depth; in the 5-year-old plantations the inflection point occurred at a shallower level (100–200 cm). Hu et al. (2010) used soil water storage data from the Loess Plateau to identify temporally stable points and found that artificial vegetation significantly affected deep soil water. Gao and Shao (2012) studied the SWC of the 0–300 cm soil layer on the Loess Plateau and found that the rate of change of SWC in deep soil was lower than that in shallow soil, contrary to the conclusion of this study. These differences arise because Gao and Shao (2012) focused on different types of trees, shrubs and herbs, whereas the present study is consistent with Hu et al. (2010), which was conducted on a mono-vegetated hillside. Compared to trees and shrubs, herbs consume less water at shallow depths, and the effect of vegetation factors on SWC varies with soil depth. Vegetation factors interfere with the regularity of SWC variation with depth, resulting in large differences in vertical soil profiles under different vegetation types. In addition, the study area is located in an arid desert with little human activity, which reduces the influence of human activity and other factors on surface soil moisture. The HA plantations have been in place for a long period (up to 46 years), and HA is a deep-rooted plant (Gong et al., 2019); the SWC in the 200–400 cm soil layer is therefore strongly influenced by root water consumption, which leads to greater variation of SWC with planting age in the deep layers than in the shallow layers. Figure 3 also shows that there is no significant decrease in SWC in the 0–100 cm soil profile at any stand age, while the SWC at 200–400 cm decreases significantly once the HA plantations are more than 11 years old. Moradi et al. (2017) assessed the impact of afforestation on soil physical and chemical properties and SWC in south-west Iran, in plantations of ages similar to those selected in the present study, and found that afforestation improved soil conditions. The reason for the pattern observed here is that precipitation is low and evaporation is high in the study area: the scarce precipitation falls mainly in short events, and its infiltration is mainly confined to the soil within 60 cm, so shallow soils can be recharged by precipitation while deep soil water recharge is not significant (Yang and Zhao, 2014).
Furthermore, the transpiration water consumption of HA is much higher than the precipitation (Chang et al., 2007); after 22 years of planting in particular, the excessive consumption of deep soil water by HA roots further leads to SWC deterioration.
Spearman's rank correlation coefficient
For the different stand ages, the correlation coefficients tended to decrease with increasing stand age, but in general they were greater than 0.5 and highly significant (P < 0.01). Spearman's rank correlation coefficient of SWC between sampling dates characterizes the temporal stability of the study area as a whole. Figure 4 shows Spearman's rank correlation coefficient matrix of the 0–400 cm SWC for the different observation dates in the study area. The correlation coefficients between August 12 and the other measurement dates were comparatively small, ranging from 0.631 to 0.828, while the coefficients between the remaining dates were larger. The reason for this discrepancy may be the large variation in SWC caused by the heavy rainfall in the week before the August 12 sampling, which resulted in smaller correlation coefficients. The study of Xin et al. (2008) on the temporal stability of soil water likewise found that temporal stability was worst when the soil alternated between dry and wet. These results indicate that the spatial distribution of SWC at 0–400 cm depth in the HA plantations of the study area is characterized by temporal stability within the study period, consistent with Grant et al. (2004). However, it should be noted that the temporal stability showed a time-dependent trend: the closer the sampling dates, the greater the correlation coefficient. This indicates that the duration of temporal stability of SWC is limited, a conclusion also reached in studies at interannual observation scales (Penna et al., 2013; Zhang and Shao, 2013a). This study was based on one growing season of HA plantations; future studies could consider longer time series to explore the effective period of temporal stability.
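For illustration, the date-by-date rank correlation matrix can be obtained as follows; the data array is again hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

# Rows: sampling points; columns: observation dates (hypothetical values).
swc = np.array([[0.055, 0.060, 0.048, 0.041],
                [0.042, 0.047, 0.040, 0.043],
                [0.081, 0.088, 0.074, 0.079],
                [0.030, 0.036, 0.029, 0.031]])

# spearmanr on a 2-D array correlates its columns (the dates) pairwise.
rho, pval = spearmanr(swc)
print(np.round(rho, 3))  # date-by-date rank correlation matrix
```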
Relative difference analysis
Deep SWC tends to maintain a higher temporal stability. However, with the introduction of artificial vegetation, the stand age factor gradually overshadows the soil depth factor, lowering the temporal stability of deep SWC. For the younger plantations, the MRD of SWC first increased and then decreased with depth: in the shallow layers the MRD increased gradually, probably because the spatial variability of SWC increases as the soil layer deepens (Jia and Shao, 2013), while in the deeper layers the MRD decreased, probably because the HA plantations are young and their root systems have not yet reached the deeper soil. For the 22-, 34- and 46-year-old HA plantations, the MRD of SWC increased with soil depth, consistent with the CV analysis above: the stronger the spatial variability, the more dispersed the distribution of water content among the sampling points and the greater the deviation from the average value. Zhang and Shao (2013) selected two plots with different soil textures in northwest China and evaluated the spatial distribution of SWC; their results showed that the deeper the soil, the greater the MRD of SWC, similar to the conclusion of this study. In a gravel-mulched field in northwestern China, by contrast, Zhao et al. (2017) found that the MRD decreased with increasing soil depth; the reason for this difference lies in the small extent of their study area (32 m × 32 m), whose homogeneous soil texture and uniform sand and gravel cover result in weak spatial variability of SWC. Li et al. (2015) found that the fluctuation range of the MRD may be related to the study scale, experimental layout, sampling method, etc. As the study area expands, the corresponding soil properties, topography and vegetation cover become more complex, the spatial variability of SWC becomes stronger, and the range of the MRD increases as well (Martinez-Fernandez et al., 2003). Spatial variability is the result of the interaction of different dominant factors at different spatial scales, and as the scale changes, the influencing factors change too, so results differ between studies at different scales (Sun et al., 2022). Therefore, for a specific research object, the sampling extent and sampling granularity need to be considered in order to better explain the spatial heterogeneity of SWC (Fagan et al., 2003). The MRD is also asymmetric, with negative values of greater absolute magnitude than the positive values, because the SWC at more sampling points lies below the mean, probably owing to the gravel content of the soil at the measurement site, which is not conducive to water retention. The magnitude of the SDRD characterizes the degree of temporal stability of SWC at a sampling point: the smaller the SDRD, the higher the temporal stability. The 5-, 11- and 22-year-old HA plantations follow the trend of SDRD decreasing as the soil deepens, but the 34- and 46-year-old plantations do not. This indicates that the deep soil water of the 5-, 11- and 22-year-old plantations is not disturbed by vegetation and has a higher temporal stability, whereas the deep SWC of the 34- and 46-year-old plantations is strongly disturbed by vegetation, so its temporal stability does not decrease with soil depth. This is consistent with the MRD analysis above.
The temporal stability of SWC is the result of the interaction of many factors, such as vegetation, topography, climate and soil (He et al., 2021). The study area is on the edge of the desert, and the climate of low rainfall and high evaporation has its own particular effects on the temporal stability of soil water: prolonged soil freezing makes soil water movement predominantly vertical, while the dry soil layer induced by vegetation water depletion hinders the vertical movement of soil water and limits the recharge of deep soil water. Ding et al. (2020) concluded that artificially restored vegetation has a certain impact on SWC in the rhizosphere but little impact below it, mainly because the vegetation growth period is short and the slow growth of the vegetation keeps its ecological water consumption low, so that precipitation basically meets the demand of vegetation growth. However, this study found that as the growth years of HA plantations increase, they have a non-negligible effect on SWC dynamics. Precipitation and evaporation are the most direct influences on SWC, but responses differ greatly between soil depths. Surface runoff generated by strong rainfall on slopes replenishes surface SWC significantly but is not conducive to infiltration into the deep layers; evaporative loss of surface SWC is large, and the coarse soil texture is weak in lifting deep soil water upward. These combined factors favor deep SWC maintaining a higher temporal stability. However, with the introduction of artificial vegetation, the stand age factor gradually overshadows the soil depth factor, lowering the temporal stability of deep SWC. The results above show that, for HA plantations younger than 22 years, the temporal stability of the deep soil layers is higher than that of the shallow layers and the temporal stability of SWC is depth-dependent, consistent with related studies (Gao and Shao, 2012a). For older HA plantations, however, the opposite holds in the deeper soil layers (200–400 cm). It is worth noting that, limited by the experimental conditions, this study only examined SWC in HA plantations with stand ages of 5, 11, 22, 34 and 46 years, as a continuous chronosequence meeting the experimental conditions could not be found. Therefore, appropriate models should be selected in future research to evaluate SWC more accurately.
Representative sampling locations
Representative sampling locations correlate strongly with the mean SWC of the corresponding soil layer, so the average SWC of each soil layer in the study area can be estimated accurately from them. The MRD and SDRD results show that representative sampling locations (RSL) can be selected to estimate the regional average SWC, following the principle that the MRD should be close to 0 and the SDRD relatively small. To verify the reasonableness of the RSL, the SWC of each RSL was compared with the average SWC of each soil layer; the SWC of each measurement point fluctuated slightly around the average SWC (Figure 6). Meanwhile, regression analysis between the mean SWC of the different soil layers and the RSL during the observation period showed that the coefficient of determination R² of each soil layer from 0 to 400 cm varied in the ranges of 0.88–0.95, 0.60–0.95, 0.49–0.95 and 0.53–0.95, indicating a strong correlation between the SWC of each RSL and the mean of the corresponding soil layer. Thus, the average SWC of each soil layer in the study area can be estimated accurately, although the temporal stability of the deep SWC gradually decreases with increasing growth years of the HA plantations.
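A minimal sketch of this selection rule, continuing the hypothetical-data example from above: pick the location with the smallest ITSD (near-zero MRD combined with small SDRD) and regress the field-mean series on that location's series.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
swc = 0.05 + 0.02 * rng.random((10, 6))     # 10 points x 6 dates (hypothetical)

mean_j = swc.mean(axis=0)                   # field-average SWC per date
rd = (swc - mean_j) / mean_j
mrd, sdrd = rd.mean(axis=1), rd.std(axis=1, ddof=1)
rsl = np.argmin(np.sqrt(mrd**2 + sdrd**2))  # representative sampling location

fit = linregress(swc[rsl], mean_j)          # field mean vs. RSL series
print(f"RSL = point {rsl}, R^2 = {fit.rvalue**2:.2f}")
```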
Implications for future afforestation activities
In order to prevent further desert expansion, China introduced many plant species in the 1970s (Ahrends et al., 2017). The excessive water consumption of artificial vegetation has broken the dynamic balance between precipitation and native vegetation, and many new environmental problems have emerged (Gao and Shao, 2012b; He et al., 2019). In recent years, numerous studies have shown that the massive consumption of soil water by artificial vegetation leads to a decrease in SWC and has a negative impact on vegetation growth, which in turn threatens the health of the ecosystem (Chang et al., 2007; Xin et al., 2008; Zhang and Shao, 2013).
In our field survey, we found that when the SWC dropped to 0.02–0.03 cm³ cm⁻³ after about 30 years of HA planting, leaves fell off in large numbers and individual biomass decreased significantly. Moreover, when the SWC was 0.01–0.02 cm³ cm⁻³, the sand-fixing plants began to decline, and at about 0.01 cm³ cm⁻³ they died in large numbers. Although HA can reduce the water consumption of aging photosynthetic organs by shedding leaves and branches, the water scarcity of old leaves, branches and stems is extremely serious and has essentially reached its limit. Further shedding of large numbers of leaves and branches reduces individual biomass and weakens survival ability; finally, the stems also dry up and break because there is too little water left in the plant. Based on these findings, we suggest that moderate manual interventions after 30–40 years of HA planting, such as pruning of dead branches, proper irrigation, and some level of stumping (cutting plants back) or intercropping, may have positive implications for the revival and rejuvenation of HA plantations. It is worth noting that this study compared the SWC of HA plantations of different ages at the same planting density, to prevent planting density from interfering with the results. In actual afforestation projects, deep soil water consumption is usually reduced by adjusting the planting density. Therefore, future research should establish mechanistic models linking vegetation growth and soil water consumption to provide a theoretical basis for rational afforestation engineering.
Conclusions
1. The SWC of 0-400 cm of Haloxylon ammodendron plantations t in the study area showed a decreasing trend with the increase of HA planting age, and the HA plantations consumed mainly shallow soil water in 5-and 11-year-old stands, while 22-,34-and 46-year-old consumed mainly deep soil water.
2. The distribution of SWC at 0–400 cm depth in the HA plantations of the study area was characterized by temporal stability during the study period. It should be noted, however, that the temporal stability showed a time-dependent trend: the correlation coefficient increased the closer the sampling dates and tended to decrease as the time interval increased. This result indicates that the duration of temporal stability of SWC is limited.
3. Afforestation (e.g., with Haloxylon ammodendron) is an important measure for windbreak and sand fixation in desert areas and is of great significance to the sustainable development of the ecosystem. Our study found that vegetation age has an important influence on deep SWC dynamics: as stand age increases, water depletion by vegetation roots leads to the degradation of SWC in the study area, and temporal stability also decreases with increasing stand age and soil depth. Therefore, in future afforestation work in the Alxa desert area, appropriate moisture control measures should be taken for HA plantations of advanced age.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
How to Handle Assumptions in Synthesis
The increased interest in reactive synthesis over the last decade has led to many improved solutions but also to many new questions. In this paper, we discuss the question of how to deal with assumptions on environment behavior. We present four goals that we think should be met and review several different possibilities that have been proposed. We argue that each of them falls short in at least one aspect.
Introduction
In reactive synthesis, we aim to automatically build a system that fulfills guarantees Gua under the assumption that the environment fulfills some properties Ass. Most popular synthesis approaches take the rudimentary view that the system and its environment are adversaries, and that the synthesis problem is solved by generating a system that realizes the formula Ass → Gua.
We argue that this view is imperfect, describe principles that we believe are important to obtain desirable systems, review the work of others who have come to similar conclusions, and describe drawbacks of the proposed approaches. The purpose of the paper is to raise questions rather than to present answers, and to highlight (our) lack of understanding of the problem, rather than our understanding of a solution. Doing this, we hope to spark discussions and further research on this topic.
To see that the setting described above is imperfect, consider a hypothetical example from real life. Suppose that a coach promises the owners of his/her team to win a match under the reasonable assumption that none of the coach's players gets injured during the match. In order to fulfill this contract, the coach may either work hard at winning the game, or may injure one of the players during the last few minutes of the match. While the latter approach may not be unheard of, it is generally frowned upon. The same problem occurs in synthesis: a system may fulfill the specification Ass → Gua by forcing the environment to violate the assumptions, which is quite undesirable [27].
We will assume that systems are implemented in a setting that consists of multiple components. Some of these may be implemented by a synthesis tool and are thus correct by construction. Some may be implemented by a human and we should have good faith in correctness, but not certainty. Some components involve physical interaction with the environment, and we should be skeptical of the assumptions that we have made about these components [32]. The same applies for components whose functionality is carried out by a human operator. Finally, note that even in a perfectly implemented system, errors occur due to environmental influences such as soft errors [29].
To be sure, the requirement that the system fulfills the guarantees in all cases in which the assumptions are fulfilled is very natural and captures the notion of correctness. Yet, it is a very incomplete notion of what is desirable in a system. We present the following (non-exhaustive) list of functional goals that we believe a desirable system should aim for:

Be Correct! Fulfill the guarantees if the environment fulfills the assumptions.
Don't be Lazy! Fulfill the guarantees as well as possible for as many situations as possible, even when the assumptions are not fulfilled.
Never Give Up! If you cannot satisfy the guarantees for every environment behavior, try to satisfy them when you can.
Cooperate! If possible, allow or help the environment to fulfill the assumptions.
Note the difference between Don't be Lazy and Never Give Up: in the first case, we can enforce the guarantees; in the latter, we cannot enforce them, but we may still succeed if the environment does not exhibit worst-case behavior. Besides these four functional goals, we want the assumptions to be an abstraction of the environment's specification so as not to make the synthesis procedure unduly complex. Traditional LTL synthesis only meets the goal to Be Correct. In Section 2, we will show this in more detail, along with the limitations of the approach. After that, we survey and illustrate some existing approaches for the other goals. For Don't be Lazy (Sect. 3) we focus on robust and error-resilient synthesis, and on synthesis with quantitative objectives, since these approaches attempt to satisfy guarantees as well as possible. For Never Give Up (Sect. 4) we will look at research that goes beyond a purely adversarial view of games by suggesting reasonable strategies for losing states. For Cooperate (Sect. 5) we will leave the adversarial view even further behind: we will consider non-zero-sum approaches, which allow for explicit collaboration by constructing joint strategies for the players. For each approach we will show how it addresses at least one goal and how it is imperfect for another. Finally, in Section 6, we will conclude our investigation with a table summarizing the strengths and weaknesses of the discussed approaches.
Standard Synthesis
In standard synthesis, the environment is treated as an adversary, i.e., synthesized systems must be correct for any possible behavior of the environment. The behaviors of the environment under which the guarantees Gua must hold are then modeled as antecedents to the implication Ass → Gua.
The corresponding payoff matrix is shown in Figure 1: the only case that is considered undesirable is the one in which Ass is fulfilled but Gua is not; there is no difference in payoff or desirability between the three remaining cases.
First, the implication does not enforce Don't Be Lazy: It does not distinguish a trace that satisfies Gua from one that violates Ass. Thus, it does not restrict the behavior of the system in any way on traces where the environment violates Ass. (An example can be found in Section 3.) Second, the formalization does not imply the satisfaction of Never Give Up. Standard synthesis only optimizes the worst-case behavior, i.e., if Ass → Gua cannot be fulfilled for some behavior of the environment, then the output of the synthesis algorithm will simply be "unrealizable", instead of a system that fulfills Gua whenever possible. This goal is all the more important in a situation in which the assumptions are violated. In that case, the guarantees may not be realizable, but even then they should be fulfilled whenever possible. (An example can be found in Section 4.) Third, the approach does not fulfill Cooperate: the system may force the environment to violate Ass.
Example 1. Consider a specification Ass → Gua with assumptions Ass = G F r ∧ G(r → X(¬r W g)) and guarantee Gua = G(r → X g). This specification should result in a system that grants every request in the next time step, for every environment that gives infinitely many requests but never repeats a request before there is a grant. In this setting, requests are signaled by setting r to true, whereas grants are signaled by setting g to true. By simply giving no grants at all (violating the guarantees), the system can force the environment to violate the assumptions, thus fulfilling the specification.
The behavior shown in the example may be intended if the environment is considered to be purely adversarial, but in many applications of system design, this is not the case. Good examples for this fact are large systems that are constructed modularly, where the overall system is abstracted by environment assumptions for a particular component that we want to synthesize. Then our goal is not for the component to force the environment (i.e., the rest of the system) to violate the assumptions, but to work together with the environment to some extent, allowing both Ass and Gua to be satisfied whenever possible.
Thus, the standard approach fulfills Be Correct, but not Don't Be Lazy, Never Give Up, and Cooperate. This is summarized in Table 5.2, together with the strengths and weaknesses of the approaches discussed in the next sections.
Don't Be Lazy!
Traditionally, correctness is considered to be a Boolean property: a system either realizes a specification or not. For specifications of the form Ass → Gua, this attitude results in the desirability matrix shown in Figure 1. This section focuses on improving the system behavior if assumptions are violated, i.e., on the left column of the matrix.
Example 2. As motivation, consider a flight control system which must work correctly under the assumption that the number of simultaneously arriving planes is less than 100. For more planes, the specification may be unrealizable, e.g., because it may be impossible to guarantee all timing constraints. Suppose further that the system has been synthesized, is in operation, and the 101 st plane arrives. A work-to-rule synthesis algorithm could have considered this situation as won, and may have randomly chosen to stop serving any plane in this situation. A more desirable system would serve planes as well as possible, even though the assumption is violated: For instance, ignoring the 101 st plane or responding a bit slower are certainly better options. Even more, for configurations of the 101 planes that can be handled with the available resources, it would be preferable if no reduction in the quality of service occurs at all.
With the matrix in Figure 1, once the assumptions are violated, there is no additional benefit for the system to satisfy the guarantees any more. The implication is satisfied for any future system behavior, so it can then behave arbitrarily. The synthesis algorithm can exploit this freedom even in situations in which it would still be possible to satisfy the guarantees. This is clearly undesirable. Intuitively, the synthesized system should always aim for satisfying the guarantees, even if assumptions are violated, instead of getting lazy and doing only the least to satisfy the implication.
With the payoff matrix in Figure 2, this changes. By giving traces of the system in which the system satisfies the guarantees a higher payoff regardless of whether the assumptions are satisfied, there is always an incentive for the system to satisfy the guarantees. An approach to deal with multiple ranked specifications is presented in [1].
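To make the contrast concrete, the following toy sketch (our own illustration, not taken from the works cited) encodes the Boolean payoff of Figure 1 next to an "eager" payoff in the spirit of Figure 2, evaluated per trace:

```python
# Toy payoffs per trace; ass/gua: does the trace satisfy Ass/Gua?

def payoff_standard(ass: bool, gua: bool) -> int:
    """Fig. 1 style: only Ass-and-not-Gua is bad; all else equally fine."""
    return 0 if (ass and not gua) else 1

def payoff_eager(ass: bool, gua: bool) -> int:
    """Fig. 2 style: satisfying Gua always pays more, so the system keeps
    an incentive to meet its guarantees even when Ass fails."""
    if ass and not gua:
        return 0
    return 2 if gua else 1

for ass in (True, False):
    for gua in (True, False):
        print(ass, gua, payoff_standard(ass, gua), payoff_eager(ass, gua))
```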
In practice, assumptions and guarantees can be violated only slightly, or very badly. With this non-Boolean understanding of property violations, the desirability matrix of Figure 2 gets blurred, with gradual transitions between the quadrants, as represented in Figure 3. It makes sense to consider the degree in which guarantees are satisfied also in synthesis: even if it is not possible to satisfy all guarantees due to assumption violations, an ideal system would still try to satisfy guarantees "as well as possible".
In the remainder of this section, we briefly review previous approaches to synthesize systems that are eager to satisfy their guarantees. We start by reviewing the strict implication semantics employed in the Generalized Reactivity(1) Synthesis approach in Section 3.1, which yields a form of such eagerness as a by-product. In Section 3.2, we then discuss approaches that extend the set of environment behaviors under which the system can satisfy its guarantees, and in this way make the system less lazy without sacrificing the satisfaction of the guarantees. Synthesis approaches that allow slight deviations from the guarantees in case of assumption violations are discussed in Section 3.3. Finally, we discuss quantitative synthesis in Section 3.4, which offers a flexible framework to encode quality criteria of synthesized systems, including some notions of eagerness of the system to satisfy its guarantees.
Assumptions in Generalized Reactivity Games
Specifications in the generalized reactivity fragment of rank 1 (GR(1)) have been proposed as an alternative to full LTL, as their synthesis problem is efficiently decidable while the fragment is still sufficiently expressive for many important properties [8]. What is particularly interesting in our present comparison is that in the games that solve the GR(1) synthesis problem, the implication Ass → Gua is interpreted slightly differently than in the standard semantics (see Bloem et al. [8] and Klein and Pnueli [27]). In particular, safety guarantees and assumptions are treated differently: even if the environment does not satisfy Ass completely, the system must satisfy its safety guarantees at least as long as the environment satisfies the safety assumptions. This rules out some non-intuitive behavior by the system, where it violates Gua because it knows that it can force the environment to violate Ass at some point in the future. In particular, the unintended behavior in Example 1 is ruled out. While this changes the rules of the synthesis game such that the system player loses the game if a safety guarantee is violated before the environment violates some safety assumption (instead of the system winning whenever the environment violates Ass anywhere in the infinite trace), it does not change the purely adversarial view on the game.

Example 3. If the safety guarantee G(r → X g) in Example 1 is changed into a liveness guarantee G(r → F g), i.e., the specification is modified to (G F r ∧ G(r → X(¬r W g))) → G(r → F g), then the system can still enforce an assumption violation by violating the guarantee, even in the modified semantics of the implication, by not giving any grants. The reason is that the system does not violate any safety guarantee before the assumption is violated.
Furthermore, this extension does not change the purely worst-case analysis, which will simply return "unrealizable" if the specification cannot be fulfilled in all cases, and otherwise return a solution that does not distinguish between cases where ¬Ass ∧ ¬Gua holds and cases where Gua is actually satisfied. (Recall that strengths and weaknesses are summarized in Table 5.2.) Related mechanisms are presented in [18]. This work presents an approach to synthesize event-based behavior models from GR(1) specifications. It uses the following definitions in order to avoid systems that satisfy the specification by violating assumptions. A best-effort system satisfies the following condition: if the system forces Ass not to hold after a finite trace σ, then no other system that achieves Gua could have allowed Ass after σ. An even stronger definition is that of an assumption-preserving system: the system should never prevent the environment from fulfilling its assumptions. Every assumption-preserving system is also a best-effort system. Finally, the authors propose assumption compatibility as a methodological guideline. It is a sufficient condition under which any synthesized system is assumption preserving: the environment must be capable of achieving Ass regardless of system behavior. This can be checked by deciding realizability with swapped roles. However, this condition is rather strong.
Synthesizing Error-Resilient Systems
The most desirable way for a system to react to environment assumption violations is to continue to satisfy its guarantees. Since, in a system engineering process, assumptions are typically only added on an as-needed basis, this will only be possible in rare circumstances; where it is possible, the synthesized system can simply be made robust against assumption violations by removing the corresponding assumptions before performing synthesis.
Yet, this does not mean that every single assumption violation requires the system to violate its guarantees. A couple of approaches aim at exploiting this fact.
Topcu et al. [32] describe an approach to weaken the safety part of the assumptions as much as possible in the context of GR(1) synthesis. The weakening is performed in a very fine-grained way, much finer than how a human specifier would do so, and as fine-grained as possible in GR(1) synthesis without the introduction of additional output signals to encode more complex properties. The resulting synthesized controller is then completely error-resilient against environment behavior that is forbidden by the original assumptions, but allowed by the refined assumptions.
Ehlers and Topcu [20] approach the problem from a different angle. They describe how to synthesize a k-resilient implementation; the notion of k-resilience was defined earlier by Huang et al. [26]. Adapted to the case of GR(1) specifications, it requires the system to satisfy the guarantees if no more than k safety assumption violations occur between assumption-violation-free periods of the system execution, provided that these periods are long enough to allow the system to recover. The approach also allows a more fine-grained analysis of which assumptions can tolerate some violations and which cannot tolerate any violation at all; whenever there is a trade-off between the choices of assumptions whose violations should be tolerated, all Pareto-optimal choices are presented to the specifier.
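As an informal illustration of the k-resilience idea (a simplified sketch of our own, not the construction of [20] or [26]), a monitor that tolerates up to k safety-assumption violations between sufficiently long violation-free recovery periods might look as follows:

```python
# Toy k-resilience monitor: the system remains obliged to satisfy its
# guarantees as long as at most k assumption violations have occurred
# since the last violation-free recovery period of length >= recover.

class KResilienceMonitor:
    def __init__(self, k: int, recover: int):
        self.k, self.recover = k, recover
        self.violations = 0   # violations since the last recovery
        self.clean_steps = 0  # consecutive violation-free steps

    def step(self, assumption_ok: bool) -> bool:
        """Return True iff the guarantees are still required to hold."""
        if assumption_ok:
            self.clean_steps += 1
            if self.clean_steps >= self.recover:
                self.violations = 0  # recovered: forget old violations
        else:
            self.clean_steps = 0
            self.violations += 1
        return self.violations <= self.k

m = KResilienceMonitor(k=1, recover=3)
trace = [True, False, False, True, True, True, True]
print([m.step(ok) for ok in trace])
# [True, True, False, False, False, True, True]
```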
Orthogonal to k-resilient synthesis is the idea to extend a synthesized implementation by recovery transitions [34]. Such transitions can be added for cases in which the assumptions are violated, but for which the system can react in a way that does not jeopardize the system's ability to completely satisfy its guarantees along its run if the environment starts to satisfy its assumptions again. In contrast to k-resilient synthesis, recovery behavior is added on a best-effort basis and the synthesized system does not strategically choose its nominal-case behavior such that as many safety assumption violations as possible are tolerated.
All three approaches only make the system robust to a certain extent, as they extend the set of environment behaviors under which a system can be synthesized. They do not help to satisfy the guarantees as well as possible for environment behaviors that do not fall into this set.
Synthesis of Robust Systems
The basic idea of robust synthesis is to satisfy guarantees as well as possible, even if assumptions are violated. Slight violations of the guarantees are allowed when the assumptions are violated, and we can further distinguish between different severity levels of assumption and guarantee violations.
Robust synthesis is motivated by the observation that synthesized systems sometimes simply stop responding in any useful way after an assumption has been violated. Consider the following example.
Example 4. A system must grant two requests, but not simultaneously: Gua = G((r₀ → g₀) ∧ (r₁ → g₁) ∧ (¬g₀ ∨ ¬g₁)). The environment must not raise both requests simultaneously: Ass = G(¬r₀ ∨ ¬r₁). The plain implication Ass → Gua allows the system to ignore any future request if the environment ever happens to raise both requests. Optimizations for other properties, like the circuit size of the synthesized solution, may exploit this freedom. However, a system that ignores one of the simultaneous requests and then continues normally, instead of getting lazy, would be preferable.
Of course, in case of violated assumptions, it may not always be possible to satisfy all guarantees, as Example 4 shows. Otherwise, some assumptions would be superfluous. Also, it makes sense to take the severity of the assumption violation into account. Intuitively, a small assumption violation should also lead to only small guarantee violations. Therefore, the crux in robust synthesis is to define measures of how well guarantees are satisfied and how severe assumptions are violated. Then, an optimal ratio with respect to these metrics can be enforced. Existing approaches [5] typically optimize the worst case of this ratio. For safety properties, a natural conformance measure for both assumptions and guarantees is to count the number of time steps in which properties are violated. For liveness properties, this does not work because a liveness property violation cannot be detected at any point in time: If some event is supposed to happen eventually, and has not happened yet, we may just not have waited long enough. If Ass and Gua are composed of several properties, one can also count the number of violated properties to define the severity of a violation [5].
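The step-counting severity measure can be illustrated on the safety properties of Example 4; the trace below and the resulting ratio are a minimal sketch of our own, not the exact measure of [5].

```python
# Count time steps violating the safety assumption / guarantee of Example 4.
# Each step is (r0, r1, g0, g1); the trace values are hypothetical.

trace = [
    (True,  False, True,  False),  # fine
    (True,  True,  True,  False),  # Ass violated (r0 and r1); r1 not granted
    (False, True,  False, False),  # Gua violated: r1 but no g1
    (True,  False, True,  False),  # fine
]

ass_bad = sum(1 for (r0, r1, g0, g1) in trace if r0 and r1)
gua_bad = sum(1 for (r0, r1, g0, g1) in trace
              if (r0 and not g0) or (r1 and not g1) or (g0 and g1))
print(ass_bad, gua_bad)            # 1 assumption step, 2 guarantee steps
print(gua_bad / max(ass_bad, 1))   # guarantee-per-assumption violation ratio
```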
Despite the fact that liveness assumption violations cannot be observed at runtime, there exist robust synthesis approaches for specifications with liveness assumptions and guarantees that let the system tolerate (safety) assumption violations. Intuitively, the idea is to ask the system to tolerate safety assumption violations if such violations occur in only finitely many steps of the system's execution. The system is then only allowed to violate safety guarantees finitely often. Liveness assumptions are assumed to hold at all times. Since the system cannot know when an assumption violation has been the last one, it has to behave in a robust way [19]. As a variant of the approach, the system can additionally be required to satisfy the liveness guarantees even if safety assumptions are violated infinitely often [7].
In summary, robustness is definitely a useful extension to correctness. One shortcoming of existing solutions is that they only optimize the robustness measure for the worst case, i.e., assume a perfectly antagonistic environment. As a consequence, the resulting system may still be unnecessarily lazy for more cooperative environment behaviors. The fact that the system cannot satisfy the guarantees any better in the worst case should not be an excuse for not trying. In this sense, not assuming a fully adversarial environment in the robustness optimization may yield even better results. This aspect will be elaborated in Section 4.
Quantitative Synthesis
Among all systems that realize a given specification, some may be more desirable than others. The idea of synthesis with quantitative objectives is to construct a system that not only satisfies the (qualitative) specification, but also maximizes a (quantitative) desirability metric. In this sense, some approaches to robust synthesis, as discussed in the previous section, can be seen as special cases of quantitative synthesis. But quantitative synthesis can also be a handy tool to optimize solutions with respect to other desirability metrics.
Example 5. Continuing Example 4, we may prefer systems that give as few unnecessary grants as possible. This can be achieved by assigning costs to unnecessary grants (i.e., situations with g_i ∧ ¬r_i), and letting the synthesis algorithm minimize these costs.
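The cost function of Example 5 is easy to state over finite traces; the following sketch (illustrative, not the synthesis machinery itself) shows the quantity a quantitative synthesizer would minimize.

```python
# Cost of unnecessary grants: each g_i issued without a matching r_i costs 1.
def unnecessary_grant_cost(trace):
    cost = 0
    for s in trace:
        cost += int(s["g0"] and not s["r0"])
        cost += int(s["g1"] and not s["r1"])
    return cost

eager = [dict(r0=False, r1=False, g0=True, g1=False)] * 4    # grants blindly
frugal = [dict(r0=False, r1=False, g0=False, g1=False)] * 4  # grants on demand
print(unnecessary_grant_cost(eager), unnecessary_grant_cost(frugal))  # 4 0
```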
Of course, one could also specify each and every situation where no grant should be given. While this is quite possible for this small example, it can be tedious, error-prone, and destroy the abstract quality of the specification for more complicated cases: Ideally, a specification only expresses what the system should do, but not how. If the exact behavior needs to be specified for each and every situation, it is better to implement the system right away.
The work of [6] presents machinery based on games with a lexicographic mean-payoff objective and a simultaneously considered parity objective to solve such problems. The parity objective encodes the qualitative specification, while the mean-payoff objective encodes the quantitative desirability metric. The approach assumes fully adversarial environments and optimizes for this worst case.
Defining a desirability metric for a system is never an easy task. Černý and Henzinger [11] propose to define it in two steps. The first step is to assign costs (or payoffs) to single traces. This can be done by combining the costs of single events in the trace, e.g., by taking the sum, average, maximum, etc. Second, the costs for individual traces are combined into total costs. Again, there are various options like taking the worst case, the average case, or a weighted average assuming some probability distribution. Although this approach is quite generic, it is questionable whether the desirability of a system can be expressed by one single number in the venture of satisfying guarantees as well as possible in as many situations as possible. Dominance relations inducing a partial order between systems, as used in the next section, may be a more natural notion as they provide a natural quantification over environment behaviors.
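The two-step construction can be read as two nested aggregations; the sketch below is my own rendering of that idea with made-up numbers, not an implementation from [11].

```python
# Step 1: combine per-event costs into a trace cost (sum, average, max, ...).
# Step 2: combine trace costs into one system cost (worst or average case).
def trace_cost(event_costs, combine=sum):
    return combine(event_costs)

def system_cost(traces, weights=None, worst_case=True):
    costs = [trace_cost(t) for t in traces]
    if worst_case:
        return max(costs)
    weights = weights or [1.0 / len(costs)] * len(costs)
    return sum(w * c for w, c in zip(weights, costs))

traces = [[0, 1, 0, 2], [1, 1, 1, 1], [0, 0, 0, 5]]  # per-event costs
print(system_cost(traces))                    # worst case: 5
print(system_cost(traces, worst_case=False))  # uniform average: 4.0
```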
If cost notions for both the environment and system actions can be given, there is a canonical way to define which system traces are desirable: the ones that are the cheapest. Tabuada et al. [31] adapt notions from control theory to define a preferability relation on system behaviors. In addition to minimizing the ratio between environment behavior cost and system behavior cost, they also require that the effect of sporadic disturbances vanishes over time.
Finally, there are approaches that combine quantitative approaches with a probabilistic model of the environment, to find the best solution under a given probability distribution for actions of the environment [21]. A combination of probabilistic and worst-case reasoning is considered by Bruyère et al. [10].
In summary, quantitative synthesis does not directly address the problem of dealing with assumptions in synthesis, but can rather be seen as a tool for obtaining better solutions with respect to different metrics. The fact that the environment is considered as perfectly adversarial in most methods may not be ideal in all settings.
Never Give Up!
Traditional games-based synthesis is only concerned with the worst case. As already raised in the previous sections, this mind-set is not always justified.
Example 6. The flight control system from Example 2 may actually be able to handle way more than 100 planes in time if they do not all signal an emergency at the same time. This worst case is possible, but very unlikely to happen in practice.
If a guarantee cannot be enforced in the worst case, traditional synthesis methods will consider this guarantee as "impossible" to achieve. Thus, the constructed system would behave arbitrarily if it ever gets into such a "hopeless" situation, i.e., it would not even try to reach the goal. However, when the system is in operation, its concrete environment may not be perfectly adversarial, i.e., the worst case may not occur. Hence, it makes sense for the system to behave faithfully even in (worst-case-)lost situations instead of resigning. In other words, the synthesized system should retain or even maximize the chances of reaching the goal (e.g., satisfying all guarantees even if assumptions are violated), even if this is not possible in the worst case.
Note the difference to robust and quantitative synthesis, as discussed in the previous section: Robust and quantitative synthesis aim at satisfying guarantees as well as possible for the worst case environment behavior. In contrast, this section is concerned with satisfying the specification (preferably without cutbacks) for many environment behaviors that do not represent the worst case as they violate Ass. In the following, we will discuss existing synthesis approaches that tackle this problem by dealing with "hopeless" situations in a constructive way.
Environment Assumptions
When we consider the basic idea of "restricting" the environment behavior by adding assumptions to an LTL specification of the form Ass → Gua, then synthesis from such a specification results in a system that is guaranteed to satisfy Gua for all behaviors of the environment that satisfy Ass. On the other hand, the system does not give any guarantees for traces on which Ass is violated.
Chatterjee, Henzinger and Jobstmann [14] show how, for a given unrealizable system specification Gua, one can compute an environment assumption Ass such that Ass → Gua is realizable (for ω-regular specifications). The computed assumptions consist of a safety and a liveness part, and should be as weak as possible. While minimal (but not unique) safety assumptions can be computed efficiently (minimal here means that a minimal number of environment edges are removed from the game graph), the problem is NP-hard for minimal liveness assumptions (where minimal means putting fairness conditions on a minimum number of environment edges in the game graph). If it is sufficient to compute a locally minimal set of liveness assumptions, i.e., a set of liveness assumptions from which no element may be removed without changing the resulting specification to be realizable, NP-hardness can be avoided.
Best-Effort Strategies for Losing States
Faella [23,22] investigates best-effort strategies for states from which the winning condition cannot be enforced. Intuitively, a good strategy should behave rationally in the sense that it does not "give up". Hence, this work assumes the desirability matrix of Figure 1, and is concerned with staying away from the top-right corner, even if this is not possible in the worst case.
Example 7. Consider the specification G F(o ∧ X i), where i is an input and o is an output.
There is no way the system can enforce satisfying the property. However, setting o to true as often as possible is more promising than setting o always to false.
Faella [23] discusses and compares several goal-independent criteria for such rational strategies. The work concludes that admissible strategies, defined via a dominance relation, may be a good choice. Intuitively, strategy σ dominates strategy σ′ if σ is always at least as good as σ′, and better in at least one case. More specifically, σ dominates σ′ if (1) for all environment strategies and starting states, if σ′ satisfies the specification then σ does so too, and (2) there exists some environment strategy and starting state from which σ satisfies the specification but σ′ does not. This induces a partial order between strategies. An admissible strategy is one that is not dominated by any other strategy.
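For tiny finite settings the dominance relation can be checked by brute force; the following sketch (mine, purely illustrative) encodes conditions (1) and (2) directly, with a `wins` predicate standing in for "satisfies the specification".

```python
# sigma dominates sigma_prime iff it wins wherever sigma_prime wins (1)
# and wins in at least one case where sigma_prime does not (2).
def dominates(sigma, sigma_prime, environments, wins):
    at_least_as_good = all(
        wins(sigma, e) or not wins(sigma_prime, e) for e in environments)
    strictly_better = any(
        wins(sigma, e) and not wins(sigma_prime, e) for e in environments)
    return at_least_as_good and strictly_better

def admissible(sigma, strategies, environments, wins):
    return not any(dominates(s, sigma, environments, wins)
                   for s in strategies if s is not sigma)

envs = ["friendly", "hostile"]
wins = lambda s, e: s == "try" and e == "friendly"
print(dominates("try", "give_up", envs, wins))  # True: trying is better
```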
For positional and prefix-independent goals, Faella [23] presents an efficient way to compute admissible strategies: the conventional winning strategy σ is computed and played from all winning states, and for the remaining states, a cooperatively winning strategy σ′ is computed, assuming that σ is played in the winning states. (A goal is positional if the strategy does not require memory on top of knowing the current position in the synthesis games built from the given specification; a goal is prefix-independent if adding or removing a finite prefix to/from the execution does not render a satisfied property violated.) This is a very relevant result because, e.g., parity goals are positional and prefix-independent, and LTL specifications can be transformed into parity games. For goals that do not fall into this category, the computation of admissible strategies is left for future work. Unfortunately, this work has not been actively followed up on.
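The construction itself is simple once σ and σ′ are given; a schematic rendering (my own, not Faella's code) of the combined strategy:

```python
# Play the worst-case winning strategy on winning states, and a cooperatively
# winning strategy elsewhere (both strategy maps assumed precomputed).
def combined_strategy(state, winning_states, sigma_win, sigma_coop):
    if state in winning_states:
        return sigma_win[state]   # enforce the goal against any environment
    return sigma_coop[state]     # keep the goal reachable with cooperation

winning_states = {"s0", "s1"}
sigma_win = {"s0": "a", "s1": "b"}
sigma_coop = {"s2": "c"}          # from a "hopeless" state, still try
print(combined_strategy("s2", winning_states, sigma_win, sigma_coop))  # c
```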
Damm and Finkbeiner also consider admissible strategies, called dominant strategies in [17], and show that for a non-distributed system, a dominant strategy can be found (or its non-existence proved) in 2EXPTIME. That is, dominant strategies are not harder to find than the usual winning strategies. Since a dominant strategy must be winning if a winning strategy exists, this means we can find best-effort strategies in the same time complexity as usual winning strategies, without sacrificing the basic goal of correctness.
The focus of [17] is however on the synthesis of dominant strategies for systems with multiple processes, which is shown to be effectively decidable (with a much lower complexity than with other approaches) for specifications that are known to have dominant strategies. Moreover, the constructed strategies are modular, and synthesis can even be made compositional for safety properties. Thus, in this case we not only obtain strategies that do their best even if the specification cannot always be fulfilled, but we can find such a strategy even in cases where the classical distributed synthesis problem is undecidable.
Even though it is in some sense orthogonal to our question of how to properly treat assumptions, we view the behavior of the system on lost states as an important ingredient to building desirable systems.
In a system composed of components that are not necessarily adversarial, this approach may help reach a common goal. While robust synthesis attempts to satisfy guarantees as well as possible under the worst-case environment, the best-effort strategies attempt to increase the chances of satisfying all guarantees under a friendly environment assumption. Both views have their merits.
Fallback to Human
Another interesting way of dealing with "hopeless" situations in synthesis has recently been presented by Li et al. [28]. Safety critical control systems like autopilots in a plane or driving assistance in a car are usually not fully autonomous but involve human operators. If the environment behaves such that guarantees cannot be enforced, the controller can therefore simply ask the human operator for intervention. This allows for semi-autonomous controllers, even for unrealizable specifications. There are two additional requirements: the human operator should be notified ahead of time, and no unnecessary intervention should be required.
The approach computes a non-deterministic counterstrategy. In operation, the controller constantly monitors the behavior of the environment and tracks whether it conforms to this counterstrategy. This prevents alarms when the environment is not fully adversarial, so that the guarantees can be enforced even though the specification is unrealizable in the worst case. Notifying the human operator ahead of a potential specification violation is achieved by requiring a minimum distance (number of steps) to any failure-prone state.
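A rough runtime-monitoring sketch in this spirit (all names and the toy transition structure are hypothetical, not the construction of [28]): the monitor tracks conformance of the observed inputs with the counterstrategy and raises the alarm only while the environment still looks adversarial and the failure distance shrinks below the bound.

```python
def monitor(observations, step, conforms, distance, state, min_distance=3):
    """step: successor-state function; conforms: is this observation a
    counterstrategy move here; distance: steps to the nearest failure-prone
    state. All three are assumed to be given by the synthesis procedure."""
    adversarial = True
    for obs in observations:
        adversarial = adversarial and conforms(state, obs)
        state = step(state, obs)
        if adversarial and distance(state) < min_distance:
            return "notify human operator"
    return "autonomous operation"

# Toy run: the environment keeps playing the adversarial move (obs == 1),
# so the operator is alerted before the failure-prone region is reached.
print(monitor([1, 1], step=lambda s, o: s + o,
              conforms=lambda s, o: o == 1,
              distance=lambda s: 4 - s, state=0))  # notify human operator
```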
The faithfulness of this approach is similar in spirit to the best-effort strategies discussed in the previous section: the specification cannot be satisfied in the worst case, but this should not be an excuse for resigning. The worst case may not occur (often) in operation, and the synthesized system should take advantage of this. While requiring human intervention may only be an option in specific settings, the idea of checking the actual environment behavior against a counterstrategy in order to assess whether the environment is behaving in an adversarial manner is definitely interesting.
Markov Decision Processes
Another way of refraining from worst case assumptions in synthesis is by using Markov Decision Processes (MDPs) [4,2]. The environment is not considered to behave adversarially but randomly with a certain probability distribution. This situation is also referred to as 1.5 player game (the probabilistic environment only counts as half a player). Strategies for such games attempt to maximize the probability to satisfy the goal. There also exist solutions to maximize quantitative objectives against a random player [15]. MDPs as the sole synthesis algorithm may not be satisfactory since optimality against a random player does not necessarily imply that the strategy is winning against an adversarial player [23]. Nevertheless, MDPs can be valuable to optimize the behavior in lost states, or to specialize a winning strategy that allows for multiple options in several situations.
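Maximizing the probability of reaching a goal in an MDP is standard value iteration; the compact sketch below (textbook material, not tied to the cited tools) shows the contrast with worst-case reasoning: the `risky` action is worthless against an adversary but optimal against a coin flip.

```python
def max_reach_probability(states, actions, P, goal, iters=100):
    """P[s][a] = list of (probability, successor) pairs."""
    v = {s: (1.0 if s in goal else 0.0) for s in states}
    for _ in range(iters):
        for s in states:
            if s in goal:
                continue
            v[s] = max((sum(p * v[t] for p, t in P[s][a])
                        for a in actions(s)), default=0.0)
    return v

P = {"s": {"risky": [(0.5, "goal"), (0.5, "trap")],
           "safe": [(1.0, "s")]},
     "trap": {"stay": [(1.0, "trap")]}}
acts = lambda s: P.get(s, {}).keys()
print(max_reach_probability(["s", "goal", "trap"], acts, P, {"goal"})["s"])  # 0.5
```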
Cooperate!
Realistic applications of synthesis methods will in general not synthesize a complete system from scratch, but will separate the system into components that can be implemented (either by hand or by synthesis) modularly. To make such an approach tractable while still giving global correctness guarantees, synthesis of every component must take into account the expected behavior of the rest of the system, again expressed as some kind of environment assumption.
Thus far, we have discussed synthesis approaches that are designed to prefer cases where Gua is satisfied over cases where Ass is violated (Sect. 3), and that try to optimize the result even if the goal cannot be reached in all cases (Sect. 4). In some sense, the latter can be seen as an implicit collaboration with the environment, i.e., hoping that it is not its main goal to hurt us.
In this section, we consider synthesis algorithms for systems that explicitly cooperate. In this case, the environment can really be considered as a second system player, and the payoff matrix is notably different, see Figure 4. In particular, we do not want "our" system component to force assumption violations in other system components, as this would lead to incorrect behavior of the overall system. Instead, we want synthesis to be based on a "good neighbor assumption", i.e., the environment will only violate the assumptions if necessary, and we should not force it to do so, but try to make the overall system work even if the assumptions are not always satisfied.
The basic idea is that environment and system can cooperate to some extent, in order to satisfy both Ass and Gua. If we allow full cooperation, then the synthesis problem becomes the problem of synthesizing an implementation for both the environment and the system, and requiring them to jointly satisfy Ass ∧ Gua. This problem has been considered for different models of communication [16,30,24]. Such solutions are however unsatisfactory for two reasons: (i) The approaches synthesize one particular implementation of the environment. This will only be a correct implementation in the overall system if Ass contains all of the required properties of the rest of the system, not allowing us to abstract from parts of the environment.
(ii) The synthesized implementation of the system is guaranteed to satisfy Gua only for exactly this environment. Thus, the approach also does not allow additional refinement or modification of the environment behavior.
Together, these two properties imply that we cannot use such an approach to modularize synthesis, as we need to synthesize both components in full detail at the same time.
In the following, we consider assume-guarantee synthesis (Sect. 5.1) and synthesis under rationality assumptions (Sect. 5.2), two approaches that are between a completely adversarial and a completely cooperative environment behavior. Both are based on the notion of non-zero-sum games, i.e., games in which players do not have mutually exclusive objectives, but can reach (part of) their respective objectives by cooperation.
Assume-Guarantee Synthesis
Intuitively, the assume-guarantee synthesis approach by Chatterjee and Henzinger [13] wants to synthesize implementations for two parallel processes P_1, P_2 (which could be the system and the environment) such that solutions are robust with respect to changes in the other process, as long as it does not violate its own specification. More formally, we want to find implementations of P_1, P_2 that satisfy φ_1 ∧ φ_2 together, and furthermore the solutions should be such that each process P_i satisfies φ_j → φ_i for any implementation of the other process P_j. That is, given a pair of solutions for P_1, P_2, we can replace one of them with a different implementation. As long as it satisfies its own specification φ_j (together with the fixed implementation for the other process), we know that the overall specification φ_1 ∧ φ_2 will still hold.
This means that players have to cooperate to find a common solution, but cooperation is also limited, in that the players cannot decide on one particular strategy to satisfy the joint specification. Thus, assume-guarantee synthesis is an option between purely adversarial and purely cooperative synthesis: if we obtain process implementations P_1 and P_2 that satisfy P_i |= φ_j → φ_i for adversarial synthesis, then the parallel composition P_1 ∥ P_2 of these two implementations will also satisfy the conjunction φ_1 ∧ φ_2. Since each of them satisfies its specification in an arbitrary environment, they in particular satisfy the assume-guarantee specification. Moreover, every solution for assume-guarantee synthesis obviously is also a solution for cooperative synthesis.
Example 8. Consider two processes P_1, P_2, each with one output o_i that can be read by the other process, with specifications φ_1 for P_1 and φ_2 for P_2. There are several implementations for P_1 that satisfy φ_1 (and do not depend on the implementation of P_2), and several implementations for P_2 that satisfy φ_2, most of them depending on the implementation of P_1. For example, P_1 might raise o_1 in the initial state, and then every third tick. For this implementation, a suitable implementation for P_2 can raise o_2 in the first tick after the initial state, and then every third tick from there.
While this implementation for P_2 is correct for the particular implementation of P_1, it is not correct for all implementations of P_1 that satisfy φ_1. For example, P_1 could raise o_1 every second tick, and the given P_2 would not satisfy φ_2 anymore. However, there is an implementation that satisfies φ_2 for all implementations of P_1 that satisfy φ_1: P_2 can simply read o_1 and go to a state where it raises o_2 iff o_1 is currently active. Only such a solution for P_2 solves the assume-guarantee synthesis problem (any solution for P_1 that satisfies φ_1 is fine, since it does not depend on P_2).
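The robust P_2 from the example is essentially a one-bit delay; written out as a sketch (my rendering), it works for every implementation of P_1:

```python
# P_2 outputs o_2 one tick after observing o_1, so it mirrors any P_1.
def p2_delayed_copy(o1_stream):
    last = False               # o_1 value remembered from the previous tick
    for o1 in o1_stream:
        yield last             # o_2 now equals the previous o_1
        last = o1

o1 = [True, False, False, True, False, False]  # P_1: initially, then every third tick
print(list(p2_delayed_copy(o1)))  # [False, True, False, False, True, False]
```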
Furthermore, consider an extended specification that changes the solutions of the assume-guarantee synthesis problem: a solution for P_2 must now raise o_2 at least every second tick, and will only work with implementations of P_1 that raise o_1 at least that frequently, but not with those that raise o_1 less frequently (even if they still satisfy φ_1).
Synthesis under Rationality Assumptions
A number of different approaches to the synthesis of multi-component systems relies on the notion of rationality. Informally, this means that every component has a goal that it wants to achieve, or a payoff it wants to maximize, and it will always use a strategy that maximizes its own payoff. Both the rationality of players and the goals of all components are assumed to be common knowledge. In particular, a player will only use a strategy that hurts other components if this will not lead to a smaller payoff for itself. As can be expected, this leads to implementations that do not behave purely adversarially, but cooperate to some degree in order to satisfy their own specification.
We survey three different approaches based on rationality: rational synthesis by Fisman, Kupferman and Lustig [25], methods based on iterated admissibility by Berwanger [3] and by Brenguier, Raskin and Sassolas [9], and an extension of the notion of secure equilibria to the multi-player case, called doomsday equilibria [12].
Rational Synthesis. The rational synthesis approach centers synthesis around a special system process, and produces not only an implementation for the system, but also strategies for all components in the environment, such that the specification of the system is satisfied, and the strategies of the components are optimal in some sense. To guarantee correctness, the approach assumes that these strategies can be communicated to the other components, and that the components will not use a different strategy than the one proposed, as long as it is optimal.
The definition of what is considered to be an optimal strategy leaves some freedom to the approach. The authors explore Nash equilibria (cp. Ummels [33]), dominant strategies (cp. Faella [23], Damm and Finkbeiner [17]), and subgame-perfect Nash equilibria (also [33]). Intuitively,
• if the set of proposed strategies is a Nash equilibrium profile, then no process can achieve a better result if it changes its strategy (while all others keep their strategies);
• if the set of strategies is a dominant strategy profile, then no process can achieve a better result if any number of processes (including itself) change their strategy;
• if the set of strategies is a subgame-perfect equilibrium profile, then no process can achieve a better result, for any arbitrary history of the game (even one that does not correspond to the given strategy profile), by changing its strategy (while all others keep their strategies).
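The Nash condition in the first bullet above can be checked directly on a finite payoff table; the toy sketch below (my own, with a prisoner's-dilemma-style table) verifies that no single player gains by a unilateral deviation.

```python
def is_nash(profile, strategy_sets, payoff):
    """payoff(profile) -> tuple of payoffs, one entry per player."""
    base = payoff(profile)
    for i, options in enumerate(strategy_sets):
        for alt in options:
            if alt == profile[i]:
                continue
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if payoff(deviated)[i] > base[i]:
                return False   # player i profits from deviating
    return True

table = {("coop", "coop"): (2, 2), ("coop", "defect"): (0, 3),
         ("defect", "coop"): (3, 0), ("defect", "defect"): (1, 1)}
print(is_nash(("defect", "defect"), [["coop", "defect"]] * 2,
              table.__getitem__))  # True: mutual defection is an equilibrium
```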
Compared to assume-guarantee synthesis, this approach does not guarantee that the synthesized implementation will also work when other processes change their behavior, even if the different behavior still satisfies the specification. Instead, it is based on the assumption that other processes have no incentive to change their behavior, which is somewhat unsatisfactory for a modular synthesis approach.
Example 9. Consider again the example from above. A solution that satisfies (P_1 ∥ P_2) |= φ_1 ∧ φ_2 is also a rational synthesis solution, for any of the notions of optimality above. However, for a Nash equilibrium, a pair of implementations for P_1, P_2 is also a solution if (P_1 ∥ P_2) |= φ_1 and (P_1 ∥ P_2) |= ¬φ_2, as long as there does not exist an implementation P_2′ for which (P_1 ∥ P_2′) |= φ_2.
Rational synthesis with Nash equilibrium has strictly weaker conditions on implementations than assume-guarantee synthesis. That is, any solution of assume-guarantee synthesis will also be a solution for this case of rational synthesis, but this is not always the case in the other direction. Also, dominant or subgame-perfect equilibria strategy profiles will always be Nash equilibrium profiles, but the set of solutions seems to be incomparable with assume-guarantee synthesis.
A combination of assume-guarantee reasoning with rational synthesis seems possible: instead of requiring that the system implementation works exactly in the given equilibrium, it should work for any behavior of the other processes that does not reduce their payoff, or respectively any behavior where they still satisfy their own specification.
Iterated Admissibility. The basic idea of iterated admissibility approaches [3,9] is similar to rational synthesis: every component has its own goal in a (non-zero-sum) game, and is assumed to be rational in that it avoids strategies that are dominated by other strategies (taking into account all possible strategies of the other players). This avoidance of dominated strategies removes some of the possible behaviors for all players. Both the rationality assumption and the full state of the game being played are assumed to be common knowledge, so every player knows which strategies the other players will eliminate. Under the new sets of possible behaviors, there may be new strategies that are dominated by others, so the process of removing dominated strategies can be iterated up to a fixpoint.
The basic notions of this class of infinite multi-player games have been defined by Berwanger [3]. Brenguier, Raskin and Sassolas [9] have recently investigated the complexity of iterated admissibility for different classes of objectives, and showed that in general it is similar to the complexity of Nash equilibria.
Compared to rational synthesis, where the system process can compute strategies for all other components and they will accept them if they are optimal, in this case there is no distinguished process. Instead, all processes compute a set of optimal (or admissible) strategies, with full information allowing all components to come to the same conclusions.
Doomsday Equilibria. The notion of doomsday equilibria by Chatterjee et al. [12] uses the rationality assumption like the two approaches mentioned before, but takes the punishment for deviating from a winning strategy to the extreme: a doomsday equilibrium is a strategy such that all players satisfy their objective, and if any coalition of players deviates from their strategy and violates the objective of at least one of the other players, then the game is doomed, i.e., the losing player(s) have a strategy such that none of the other players can satisfy their objective.
A distinguishing feature of doomsday equilibria is that their existence is decidable even in partial information settings, in contrast to the other existing notions of equilibria. In the case of two players, doomsday equilibria coincide with the well-known notion of secure equilibria.
Conclusions
In this paper, we discussed the role of environment assumptions in synthesis of reactive systems, and how existing approaches handle such assumptions. Besides correctness, we proposed three more properties that a good system should realize: systems should satisfy guarantees as well as possible even if environment assumptions are violated (Don't be Lazy!), they should aim for satisfying the guarantees even if this is not possible in the worst case (Never Give Up!), and systems should rather help the environment satisfy the assumptions instead of trying to enforce their violation (Cooperate!). These properties are especially important in modular synthesis, where assumptions are used to abstract other parts of the system rather than expressing "don't care" situations. As summarized in Table 1, we conclude that none of the existing approaches satisfies all these requirements. Although important steps towards synthesis of high quality systems have been made, we believe that even better results can be achieved by combining and extending ideas from the different branches. The perfect solution may not exist, since it may strongly depend on the application. Even if it does exist, it may be prohibitively expensive to achieve. In any case, more research is needed to explore both the most important objectives and the best possible solutions.
"year": 2014,
"sha1": "a9cea58c61f5b3b493ad3b15408989828d577253",
"oa_license": "CCBYNCND",
"oa_url": "https://arxiv.org/pdf/1407.5395",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a9cea58c61f5b3b493ad3b15408989828d577253",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
Low-energy versus middle-energy extracorporeal shockwave therapy for the treatment of snapping scapula bursitis
Objective: Extracorporeal shockwave therapy (ESWT) has been used successfully in the treatment of musculoskeletal disorders. Our objective was to assess the effectiveness of low- versus middle-energy ESWT for snapping scapula bursitis. Methods: Thirty-five patients were divided into two groups: group (L) received low-energy ESWT and group (M) received middle-energy ESWT. The groups were evaluated at 1, 3, 6 and 12 months using the Visual Analogue Scale (VAS), the Constant-Murley scoring (CMS) and the Roles and Maudsley criteria. Results: In groups (L) and (M), average VAS values after 1, 3, 6 months and one year were (43±5.17, 38±4.33, 28±4.18 and 19±3.39) and (37±4.85, 26±4.74, 21±4.45 and 7±3.42) respectively. At six and twelve months, a statistical difference was detected (P = 0.034 and 0.026, respectively). One year after completing the treatment, the average CMS values were 83.5±6.44 and 91±5.33 respectively (P = 0.046). The Roles and Maudsley criteria demonstrated that, in group (L), 6 patients (35%) had excellent, 5 (29%) good, 4 (24%) acceptable and 2 (12%) poor results, whereas in group (M), 11 patients (61%) had excellent, 3 (17%) good, 3 (17%) acceptable and 1 (5%) poor results. Conclusion: Although low-energy ESWT showed good early-term results, the middle-energy ESWT protocol demonstrated better early-, mid-, and late-term results.
between the subscapularis and serratus anterior muscles (Fig. 1). Overuse injuries of the muscles inserted around these two bursae, or biomechanical dysfunction, may lead to inflammation of the bursal tissue, causing painful snapping scapula bursitis, whereas the four minor bursae are scattered around the inferior margin of the scapula and the scapulothoracic articulation. [7][8][9] Snapping scapula is a commonly misdiagnosed disorder. 5 Most patients complain of pain at the superomedial angle of the scapula during activities, whereas others complain of pain even at rest. 7 Usually the pain is the direct result of scapulothoracic bursitis located at the level of the levator scapulae muscle insertion at the superomedial angle of the scapula. 7,8 Many treatment methods have been described to deal with snapping scapula, including surgical and non-surgical methods. Non-surgical measures consist of corticosteroid injection, modified extremity activity, and rest with non-steroidal anti-inflammatory drug intake. 9,10 ESWT has been described to give satisfactory results in comparison to corticosteroid injection in the treatment of painful cases of scapulothoracic bursitis. 11 Different ESWT protocols have been described to treat various musculoskeletal disorders, including low-, middle- and high-energy ESWT protocols. [1][2][3][4]11 A middle-energy ESWT protocol has been described previously to treat snapping scapula bursitis, demonstrating good early- and late-term results. However, the applied middle-energy ESWT protocol produced some complications, such as skin irritation, intramuscular hematomas and intermittent periscapular pain. 11 The aim of this study was therefore to assess and compare the dose-related effects of two ESWT protocols, low-energy and middle-energy, on the treatment of snapping scapula bursitis.
METHODS
Institutional ethical approval was granted by the ethical committee of research under the reference number 346-90. A prospective study was conducted on 35 patients diagnosed with scapulothoracic bursitis, divided into two groups according to the ESWT energy level applied: group (L), composed of seventeen patients who received the low-energy ESWT protocol, and group (M), composed of eighteen patients who received the middle-energy ESWT protocol. All demographic data related to the patients involved in this study are presented in detail in Table-I. Patients with previously known cervical disc problems, congenital spine anomalies, neuromuscular disorders, rotator cuff pathologies, clotting disorders, infections and tumors were excluded from our study. The main purpose of this study was explained in detail to all patients before they agreed to be involved. To evaluate the pain associated with snapping scapula bursitis around the superomedial angle before and after completing the treatment protocol, patients were asked to quantify the pain severity level using the visual analogue scale (VAS) method, which has been demonstrated to have good construct validity and to be a valid measure of acute pain. 12 To evaluate the functional outcome of both treatment protocols as reflected in the shoulder girdle, all patients were evaluated according to the Constant-Murley scoring (CMS) system. The CMS combines physical tests (35 points) with subjective evaluation of the examined patient (65 points); at the end of the evaluation, scores range between 0 (worst) and 100 (most favorable). The CMS is the most commonly used method containing objective measurement and has been documented to be valid and reliable for many shoulder pathologies. 13 However, the CMS is used in clinical practice to evaluate surgeries related to glenohumeral joint structures, specifically rotator cuff surgeries. 14,15 For this reason, the Roles and Maudsley criteria, 16 which measure the endpoint level of patient satisfaction at the end of the follow-up period, were added to the assessment methods. Although these criteria have been used to assess the satisfaction level after lower extremity surgery, they have also been used before to estimate the satisfaction level of different treatment protocols. 11,16 The Roles and Maudsley criteria comprise four grades, depending mostly on residual pain compared to the pain before starting the treatment. Excellent: the patient has no pain with full movement and full activity; Good: the patient has occasional discomfort with full movement and full activity; Acceptable: the patient has some discomfort after prolonged activities; Poor: the patient has pain limiting activity.
All patients included in this study had a common complaint of posterior shoulder pain after overhead activities. Thirteen patients described an audible crepitus accompanied by moderate to severe pain located at the superomedial scapular angle, radiating mostly to the levator scapulae muscle. Twenty-two patients did not describe audible crepitus; however, physical examination demonstrated crepitus during palpation and compression of the superomedial scapular angle against the posterior chest wall.
Radiological examinations, including plain radiographs, magnetic resonance imaging (MRI), computerized tomography (CT) and dynamic ultrasonography, were obtained to rule out any pathological condition that may mimic snapping scapula and to detect any fluid-filled small bursal lesions. However, none of the radiological investigations was helpful in revealing inflamed bursal tissue.
To confirm our clinical diagnosis, which was based on severe pain at the superomedial scapular border squeezed against the posterior chest wall with palpable crepitus, and on pain with overhead activities radiating to the levator scapulae muscle, 3 cc of local anesthetic (1% lidocaine) was injected beneath the superomedial angle of the scapula. Partial or complete pain relief after the injection was used as a strong indicator of the presence of inflamed bursal tissue. 17,18 The patients involved in this study were divided into two groups using the closed envelope technique. Group (L), the low-energy treatment protocol group, included seventeen patients (12 females and 5 males) with an average age of 34.7 ± 6.4 years; using a portable ESWT device (BTL Industries Ltd, UK), and depending on patient tolerance, they received a weekly session for three weeks, each lasting 5 to 7 minutes, of 1500 pulses at 0.08 mJ/mm², which is defined as a low-energy ESWT treatment protocol. 19 Group (M), the middle-energy treatment protocol group, included eighteen patients (15 females and three males) with an average age of 36.2 ± 6.8 years; they received a weekly session for three weeks, each lasting 5 to 7 minutes, of 2000 pulses at 0.2 mJ/mm², which is defined as a middle-energy ESWT treatment protocol. 20 The ESWT application was focused on the trigger and painful area around the superomedial angle of the scapula, with the patient's arm in extension, adduction and internal rotation, and the patient in prone position (Fig. 1).
All patients were asked to rest and to reduce the level of overhead activities as much as possible for one month during the ESWT application period; non-steroidal anti-inflammatory drugs were not allowed throughout the study period. All patients were followed clinically for at least one year and were assessed clinically at one month, three months, six months and one year after the end of treatment. Visual analogue scale (VAS) scores were recorded at each follow-up, while the level of satisfaction and the functional outcome were evaluated for all patients using the Constant-Murley scoring (CMS) system and the Roles and Maudsley criteria at the end of follow-up. For statistical analysis the Mann-Whitney U test was used, with the level of significance set at p < 0.05.
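As a concrete illustration of this analysis step, the comparison can be run as below; the VAS arrays are hypothetical placeholders, not the study's raw data.

```python
# Mann-Whitney U test on two independent groups at alpha = 0.05.
from scipy.stats import mannwhitneyu

vas_group_L = [22, 19, 25, 17, 21, 18, 20, 24]  # hypothetical 12-month VAS
vas_group_M = [9, 6, 11, 5, 8, 7, 10, 6]

stat, p = mannwhitneyu(vas_group_L, vas_group_M, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}, significant: {p < 0.05}")
```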
RESULTS
A total of 35 patients diagnosed with scapulothoracic bursitis (snapping scapula) were included in this study, divided into two groups: group (L) received the low-energy ESWT protocol and group (M) received the middle-energy ESWT protocol. All patients were evaluated one month, three months, six months and one year after completing the treatment protocol. The average ages for group (L) and group (M) were 34.7 ± 6.4 and 36.2 ± 6.8 years respectively, with no statistically significant difference between the two groups regarding age (P = 0.78). Before starting the treatment the average VAS values were 78 ± 5.61. Although there was no significant difference between the study arms regarding the VAS averages after one and three months (P = 0.89 and 0.25, respectively), group (M), which received the middle-energy ESWT protocol, demonstrated lower VAS averages than group (L), which received the low-energy ESWT protocol. Six and twelve months after completing the treatment, the groups revealed a statistically significant difference in favor of group (M) regarding the VAS (P = 0.034 and 0.026, respectively).
Before starting the treatment, the average values of the Constant-Murley scoring (CMS) system for group (L) and group (M) were 71.8 ± 9.36 and 73.4 ± 8.12 respectively; no statistical significance was detected between the two study arms (P = 0.18). However, one year after completing the treatment, the average values of the Constant-Murley scoring system were 83.5 ± 6.44 and 91 ± 5.33 respectively, and a statistically significant difference was detected between the two groups (P = 0.046). The Roles and Maudsley criteria, used mainly to assess the satisfaction level with the two treatment protocols, demonstrated that, out of 17 patients in group (L), six patients (35%) had excellent, 5 patients (29%) good, 4 patients (24%) acceptable and two patients (12%) poor results, whereas, out of 18 patients in group (M), 11 patients (61%) had excellent, three patients (17%) good, 3 patients (17%) acceptable and one patient (5%) poor results (Table-II).
DISCUSSION
Many treatment methods have been described for the treatment of a resistant snapping scapula, ranging from surgical excision of the superomedial border of the scapula to exercise and activity modification. However, non-invasive methods have been described to be ineffective in dealing with the pain and crepitus of scapulothoracic bursitis. 21 The precise mechanism of ESWT action is still unknown. Loew et al. 22 described three modes of action of ESWT: a mechanical effect that results in deposit fragmentation, a molecular effect that results in deposit phagocytosis, and an analgesic effect that results in denervation of pain receptors. ESWT can be divided into three categories based on the level of energy produced: low-energy (< 0.08 mJ/mm²), middle-energy (0.08-0.28 mJ/mm²), and high-energy (> 0.28 mJ/mm²). 23 The use of ESWT in the treatment of soft tissue injuries and tendon lesions has been highly controversial, yet it has been described to be very beneficial in many musculoskeletal system disorders. 24 Rompe et al. 25 reported that, in general, ESWT trials conducted on soft tissue disorders such as plantar fasciitis, lateral epicondylitis and calcific tendinitis of the shoulder mostly use 1500-3000 shocks of a low-energy protocol applied at the site of maximum tenderness, usually three to five times, once every week.
ESWT was found to induce a long-term tissue regeneration effect and to produce immediate analgesic and anti-inflammatory outcomes. Washout of chemical inflammation mediators and triggering of neovascularization, together with nociceptive inhibition, are described as the essential biological effects of ESWT on tissue. 26,27 In in-vitro studies, ESWT at low and middle energy flux density (EFD) has been documented to produce a neoangiogenic effect by increasing the expression of vascular endothelial growth factor and its receptor Flt-1. 28 Gotte et al. demonstrated that ESWT induces a non-enzymatic production of nitric oxide and a subsequent suppression of NF-κB (nuclear factor kappa B) activation, which are responsible for the clinically beneficial effect of ESWT on tissue inflammation. 28,29 Several studies have demonstrated that high-energy ESWT protocols applied to calcific tendinitis of the rotator cuff led to improvement of muscle function and shrinkage of the subacromial bursa, and thus improved the clinical picture of the involved shoulder. 30,31 There are high-energy, middle-energy, and low-energy ESWT protocols used in clinical applications for musculoskeletal disorders; however, to date it is still not clear which level of energy is more effective for pain relief and clinical improvement of pain caused by inflamed bursae and tissue degeneration. 25,32 Searching the literature revealed only a single study that used a middle-energy ESWT protocol to deal with snapping scapula. ESWT has been demonstrated to be superior to corticosteroid injection in treating snapping scapula related scapulothoracic bursitis. 11 However, in that study an ESWT protocol of 0.1 to 0.15 mJ/mm² with 1500 to 2500 pulses of 1.4 to 2.1 bars was used, which is a middle-energy protocol. 24,25 The middle-energy and low-energy ESWT protocols have been used frequently for musculoskeletal disorders. The middle-energy ESWT protocol is commonly used for calcific tendinitis of the shoulder joint, trochanteric bursitis and calcaneal bursitis, 16,24,25 whereas the low-energy ESWT protocol is frequently used for lateral epicondylitis, pes anserine bursitis and plantar fasciitis. 17,18 This study shows that, although there was no statistical difference between the low-energy and middle-energy ESWT protocols in early-term (one month) and mid-term (three months) results, late-term results demonstrated statistical significance at six and twelve months in favor of the middle-energy ESWT.
The functional results evaluated by the Constant-Murley scoring system did not show any statistical significance between the two groups prior to treatment induction. However, one year after completing the treatment protocol, group (M) demonstrated a higher and statistically significant average score than group (L). The Constant-Murley scoring system, despite being the most commonly used scoring system for shoulder functional evaluation, recommended by the European Society of Elbow Surgery, 18,19 is designed more specifically to evaluate lesions of the rotator cuff muscles and surgeries related to them. For that reason the average scores of the pretreatment period were high: since rotator cuff pathology was one of the exclusion criteria of this work, all patients involved in this study were free of rotator cuff pathology.
To precisely evaluate the treatment satisfaction level after one year, the Roles and Maudsley criteria were used. In group (M), 78% of patients showed excellent to good results, whereas 64% of patients in group (L) demonstrated excellent to good results.
Although both ESWT protocols were shown to be effective in treating snapping scapula associated with scapulothoracic bursitis, the middle-energy ESWT demonstrated superior results.
During the early treatment period, only a few instances of skin irritation and minimal skin burning sensation were noticed with the low-energy protocol, which resolved within a couple of days. With the middle-energy protocol, besides the skin irritation and burning sensation recognized in some patients, localized minimal muscle hematomas were detected in three patients, resolving within a week.
High-energy ESWT protocols, in which the energy exceeds 0.28 mJ/mm², have been used by many researchers to treat calcific tendinitis of the rotator cuff muscles and avascular necrosis of the femoral head. 24,25 However, high-energy ESWT is more painful and requires intravenous analgesia; in addition, due to the high energy applied, a few complications such as humeral avascular necrosis and large muscular hematomas have been reported. [22][23][24][25] Although ESWT has been applied for ischemic heart disease with low-energy protocols, 33 we found it hazardous to apply this protocol to the superomedial angle of the scapula due to its adjacent relationship to many vital structures, such as the pleura of the lung and the large chest vasculature.
Limitations of the study:
The number of patients involved is small. Moreover, since snapping scapula bursitis is an underestimated condition, there are as yet no specific subjective or objective criteria that evaluate scapular movement function separately.
CONCLUSION
Low- and middle-energy ESWT protocols can be safely and successfully used for the treatment of snapping scapula associated with scapulothoracic bursitis. Although low-energy ESWT showed good early-term results, the middle-energy ESWT protocol demonstrated better early-, mid-, and late-term results.
"year": 2017,
"sha1": "6a3a53a70712b95ef97678b90994cb42c0464de9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12669/pjms.332.12262",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a3a53a70712b95ef97678b90994cb42c0464de9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of phytocompound Precocene 1 on the expression and functionality of the P450 gene in λ-cyhalothrin-resistant Spodoptera litura (Fab.)
Spodoptera litura (Fabricius) is an agriculturally significant polyphagous insect pest that has evolved a high level of resistance to conventional insecticides. A dietary assay was used in this work to assess the resistance of field populations of S. litura to λ-cyhalothrin. Analysis of the function and expression of the cytochrome P450 gene was used to test the sensitivity of S. litura larvae to sub-lethal concentrations of the insecticidal plant chemical Precocene 1, both by itself and in combination with λ-cyhalothrin. The activity of esterase enzymes (α and β) was found to decrease 48 h post-treatment with Precocene 1. The activity of the GST enzyme and cytochrome P450, however, increased 48 h post-treatment with Precocene 1. Expression studies revealed the modulation by Precocene 1 of the cytochrome P450 genes CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10. While CYP4M16 expression was stimulated the most by the combined Precocene 1 + λ-cyhalothrin treatment, expression of CYP4G31 was the most down-regulated by Precocene 1 exposure. Hence, it is evident that λ-cyhalothrin-resistant pest populations are still sensitive to Precocene 1 at a sublethal concentration that is nevertheless capable of hindering their development. Precocene 1 can therefore be considered a potent candidate for the effective management of insecticide-resistant S. litura.
Pest control practices via indiscriminate application of synthetic insecticides lead to resistance development and hence to the failure of the control measure (Murali-Baskaran et al., 2021; Subaharan et al., 2021). Resistance to insecticides in lepidopteran pests arises mainly through detoxification and target site insensitivity, involving enzymes such as esterases, the glutathione complex, and cytochrome P450 (Tang et al., 2022). Organophosphate (OP) and carbamate insecticides inhibit the activity of AChE in insects through phosphorylation or carbamylation of the serine residues at the target site (Cao et al., 2020). Resistance to organophosphates and carbamates in Helicoverpa armigera and S. litura has already been reported (Selin-Rani et al., 2016). The epsilon-class GSTs of S. litura are capable of detoxifying DDT (dichloro-diphenyl-trichloroethane) and deltamethrin (Deng et al., 2009). Excessive application of cypermethrin and other pyrethroids develops resistance in S. litura (Shyam-Sundar et al., 2021a, 2021b) by way of decreased penetration, target site sensitivity alteration, and enhancement of detoxification enzyme activity, such as that of cytochrome P450 monooxygenase (P450), carboxylesterase (CarE), and glutathione S-transferase (GST) (Ahmad and McCaffery, 1999; Dong et al., 2016). Esterases exhibit broad substrate specificity, capable of cleaving tri-ester phosphates, halides, esters, thioesters, amides, and peptides. The modality of esterases in detoxifying insecticides is well reported (Saleem and Shakoori, 1996). These enzymes are synthesized in biochemical pathways during the developmental stages of the larva (Huang and Han, 2007). An increase in esterase activity upon exposure to synthetic pesticides is one of the main resistance mechanisms in pests (Kranthi et al., 2002). Glutathione-S-transferase (GST) belongs to a protein family that plays a major role in the detoxification of xenobiotics by converting them into less toxic, water-soluble products (Singh et al., 2001). GST confers resistance against organophosphorus and pyrethroid insecticides in a wide range of insect pests (Senthil-Nathan, 2020). Insecticide-resistant strains of various insects reveal a correlation between an upsurge in gene expression and GST activity as part of their insecticide resistance (Li et al., 2007).
The mechanism of detoxification by P450 enzymes helps insects resist phytoconstituents and chemical insecticides (Sasabe et al., 2004; Mao et al., 2006; Despre et al., 2007; Bautista et al., 2009; Niu et al., 2011; Pentzold et al., 2014; Lu et al., 2021). Various P450s are capable of metabolizing a single substrate, and a single P450 is able to metabolize multiple substrates (Meunier et al., 2004). Phytochemical-inducible P450s are required for the development of cross-tolerance to insecticides (Li et al., 2000). Overexpression of P450s results in increased insecticide resistance, and tolerance to allelochemicals has been reported in various orders of insects, such as Lepidoptera, Diptera, Coleoptera, Hemiptera, and Hymenoptera (Bass et al., 2011; Johnson et al., 2012; Liang et al., 2015; Chen C. et al., 2017; Chen Y. et al., 2017; Wang R. L. et al., 2018; Wang X. et al., 2018; Hass et al., 2022). The role of cytochrome P450 genes and enzymes of S. litura in detoxifying host plant allelochemicals and other xenobiotics has not been much explored.
The major damage to agricultural crops results mainly from attack by lepidopteran insects (Shu, 2012; Selin-Rani et al., 2016). S. litura, in particular, develops resistance when exposed to synthetic pesticides (Jahan et al., 2008; Shyam-Sundar et al., 2021a, 2021b). The present study explores the strategy of using allelochemicals against pest populations that have developed resistance through insecticide exposure. In light of the above, the present study scrutinizes the impact of Precocene 1 on detoxifying enzyme activity and on the expression levels of five P450 genes (CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10) in S. litura larvae upon exposure to λ-cyhalothrin.
Insects
The larvae of S. litura were obtained from agricultural land in Kadayam (latitude 8.8213°N, longitude 77.3741°E), Tenkasi District, India, and cultured in the Biopesticides and Ecotoxicology Laboratory (BET Lab), SPKCEES, Manonmaniam Sundaranar University, Alwarkurichi. To maintain the culture across generations, the larvae were kept in an insectary at 27 ± 1°C, with relative humidity (RH) of 85%, under a 12:12 L:D photoperiod. Castor leaves were given as feed, and the adults that emerged were placed in a container (10 × 10 × 7 cm) with castor leaves for pairing (1 male : 2 females). A 10% honey solution was provided to the adults for feeding during oviposition, and a sanitary black cloth was used to cover the cage.
Preparation of chemically supplemented diets
Artificial and chemically supplemented diets were prepared following previously published protocols (Karthi and Shivakumar, 2015; Chen C. et al., 2017). Precocene 1 was dissolved in 1% dimethyl sulfoxide (DMSO); for the control, the artificial feed was made with the addition of 1% DMSO alone. The λ-cyhalothrin insecticide was diluted in distilled water containing 0.1% (v/v) Triton X-100 and 1% DMSO, and this was used as the stock solution. For bioassays, 15 ml of the stock solution was pipetted into sterilized plastic cups of 4.0 cm in diameter × 3.0 cm in height. Agar was stirred into the liquid artificial diet for 2 min, which was then allowed to solidify at 40-45°C. A similar protocol was followed for the preparation of the control diet, but without adding insecticide; the control diet was maintained at 4°C prior to use.
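As a small worked example of the dilution arithmetic behind these preparations (assuming the 2,500 mg/L stock described in the bioassay section below and the 15 ml volume per cup; the helper function is mine, not part of the protocol):

```python
# C1*V1 = C2*V2: volume of stock needed per cup for each target concentration.
def stock_volume_needed(stock_mg_l, target_mg_l, final_ml):
    return target_mg_l * final_ml / stock_mg_l

for target in (150, 180, 210, 240, 270):
    ml = stock_volume_needed(2500, target, 15)
    print(f"{target} mg/L -> {ml:.2f} ml of stock in 15 ml of diet")
```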
Toxicity bioassay
Third-instar larvae were used to determine the effects of Precocene 1 uptake on tolerance to λ-cyhalothrin, as this developmental stage allows observation of both weight gain and mortality. 0.2% Precocene 1 was added to the artificial diet for 48 h before the bioassay, while control larvae were given the artificial diet containing 0.1% DMSO solution. In total, twenty-five third-instar larvae were used per treatment, with five replications. In addition, to study the toxicity of λ-cyhalothrin, a diet incorporation method was followed using third-instar larvae (Karthi and Shivakumar, 2015). A standard solution (2,500 mg/L) of λ-cyhalothrin was prepared in deionized water containing 0.1% (v/v) Triton X-100 and 1% DMSO for bioassays at concentrations of 150, 180, 210, 240, and 270 mg/L. A known volume of insecticide from the above solutions was added to small 20-ml sterilized plastic cups (4.0 cm × 3.0 cm), into which the liquid artificial diet was incorporated and stirred for 2 min. For the control, the same quantity of 0.1% (v/v) Triton X-100 and 1% DMSO was included in the diet, and the cups were covered with perforated lids for ventilation. For every bioassay series, and for each insecticide concentration, twenty-five larvae from the 0.2% Precocene 1 pre-treated group and from the non-exposed group were used. A diet without chemical treatment was given to the control larvae. Mortality was recorded at 72 h post-treatment, and five replications were carried out in each experimental study.
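The text records dose-mortality data without detailing an LC50 procedure; a conventional probit sketch over this design (with made-up mortality counts, not the study's data) could look like this.

```python
# Probit regression of mortality on log10(concentration); LC50 at probit 0.
import numpy as np
from scipy.stats import norm

conc = np.array([150, 180, 210, 240, 270])   # mg/L, as in the bioassay
dead = np.array([5, 9, 13, 18, 22])          # of 25 larvae (hypothetical)
mortality = dead / 25

x = np.log10(conc)
y = norm.ppf(mortality)                      # probit transform
slope, intercept = np.polyfit(x, y, 1)
lc50 = 10 ** (-intercept / slope)            # probit 0 corresponds to 50%
print(f"LC50 = {lc50:.1f} mg/L")
```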
Synergistic impact of PBO on the toxic nature of insecticide
Piperonyl butoxide (PBO) was used as a synergist, and toxicity in its presence or absence was evaluated as described earlier. S. litura larvae were fed an artificial diet with or without 0.2% Precocene 1 for 48 h. PBO was dissolved in acetone to obtain a concentration of 25 mg/L. Using a micro-syringe, 10 µg of PBO per larva was applied topically to the dorsal prothorax region of individual S. litura larvae. After 2 h of PBO exposure, the larvae were placed in sterilized plastic cups with different concentrations of insecticide solution, namely 150, 180, 210, 240, and 270 mg/L of λ-cyhalothrin, with or without 0.2% Precocene 1, to assess the toxicity of λ-cyhalothrin. Larvae not pre-treated with PBO were likewise fed the diet with Precocene 1 for 48 h before λ-cyhalothrin exposure, and the larvae were kept in cups with perforated lids. Every bioassay was conducted in triplicate.
Whole-body homogenate preparation for enzyme assay
Ten fourth-instar larvae were treated with Precocene 1. Tissues were collected 24 and 48 h post-treatment and homogenized on ice in 2 ml of 0.1 M phosphate buffer (pH 7.2) containing 1 mM EDTA, 1 mM DTT, 1 mM PTU, 1 mM PMSF, and 20% glycerol, then centrifuged at 10,000×g for 15 min at 4°C. Solid debris and cellular materials were separated, and the supernatant was transferred into Eppendorf tubes, which were placed on ice for the assays of carboxylesterase, glutathione-S-transferase, and cytochrome P450. Total protein content was then estimated using the procedure of Lowry et al. (1951).
Carboxyl esterase assays
The α- and β-carboxylesterase activity was determined (Kranthi, 2005) in larval extracts prepared in potassium phosphate buffer (0.1 M, pH 7.2); 20 μl of extract (84 μg protein) was added to 500 μl of substrate buffer (0.3 mM α- or β-naphthyl acetate in 0.1 M potassium phosphate, pH 7.2, containing 1% acetone), followed by incubation at 30°C for 20 min. Fast Blue B (0.3%) and sodium dodecyl sulfate (SDS, 3.3%) were then added. After centrifugation (3,000 × g, 28°C), the supernatant was collected and absorbance was recorded at 590 nm. One unit of enzyme activity is defined as the quantity of enzyme required to generate 1 μmol of α- or β-naphthol per minute.
Glutathione-S-transferase activity
Glutathione S-transferase assays were carried out following the protocol of Kao et al. (1989), with reduced glutathione (GSH) as the substrate together with 1-chloro-2,4-dinitrobenzene (CDNB). Fourth-instar larvae were homogenized in 250 μl of sodium phosphate buffer (50 mM, pH 7.2) and centrifuged at 15,000 × g at 4°C for 20 min. The conjugation of the thiol group of glutathione with the CDNB substrate was estimated using the Sigma-Aldrich GST assay kit (Catalog 0410, Bangalore). Each well was loaded with 20 μl of the homogenate, along with 200 μl of Dulbecco's phosphate buffer (Sigma-Aldrich, Bangalore, IN), reduced glutathione (4 mM), and CDNB (2 mM). GST activity was expressed as μmol of substrate conjugated/mg protein/min.
Cytochrome P450 activity
Determination of Cyt P450 activity was carried out by peroxidation of TMBZ following the protocol of Brogdon et al. (1989) with minimal alteration. To 50 μl of microfuge supernatant, 250 μl of 0.05 M potassium phosphate buffer (pH 7.0) and 500 μl of TMBZ solution (0.05% 3,3′,5,5′-tetramethylbenzidine, i.e., TMBZ, in 5 ml methanol plus 15 ml of 0.25 M sodium acetate buffer, pH 5.0) were added; the mixture was then supplemented with 200 μl of 3% hydrogen peroxide and incubated at room temperature for 30 min. The reading was taken at 630 nm absorbance, and activity was calculated by comparison with the standard curve of cytochrome C.
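The final conversion above, turning the 630-nm reading into activity by comparison with a cytochrome C standard curve, amounts to a simple linear standard-curve inversion. A minimal sketch follows; all numerical values are hypothetical placeholders, not the study's calibration data.

```python
# Sketch: converting a sample A630 reading into cytochrome P450
# "cytochrome c equivalents" via a linear standard curve, as in the
# TMBZ peroxidation assay described above. Values are hypothetical.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # nmol cytochrome c
std_abs  = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # A630 readings

# Fit the inverted curve (concentration as a function of absorbance)
slope, intercept = np.polyfit(std_abs, std_conc, 1)

sample_a630 = 0.33                                   # hypothetical sample
equiv = slope * sample_a630 + intercept
print(f"~{equiv:.2f} nmol cytochrome c equivalents")
```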
Extraction of total RNA and synthesis of cDNA
Total RNA of S. litura larvae was extracted with TRIzol (Invitrogen). Synthesis of cDNA was performed by reverse transcription at 42°C for 1 h in a 20-µl reaction containing 1 µl of total RNA, 0.6 µl of forward primer (10 pmol), 0.6 µl of reverse primer (10 pmol), 20 units of RNase inhibitor, 1 µl of dNTP mixture (10 mM each), and 1 µl of oligo(dT)18 primer (50 µM). The concentration of RNA was quantified from the absorbance at 260-280 nm. The quality was assessed by agarose gel electrophoresis, and staining was performed with ethidium bromide (EB).
Analysis of real-time RT-PCR
After the amount and quality of RNA had been estimated by agarose gel electrophoresis, reverse transcription of 1 µg of total RNA to cDNA was conducted with a PrimeScript® RT Reagent Kit with gDNA Eraser (Perfect Real Time) (TaKaRa, Japan) according to the manufacturer's protocol. For the RT-PCR study, primers (Table 1) were designed using Primer 3 software (Applied Biosystems). SYBR Green qPCR was carried out in 0.2-ml PCR 8-tube strips with flat 8-cap strips (Axygen, USA), employing an iQ5 real-time PCR detection system (Bio-Rad, United States). The 20-µl PCR reaction consisted of 2 µl of cDNA template, 10 µl of 2× SYBR® Premix Ex Taq™ II (Perfect Real Time) (TaKaRa, Japan), 0.8 µl of each primer (10 pmol/µl), and 6.4 µl of ddH2O. The RT-PCR program used the melting curve dissociation methodology (from 55°C to 95°C) and the following thermal conditions: initial denaturation at 95°C for 30 s, followed by 40 cycles of 95°C for 5 s and 60°C for 30 s. A melting curve analysis was then performed to assess the specificity and consistency of the PCR products. The expression levels of target genes were calculated with the 2^(−ΔΔCT) method and normalized to the internal housekeeping gene β-actin.
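The 2^(−ΔΔCT) calculation named at the end of this section reduces to a few lines of arithmetic. A minimal sketch with hypothetical Ct values (the fold-change value is illustrative only):

```python
# Sketch: relative gene expression via the 2^(-ddCt) method used above,
# normalizing a target P450 gene to beta-actin and to the control group.
# All Ct values are hypothetical placeholders.
ct_target_treated, ct_actin_treated = 21.4, 17.0
ct_target_control, ct_actin_control = 23.1, 17.1

d_ct_treated = ct_target_treated - ct_actin_treated  # dCt = Ct(target) - Ct(actin)
d_ct_control = ct_target_control - ct_actin_control
dd_ct = d_ct_treated - d_ct_control                  # ddCt = dCt(treated) - dCt(control)

fold_change = 2 ** (-dd_ct)
print(f"fold change ~ {fold_change:.2f}")            # ~3.0 for these inputs
```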
Statistical analysis
Data are represented as mean ± SD. Pairs of results were compared using Student's t-test. One-way ANOVA followed by the Tukey test was applied to determine significant differences (p < 0.05) among the different groups. Statistical analysis was carried out with SPSS 11.5 software.
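As a rough illustration of this pipeline (Student's t-test for pairwise comparisons; one-way ANOVA with Tukey's test across groups), here is a Python sketch using SciPy and statsmodels in place of SPSS; the library choice is mine, and the group values are hypothetical.

```python
# Sketch: pairwise t-test, one-way ANOVA, and Tukey's HSD across the
# four treatment groups named in this study. Data are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control  = np.array([0.81, 0.78, 0.83, 0.80, 0.79])
prec1    = np.array([1.52, 1.48, 1.55, 1.50, 1.47])
cyh      = np.array([1.12, 1.08, 1.11, 1.09, 1.13])
prec_cyh = np.array([1.96, 1.93, 1.98, 1.94, 1.92])

t, p = stats.ttest_ind(prec_cyh, control)              # pairwise comparison
F, p_anova = stats.f_oneway(control, prec1, cyh, prec_cyh)

values = np.concatenate([control, prec1, cyh, prec_cyh])
groups = (["control"] * 5 + ["Prec1"] * 5
          + ["lambda-cyh"] * 5 + ["Prec1+cyh"] * 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # post hoc letters
```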
Results
λ-cyhalothrin toxicity to S. litura larvae and synergist activity
The impact of Precocene 1 and its synergistic effect with PBO on the tolerance and sensitivity of S. litura to λ-cyhalothrin is presented in Table 2. The LC50 value for S. litura larvae treated with PBO was 91.89 mg/L. With Precocene 1 alone, the LC50 in S. litura larvae was found to be lower (78.05 mg a.i./L), and the synergistic effect of PBO together with Precocene 1 produced a considerable further decrease (61.06 mg a.i./L).
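The synergism quoted here can be summarized by a synergistic ratio, conventionally the LC50 without the synergist divided by the LC50 with it. The sketch below uses the LC50 values cited above; which specific ratio the study reports is an assumption here.

```python
# Sketch: synergistic ratios from the LC50 values quoted in the text.
lc50_pbo       = 91.89   # mg/L, with PBO
lc50_prec1     = 78.05   # mg a.i./L, with Precocene 1 alone
lc50_pbo_prec1 = 61.06   # mg a.i./L, with PBO + Precocene 1

print(f"SR (PBO vs PBO+Prec1):   {lc50_pbo / lc50_pbo_prec1:.2f}")
print(f"SR (Prec1 vs PBO+Prec1): {lc50_prec1 / lc50_pbo_prec1:.2f}")
```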
The influence of diet with Precocene 1 on detoxification enzymes in S. litura larvae
The activity of the α-esterase enzyme in the control, with Precocene 1 only, with λ-cyhalothrin, and with the synergistic combination of Precocene 1 and λ-cyhalothrin in S. litura larvae at 24 and 48 h after exposure is reported in Figure 1. At 24 h of exposure, enzyme activity in S. litura followed the order Precocene 1 + λ-cyhalothrin > Precocene 1 > λ-cyhalothrin > control, with values of 1.95, 1.5, 1.1, and 0.8, respectively. At 48 h of exposure, the most pronounced activity was observed for Precocene 1 + λ-cyhalothrin (2.7), followed by λ-cyhalothrin, Precocene 1, and the control (2.4, 1.9, and 1.2).
The β-esterase enzyme activity in S. litura larvae at 24 and 48 h of exposure in the control, Precocene 1, λ-cyhalothrin, and combined Precocene 1 + λ-cyhalothrin treatments is shown in Figure 2. A trend similar to that of the α-esterase activity was observed, with values of 2.01, 1.7, 1.4, and 1.15 at 24 h of exposure. Likewise, the enzyme activity at 48 h of exposure followed the same order as the α-esterase activity at 48 h, with values of 2.89, 2.6, 2.10, and 1.33.
Regarding GST enzyme activity, at both 24 and 48 h of exposure the synergistic Precocene 1 + λ-cyhalothrin treatment produced the greatest enzyme activity in S. litura larvae, namely 1.89 and 2.11, respectively. Enzyme activity then followed the order Precocene 1, λ-cyhalothrin, and control, with values of 1.29, 1.13, and 0.98 at 24 h of exposure, and a similar pattern of enzyme activity was seen in S. litura larvae at 48 h of exposure (Figure 3). The cytochrome P450 enzyme activity of S. litura in the control, Precocene 1, λ-cyhalothrin, and synergistic Precocene 1 + λ-cyhalothrin treatments at 24 and 48 h of exposure is presented in Figure 4. (Table 2 footnote: LC50 = lethal concentration required to kill 50% of the population; a.i. = active ingredient; CL = confidence limit; SE = standard error; RR = resistance ratio; χ² = chi-square value.)
FIGURE 1
Effects of Precocene 1 on Spodoptera litura larvae tolerance to λ-cyhalothrin in α-esterase activity after 24 and 48 h. Data in the figure are means ± SE. Different letters above bars indicate significant differences (p < 0.05) according to the Tukey HSD test.
FIGURE 2
Effects of Precocene 1 on Spodoptera litura larvae tolerance to λ-cyhalothrin in β-esterase activity after 24 and 48 h. Data in the figure are means ± SE. Different letters above bars indicate significant differences (p < 0.05) according to the Tukey HSD test.
FIGURE 3
Effects of Precocene 1 on Spodoptera litura larvae tolerance to λ-cyhalothrin in glutathione-S-transferase activity after 24 and 48 h. Data in the figure are means ± SE. Different letters above bars indicate significant differences (p < 0.05) according to the Tukey HSD test.
FIGURE 4
Effects of Precocene 1 on Spodoptera litura larvae tolerance to λ-cyhalothrin in cytochrome P450 activity after 24 and 48 h. Data in the figure are means ± SE. Different letters above bars indicate significant differences (p < 0.05) according to the Tukey HSD test.
The expressions of the genes CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10 of S. litura at 24 h of exposure in the control, Precocene 1, λ-cyhalothrin, and Precocene 1 + λ-cyhalothrin groups are reported in Figure 5. Regarding CYP4M16, gene expression in the larvae was found to increase across these groups (1.01-, 2.22-, 2.68-, and 2.97-fold, respectively). A similar increasing pattern was noticed for CYP4L10, with values of 1.01-, 1.77-, 1.89-, and 1.98-fold. Gene expression also increased relative to the control in all other families, although the pattern varied among the three treatment groups. Among the five families studied, CYP4M16 showed the highest gene expression, due to the combined effect of Precocene 1 with λ-cyhalothrin at 24 h of exposure.
The CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10 expressions of S. litura after an exposure period of 48 h in the control, Precocene 1, λ-cyhalothrin, and Precocene 1 + λ-cyhalothrin groups are reported in Figure 6. A high level of gene expression was found in CYP4M16 with Precocene 1 + λ-cyhalothrin (2.65-fold); the lowest level of gene expression (1.20-fold) was observed for CYP4G31 in Precocene 1-treated S. litura; and no variation in the gene expression pattern was noticed in the control.
Discussion
Plants produce a variety of secondary metabolites, or allelochemicals, which play a defensive role against herbivores and pathogens. Furthermore, the natural predators of herbivores are attracted by these allelochemicals (Takabayashi et al., 1991; War et al., 2011). Meanwhile, such feeding behavior paves the way for the persistent development of resistance against pesticides in the agricultural field (Li et al., 2007; Zhu et al., 2016). The success of phytophagous insects is affected by the way they modulate their defensive strategies against the changing biotic stress of the various kinds of secondary metabolites present in plants. They do this by using detoxification enzymes to detoxify or eliminate harmful components (Hafeez et al., 2018). Cytochrome P450 monooxygenases (P450s), esterases, and glutathione S-transferases (GST) are the prime detoxification enzymes for disarming insecticides and phytotoxins (Scott and Wen, 2001; Feyereisen, 2005; Li et al., 2007; Schuler, 2012; Liu et al., 2013).
FIGURE 5
Effect of Precocene 1 on Spodoptera litura larvae tolerance to λ-cyhalothrin and relative expression levels of cytochrome P450 genes after 24 h. The transcription levels of the five cytochrome P450 genes are determined by quantitative real-time PCR, normalized to the reference gene. Each bar indicates the mean of transcription levels (±SE), each being replicated. Different letters above bars indicate significant differences (p < 0.05) according to the Tukey HSD test.
The present study investigated the effect of a diet incorporating Precocene 1 on the tolerance of S. litura larvae in response to λ-cyhalothrin. Furthermore, the impact of Precocene 1 on the activity of P450, esterases, and glutathione S-transferases, and the relative gene expression levels of cytochrome genes (CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10), were also assessed. Concerning the synergistic ratio, findings on the individual effect of Precocene 1 in an enhanced state, as opposed to the synergistic and lone effect of PBO, deviate from the results of Chen et al. (2018), while at the same time they coincide well with the findings of Hafeez et al. (2020). Adoption of the molecular strategy of upregulating P450 genes may be the mechanism behind the resistance (Elzaki et al., 2015). The enhanced α- and β-esterase activity at 24 and 48 h of exposure to the various treatments in the present study supports the results of other researchers (Mukherjee, 2003; Usha Rani and Pratyusha, 2013; Karthi and Shivakumar, 2016). Generally, allelochemicals, be they secondary metabolites of the plant or synthetic pesticides, exert toxicity when the pest is exposed to them, and in response the pest counteracts this toxicity; the enhanced detoxifying enzyme activity exhibited by the pest may explain the results of the present study. The increased levels of the detoxifying enzyme GST found in this study are in accordance with the reports of Dhivya et al. (2018) and Manjula et al. (2020), who identified the crucial role of the multifunctional enzyme GST in metabolizing toxic plant allelochemicals, namely hexane extract of Prosopis juliflora and petroleum benzene leaf extract of Manihot esculenta, in S. litura. In the present study, the diet incorporating Precocene 1 led to an increased level of tolerance to the insecticide λ-cyhalothrin in S. litura. A similar kind of resistance to λ-cyhalothrin was noticed in S. exigua (Hafeez et al., 2020) and H. armigera (Chen et al., 2018) due to quercetin ingestion, and in H. zea for α-cypermethrin when exposed to xanthotoxin (Li et al., 2000). Likewise, tolerance to deltamethrin was observed in S. exigua fed with gossypol (Hafeez et al., 2018). The enhanced action of the P450 enzyme from 24 to 48 h, noticed in Precocene 1, λ-cyhalothrin, and Precocene 1 + λ-cyhalothrin fed to larvae of S. litura, is corroborated by the results of Chen et al. (2018) and Tao et al. (2012) concerning H. armigera fed with gossypol on exposure to pyrethroid, and quercetin to λ-cyhalothrin, respectively. In the RT-PCR results at 24 and 48 h, the transcriptional levels of the CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10 enzymes of S. litura increased markedly in all treatments other than the control, and such activity resulted in the enhancement of P450 gene expression. The results of the present study accord with those of Li X. C. et al. (2004), Liu et al. (2006), and Rupasinghe et al. (2007), across a broad spectrum of compounds such as xanthotoxin, quercetin, and rutin, as well as the synthetic insecticides cypermethrin, diazinon, and aldrin.
FIGURE 6
Effect of Precocene 1 on Spodoptera litura larvae tolerance to λ-cyhalothrin and relative expression levels of cytochrome P450 genes after 48 h. The transcription levels of the five cytochrome P450 genes are determined by quantitative real-time PCR, normalized to the reference gene. Each bar indicates the mean of transcription levels (±SE), each being replicated. Different letters above bars indicate significant differences (p < 0.05) according to the Tukey HSD test.
Conclusion
Active compounds isolated from plants can elevate sensitivity to insecticides by enhancing the activity of detoxification enzymes in agricultural insect pests. Furthermore, the cytochrome P450 enzyme system certainly plays a vital role in how insects resist plants' chemical defense mechanisms. The current investigation assessed the impact of Precocene 1 alone, of PBO, and of their combined effect on the synergistic response of S. litura; the role of plant allelochemicals in the activity of the detoxification enzymes, viz. esterase, GST, and cytochrome P450; and the gene expression levels of CYP4M16, CYP4M15, CYP4S8V4, CYP4G31, and CYP4L10 in response to λ-cyhalothrin. It is evident from the results that S. litura showed different degrees of resistance, and after Precocene 1 treatment, in particular, the P450 gene showed a low level of expression, as in the control. Further research is needed to discover the candidate genes that respond specifically to natural plant toxins, and to assess the impact of insecticides against such pests, in order to improve pest management strategies.
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Author contributions
NS-S: study conception and design, data collection, data analysis, interpretation of results, and drafted manuscript preparation. RR: study conception and design, data collection, data analysis, interpretation of results, and drafted manuscript preparation. SK: study conception and design, data collection, data analysis, interpretation of results, and drafted manuscript preparation. SS-N: supervision, data analysis, interpretation of results, and drafted manuscript preparation. KM-PC: data collection, analysis, interpretation of results, and drafted manuscript preparation. HS: data analysis, interpretation of results, and drafted manuscript preparation. VS-R: data analysis, interpretation of results, and drafted manuscript preparation. GR: data analysis, interpretation of results, and drafted manuscript preparation. KN: data analysis, interpretation of results, and drafted manuscript preparation. SM: data analysis, interpretation of results, and drafted manuscript preparation. KA-G: data analysis, interpretation of results, and drafted manuscript preparation. AA-M: data analysis, interpretation of results, and drafted manuscript preparation. PK: data analysis, interpretation of results, and drafted manuscript preparation. All authors reviewed the results and approved the final version of the manuscript. | 2022-11-10T15:18:24.841Z | 2022-11-10T00:00:00.000 | {
"year": 2022,
"sha1": "d98f1023552e11c7b8f53a25e14ce660c0f31025",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d98f1023552e11c7b8f53a25e14ce660c0f31025",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259075558 | pes2o/s2orc | v3-fos-license | Dynamically-assisted nonlinear Breit-Wheeler pair production in bichromatic laser fields of circular polarization
Production of electron-positron pairs by a high-energy $\gamma$ photon and a bichromatic laser wave is considered where the latter is composed of a strong low-frequency and a weak high-frequency component, both with circular polarization. An integral expression for the production rate is derived that accounts for the strong laser mode to all orders and for the weak laser mode to first order. The structure of this formula resembles the well-known expression for the nonlinear Breit-Wheeler process in a strong laser field, but includes the dynamical assistance from the weak laser mode. We analyze the dependence of the dynamical rate enhancement on the applied field parameters and show, in particular, that it is substantially higher when the two laser modes have opposite helicity.
I. INTRODUCTION
Electron-positron pair production from vacuum by a constant electric field is a genuinely nonperturbative process that was first studied by Sauter in the early days of relativistic quantum mechanics [1]. Later on, Schwinger treated the process within the framework of quantum electrodynamics (QED) and established its famous rate R ∼ exp(−πE_c/E_0) that has been named after him [2]. It contains the critical field of QED, E_c = m²c³/(eℏ) ≈ 1.3 × 10¹⁶ V/cm, and exhibits a non-analytical, manifestly non-perturbative dependence on the applied field strength E_0. Here, m and e denote the positron mass and charge, respectively. Intuitively, the exponential field dependence indicates a quantum mechanical tunneling from negative-energy to positive-energy states.
Pair production rates in various other strong-field configurations share the characteristic Schwinger-like form (see [3][4][5][6] for reviews).For example, pair production in homogeneous electric fields oscillating in time [7,8] and pair production in combined laser and Coulomb fields via the nonlinear Bethe-Heitler effect [9][10][11] show exponential dependencies on the inverse field strength, as well, provided they occur in a quasi-static regime where the pair formation time is much shorter than the scale of field variations (and the applied fields are sub-critical).
Because of the huge value of E_c, Schwinger pair production and its Schwinger-like variants have not been observed experimentally yet [12]. However, motivated by the enormous progress in high-power laser technology, several high-field laboratories are currently aiming at the detection of the fully nonperturbative regime of pair production. They focus, in particular, on the nonlinear Breit-Wheeler process where pairs are created by a high-energy photon colliding with a high-intensity laser wave [13-21], according to ω′ + nω → e⁺e⁻, with the number of absorbed laser photons n ≫ 1. The corresponding rate in the quasi-static regime (ξ ≫ 1, χ ≪ 1) has the Schwinger-like form R ∼ exp[−8/(3χ)], where χ = 2ξℏ²ωω′/(m²c⁴) (assuming counterpropagating beams) denotes the quantum nonlinearity parameter and ξ = eE_0/(mcω) is the classical laser intensity parameter. The experimental realization of this regime still represents a formidable challenge [23-27]; it would complement the successful observation of nonlinear Breit-Wheeler pair creation in a few-photon regime (n ∼ 5, ξ ≲ 1) at SLAC in the 1990s [22].
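For orientation, the two parameters governing this Schwinger-like rate are easy to evaluate numerically. The sketch below works in natural units (ℏ = c = 1, energies in units of the electron mass m) and, for illustration, borrows the field parameters adopted later in Sec. III; the form χ = 2ξωω′/m² for counterpropagating beams follows the expression quoted above.

```python
# Sketch: quantum nonlinearity parameter chi and the Schwinger-like
# rate exponent, in natural units (hbar = c = 1, m = 1).
m = 1.0                   # electron mass
omega   = 0.05 * m        # main-mode laser photon energy (Sec. III value)
omega_p = 0.706 * m       # gamma photon energy (Sec. III value)
xi = 1.0                  # classical intensity parameter xi = e*E0/(m*c*omega)

chi = 2.0 * xi * omega * omega_p / m**2       # counterpropagating beams
rate_exponent = -8.0 / (3.0 * chi)            # R ~ exp(-8/(3*chi)), chi << 1
print(f"chi = {chi:.3f}, rate ~ exp({rate_exponent:.1f})")
```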
To facilitate the observation of Schwinger-like pair production, a mechanism termed dynamical assistance has been proposed theoretically [28]. It relies on the superposition of a very weak, but highly oscillating assisting field onto a strong (quasi)static background. Energy absorption from the assisting field can largely enhance the pair creation rate, while preserving its nonperturbative character. Dynamically assisted pair production has been studied for various field configurations, comprising the combination of static and alternating electric fields [28-32], static electric and plane-wave photonic fields [33-35], as well as two oscillating electric fields with largely different frequencies [36,37], including spatial field inhomogeneities [38,39]. Moreover, only a few studies have revealed the impact of dynamical assistance in the nonlinear Bethe-Heitler [40,41] and Breit-Wheeler [42] processes.
In the present paper, we study nonperturbative Breit-Wheeler pair creation with dynamical assistance. To this end, the laser field is composed of a strong low-frequency and a weak high-frequency component. By considering both field components to be circularly polarized and taking the fermion spins into account, we complement and extend the earlier study [42], where dynamically assisted nonlinear Breit-Wheeler pair creation of scalar particles has been considered in two mutually orthogonal laser field modes of linear polarization. An integral representation for the production rate will be derived within the framework of strong-field QED that includes the weak laser mode to leading order and allows one to describe the absorption of one high-frequency photon from this mode during the pair production process. We will show that the latter can lead to a very strong dynamical rate enhancement and discuss its dependencies on the applied field parameters. In particular, it will be demonstrated that the circular field polarization offers an interesting additional setting option, because the rate enhancement is found to be substantially larger when the two laser modes are counter- rather than co-rotating.
It is worth mentioning that, apart from rate enhancements by dynamical assistance, nonlinear Breit-Wheeler pair production in bichromatic laser fields comprises further interesting effects. In the case when both field modes have commensurate frequencies, characteristic quantum interference effects arise [43-45], whereas multiphoton threshold effects were presented for incommensurate frequencies of similar magnitude [46]. A bichromatic field configuration may, moreover, allow for additional pair production channels involving photon emission processes [47]. And for the so-called laser-assisted Breit-Wheeler process, i.e., pair creation in the collision of high-frequency (say, γ-ray and x-ray) photons taking place in the presence of a low-frequency background laser field, pronounced redistribution effects in the created particles' phase space have been revealed [48].
Our paper is organized as follows. In Sec. II we present our analytical approach to the problem, which is based on the S matrix in the Furry picture employing Dirac-Volkov states for the fermions. An expression for the rate of Breit-Wheeler pair production by absorption of an arbitrary number of photons from the strong laser mode and a single photon from the weak laser mode is derived. The physical content of this expression is discussed in Sec. III, where we illustrate the rate enhancement by dynamical assistance in the nonlinear Breit-Wheeler effect by way of numerical examples. Our conclusions are given in Sec. IV. Relativistic units with ℏ = c = 4πε₀ = 1 are used throughout, unless explicitly stated otherwise. Products of four-vectors are denoted as (ab) = a_µ b^µ = a⁰b⁰ − a·b, and Feynman slash notation is applied.
II. THEORETICAL APPROACH
In this section we present our analytical treatment of dynamically assisted Breit-Wheeler pair production in a bichromatic laser field. The latter is described by the four-potential in the radiation gauge and depends on space-time coordinates x^µ = (t, r) via the phase variable τ = (κx) = t − κ·r, where κ^µ = (1, κ) describes the uniform wave propagation direction along a unit vector κ. The field is composed of two circularly polarized frequency modes that will be denoted as main mode and assisting mode, respectively, with corresponding frequencies ω and ω̃ and wave vectors k^µ = ωκ^µ and k̃^µ = ω̃κ^µ. The phases accordingly read η = (kx) = ωτ and η̃ = (k̃x) = ω̃τ, whereas η̃α denotes a constant phase shift between the modes. The polarization vectors satisfy (κε_i) = 0, (ε_iε_j) = −δ_ij for i, j ∈ {1, 2}. The helicity of the assisting mode is encoded in the parameter σ: the modes are co-rotating for σ = +1 and counter-rotating for σ = −1. The intensity parameters associated with their amplitudes are ξ = ea/m and ξ̃ = eã/m.
A. Pair production amplitude
The S matrix element for nonlinear Breit-Wheeler pair production by a high-energy photon of wave vector k′^µ = (ω′, k′) and polarization ε′^µ in the bichromatic laser field (1) reads with a normalization volume V. Here, Ψ^(−)_{p′,s′} and Ψ^(+)_{p,s} denote the Volkov states for the created electron, with asymptotic four-momentum p′^µ and spin projection s′, and the created positron, with asymptotic four-momentum p^µ and spin projection s, respectively. They are given by [15] Observe that the normalization constant of the Volkov states is chosen with respect to the effective momentum involving the total intensity parameter ξ_L = √(ξ² + ξ̃²). With these details in mind, the S matrix becomes and the oscillating phase In the latter, we used z = ea√(−Q_L²)/(kq_L) and z̃ = eã√(−Q̃_L²)/(k̃q_L), with the angle η₀ being determined by By virtue of the Volkov states (4), the S matrix (3) contains both modes of the bichromatic laser field to all orders. For our purposes, however, it is possible to simplify this general expression. Being interested in the Breit-Wheeler process with dynamical assistance, we shall assume from now on that ξ̃ ≪ 1 ≲ ξ and ω̃ ≫ ω. We may therefore expand the S matrix in powers of the assisting mode amplitude according to [49] where S^(j)_fi ∼ ã^j. The zeroth order S^(0)_fi is obtained by setting ã = 0 in Eq. (6); it coincides with the well-known expression for the nonlinear Breit-Wheeler process in a monochromatic circularly polarized laser wave [13-15] The effective momenta q^µ and q′^µ result from Eq. (5) by setting ã = 0 therein, i.e., q^(′)µ = p^(′)µ + [m²ξ²/(2(κp^(′)))] κ^µ; the normalization factor becomes accordingly. The four-dimensional δ function displays the energy-momentum conservation in the process, and the sum over the number n of photons from the main mode A^µ originates from a Fourier series expansion of the periodic parts in the S matrix, according to the formula e^{iz sin(η−η₀)} = Σ_n J_{−n}(z) e^{−in(η−η₀)}, with the ordinary Bessel functions J_n. The matrix M_n will be given in Eq. (13) below. Here and in the following, the zeroth-order contributions are displayed to facilitate a direct comparison with terms involving the dynamical assistance by the weak mode Ã^µ.
The leading order contribution S^(1)_fi is obtained by collecting the Ã^µ terms from the electronic and positronic Volkov states contained in Eq. (7), as well as the terms linear in z̃ and z̃α stemming from a Taylor expansion of the phase factor e^{iΦ_L} in Eq. (6). The resulting expression can be decomposed into two terms, S^(1,−)_fi and S^(1,+)_fi, corresponding to the absorption (S^(1,−)_fi) or emission (S^(1,+)_fi) of one photon k̃ from the assisting mode. Our approach is illustrated diagrammatically in Fig. 1.
The matrices M_n, M^±_n, and M̃^±_n in Eqs. (11) and (12) can be expressed as in Eq. (13), with coefficients that partly coincide with those in the matrix M_n of the ordinary nonlinear Breit-Wheeler process [13-15]. The remaining coefficients in the matrices M^±_n and M̃^±_n are associated with the first-order contribution S^(1)_fi. We note that the coefficients B̃^±, C̃^±, and D̃^± marked by a tilde vanish in the limit ã → 0. The coefficients B^±, C^±, and D^± do not vanish themselves in this limit, but are multiplied by a factor ã in the matrix M̃^±_n [see Eq. (13)].
All terms in S^(1,±)_fi scale linearly with ã. The corresponding contributions to the pair production rate [see Eq. (22) below] will therefore contain an additional factor ξ̃² as compared to the ordinary nonlinear Breit-Wheeler process. In case of S^(1,−)_fi, the reduction by the factor ξ̃² ≪ 1 is, however, counteracted by the modified energy-momentum balance: due to the absorption of one assisting photon k̃, the remaining barrier, which has to be overcome by additional photon absorption from the main mode, is lowered, which facilitates the pair production.
For suitably chosen field parameters, the latter effect can overcompensate the reduction from the ξ̃² scaling, this way leading to enhanced pair production. This is the physical origin of the rate enhancement by dynamical assistance. Conversely, the other first-order term S^(1,+)_fi, involving the emission of a photon k̃ into the assisting mode, corresponds to an even higher barrier that has to be overcome by photon absorption from the main laser mode. It will give a smaller contribution to the rate than the assistance-free term S^(0)_fi and does not play a role for the enhancement effect that we aim for.
B. Pair production rate
From the S matrix we obtain the production rate per incident γ photon by taking the absolute square, summing over the produced particle spins, integrating over their momenta, averaging over the γ-photon polarizations, and dividing out the interaction time: The production rate R accordingly refers to a scenario where the incident beam of γ photons is unpolarized.
In general, the absolute square of S_fi from Eq. (10) contains, apart from diagonal terms, also cross terms that describe interferences between different contributions. In particular, the cross term of S^(0)_fi with S^(1)_fi leads to rate contributions linear in ã. These terms would exist and could cause interesting effects if the ratio of ω̃ and ω were an integer (e.g., ω̃ = 2ω). Such two-color quantum interference effects have already been studied elsewhere [43-45]; they are not of interest in the current consideration. To be specific, we shall assume in the following that the frequency ratio ω̃/ω is not an integer. Then the cross terms between S^(0)_fi and S^(1)_fi vanish identically because the associated δ functions in Eqs. (11) and (12) cannot be satisfied simultaneously. By requiring more strictly that 2ω̃/ω is not an integer either, we can moreover ensure that interferences between S^(0)_fi and S^(2)_fi drop out. Under this assumption, the squared S matrix becomes with the terms S^(2)_fi associated with the simultaneous absorption and emission of an assisting photon k̃ [50]. Leading to the same energy-momentum balance as in Eq. (11), these emission-absorption processes are strongly suppressed as compared with S^(0)_fi, since they scale with ξ̃² ≪ 1 but do not lower the pair production barrier to be overcome by photon absorption from the main mode. They can therefore be safely neglected. In contrast, the dynamical assistance described by S^(1,−)_fi can lead to a strong rate enhancement, as will be demonstrated by numerical examples in Sec. III. The spin summation and polarization average in Eq. (15) can be carried out in the usual way by taking traces over the involved Dirac γ matrices. The result for the ordinary nonlinear Breit-Wheeler process is whereas for the leading order in ã terms we obtain By performing afterwards the integrations over the particle momenta q and q′, we obtain the corresponding contributions to the pair production rate. The well-established zeroth-order contribution reads [13-15] where correction terms of order ã² from combined emission-absorption photon exchange processes with the assisting mode that do not change the four-momentum balance have been neglected. Here, α = e² is the fine-structure constant and n₀ = 4m*²/s the photon number threshold, with the effective fermion mass m* = m√(1 + ξ²) dressed by the main laser mode and the Mandelstam variable s = 2(kk′). Besides, the upper integration limit is uₙ = n/n₀, and the Bessel functions J_ν = J_ν(z) depend on the argument z = (8m²/s) ξ √((1 + ξ²) u(uₙ − u)).
For the leading-order contribution with respect to ã, which involves the absorption or emission of one photon k̃ from the assisting mode, we find with and the Bessel functions J_ν = J_ν(z±) depending on the argument z± = (8m²/s) ξ √((1 + ξ²) u(u_{ñ±} − u)). Equation (22) constitutes the main result of our paper. While being somewhat more involved, its general structure, containing an integral over the variable u, which is related to the polar emission angle of the created particles, and a sum over the number of photons absorbed from the strong main laser mode, closely resembles the rate expression (21) for the ordinary nonlinear Breit-Wheeler process. The important difference is that our formula for R^(1,−) (or R^(1,+)) accounts for the absorption (or emission [47]) of an additional photon from the assisting weak laser mode. Accordingly, R^(1,−) describes nonlinear Breit-Wheeler pair production in circularly polarized laser fields proceeding via the absorption of many low-frequency photons from the main field mode and a single high-frequency photon from the assisting mode, which may have either equal or opposite helicity as the main mode. The dynamical assistance provided by the weak high-frequency mode can largely enhance the pair production rate R^(1,−) as compared with R^(0). Before moving on to the next section we note that, in the limit when the main laser mode vanishes (ξ → 0) while the assisting laser mode has low amplitude (ξ̃ ≪ 1) and sufficiently high frequency (such that s̃ > 4m²), the expression for R^(1,−) reproduces the rate for the original Breit-Wheeler process [51] of pair production by two photons (see also Fig. 1).
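To make the kinematic content of these rate formulas concrete, the sketch below evaluates the dressed mass, Mandelstam variable, photon-number threshold, and Bessel argument defined above; the head-on geometry (s = 4ωω′) and natural units are assumptions consistent with the counterpropagating setup of Sec. III.

```python
# Sketch: kinematic quantities entering Eq. (21), natural units (m = 1).
import numpy as np

m, xi = 1.0, 1.0
omega, omega_p = 0.05, 0.706          # main-mode / gamma frequencies (units of m)

m_star = m * np.sqrt(1.0 + xi**2)     # laser-dressed fermion mass
s = 4.0 * omega * omega_p             # s = 2(k k') = 4*omega*omega' head-on
n0 = 4.0 * m_star**2 / s              # threshold photon number
print(f"n0 = {n0:.1f}")               # ~56.7, the value quoted in Sec. III

def bessel_arg(u, n):
    """Bessel argument z(u) for the n-photon channel, 0 <= u <= u_n = n/n0."""
    u_n = n / n0
    return (8.0 * m**2 / s) * xi * np.sqrt((1.0 + xi**2) * u * (u_n - u))

print(f"z(u_n/2, n=60) = {bessel_arg(0.5 * 60 / n0, 60):.1f}")  # ~42
```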
III. NUMERICAL RESULTS AND DISCUSSION
In this section we illustrate our findings on the dynamically assisted nonlinear Breit-Wheeler process in a bichromatic laser field by numerical examples. For reasons of computational feasibility the main mode intensity parameter is chosen to have a rather moderate value of ξ ∼ 1, while the associated frequency is ω ≈ 0.05m. The frequency of the γ-beam is taken throughout as ω′ = 0.706m. For comparison we note that the typical values in experiment are ω ∼ 1 eV and ω′ ∼ 10¹⁰ eV [22-27], yielding a product of ωω′ ∼ 0.04m². The latter value is closely met by our set of parameters, meaning that we perform our calculations in a frame of reference that is boosted with respect to the laboratory frame. The assisting mode parameters are taken as ξ̃ ∼ 10⁻³ and ω̃ ≈ ω′, describing accordingly a weak mode of high frequency. The γ-beam and bichromatic laser wave are assumed to be counterpropagating.
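As a quick cross-check of the boosted-frame statement, the following sketch compares the Lorentz-invariant product ωω′/m² for the quoted lab-frame values with the values adopted here; both come out close to 0.04.

```python
# Sketch: invariant product omega*omega'/m^2 in lab frame vs. boosted frame.
m_eV = 0.511e6                                  # electron mass in eV
omega_lab, omega_p_lab = 1.0, 1.0e10            # eV, lab-frame values quoted above
print(f"lab:   omega*omega'/m^2 = {omega_lab * omega_p_lab / m_eV**2:.3f}")

omega_calc, omega_p_calc = 0.05, 0.706          # in units of m, boosted frame
print(f"paper: omega*omega'/m^2 = {omega_calc * omega_p_calc:.3f}")
```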
In the following figures we present the rate R_DA := R^(0) + R^(1,+) + R^(1,−) from Eqs. (20)-(22) for nonlinear Breit-Wheeler pair creation in a bichromatic laser field, including the effect of dynamical assistance. Two variants of this rate exist, depending on whether the two laser modes have equal or opposite helicities. In the figures, the corresponding rates are denoted suggestively as R_DA(σ), with σ = +1 and σ = −1, respectively. Comparing them allows us to reveal the influence of the wave helicities on the exerted dynamical assistance.
The rates R_DA(σ) in a bichromatic laser field will moreover be compared with the corresponding 'monochromatic rates' when only one of the two laser modes is present, in order to quantify the enhancement effect. This is, first of all, the rate R^(0) for the ordinary nonlinear Breit-Wheeler process where the assisting mode is absent (ξ̃ = 0); it is denoted as R_BW in the figures. Besides, the rate for nonlinear Breit-Wheeler pair creation by the γ-beam and the assisting mode alone, when the strong field is switched off (ξ = 0), forms a second reference, denoted by R̃_BW. We note that for the chosen parameters, at least three ω̃-photons need to be absorbed in the latter scenario to overcome the pair creation threshold.
A. Enhanced pair creation by dynamical assistance
Figure 2 shows the contributions to the pair creation rate stemming from the absorption of n laser photons from the main mode. Already here a pronounced rate enhancement through the dynamical assistance by the weak laser mode becomes apparent, as the contributions to R_DA(σ) are much larger than those to R_BW. They are, besides, shifted to smaller photon numbers because a part of the four-momentum required for pair creation already comes from the absorption of the high-frequency photon from the weak mode. Note that the photon number distributions in Fig. 2 are shifted to values substantially higher than the photon number thresholds of n₀ ≈ 56.7 and n₀⁻ ≈ 42.5 [see below Eqs. (21) and (22)], respectively, which is a characteristic above-threshold feature of pair creation at ξ ≳ 1. Furthermore, a comparison of R_DA(σ = +1) and R_DA(σ = −1) shows an impact of the mode helicities: the number distribution for counter-rotating laser modes reaches larger maximum values and is slightly shifted to the left.
By summing the rate contributions over the number of absorbed strong-field photons, the corresponding total pair creation rates are obtained. They are shown in Fig. 3 as a function of the inverse value of the main mode intensity parameter. In the chosen logarithmic representation, the dynamically assisted rates R_DA(σ = ±1) and the unassisted rate R_BW follow to a very good approximation declining straight lines, illustrating their Schwinger-like exponential dependence ∼ exp(−b/ξ), with a process-specific parameter b. In addition, the rate R̃_BW for pair creation by the γ-beam and assisting mode is included for reference along a horizontal line.
FIG. 3: Total rates for dynamically assisted Breit-Wheeler pair creation in a bichromatic laser wave, as a function of the inverse intensity parameter of the main mode; blue squares (green diamonds) refer to co-rotating (counter-rotating) laser modes. The parameters are the same as in Fig. 2. The unassisted case (ξ̃ = 0) is displayed by red crosses. Blue circles show the rate when instead the main mode is absent (ξ = 0).
In the chosen range of parameters, the rates for dynamically assisted nonlinear Breit-Wheeler pair creation lie far above the monochromatic rates. The relative enhancement is largest close to 1/ξ ≈ 1.1 (i.e., ξ ≈ 0.9) and amounts to almost five orders of magnitude. The decline of the curves for assisted pair creation is much slower than for the unassisted process: by fitting our data to a Schwinger-like exponential, we obtain b₀ ≈ 35.0 for the unassisted process [52], while b₁ ≈ 14.0 (13.3) for the assisted process with σ = +1 (σ = −1). The reduced slope arises because the absorption of the high-frequency photon from the weak mode reduces the tunneling barrier that remains to be overcome [28, 40-42].
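The slope extraction described here amounts to a linear fit of ln R versus 1/ξ. A minimal sketch with synthetic data, generated purely for illustration:

```python
# Sketch: extracting the Schwinger-like slope b from rates R ~ exp(-b/xi),
# as done for the quoted values b0 ~ 35 and b1 ~ 14. Synthetic data only.
import numpy as np

inv_xi = np.linspace(0.8, 1.25, 10)
rates = 1e-3 * np.exp(-35.0 * inv_xi)        # synthetic "unassisted" rates

slope, intercept = np.polyfit(inv_xi, np.log(rates), 1)
print(f"fitted b = {-slope:.1f}")            # recovers ~35.0
```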
Figure 3 shows moreover that the effect of dynamical assistance is stronger when the two laser modes have opposite helicity. This is remarkable because (i) the intuitive picture of dynamical assistance relies on the fact that the additional energy absorbed from the assisting field facilitates overcoming the pair creation threshold and (ii) the four-momentum of the assisting laser photon is the same for σ = ±1. Hence, the reduction of the tunneling energy-barrier by absorption of a high-frequency ω̃-photon occurs independently of its helicity.
The more pronounced enhancement effect of an assisting mode with opposite helicity can be explained by angular momentum conservation. The circularly polarized laser photons in our scenario carry definite angular momentum along the propagation axis of +ℏ or −ℏ. When the modes co-rotate, the angular momentum of each absorbed photon points in the same direction. In contrast, when the modes counter-rotate, the absorption of the assisting photon reduces the total angular momentum of all absorbed photons. Accordingly, for fixed strong-field photon number n, the angular momentum that is transferred to the created pair amounts to (n + σ)ℏ.
In a semiclassical picture, the orbital angular momentum ℓ of the electron and positron is determined by their relative momentum. In case of the unassisted Breit-Wheeler process, it amounts to ℓ ≈ 2[n₀(n − n₀)]^{1/2} ℏ [53]. This quantity is bounded from above according to ℓ ≤ nℏ. Large contributions to the production rate can be expected when ℓ approximately balances the angular momentum of the absorbed photons [53].
FIG. 4: Relative enhancement of the dynamically assisted Breit-Wheeler process in a bichromatic laser field over the sum of the respective monochromatic rates. Blue squares (green diamonds) refer to co-rotating (counter-rotating) laser modes. The parameters are the same as in Fig. 2.
In case of the assisted Breit-Wheeler process, we have to demand ℓ ≈ (n + σ)ℏ, accordingly. For counter-rotating waves, the condition ℓ = (n − 1)ℏ is very well met for n ≈ 73 in Fig. 2 and, indeed, exactly in this region of photon numbers we find the highest rate contributions. For co-rotating waves, however, the condition ℓ = (n + 1)ℏ cannot be satisfied because ℓ is at most nℏ. In this case, the angular momentum balance can only be fulfilled when both particle spins are oriented along the laser propagation direction, this way providing an extra contribution of one unit of ℏ to the total angular momentum of the pair. Such an additional constraint is absent for counter-rotating waves, so that the accessible spin space is larger in this case. As a result, the total pair production rate is higher when the modes counter-rotate [54].
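The quoted matching at n ≈ 73 can be verified by solving the angular-momentum balance 2√(n₀⁻(n − n₀⁻)) ℏ = (n − 1)ℏ numerically; a sketch, where the bracketing interval is chosen as an assumption to select the smaller of the two roots:

```python
# Sketch: semiclassical balance 2*sqrt(n0*(n - n0)) = n - 1 for the
# counter-rotating case (sigma = -1), with n0- ~ 42.5 quoted above.
from scipy.optimize import brentq

n0 = 42.5
f = lambda n: 2.0 * (n0 * (n - n0)) ** 0.5 - (n - 1.0)
n_balance = brentq(f, 50.0, 90.0)    # bracket the smaller root
print(f"n ~ {n_balance:.0f}")        # ~73, matching the Fig. 2 peak region
```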
Our semiclassical consideration also explains the horizontal shift between the number distributions for σ = −1 and σ = +1 in Fig. 2. Since in the latter case ℓ ≈ nℏ is favorable, the region of highest rate contributions moves to larger n values than in the former case. We note that, asymptotically for ξ ≫ 1, one would expect the highest rates at n ≈ 2n₀⁻ ≈ 85 for co-rotating waves [53]. The larger effectiveness of an assisting photon of opposite helicity is also seen in Fig. 4, displaying the relative enhancement due to dynamical assistance as compared with the sum of the monochromatic rates. At ξ ≈ 0.9, the pair creation rate for counter-rotating modes is almost twice as large as for co-rotating modes. The bell-shaped form of the relative enhancement curves is a consequence of the rate dependencies shown in Fig. 3. The curves reach their maximum close to the point where the monochromatic rates R_BW and R̃_BW cross. For smaller values of 1/ξ, the relative enhancement is reduced due to the different slopes of the dynamically assisted rates R_DA(σ) as compared with the unassisted rate R_BW. For larger values of 1/ξ, it is reduced as well, since the rates R_DA(σ) are falling while R̃_BW is constant.
The dashed lines in Fig. 4
B. Parameter study of relative enhancement
For the field parameters considered in the previous section, we found a relative enhancement of the dynamically assisted rates R_DA(σ) over the sum of the monochromatic rates R_BW and R̃_BW of up to about 5 × 10⁴ for σ = +1 and 9 × 10⁴ for σ = −1 (see Fig. 4). In the following we will study the dependencies of the relative enhancement due to dynamical assistance on the applied parameters.
Figure 5 shows the relative enhancement for various values of the weak mode intensity parameter in the interval 5 × 10⁻⁴ ≤ ξ̃ ≤ 5 × 10⁻³. The dashed green lines with diamond symbols refer to ξ̃ = 10⁻³, as considered in Sec. III A. One sees that the maxima of the relative enhancement curves shift to larger values of 1/ξ and increase in magnitude, the smaller ξ̃ is. These trends can be understood by taking reference to Fig. 3 and noting that the dominant contributions R^(1,−) to the dynamically assisted rates R_DA(σ) scale with ξ̃², while the monochromatic rate R̃_BW scales more strongly with ξ̃⁶; the unassisted rate R_BW remains unaltered when ξ̃ is varied. Accordingly, when the value of ξ̃ is reduced, the rate R̃_BW decreases much more strongly than the rates R_DA(σ), while R_BW stays the same (see Fig. 3). The position of the maximum relative enhancement thus shifts to the right towards larger values of 1/ξ and grows in magnitude. We note, moreover, that the ξ̃-dependence for co-rotating and counter-rotating laser modes in panels a) and b) of Fig. 5 has a very similar appearance.
In Fig. 6, the relative enhancement is displayed when the main mode frequency is varied in the range 0.04m ≤ ω ≤ 0.06m. The maxima of the curves decrease and move towards larger values of 1/ξ when ω grows. That the relative enhancement reaches higher values when ω is small can be attributed to the growing ratio ω̃/ω, so that a single high-frequency photon corresponds to an increasing number of low-frequency photons. Interestingly, the product ωξ attains an approximately constant value at all curve maxima, corresponding to an electric field strength of the main mode of E ≈ 0.045E_c. Thus, close to this field strength, the dynamical assistance is most efficient in the present scenario.
The influence of the main mode frequency is more pronounced for co-rotating laser modes [see Fig. 6 a)]. The maximum relative enhancement decreases from left to right by about 40%, reaching 6.4 × 10⁴ for ω = 0.04m and 4.0 × 10⁴ for ω = 0.06m. In contrast, the relative enhancement for counter-rotating laser modes in panel b) decreases only by about 20%, from nearly 10⁵ for ω = 0.04m to 7.8 × 10⁴ for ω = 0.06m.
Very interesting structures arise when the relative enhancement is considered under variation of the assisting mode frequency ω̃. The results are shown in Fig. 7 within the interval 0.706m ≥ ω̃ ≥ 0.354m. Note that the figure legend contains selected ω̃-values in order to not overload it. The top curve on the left corresponds to the frequency ω̃ = 0.706m that has been considered so far. When this value is lowered, the relative enhancement curves go down and slightly shift to the right until ω̃ ≈ 0.5m is reached. Their decrease is due to the fact that the reduction of the tunneling barrier is less and less pronounced when the assisting mode frequency becomes smaller.
However, when ω̃ is decreased further, the relative enhancement curves start to grow again and shift considerably to the right, until the value ω̃ = 0.472m is reached. At this frequency, the monochromatic process of nonlinear Breit-Wheeler pair creation by the γ-beam and the assisting mode alone changes its character in the sense that now at least four (rather than three) ω̃-photons are required to overcome the creation threshold. The corresponding rate R̃_BW therefore drops down considerably, being suppressed by an additional factor of ξ̃² ≪ 1. As a consequence, the crossing point with the unassisted rate R_BW shifts to the right (see Fig. 3), close to which the maximum relative enhancement occurs, in agreement with the shift of the curve maxima arising in Fig. 7. When the assisting mode frequency is decreased even further, the same transition takes place again. The curves decrease until ω̃ ≈ 0.38m is reached, then start to increase again up to the value ω̃ = 0.354m, from which on at least five ω̃-photons must be absorbed to produce pairs via the monochromatic channel associated with the rate R̃_BW.
In order to highlight the gradual transitioning due to multiphoton channel closings, we have marked the maxima of the relative enhancement curves in Fig. 7 by black circles. The distinguished frequencies of the assisting mode, ω̃ ∈ {0.706m, 0.472m, 0.354m}, where the main maxima in Fig. 7 arise, are of the form ω̃ ≈ √2 m/ñ₀ with ñ₀ ∈ {2, 3, 4}, corresponding to s̃ = 4ω̃ω′ ≈ 4m²/ñ₀.
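These distinguished frequencies follow directly from the threshold relation just quoted; a sketch reproducing the three values and the corresponding invariants:

```python
# Sketch: channel-closing frequencies omega~ = sqrt(2)*m/n0~ and the
# invariant s~ = 4*omega~*omega' ~ 4*m^2/n0~, in units of m.
import math

omega_p = 0.706                      # gamma frequency (units of m)
for n0_tilde in (2, 3, 4):
    omega_t = math.sqrt(2.0) / n0_tilde
    print(f"n0~ = {n0_tilde}: omega~ = {omega_t:.3f} m, "
          f"s~/m^2 = {4.0 * omega_t * omega_p:.3f}")
# prints omega~ ~ 0.707m, 0.471m, 0.354m, matching the quoted values
```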
IV. CONCLUSION
Dynamically assisted nonlinear Breit-Wheeler pair creation has been studied in a bichromatic laser field, consisting of a strong main mode of low frequency and a weak assisting mode of high frequency, both with circular polarization. By taking the weak mode into account to leading order, an integral expression for the corresponding pair creation rate with absorption of one high-frequency laser photon has been obtained, whose structure resembles the well-known formula for the ordinary (i.e., unassisted) nonlinear Breit-Wheeler process [13-15].
We have shown that the assistance from the weak laser mode can enhance the pair creation rate by several orders of magnitude. In particular, the enhancement is more pronounced when the two laser modes have opposite helicities. This interesting effect can be attributed to a lowering of the angular momentum barrier of the process. The relative enhancement due to dynamical assistance, as compared with the rates for nonlinear Breit-Wheeler pair creation by each laser mode separately, was found to be larger the smaller the main mode frequency and the assisting mode intensity parameter are. Quite complex structures arose when instead the assisting mode frequency was varied, which are caused by multiphoton channel closings.
Our analysis complements a previous study where dynamically assisted nonlinear Breit-Wheeler pair creation of scalar particles was considered for mutually orthogonal, linearly polarized laser waves [42]. It is of potential relevance for future experiments on strong-field pair creation at high-intensity laser facilities [23-27].
FIG. 1: Diagrammatic representation of the expansion up to linear order in the assisting mode amplitude of the S-matrix element for Breit-Wheeler pair production in the presence of a bichromatic laser field [see Eqs. (10)-(12)]. The wavy leg, common for all graphs, represents a quantized high-energy photon, whereas those starting (ending) with crossed blobs denote an absorbed (emitted) "assisting" photon. The solid arrowed lines on the left-hand side stand for the electron and positron wave functions interacting with the bichromatic background wave. Conversely, the external solid double lines on the right-hand side are the corresponding Volkov states, which include the interaction with the main mode only. The internal double lines represent the electron-positron propagators in the field of the main mode. Note that, when the amplitude of the main mode vanishes, the leading-order term of the S-matrix element reduces to the well-established contributions linked to the linear Breit-Wheeler process.
...and those terms in S^(2)_fi which describe the absorption (or emission) of two photons k̃ [42]. And by finally imposing the stricter condition 6ω̃/ω ∉ ℕ (16), also interference terms of order ã³ between S^(0)_fi and S^(3)_fi, as well as between S^(1)_fi and S^(2)_fi, drop out.
...on the right-hand side of Eq. (17) comprises the ã-independent contribution |S^(0)_fi|² along with O(ã²) corrections to it. They stem from interferences between S^(0)_fi and second-order terms in S_fi, as will be demonstrated by numerical examples in Sec. III.
FIG. 2: Contributions to the pair creation rate in dependence on the number of photons absorbed from the main laser mode. The blue (dark gray) and green (light gray) bars refer to a bichromatic laser wave with co-rotating and counter-rotating modes, respectively, whereas the red (gray) bars show the unassisted case, as indicated in the legend. The parameters are ξ = 1, ξ̃ = 10⁻³, ω = 0.05m, and ω̃ = ω′ = 0.706m.
FIG. 5: Relative rate enhancement due to dynamical assistance for different values of the weak mode intensity parameter. The γ-photon and laser frequencies are the same as in Fig. 2. Panels a) and b) refer to co- and counter-rotating laser modes, respectively.
FIG. 6: Relative rate enhancement due to dynamical assistance for different values of the main mode frequency. The γ-photon frequency and assisting mode parameters are the same as in Fig. 2. Panels a) and b) refer to co- and counter-rotating laser modes, respectively.
FIG. 7: Relative rate enhancement due to dynamical assistance for a sequence of decreasing values of the weak mode frequency between ω̃ = 0.706m and ω̃ = 0.354m (from left to right). The γ-photon frequency, main mode frequency and assisting mode intensity parameter are the same as in Fig. 2. Black circles mark the curve maxima. Panels a) and b) refer to co- and counter-rotating laser modes, respectively. | 2023-06-06T01:15:59.652Z | 2023-06-05T00:00:00.000 | {
"year": 2023,
"sha1": "23f79cc76eab8b82f634850ef5b94c7d6f391571",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.108.096023",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "23f79cc76eab8b82f634850ef5b94c7d6f391571",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
169602081 | pes2o/s2orc | v3-fos-license | Problems of Implementation of Public-Private Partnership in Russia
The article investigates the problem of creating and improving a country's infrastructure on the basis of public-private partnership (hereinafter referred to as PPP). As the object of research, the authors examine the public relations that arise during the implementation of projects under the framework of the PPP law. The article considers the norms of the PPP law, as well as other laws that regulate various aspects of agreements under PPP. As a result of the research, the authors conclude that, first of all, the difference between PPP and other forms of collaboration between the state and the private sector (e.g. rent, privatization) lies in its private principles, legal mechanisms and forms of realization. Such relations require a well-developed normative-legal basis. Secondly, the main idea of the stated law is not the opportunity for the private party to purchase the object of the agreement, but a wide opportunity to develop an effective form of PPP. Given this, the authors point out that the law on PPP possesses a number of features that need to be considered alongside various branches of law. After the law was adopted, a need arose to reconsider a number of normative-legal acts of federal and regional significance with respect to public investment and budgetary guarantees. Given this, it is necessary to investigate the problems of regulating public-private relations in a complex manner, which will allow the key directions for the development of this institution to be defined.
Introduction
The term "public-private partnership" has emerged in the beginning of 1990. A notable, historical example of PPP development can be the German experience, where the projects based on PPP were used in cooperative construction projects. The first non-commercial construction societies were created on the basis of private investment in the middle of XIX century. As a result of collaboration between noncommercial companies with the public sector, the given companies were forced to impose certain regulations on their own business activity and profits distribution. The state in turn was granting these companies with tax shields and breaks.
In the 19th century, Russia practiced collaboration between public and private capital in the form of concessions. Further development of PPP was halted by the command economy of the USSR.
The first stage of modern PPP covers the period 1995-2009. During this period, the Federal Law of 30.12.1995 № 225-FL "On Production Sharing Agreements" was passed, which established the following form of agreement: an agreement according to which Russia transfers to an investor the exclusive rights to search for, explore and extract mineral raw materials in designated areas. The investor, in turn, is obliged to carry out operations at his own expense and risk. Under such an agreement, the parties settle the conditions and the order of production distribution. In theory, it is commonly believed that this law gave a start to the legal mechanism of PPP.
Theoretical, Empirical and Methodological Grounds of Research
In Russia, research on concrete forms of PPP is a complicated process, as this institution is relatively new to Russian practice and there is no official register or data about PPP projects.
According to the Infrastructure Projects Director of Gazprombank, P. Brusser, there are three main forms of PPP in Russia: concession, an agreement on public-private partnership in compliance with the regional legislation, and privatization (entry into the share capital of state companies). The instruments used can vary, being the investment loan, infrastructure bonds and shares, as well as any form of structured project, including those that are currently not used in Russia yet are successfully applied in the West.
Therefore, we can conclude that, while choosing the form of a project, the parties engaged in PPP need to analyse the project as well as the optimal model for collaboration. PPP projects are not just a form of public-private partnership, as such an agreement implies the selection of an individual configuration of rights and obligations for the parties. The state pursues publicly significant interests, while the aim of the private partner is profit maximization. Given such opposing aims, the agreements made under PPP are mixed and are not labelled as such in Russian law. Therefore, it is up to the parties in PPP to choose the optimal form for project realization, relying on the type of rights, the conditions for construction and object transfer, the cost of the contract, deadlines and the relevant experience of both parties in relation to the project.
The defining factor for the development of various forms of PPP is the opportunity, provided by the PPP law, for the private party to obtain property rights to the object under the agreement. Moreover, the property right in the context of the law and Article 209 of the Civil Code of the Russian Federation (Part 1) of 30.11.1994 № 51-FL (later CC RF) is different, as the ownership right under the PPP law is initially limited by the obligations of the private party. Furthermore, the provision of this particular right is a significant element of PPP.
It should be pointed out that the formation of the ownership right to the object of the agreement on the private side defines the main difference between the concessional agreement and PPP, as under a concessional agreement the ownership right will always belong to the public side. Part 12, Article 12 of the PPP Law confirms the limitation of ownership rights, according to which the limitation should be registered together with the ownership right to the object in the agreement. An example of such a limitation is described in Part 13, Article 12 of the PPP Law: the private partner has no right to independently take control of the object of the agreement before the agreement becomes invalid, except in the case of a change of the private partner, where the limitation of the right doesn't terminate.
Rights limitation is also evident in the prohibition on the private partner pawning the object, despite its use as collateral for the financing party, given the presence of a direct agreement (Part 6, Article 7 of the PPP Law). The borders of the ownership rights are formed in the process of transferring the right to the public partner over a period of time defined in the agreement (Part 4, Article 12 of the PPP Law). Therefore, relying on the examples stated above, it is possible to determine that the PPP Law enforces a limited ownership right for the private partner in relation to the object of the agreement.
Given the registration of the rights limitation (Part 12, Article 12 of the PPP Law), there are also limitations on transferring the object of the agreement to third parties for utilization, as the group of such parties is seriously limited. We believe that the legislator has introduced a model of ownership rights that doesn't fully reflect the interests of the private partner, as the main feature that distinguishes the PPP Law from the concessional agreement was in fact the emergence of a "fully-fledged" ownership right for the private partner, while such a right is limited while the agreement is valid and is no different from the rights to the object of a concession.
The PPP Law doesn't allow multiple parties on the side of the public partner, although Part 1, Article 20 states that "in the interest of the private-public partnership agreement, the municipal-private agreement allows for 2 or more of the public partners to carry out a collective tender". However, Part 3 of the stated Article states that a separate PPP agreement is made with each winner of the collective tender. This rule complicates both the process of concluding the agreement and the process of changing the agreement conditions, which may result in problems of implementation.
Therefore, under a direct interpretation of the PPP law, there is uncertainty regarding the expediency of holding a joint tender, as it can result in legal risks. On the other hand, the later conclusion of a separate agreement with each of the public partners calls into question the effectiveness of the tender, as the main aim is a joint implementation of the PPP agreement, and a joint implementation should be agreed to by all the parties. Given the specifics of relations between the parties and the terms for participants of PPP, we believe that there is a need to correct the PPP law in the part that relates to the parties that can be allowed during the realization of the project. Reform of the PPP law is possible through a normative definition of the criteria for allowing third parties to participate in the project on the side of the private partner.
The specifics of PPP projects lie in their large-scale financing not only by the public side, but by private capital as well. Our view is that the government, as the participant who should be most interested in implementing PPP projects, as they are viewed as instruments of the country's infrastructure development, should bear some sort of financial responsibility, not only in project financing but also as a budget guarantor.
As the PPP institution in Russia is not widespread or well developed, there are certain lags in the legal-normative regulation of PPP financial mechanisms. This is further supported by the fact that, at the moment, public financial entities are mainly oriented at budget administration, tax collection, financial control, etc., while we believe that budget legislation should reflect the modern needs of infrastructure projects, as investment policy is directly related to PPP.
The PPP law foresees not only the right of the public partner to provide "extra" financing for the PPP project, but also the opportunity of full financial and technical provision for the object. Part 5 of Article 6 of the mentioned law states that project financing from the budget can only be carried out through the provision of subsidies in accordance with the budget law (Emelkina, 2016; Shekhovtsov and Shchemlev, 2017). Therefore, the PPP law limits the instruments for financing: state support for PPP projects is provided solely through the provision of subsidies. It is also important to point out that Article 19 of the PPP law, which sets out the criteria for the competitive selection of the private partner, doesn't contain criteria that evaluate the applications of participants to obtain the subsidy. These participants, given their independence in defining the order by public-legal formation, as well as budget investment given its individual character, do not require the use of competitive procedures in order to select the receiving parties (Shatkovskaya et al., 2017).
Negative consequences of an excessively investment-oriented budget policy can lead to the worsening of factors stimulating economically effective behaviour, as well as to increasing power abuse by officials. Statement 3 of Part 3, Article 6 of the PPP law establishes the right of the partner to "ensure" financing. The law doesn't explain what is meant by the word "ensure"; however, it is sensible to assume that the ensuring instrument can be one or more of the ways of securing liabilities stated in Article 329 CC RF (forfeit, collateral, guarantee, independent guarantee, deposit, provisional payment or other ways stated in the law).
From the abovementioned ways of securing liabilities, it is obvious that not all of them are applicable in the case of PPP project financing. However, if we consider other ways of guarantee stated in the law, then the application of state and municipal guarantees is possible, which can secure the liabilities of the public partner.
Results
Implementation of a PPP project is relatively long-term, on the basis of which it is necessary to provide government guarantees for the whole term of the agreement. Unequal time periods for the projects and the guarantees is a legal problem, as the Budget Code of the Russian Federation (later BC RF) contains limitations on the length of state guarantee duration (30 years), which at first glance is sufficient for a PPP project. However, the guarantees can only be satisfied in the case that the annual budget law foresees a suitable budget item in relation to the terms of the guarantee liabilities (Part 1 of Article 116, Part 1 of Article 117 BC RF). In other words, the duration of the guarantee is formally defined by one year, i.e., by the budget law.
A further issue is the deficiency in budget financing, which doesn't allow certainty to be established about the long-term financing of a PPP project and, in turn, can create long-term risks for both the private party and the investors. It is also important to point out another aspect, which opposes the realization of government guarantee issuance to private investors. In accordance with Part 5 of Article 115 of BC RF, a government guarantee should contain information about the presence or absence of the guarantor's right of recourse against the principal for the sums paid to the beneficiary under the state or municipal guarantee (regressive claim of the guarantor to the principal). According to Part 1 of Article 115.2 BC RF, in the case of the presence of a regressive claim, the principal must provide the guarantor with security which complies with Part 3 of Article 93.2 BC RF, according to which the security has to be of the following form: "bank guarantees, bails, state and municipal guarantees, collateral of a sum that is no less than 100% of the loan. Provision of liabilities should be highly liquid." Obviously, such criteria do not fully reflect the commercial relations of the private party and the investors. If the principal (private partner) had the mentioned security, then the need to obtain the state guarantee would be eliminated, as in market price terms the security is very similar to the guarantee, and the principal could directly secure its own liabilities before the investors, for example on the security of liquid assets, without having to use the mechanism of state guarantees. While making the decision about guarantee provision, a deep financial analysis of the private partner is performed. The results of this analysis may never be disclosed, even in the case of the guarantee being issued.
Another important factor in budget financing of PPP is the use of targeted programs: budget events aimed at co-financing PPP projects from the side of the public partner. According to Article 179 BC RF, state programs are confirmed by the government entities of the same level as the one where the PPP project is being implemented. The type of support should be established in the program: subsidy, or cost refunds (Article 78 BC RF), or budget investment (Part 5 of Article 79 BC RF). At the moment, the provision of budget investment for PPP projects is possible, but only for projects related to objects of capital construction in compliance with concessional agreements. Therefore, we also see the need for amendments to BC RF, as the procedures for providing budget investment do not comply with the PPP law requirements. Thus, after the PPP law was passed, there emerged a need for the reconsideration of a number of normative-legal acts of federal and regional significance in the aspects of public investment and guarantee provision. We believe that, in the first place, BC RF needs to be amended in the following aspects: 1. Provision of long-term state and municipal guarantees; 2. The opportunity to finance PPP projects via accepting the target programs; 3. Regulation of long-term budget spending in the form of financial liabilities of the public partner in PPP projects.
Apart from traditional sources of PPP financing, it is necessary to consider new market mechanisms as financial sources. One such source of financing can be "infrastructure bonds", which do not have normative consolidation in Russia but are used for large-scale project realization. The use of "infrastructure bonds" in Russia was first mentioned in 2007-2008. The "Strategy for Railroad Transport Development of the Russian Federation until 2030" defines the aim of financing railroad transport, including through such instruments as infrastructure bonds. The program for infrastructure bonds development is also included in the "Strategy for Financial Market Development in the Russian Federation for the period until 2020".
The abovementioned strategy for financial market development proposes, "with the aim of attracting investment resources into long-term projects for transport, energy, property and social infrastructure development, implemented on the basis of PPP, to foresee measures targeted at stimulating investment into infrastructure bonds. In order to achieve these aims, changes are required to the Russian legislation, which will provide for the protection of the rights of infrastructure bond owners, as well as provide the opportunity to invest into such bonds the funds of credit organisations, pension savings accumulated by the Pension Fund of the Russian Federation, as well as private non-government pension funds".
The mentioned directions for development are contained in the "Events plan from the Ministry of Economic Development of the Russian Federation regarding the implementation of actions aimed at improving the health of the financial sector and other sectors of the economy": "provision for the issuance of infrastructure bonds, including guarantees from the Russian Federation and Vnesheconombank, for infrastructure project financing, realized on the principles of public-private partnership".
Despite the collective interest of government and business communities in making investment into infrastructure bonds common, at the moment this institute is not normatively regulated, which creates uncertainty regarding its capabilities in the framework of the PPP agreement.
In order to achieve successful project implementation on the PPP basis, with investor interests being satisfied, the PPP law offers the private partner the right to transfer the rights to the PPP object on security terms. However, the actual PPP law doesn't disclose the institution of this security, which means that there is a need for a complex analysis of security rights relations in Russian law, accounting for the tendency to reform civil legislation. The PPP law provides imperative foundations for justifying security: 1. The use of security to support the liability to the financing party; 2. The presence of a direct agreement between the parties of the project agreement and the financing party. The PPP law doesn't contain any other basis for the transfer of the agreement or of the private partner's rights.
Therefore, relying on the interpretation of these rules, we can conclude that the circle of subjects of collateral relations is defined in three ways by the PPP law: 1. Depositor: the private partner; 2. Creditor: the financing party; 3. Public partner: its status is not defined in the classic understanding of the term "security".
The public partner is given a special role, as it should act as guarantor of the project realization and ensure that no rights are breached by the private partner or the financing party, as their actions can result in an intentional penalty being imposed on the object of security.
However, under the current framework of depositing a PPP object there are a number of deficiencies; Part 6 of Article 7 sets the delay period before incurring a penalty regarding the deposit at 180 days from the emergence of the reason to pursue the penalty. It also establishes a ban on the penalty in the case of early dissolution of the PPP agreement, given serious agreement breaching by the private side. Given such a relationship, the property protection of the financing party from violations by the private party seems ineffective.
According to the newly published Part 2 of Article 336 CC RF, the object of the deposit can be in the form of property which will be created or purchased by the depositor in the future. "The deposit is registered only after the main liability arises or the stated property is purchased by the depositor, with the exception of the case when the law or the agreement states otherwise".
However, the stated norms may complicate the realisation of depositor rights in the case when the object of the deposit is included in property that needs registration. Part 1 of Article 339.1 CC RF states that a deposit is subject to government registration if, in accordance with the law, the deposited rights are subject to government registration (Article 8.1 CC RF).
Therefore, an agreement depositing non-existing property can result in the burdening of property which doesn't require government registration. The problems of depositing were pointed out by Makovskiy: "two serious threats caused by universal deposit are observable. First of all, it allows the more dominant party to technically enslave the other. Secondly, by depositing all of the property to 'your trustworthy' creditor, it is possible to exclude any other creditors from issuing a penalty regarding your property".
Furthermore, the deposit agreement proposed by the PPP law, which doesn't define the status of the public party, imposes extra risks on the private party. Such a risk is defined by Part 7 of Article 7 of the PPP law, which states that "in the case of a penalty being imposed on the deposit, the public partner has the dominant right to purchase the object of the deposit at a price equal to the debt of the private party, but no more than the price of the deposit itself".
It is obvious that, if the parties make a deposit agreement where the property being deposited hasn't yet been created and registered, then at the moment of the penalty request the price of the deposit can be lower than the amount of monetary input from the financing party.
Therefore, in the case of unsuccessful project implementation, the public partner is protected by the law, as it has a dominant right to purchase the deposit at a price higher than its actual price. At the same time, the private partner, while borrowing the funds to implement the project, is at risk of facing responsibility before the financing party (Part 2, Article 334 CC RF).
An important update regarding the institution of creditors should be pointed out. According to Article 335.1 CC RF, it is possible to have a number of creditors in relation to the same property, or a number of solidary and share creditors in relation to the liability secured by the deposit.
According to data from the BIS (the Bank for International Settlements), in 2007, in Russia alone, the volume of syndicated credit deals exceeded 2 billion US dollars. For comparison, the volume of syndicated credits issued globally in 2005 was around 3.5 trillion US dollars. Such a big difference not only illustrates undeveloped institutions for large investment project financing, but also shows that deals related to syndicated loans were mainly concluded under foreign legal frameworks, even if both parties were Russian.
According to International Court of Arbitration data, in 2009 the majority of cases were based on British law. The relevance of developing Russian rules for syndicated crediting is also supported by the Federal Law Project № 204679-7 "Regarding the changes in separate legislative acts of the Russian Federation (regarding syndicated credits)", which was introduced to the Duma on 21.06.2017. The stated law proposal also focuses on deposit legal relations.
Conclusions and recommendations
Based on the abovementioned facts, we can conclude that at the moment the focus of the legislature is centred on the creation of new instruments to finance large-scale investment projects, including those based on PPP. With the law on syndicated crediting passed (given the right to use PPP objects as a deposit), Russian business communities have been given an opportunity to conclude investment deals with the use of syndicated credit under Russian rules, which should increase the attractiveness of PPP projects.
We believe that the importance of reforming the Civil Code in the sphere of deposit relations cannot be overstated, as the changes are progressive and have potential for development; however, the deposit institution is not defined by the PPP law with full capacity.
Therefore, in order to ensure the effective implementation of the rules regarding depositing the object of PPP, the Russian Civil Code sets out new rules with respect to the deposit. At the moment, it is too early to talk about the direction of such development, although creditors will have extra guarantees for the funds invested.
It is necessary to point out that the problems of implementing agreements on the basis of the PPP law are relevant, which is caused by the emergence of new forms of PPP agreements and the lack of research in this sphere. The effectiveness of implementing PPP mechanisms when creating large-scale infrastructure projects is confirmed by successful international practice, which proves that there is a need to research and improve the legal potential of PPP in Russia. The new opportunities for private capital attraction offered by PPP allow the needs of both the public and the private side to be defined. The rights to purchase the object of the agreement and to use the PPP object as a deposit illustrate the development of Russian law in the sphere of connecting the interests of government and private business.
However, new legal mechanisms need to be developed. In the conditions of insufficient budget funds and decreasing interest in the Russian economy from large-scale investors, the government should take responsibility for developing the legislation in order to improve economic attractiveness, as well as PPP-related activity. In order to achieve the targets of the country's innovative development, it is necessary not only to improve the current normative-legal base, but also to create new legal mechanisms which will be able to support the effective legal and financial state of a project.
The key problem in achieving the stated target is limited budget planning, which doesn't allow for long-term liabilities between the public and private partners, either in project financing or in government guarantee provision. As the process of normative regulation development goes on, it is necessary to refer to the concept of PPP legislation development, as well as to account for the needs of parties implementing large-scale projects. A good example of market relations overtaking the current normative-legal base is the implemented instrument of project financing in the form of infrastructure bond issuance; such factors highlight the deficiencies of the legal regulation of project financing.
Given the fact that PPPs are of public importance, their realisation should result in a positive public effect. The wide conditions for agreement provided by the PPP law allow the parties to carry out effective government policy while simultaneously economically stimulating the private partner.
Bearing the functions of the main regulator of PPP, the government should also apply indirect methods of project support, as well as exercise indirect influence on innovation projects, which are largely defined by public interest. We believe that such indirect methods include the collaborative engagement of parties in ecological programs aimed at reducing the negative impact on the environment. The development of such a form of partner relations will not only ensure a public result, but will also allow the private partner to extract benefits, obtaining extra guarantees which are not implied in independent project implementation. | 2019-05-30T23:47:10.062Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "578f8e93560472e27af3d0679340e2bfc3a42bb0",
"oa_license": null,
"oa_url": "https://www.ersj.eu/journal/1171/download",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0c499c18a48347862c5ba7e0089d9bb32698ab5a",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": [
"Business"
]
} |
133897437 | pes2o/s2orc | v3-fos-license | ROOFING ASSESSMENT FOR ROOFTOP RAINWATER HARVESTING ADOPTION USING REMOTE SENSING AND GIS APPROACH
Rooftop rainwater harvesting refers to the collection and storage of water from rooftops, whereby the quality of harvested rainwater depends on the type of roof and the environmental conditions. This system is capable of supporting the water supply in almost any place, either as a sole source or by reducing stress on other sources through water savings. Remote sensing and GIS have been widely used in urban environmental analysis. Thus, this study aimed to develop a roofing layer in order to assess the potential area for rooftop rainwater harvesting adoption by integrating remote sensing and GIS approaches. An urban area containing various urban roofing materials and characteristics was selected. High-resolution satellite imagery acquired from the WorldView-3 satellite system, with 0.3 m spatial resolution, was used in order to obtain spectral and spatial information on buildings and roofs. For quality assessment, the physical and chemical parameters of the rooftop-harvested rainwater were measured according to the Standard Tests for Water and Wastewater. The potential area for rooftop rainwater harvesting adoption can be identified from the detailed information on the rooftops and the quality assessment in a geospatial environment.
INTRODUCTION
Rainwater is harvested from rooftops and ground surfaces as well as temporary watercourses. Rainwater can also provide affordable water for household uses and agriculture. Rooftop rainwater harvesting (RRWH) refers to the process of collecting harvested water from rooftops (Gould and Nissen-Petersen, 1999). Ojwang et al. (2017) stated that the system is capable of supporting the water supply in most areas through water saving. Besides, this system can also be a key strategic adaptation measure for communities affected by climate change.
Water management is a priority in development areas, where the human population is expected to rise by about 70% by 2050 (UN, 2012) and water must be accessible without compromising sustainability. Unfortunately, freshwater sources are becoming limited and polluted. These phenomena have become major public concerns. Therefore, rainwater harvesting systems have become popular means of conserving potable water. These systems are among the most significant approaches for providing sustainable water cycles, particularly in urban development (Lee, Bak, and Han, 2012; Lye, 2009; Zhang et al., 2014). Rainwater collected from rooftops is assumed to be a relatively safe source of non-potable and potable water and is regarded as non-polluted storm water. However, Zhang et al. (2014) revealed that the quality of rooftop-harvested rainwater is significantly affected by roofing materials and that the effects of these materials on water quality must be analysed carefully.
The type of roofing material used for the catchment can affect the quality of harvested rainwater (Carolina et al., 2010). Most studies have focused on examining conventional roofing materials, such as galvanized metal, for rainwater harvesting. Besides, long-term exposure to chemicals affects the harvested rainwater and causes numerous biological disorders. Despite the fact that many chemicals accumulate in living tissues over time, there are no reports concerning the health risks associated with human contact with, inhalation of or ingestion of harvested rainwater contaminated with chemical pollutants. This problem occurs due to the lack of local guidelines specifying the range of physical and chemical parameters suitable for both non-potable and potable applications of rooftop rainwater harvesting.
Although non-potable applications may range from simple outside collection systems for irrigation of lawns and gardens to sophisticated residential uses such as toilet flushing and clothes laundering, each usage is subject to local scrutiny concerning the quality of rooftop-harvested rainwater.
Segmentation techniques in remote sensing leverage the advances in data acquisition, specifically in terms of spectral and spatial resolution capability. High-resolution images contain rich texture information, which has been shown to improve segmentation results (Ryherd and Woodcock, 1996; Kim, Madden, and Xu, 2010). In image processing, a scale usually refers to the size of the operators or measurement probes used to extract information from image data. Improper scales can lead to over-segmentation, where a segment corresponds to only a portion of a region, or to under-segmentation, where one segment contains multiple land-cover classes. Due to the inherent multiscale nature of real-world objects, many multiscale segmentation algorithms have been proposed (Tzotsos, Karantzalos, and Argialas, 2012; Johnson and Xie, 2011). However, manual interpretation is typically needed in order to utilize the segmentation results at multiple levels, which inevitably involves subjectivity. Moreover, it has been shown that, in specific cases, a single-scale representation might be sufficient and more straightforward (Lang and Langanke, 2006).
GIS is capable of supporting the development of environmental models. Mati et al. (2006) described GIS as a tool for collecting, storing, and analysing spatial and non-spatial data. Various thematic layers can be generated by applying spatial analysis in GIS software. The implementation of rooftop rainwater harvesting in urban areas requires specific studies in terms of data, analysis, and modelling. This research aimed to develop a roofing layer with various information in order to determine the potential area for rooftop rainwater harvesting adoption at Taman Seri Serdang by using high-resolution satellite imagery and the results of quality assessment of the rooftop-harvested rainwater.
STUDY AREA
The selected study area was Taman Seri Serdang, with an estimated population of 14,360 people according to the Local Planning of Subang Jaya (Lim, Shaharuddin, Sam, 2012). The area contains a residential area, a commercial area and a secondary school, with an estimated total area of 0.2 km2. The study area is located at Seri Kembangan, which receives 2600 mm of rain per year.
METHODOLOGY
In this study, the roofing layer was developed from the building footprint and the results of the quality assessment. The information needed in the roofing layer comprises roofing materials, roofing conditions, roofing slope, physical and chemical parameters, and the area of each rooftop.
Image Segmentation
Image segmentation was performed using eCognition software. Object-Based Image Analysis (OBIA) is based on the human process of object recognition, which requires spectral value, shape, texture, and context information for image classification.
This technique is suitable for high-resolution imagery, with a focus here on building extraction. In this study, the Spectral Difference algorithm was applied. This algorithm implements the concept that neighbouring image objects are merged if their spectral difference is below the value given by the maximum spectral difference.
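The spectral-difference merging step can be approximated with open-source tools. The sketch below, a minimal Python example using scikit-image, mimics the eCognition workflow by over-segmenting the scene and then merging adjacent objects whose mean spectral difference falls below a maximum threshold; the file name, segment count and threshold value are illustrative assumptions, not values from this study.

```python
# A minimal sketch of spectral-difference merging with scikit-image.
# Assumes a small RGB subset exported from the WorldView-3 scene.
import numpy as np
from skimage import io, segmentation, graph  # older releases: skimage.future.graph

image = io.imread("worldview3_subset.tif")  # hypothetical band 5/3/2 composite

# Initial over-segmentation into small, spectrally homogeneous objects.
labels = segmentation.slic(image, n_segments=5000, compactness=10, start_label=1)

# Region adjacency graph weighted by the difference of mean object colors.
rag = graph.rag_mean_color(image, labels)

# Merge neighbouring objects whose spectral difference is below the maximum
# spectral difference (threshold chosen here purely for illustration).
merged = graph.cut_threshold(labels, rag, thresh=25)
print("objects after merging:", np.unique(merged).size)
```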
Quality Assessment
Five types of roofing material were tested in this study: concrete, clay, zinc, polycarbonate, and asbestos. All of these roofs were set up with a slope of 20°. The rooftop-harvested rainwater flowed into a first-flush diverter until it was exceeded, before being stored in the storage tanks. First-flush diverters need to be incorporated into the system in order to protect the water quality in the collection tank from contamination (Abdulla and Al-Shareef, 2009; Gikas and Tsihrintzis, 2012). These prototypes were set up at the Faculty of Engineering, Universiti Putra Malaysia, which is adjacent to Taman Seri Serdang.
The quality assessment was conducted based on physical and chemical parameters. For the physical parameters, rainwater properties such as colour, volume, and molecular weight were tested without changing the chemical composition. The physical parameters tested were turbidity (the cloudiness of the water samples and their tendency to transmit light), total suspended solids (TSS; the amount of filterable solids in the rainwater samples), and total dissolved solids (TDS; the combined amount of organic and inorganic substances in the water).
The assessment of the chemical parameters is based on the observation of any change in the chemical properties of a substance. In this assessment, the chemical parameters were pH (the measurement of the potential activity of hydrogen ions (H+) in moles per litre in the rainwater samples), heavy metals (focusing on zinc and manganese), and Chemical Oxygen Demand (COD; an estimate of the capacity of the water to consume oxygen during decomposition).
Development of Roofing Layer
The roofing layer is based on the building footprint and the quality assessment. The building footprint was classified into building lots. With this classification, the attributes of the layer could be designed for each building lot.
The layer was created in ArcGIS software and registered to a 2D coordinate projection. The attribute classes of this layer were classified into roofing materials, roofing conditions, roofing slope, and physical and chemical parameters. The area of each building lot was calculated using geometry. The roofing materials and conditions were identified from the satellite imagery.
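As a rough open-source analogue of this ArcGIS step, the sketch below builds the same attribute schema with GeoPandas; the file names, the projected CRS (a UTM zone assumed to cover the site) and the per-lot attribute values are assumptions for illustration only.

```python
# A minimal sketch of the roofing-layer attribute table using GeoPandas.
import geopandas as gpd

lots = gpd.read_file("building_footprint.shp")  # hypothetical segmentation output
lots = lots.to_crs(epsg=32647)                  # assumed WGS84 / UTM 47N for the site

# Attribute classes mirroring the roofing-layer design.
lots["material"] = "concrete"          # per lot, from imagery interpretation
lots["condition"] = "new"              # "new" or "old"
lots["slope_deg"] = 20                 # slope used in the prototype roofs
lots["area_m2"] = lots.geometry.area   # equivalent of ArcGIS "Calculate Geometry"

lots.to_file("roofing_layer.shp")
```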
Satellite Imagery
Three types of roofing material were identified in the study area: concrete, metal, and asbestos, with the roofing condition classified as either new or old. These assessments were made using the true-colour combination of band 5, band 3, and band 2 of the WorldView-3 image. The assessment of roofing materials and roofing conditions was based on the pattern, texture, and colour of the roofs in the satellite imagery.
RESULTS AND ANALYSIS
Image segmentation using satellite imagery and quality assessment of the rooftop-harvested rainwater were performed according to the parameters described above. These results were used to develop the roofing layer.
Building Footprint
A total of 764 building lots were identified using the segmentation method. The classification of roofing materials and roofing conditions was based on the pattern, texture and colour of the rooftops in the satellite imagery.
Figure 3: Building footprint at Taman Seri Serdang
Quality Assessment
The quality assessment of the rooftop-harvested rainwater followed the Standard Tests for Water and Wastewater, using physical and chemical parameters. The harvested rainwater was then treated using coagulation and acid-base titration methods in order to remove colloidal impurities.
Here, the results of the water treatment were verified against the Drinking Water Quality Standard from the Ministry of Health Malaysia (2016). Based on the quality assessment, the rooftop-harvested rainwater is suitable as a non-potable water source, with new metal being the roofing material most suitable for rooftop rainwater harvesting adoption. Based on the comparison, the rooftop-harvested rainwater is of good quality in physical terms. The TSS parameter is not listed in the standard. For chemical quality, further study should be carried out on heavy metals (zinc) and COD.
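A verification of this kind can be expressed as a simple threshold check. The sketch below screens one sample against a set of limits; the limit values and the sample readings are placeholders, not figures from the Malaysian standard or from this study.

```python
# A minimal sketch of screening measured parameters against standard limits.
LIMITS = {"turbidity_NTU": 5.0, "TDS_mg_L": 1000.0, "pH_min": 6.5, "pH_max": 9.0}

def check_sample(sample: dict) -> dict:
    """Return a pass/fail flag per parameter for one roofing material."""
    return {
        "turbidity": sample["turbidity_NTU"] <= LIMITS["turbidity_NTU"],
        "TDS": sample["TDS_mg_L"] <= LIMITS["TDS_mg_L"],
        "pH": LIMITS["pH_min"] <= sample["pH"] <= LIMITS["pH_max"],
    }

# Hypothetical reading for a new metal roof.
print(check_sample({"turbidity_NTU": 2.1, "TDS_mg_L": 40.0, "pH": 6.8}))
```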
Development of Roofing Layer
The roofing layer is based on the results of the image segmentation (spatial data) and the quality assessment (non-spatial data). The roof materials and roof conditions were determined primarily from the satellite imagery. From here, 195 building lots were identified as having roofing materials in new condition, of which 10 were identified as asbestos, 81 as metal, and 104 as concrete. These classes are labelled with colours: red for asbestos, yellow for concrete, and blue for metal. The area of each building lot was calculated using geometry.
Next, the results from the water treatment were tabulated according to the roofing materials. For roofing slope, all the building lots were assigned values based on the parameter used in the quality assessment.
CONCLUSION
The roofing layer was developed by integrating the results from image segmentation and quality assessment. This is the initial step in developing a model of rooftop rainwater harvesting adoption for the urban environment. There are various elements that may affect the quality of rooftop-harvested rainwater, such as land use, rainfall variation, weather conditions, and air quality. These parameters will feed into the model development through analyses such as geostatistical and spatial analysis, as part of a GIS environmental study.
Figure 4: Roofing material with new condition
Table 1: Results of Chemical Parameters
Table 2: Results for Physical Parameters | 2018-12-11T12:08:37.525Z | 2018-10-30T00:00:00.000 | {
"year": 2018,
"sha1": "465806fa8f6bc1354065f3f48c162610d976da81",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-4-W9/129/2018/isprs-archives-XLII-4-W9-129-2018.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "465806fa8f6bc1354065f3f48c162610d976da81",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
93789767 | pes2o/s2orc | v3-fos-license | Controlling photochromic properties of molybdenum oxide based composite films by copper addition
MoO3-based photochromic composite films were fabricated using a Mo-IPA methanol solution and a transparent urethane resin, and the photochromic property of the films was controlled by copper addition. All the composite films colored under UV-vis light irradiation and bleached when placed in a dark room. The initial color of the films changed from blue to transparent upon Cu addition to the composite film, and Cu addition accelerated the bleaching speed of the composite films at Cu/Mo ratios higher than 0.5.
Introduction
Photochromic materials show a reversible photosensitive property under light irradiation and interception of light; such materials include silver chloride based glasses, 1)-3) tungsten oxide based films or composites 4)-9) and molybdenum oxide based films. 10),11) The photochromic property of MoO3-based materials appears by means of the reduction of Mo6+ to Mo5+ or Mo4+ under UV-vis light irradiation, whereby the MoO3 color changes from transparent to blue or brown.
Hinokuma et al. reported the fabrication of Mo-IPA using metal molybdenum powder and H2O2. 12) The resulting Mo-IPA has the empirical formula MoO3·nH2O2·mH2O, where n and m depend on the extent of removal of H2O2 from the precursor solution. Previously, we reported the fabrication of MoO3-based photochromic composite films using an α-, β- and γ-peroxo-isopolymolybdic acid (Mo-IPA) aqueous solution and transparent urethane resin. 13) However, in the previous investigation, 13) the color at the initial state and the bleaching speed of the composite films could not be controlled.
In this investigation, we fabricated molybdenum-based photochromic composite films using a Mo-IPA methanol solution and transparent urethane resin as starting materials, where γ-Mo-IPA was employed to fabricate the composite films because of its high methanol solubility. In the present study, γ-Mo-IPA is called "Mo-IPA". We evaluated the coloring and bleaching properties of the resulting composites. Furthermore, the photochromic property of the composite films was controlled by the addition of CuCl2 (Cu2+ ions) to the films, following a previous investigation in which the existence of Cu2+ ions affected the photochromic property of WO3-based composite films. 9)

2. Experimental procedure

Metal molybdenum powder (particle size of 4 μm, Kojundo Chemical Co. Ltd.) was used as a starting material. The molybdenum powder was dissolved completely into an ice-cooled 15% H2O2 solution to achieve an atomic molybdenum concentration of 1.0 mol/L. After the reaction, the excess H2O2 was removed catalytically using Pt nets for 3 days, and the (γ-)Mo-IPA solution was obtained according to the previous study. 12) Mo-IPA was dissolved into methanol at a concentration of 0.01 mol/L, and subsequently CuCl2 (Wako Pure Chemical Industries Ltd.) was dissolved into the solution at Cu/Mo ratios of 0 to 1.0. Then 1.0 mL of the resulting Cu/Mo-IPA methanol solution was mixed into 3.3 g (volume of 3 cm3) of urethane resin (M-40, Asahi Kasei Chemicals Corp.). The mixture slurry was mixed well and degassed at 1 kPa for 60 min to expel the dissolved air from the precursor slurry. The precursor slurry was then put between slide glasses with a film thickness of 1 mm, and the slurry was cured by UV-vis light irradiation for 1 min using a 1 kW low-pressure Hg lamp. The resulting films were colored because of the UV-vis light irradiation; therefore, the composite films were placed in a dark room for 7 days to clarify them.
The photochromic properties of the films were evaluated at room temperature using a UV-vis spectrophotometer (UV-1600; Shimadzu Corp., Japan). Throughout the investigation, a 1 kW low-pressure Hg lamp was used for coloration of the composite films.
Results and discussion
Composite films were fabricated using a Cu/Mo-IPA methanol solution and a urethane resin. Figure 1 depicts the coloring property of the composite films with various Cu contents before and after UV-vis irradiation; the inset photographs show the films on paper before and after 20 min of UV-vis irradiation.
The Cu-undoped film before UV-vis irradiation showed a light blue color and broad absorption with peaks at 630 and 790 nm. The absorption was attributed to the existence of Mo5+. 10),11),13) The Cu-undoped composite film after UV-vis irradiation showed broad absorption with peaks at 450 and 790 nm, and its color was brown. In this study, a Mo-IPA methanol solution was used as the starting material.
Electrons were released from the OH groups or MeOH in the composites by UV-vis light irradiation. Mo6+ and Mo5+ in the films received these electrons and were reduced to Mo5+ and Mo4+ by UV-vis irradiation, and it is assumed that this is why the color of the resulting film became brown. These photochemical reactions are described as follows (the forward reaction proceeds under UV irradiation and the reverse reaction when the UV light is cut):

Mo6+ + e− ⇌ Mo5+ (blue)  (1)
Mo5+ + e− ⇌ Mo4+ (brown)  (2)

The composite films with Cu addition were transparent in the initial state, and no absorption peaks were observed. The Cu additive is assumed to exist as Cu+ or Cu2+ ions in the composite. The effect of Cu addition on controlling the color of the composite films is described later. After UV-vis irradiation, the composite films showed broad absorption with peaks at 450 nm and around 700-800 nm, and the color of the composite films became brown.
In general, the color of MoO3-based photochromic films is transparent in the initial state and changes from transparent to blue according to the reduction of Mo6+ to Mo5+ by UV-vis irradiation. 10),11) Here, the composite films (in the initial state) had been placed in a dark room for 7 days to clarify them. The results of the present work, by contrast, were different from those of previous investigations. They are assumed to originate from Mo4+ or from interaction with the resin polymer, but further investigations are required to explain this phenomenon.
Using the optical properties of the composite films, the reaction rate constant k was estimated at a wavelength of 450 nm, the remarkable absorption peak of the films in the coloring condition. The method of calculating the photochromic reaction rate constant was described in earlier reports. 9),13) The reaction rate equation is described as

ln(A/A0) = kt  (3)

where A0 is the initial absorbance, t is the elapsed time and A is the absorbance at time t. The calculated reaction rate constants of the composite films with Cu/Mo ratios of 0, 0.1, 0.5 and 1.0 were 0.122, 0.148, 0.105 and 0.0354 min⁻¹, respectively. The rate constants were very close to each other at Cu/Mo ratios of no more than 0.5, and that with a Cu/Mo ratio of 1.0 was slower than that of the Cu-undoped film. These results suggest that Cu2+ ions did not act as a coloring sensitizer. The bleaching property of the films was evaluated in a dark room at room temperature, and Fig. 2 presents the bleaching property of the films after 20 min of UV-vis irradiation. For the Cu-undoped composite film and the composite film with a Cu/Mo ratio of 0.1, the color of the films did not return completely to the initial-state color. On the other hand, the color of the composite films with Cu/Mo ratios of 0.5 and 1.0 returned to the initial-state color within 120 h.
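For readers reproducing the fit, a minimal Python sketch is given below; it assumes the ln(A/A0) = kt form reconstructed above and uses illustrative absorbance values, not the measured data of this work.

```python
# A minimal sketch of estimating the first-order coloring rate constant k
# from absorbance-vs-time data at 450 nm.
import numpy as np

t = np.array([0, 2, 5, 10, 15, 20], dtype=float)    # irradiation time, min
A = np.array([0.10, 0.13, 0.18, 0.33, 0.55, 0.95])  # absorbance at 450 nm (illustrative)

y = np.log(A / A[0])
k, _ = np.polyfit(t, y, 1)  # slope of ln(A/A0) vs t gives k in min^-1
print(f"k = {k:.3f} min^-1")
```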
Bleaching was observed for all the films, and the bleaching speed was much slower than the coloring speed. Regarding the bleaching property, plots of the reactions according to Eq. (3) showed no linearity. The bleaching reaction therefore was not first-order, and the rate-determining stage comprises two or more steps. The rate-controlling factors are assumed to be the electron mobility in the MoO3 clusters, the return of electrons from Mo4+ to the matrix (hydroxyl function), and so on. Thus, the rate constant of the films for bleaching could not be evaluated.
To compare the bleaching speeds of the films semi-quantitatively, we calculated the half-life period τ (h) of the films, where τ is the time taken for the transmittance at 450 nm to recover from that of the sufficiently colored state (after 20 min of UV-vis irradiation) half-way toward its value after 120 h. The calculated half-life periods τ of the films with Cu/Mo ratios of 0, 0.1, 0.5 and 1.0 were 21.8, 29.2, 9.7 and 4.8 h, respectively. These results suggest that CuCl2 addition at Cu/Mo ratios higher than 0.5 caused an acceleration of the bleaching speed of the MoO3 composite films.
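The half-life estimate amounts to interpolating the time at which the 450 nm transmittance recovers half-way between the colored state and its value after 120 h. The sketch below, with an illustrative transmittance series, shows one way to compute τ under that definition.

```python
# A minimal sketch of the half-life (tau) calculation by linear interpolation.
import numpy as np

t_h = np.array([0, 5, 10, 24, 48, 72, 120], dtype=float)  # time in the dark, h
T = np.array([20, 32, 40, 52, 60, 64, 68], dtype=float)   # % transmittance at 450 nm

half = T[0] + 0.5 * (T[-1] - T[0])  # half-way recovery target
tau = np.interp(half, T, t_h)       # valid because T rises monotonically here
print(f"tau = {tau:.1f} h")
```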
In previous investigations, the bleaching speed of AgCl- and WO3-based photochromic composite films was accelerated by CuCl2 addition. 3),9) For the MoO3-based composite films, Cu2+ ions also acted as a sensitizer for the bleaching of the composite films, as follows:

Mo6+ + Cu+ ⇌ Mo5+ + Cu2+  (4)
Mo5+ + Cu+ ⇌ Mo4+ + Cu2+  (5)

When the composite films were placed in the dark room, the equilibria of Eqs. (4) and (5) moved to the left side. Mo in the composite films was more oxidized in the presence of Cu2+ ions; thereby, it is assumed that the bleaching speed was accelerated. In line with this speculation, the composite films without CuCl2 addition were colored at the initial state because of the low molybdenum valence, while the composite films with CuCl2 addition were transparent because molybdenum kept a high valence in the presence of Cu2+ ions.
Conclusion
The MoO3-based photochromic composite films were fabricated using a Mo-IPA methanol solution and transparent urethane resin, and the effect of Cu addition on the photochromic property of the films was evaluated. The initial color of the composite films could be controlled by copper addition to the films, and the color of the composites changed from blue to transparent with increasing Cu content. Furthermore, the bleaching speed of the composite films was accelerated by Cu2+ ion addition at Cu/Mo ratios higher than 0.5. These results suggest that the initial-state color and the bleaching property of MoO3-based composite films can be effectively controlled by Cu2+ ion addition. | 2019-04-04T13:11:48.807Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "4852f08662e05ca1abe97062a6718a1660cb3cf8",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jcersj2/122/1421/122_JCSJ-N13155/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0020f14406d795f67934ab1b755a87b6c1963bbe",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
22487382 | pes2o/s2orc | v3-fos-license | Selective interactions of Kruppel-like factor 9/basic transcription element-binding protein with progesterone receptor isoforms A and B determine transcriptional activity of progesterone-responsive genes in endometrial epithelial cells.
The Sp/KLF transcription factor basic transcription element-binding protein (BTEB1) regulates gene transcription by binding to GC-rich sequence motifs present in the promoters of numerous tissue-specific as well as housekeeping genes. Similar to other members of this family, BTEB1 can act as a transactivator or transrepressor depending on cell and promoter context, although the molecular mechanism underlying these distinct activities remains unclear. Here we report that BTEB1 can mediate signaling pathways involving the nuclear receptor for the steroid hormone progesterone in endometrial epithelial cells by its selective interaction with the progesterone receptor (PR) isoforms, PR-A and PR-B. Functional interaction with ligand-activated PR-B resulted in superactivation of PR-B transactivity, facilitated the recruitment of the transcriptional integrator CREB-binding protein within the PR-dimer, and was dependent on the structure of the ligand bound by PR-B. By contrast, BTEB1 did not influence agonist-bound PR-A transactivity, although it augmented PR-A inhibition of PR-B-mediated transactivation as well as potentiated ligand-independent PR-A transcriptional activity in the presence of CREB-binding protein. We also demonstrate similar positive modulatory actions of BTEB1-related family members Krüppel-like family (KLF) 13/FKLF2/BTEB3 and Sp1 on PR-B transactivity. Further, we provide support for the potential significance of the selective functional interactions of PR isoforms with BTEB1 in the peri-implantation uterus using mouse and pig models and in the breast cancer cell lines MCF-7 and T47D. Our results suggest a novel mechanism for the divergent physiological consequences of PR-A and PR-B on progesterone-dependent gene transcription in the uterus involving select KLF members.
Progesterone (P) 1 plays a predominant role in the control of uterine endometrial growth and differentiation (1). In the absence of P, the unopposed actions of estrogen can lead to uncontrolled cellular proliferation at the expense of cellular differentiation, an event highly correlated with the development of endometrial carcinoma (2, 3). The intracellular actions of P are mediated by the progesterone receptor (PR), a member of the nuclear receptor superfamily of ligand-activated transcription factors (4, 5). PR exists as two isoforms, PR-A and PR-B, which are transcribed from a single gene and display similar hormone and DNA binding specificities (6, 7). PR-B differs from PR-A by the presence of an additional 164 residues in the amino-terminal region of PR-B, and the two proteins exhibit distinct transcriptional activities as a function of promoter and cellular contexts. In general, PR-B is a stronger transcriptional activator than PR-A (8, 9). Moreover, PR-A exerts an inhibitory effect on PR-B transactivity as well as on that of other steroid hormone receptors (10-12), and endometrial differentiation and secretory phenotype are positively associated with PR-B rather than PR-A expression (8, 13). Concomitant with a general decrease in the number of endometrial cells expressing PR, a shift in the ratio of PR-A to PR-B, favoring the predominance of PR-A-expressing cells, was noted during the progression of the endometrium from normal to hyperplastic to tumorigenic states (14).
The mechanism underlying the distinct transcriptional activities of the two PR isoforms is not well understood, although it has been attributed to discrete protein-protein interactions mediated by the B-upstream sequence unique to PR-B (15). In particular, the B-upstream sequence has been shown to repress the inhibitory activity of a segment within the N-terminal region of PR-A, which interferes with the activation functions of the AF-1 and AF-2 domains of the PR (16). PR-A has also been shown to interact preferentially with co-repressors and less efficiently with co-activators, compared with PR-B (17, 18). In a recent study (18), however, PR-A was demonstrated to positively modulate the transcriptional activity of PR-B on a gene promoter that is induced synergistically by P and cAMP, by binding to liver activating protein (LAP), an isoform of CCAAT/enhancer-binding protein β (C/EBPβ), in human endometrial stromal cells. Thus, promoter as well as cellular context can influence the transcriptional activity of each isoform, and binding to consensus PREs may not necessarily be required for P-responsiveness of target genes (19, 20).
In a previous study, we defined basic transcription element-binding protein (BTEB1), a member of the Sp/Krüppel-like family (KLF) of transcription factors (21-23), as a PR-B interacting protein (24). We also showed that BTEB1 transactivates gene promoters containing GC-rich recognition motifs (25, 26), but in the presence of ligand-activated PR-B, the functional complex formed between the PR-B dimer and BTEB1 favored the induction of PRE-containing promoters (24). Because the relative levels of PR-A, PR-B and BTEB1 are tightly regulated in normal uterine endometrium, and deviations in PR isoform ratios can affect tissue responsiveness to P, we examined whether BTEB1 alters the transactivity of PR-A for different promoters. Here we show that BTEB1 has no effect on PR-A transactivation but enhances PR-A-mediated repression of PR-B transcriptional activity. We also demonstrate that BTEB1 cooperates with the transcriptional integrator CREB-binding protein (CBP) (27) in enhancing the transcriptional activities of ligand-bound PR-B and unliganded PR-A, respectively. Further, we identify FKLF2/BTEB3 (28) and Sp1 as functional PR-B transcriptional partners, with BTEB3 closely mimicking the activity of BTEB1. Finally, we provide evidence to suggest that the functional interactions of BTEB1 and PR-B noted in transient transfection studies may be biologically relevant in the context of early pregnancy in the mouse and pig uterus and in the breast carcinoma cell lines MCF-7 and T47D. These results suggest that the selective utilization of BTEB1 and related KLF members by PR isoforms may underlie, in part, the distinct repertoire of progestin-regulated genes mediated by homodimers of PR-A and PR-B as well as by the PR-A/PR-B heterodimer, respectively, in target cells (29).
Plasmids-A reporter construct containing 1143 bp of the 5′ promoter and regulatory region of the P-regulated porcine uteroferrin (UF) gene was previously described (24). All plasmid DNAs were prepared using the Maxiprep system (Qiagen, Valencia, CA).
Cell Lines and Culture-The human endometrial carcinoma cell line Hec-1-A, a model of well differentiated carcinoma, was a gift of the late Dr. P. G. Satyaswaroop (Hershey Medical Center, Hershey, PA) and was routinely cultured in McCoy's 5A in the presence of 10% (v/v) fetal bovine serum (FBS). Monkey kidney COS-1 cells (American Type Culture Collection (ATCC), Manassas, VA) were propagated in DMEM containing 10% FBS. The human mammary carcinoma cell line T47D (ATCC) was maintained in RPMI 1640 containing bovine insulin (0.2 IU/ml) and 10% FBS. The human mammary carcinoma MCF-7 Tet-On cell line (Clontech, Palo Alto, CA) and its clonal derivatives B5 and C3 that were generated in our laboratory (see "Results") were cultured in DMEM containing 10% Tet system approved fetal bovine serum (TFBS) and G418 (0.1 g/ml), following the manufacturer's instructions. Cells were incubated at 37°C in an atmosphere of 5% CO 2 in air.
Transient Transfection and Reporter Gene Assays-Transfections were carried out using polybrene (hexadimethrine bromide, Sigma) for Hec-1-A cells or LipofectAMINE (Invitrogen) for COS-1 cells, as previously described (24). Approximately 6 × 10^5 cells were plated in 6-well plates 24 h before transfection. Four h after transfection, Hec-1-A cells were treated with 25% Me2SO in Hank's Balanced Salt Solution (HBSS, pH 7.4) for 4 min, washed twice with HBSS, and then incubated for an additional 48 h in fresh McCoy's medium containing charcoal-stripped FBS (10%) in the presence or absence of the synthetic progestin R5020 (100 nM; PerkinElmer Life Sciences). COS-1 cells transiently transfected with the chloramphenicol acetyltransferase reporter gene construct pG5CAT (24) and specific combinations of the pM-PR-A chimera, the pVP16-PR-A chimera (30), and pCDNA-BTEB1 expression vectors (each added at 0.5 μg/well), as well as other constructs (described under "Results"), were incubated for 6 h at 37 °C and then fed fresh medium containing 20% FBS. Eighteen h later, the cells were transferred to charcoal-stripped FBS (10%)-containing medium with no or 100 nM R5020 (24). Treated cells were further incubated for 24 h, and Luc or CAT activity was measured on whole cell extracts as previously described (24). Results were normalized to the total protein content of each sample, as determined by the Bradford method (31), and are presented as least-square means (LSM) ± S.E. Individual transfections were done in triplicate and performed three or four times using cells of comparable passage numbers.
Western Immunoblot and Co-immunoprecipitation Analyses-Nuclear extracts from tissues or cells, prepared following previously described protocols (32), were fractionated on 10% SDS-polyacrylamide gels, and proteins were transferred to nitrocellulose membranes. Standard Western blot techniques were used to detect the levels of nuclear proteins (PR-A and -B isoforms, BTEB1, and Sp1) using the appropriate primary (1:1000 final dilution for each) and secondary antibodies as previously described (24,26), and the enhanced chemiluminescence (ECL) detection system (Amersham Biosciences). Co-immunoprecipitation with anti-rat BTEB1 antibody followed previously described protocols (24).
Animals and Tissue Isolation-Mice (Harlan, Indianapolis, IN) were housed in the animal care facility at the University of Florida in accordance with National Institutes of Health standards for the care and use of experimental animals. Adult females were mated with fertile males of the same strain to induce pregnancy. Day 0.5 of pregnancy was designated as the morning of observing the vaginal plug. Whole uteri were taken at the indicated pregnancy days and frozen for analysis after flushing with PBS. Pig uterine tissues were isolated as described previously (25).
Statistical Analysis-All numerical data were compared with appropriate controls and analyzed using ANOVA following the general linear models procedure of the Statistical Analysis System (33). Comparisons between groups were analyzed using predicted differences of the LSM. The statistical model included treatment and experiment, and only preplanned comparisons were made. Treatment means were considered significantly different at p ≤ 0.05.
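For readers who want a concrete picture of this model, the following Python/statsmodels sketch fits the same kind of two-factor ANOVA (treatment plus experiment) and evaluates one preplanned contrast. The study itself used SAS; the column names, the formula layout, and all data values below are assumptions made for illustration.

```python
# Hedged sketch of a two-factor ANOVA (treatment + experiment), analogous
# to the general linear models procedure, with one preplanned comparison.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Long-format table: one row per replicate well (all values illustrative).
data = pd.DataFrame({
    "activity":   [1.0, 1.1, 0.9, 2.4, 2.6, 2.5, 1.0, 1.2, 2.3, 2.7],
    "treatment":  ["veh"] * 3 + ["R5020"] * 3 + ["veh"] * 2 + ["R5020"] * 2,
    "experiment": [1] * 6 + [2] * 4,
})

model = ols("activity ~ C(treatment) + C(experiment)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))           # ANOVA table
print(model.t_test("C(treatment)[T.veh] = 0"))   # preplanned contrast, alpha = 0.05
```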
RESULTS

Lack of Functional Interaction of the PR-A Isoform with BTEB1-To investigate whether the PR-A isoform interacts with BTEB1, as was previously demonstrated for PR-B (24), two functional assays involving the quantification of promoter-reporter activities in cells transiently transfected with expression constructs for PR-A and BTEB1 were utilized. In the mammalian two-hybrid assay, the full-length PR-A, fused to the Gal4 DNA-binding domain in the pM vector or the activation domain in the pVP16 vector, was co-transfected with pG5CAT in the presence or absence of full-length BTEB1 expression construct. Transfected COS-1 cells were then incubated for 24 h in medium containing R5020 (100 nM) in ethanol (vehicle) or vehicle alone. In the absence of BTEB1, the combination of the two PR single hybrids increased basal reporter activity with added R5020, suggesting the formation of a functional PR-A dimer (Fig. 1). BTEB1, however, had no effect on the basal or R5020-stimulated reporter activity in the presence of both PR-A fusion proteins.
To determine whether PR-A functionally interacts with BTEB1 within the context of a natural promoter with recognition sequences for both BTEB1 and PR, the UF promoter, which contains functionally responsive sequences for BTEB1 (25) and PR (34), was linked to the Luc reporter gene and used in transient transfections of Hec-1-A cells. As expected, BTEB1 alone increased UF promoter activity (Fig. 2). In the absence of its ligand, PR-A or PR-B decreased basal transcription from the UF promoter-Luc reporter, and this was further attenuated by the co-expression of the two isoforms. Ligand-activated PR-B increased transcriptional activity from the reporter, and this was enhanced in the presence of BTEB1, consistent with our previous findings (24). By contrast, ligand-bound PR-A had no effect on this promoter's activity in the presence or absence of BTEB1. Moreover, PR-A inhibited P-dependent PR-B transactivity, and this repression was augmented by BTEB1 (Fig. 2). These results indicate that although BTEB1 preferentially interacts with the PR-B homodimer to enhance PR-B transactivation, it acts as a negative modulator of P-dependent gene transcription mediated by the PR-A/PR-B heterodimer.
PR-B Interaction with BTEB1 Is Dependent on Ligand Type-Because the type of ligand bound to a steroid receptor influences its subsequent interaction with other nuclear cofactors, the effects of type I (ZK98299) and type II (RU486) PR antagonists on the functional relationship of the PR-B dimer and BTEB1 were compared with that of R5020 (PR agonist). RU486 and ZK98299 have higher and lower affinities, respectively, for PR than PR agonists, although both are known to elicit the formation of the PR-B dimer (30), resulting in inhibition of P-dependent promoter activity (see Ref. 35 for review). The transactivity of the PR-B dimer was observed only when bound to R5020, and this was augmented by BTEB1 co-expression (Fig. 3). By contrast, the respective complex formed with either ZK98299 or RU486, when used at the same concentration as R5020, had no effect on basal promoter activity but, as expected, inhibited the induction of this promoter's activity by R5020-bound PR-B. Interestingly, the negative effect of either compound on PR-B transactivity was not reversed by the presence of BTEB1. Taken together, these results suggest that the initial formation of a PR-B dimer of the correct conformation, able not only to bind its cognate response element but also to permit the binding of other nuclear proteins, is requisite for subsequent interaction with BTEB1.
CBP Modulates Ligand-dependent PR-B Interaction with BTEB1-To determine whether the transcriptional integrator CBP, which has been shown recently to act synergistically with the steroid receptor co-activator-1 (SRC-1) in PR transactivation (36), modulates the transcriptional activity of either PR-A or PR-B in the presence of BTEB1, co-transfection experiments with the P-responsive MMTV-CAT reporter (37) and various combinations of expression constructs were carried out in COS-1 cells. As shown in Fig. 4A, the MMTV-promoter activity was unaffected by BTEB1, alone or in combination with either CBP or PR-A. On the other hand, CBP and unliganded PR-A individually increased basal promoter activity, albeit their combined effects were not additive. The activities of PR-A and CBP when added together were enhanced by BTEB1, independent of the presence of ligand. The P-dependent PR-B transactivity was not augmented by CBP (Fig. 4B). However, the combination of BTEB1, PR-B and CBP maximally increased promoter activity, but only in a ligand-dependent manner (Fig. 4B). These findings indicate that although BTEB1/PR-B dimer association facilitates the recruitment of CBP into the PR-B complex, the functional interaction of PR-A with BTEB1 and CBP occurs through distinct mechanism(s). The latter finding may underlie, in part, the differing effects of each isoform on P-dependent gene transcription.

FIG. 1. Functional interaction of BTEB1 and PR-A by mammalian two-hybrid assay. COS-1 cells were transiently co-transfected with pM-PR-A and pVP16-PR-A chimeric expression plasmids (0.5 µg each) and the pG5CAT reporter construct (5 µg) in the presence or absence of pCDNA3-BTEB1 expression vector or empty (pCDNA3 alone) expression vector (0.5 µg). Twenty-four h post-transfection, cells were treated with vehicle (ethanol) or R5020 (100 nM) for 24 h. CAT activity in transfected cells was analyzed as described previously (24) and was normalized to protein content of cellular lysates. Data are presented as LSM ± S.E. from three independent experiments, each performed in triplicate. The components present in each transfection experiment, in addition to pG5CAT reporter, are indicated by +. * designates significant difference (p ≤ 0.05) from the CAT activity of the pM-PR-A/pVP16-PR-A group in the absence of added R5020.

FIG. 2. Interaction of BTEB1 and PR-A within the context of a natural gene promoter. The human endometrial carcinoma cell line Hec-1-A was co-transfected with expression vectors for rat BTEB1 (pCDNA3-BTEB1), human PR-A or PR-B (in pSG5), and UF-Luc reporter or corresponding empty expression vectors in different combinations. Luciferase activity was determined from cell lysates of transfected cells, which were cultured for 24 h in the presence or absence of R5020. ** and * designate significant differences (p ≤ 0.001 and p ≤ 0.05, respectively) from the control group (transiently transfected with empty vector alone) in the absence of R5020. Superscripts with different letter designations (a-e) indicate significant differences at p ≤ 0.05 among the treatment groups in the presence of R5020. ^ indicates significant differences at p ≤ 0.05 between R5020-treated and -untreated cells within the same transfection group.
Effects of KLF Members Sp1 and FKLF2/BTEB3 on PR-B Activity-Sp1, BTEB3, and BTEB1 exhibit significant homologies in their DNA-binding domains but differ in their transactivation domains, which are located primarily in the amino-terminal regions of these proteins (21–23, 28). To assess the modulatory roles of Sp1 and BTEB3 relative to BTEB1 on PR-B transcriptional activity, transient transfection experiments were performed using expression constructs for each, singly and in combination with PR-B. Sp1 had minimal, albeit significant, effects on P-dependent PR-B transcriptional activation of the MMTV-CAT promoter relative to BTEB1 in COS-1 cells, which express both BTEB1 and Sp1 endogenously (Fig. 5A). By contrast, BTEB3 mimicked the effect of BTEB1. In particular, both BTEB1 and BTEB3 showed no activity on the MMTV-CAT promoter but enhanced the P-dependent PR-B transactivity to a similar extent (Fig. 5B). Interestingly, within the context of the UF gene promoter, which contains recognition motifs for KLF members and PR-B, the transcriptional activities of BTEB1 and BTEB3 differed (Fig. 5C). Although both stimulated P-dependent PR-B transactivation of the UF promoter, only BTEB1 had a positive effect on basal transcription. Moreover, BTEB3 had no effect on BTEB1-mediated transcriptional activity, although both are known to bind GC-rich sequences within gene regulatory regions (21–23, 28). These findings suggest that the DNA-binding domain, which is conserved among KLF members, is sufficient for interaction of Sp1, BTEB1, and BTEB3, respectively, with the PR-B dimer, although other domains within these molecules may contribute to the strength of their inductive effects on PR-B transactivation.

FIG. 3. CAT activity of transfected cells was analyzed as previously described (24) and was normalized to protein content in cellular lysates. Data are presented as LSM ± S.E. from three independent experiments, each performed in triplicate. * (p ≤ 0.001) and ** (p ≤ 0.0001) indicate significant differences of treatment groups from the control group (transfected with pCMV5 empty vector) in the absence of BTEB1 (−BTEB1). Superscripts with different letters (a-c) indicate significant differences (p ≤ 0.01) among the treatment groups in the presence of BTEB1. ^ indicates significant differences (p ≤ 0.05) in CAT activities between R5020-treated cells in the presence and absence of BTEB1.
FIG. 4. Transcriptional enhancement of PR-A (A) or PR-B (B) activity by CBP and BTEB1. A, COS-1 cells were transiently transfected with MMTV-CAT reporter plasmid and pCDNA3-BTEB1, pSG5-CBP, and pSG5-hPR-A expression vectors alone or in combination. Twenty-four h later the cells were treated with R5020 (100 nM), and CAT activity in transfected cells was analyzed after 24 h of incubation. The data are presented as LSM ± S.E. from three independent experiments, each performed in triplicate. * (p ≤ 0.05) and ** (p ≤ 0.001) designate significant differences from the control group (empty vector alone) in the absence of R5020. R5020-treated groups without common superscripts differ significantly (p ≤ 0.05). B, cells were co-transfected with pSG5-hPR-B expression vector and the same expression constructs (pCDNA3-BTEB1, pSG5-CBP) indicated in A. R5020-treated groups without common superscripts differ significantly (p ≤ 0.05). ^ indicates significant difference at the p ≤ 0.05 level between R5020-treated and non-treated cells within the same transfection group.
Expression Levels of PR-A, PR-B, and CBP in Pregnancy Endometrium-To begin to assess the biological significance of BTEB1 interaction with specific PR isoforms in the presence of CBP, the expression levels of PR-A, PR-B, and CBP were evaluated in pig pregnancy endometrium by semi-quantitative RT-PCR and/or Western blot analysis. In previous studies (24,38), we have shown that the levels of uterine endometrial BTEB1 protein did not change with stage of pregnancy. By contrast, the levels of total PR(A/B) mRNAs shown here were higher at early than at late pregnancy, with peak levels observed at day 12, the period of maternal recognition of pregnancy for this species (39) (Fig. 6A). The pattern of PR-B mRNA levels followed that of PR(A/B), with levels highest at early pregnancy (days 12-14) and dropping precipitously beginning at day 30 until late pregnancy days (Fig. 6B). Similar to PR-B, CBP mRNA levels were highest at early pregnancy and declined by day 30 (Fig. 6C). Western blot analysis of nuclear extracts prepared from uterine endometrial tissues indicated that although PR-B protein was easily detected at pregnancy days 10 and 12, a loss of this protein was observed at days 30, 60, and 90. On the other hand, PR-A protein remained constitutively expressed across most of pregnancy, except at day 90 when a modest increase in its levels was noted (Fig. 6D).

FIG. 5. Effects of KLF members Sp1 and BTEB3 on ligand-activated PR-B transactivity. A, COS-1 cells were transiently transfected with MMTV-CAT reporter plasmid (5 µg) and pCMV-Sp1 or pCMV5-PR-B expression vectors alone (0.5 µg each) or in combination. CAT activity was determined from cell lysates of transfected cells after 24 h of culture in medium containing R5020 (100 nM). Data are presented as LSM ± S.E. from three independent experiments, each performed in triplicate. Means without a common superscript differ significantly (p ≤ 0.05). In the inset, endogenous Sp1 and BTEB1 in COS-1 cells were detected by Western blot analysis of nuclear extracts (100 µg total protein per lane) following protocols described under "Experimental Procedures." B, COS-1 cells were co-transfected with MMTV-CAT reporter plasmid (5 µg) and expression constructs for BTEB1 (pCDNA3-BTEB), BTEB3 (pCDNA3.1-BTEB3) and PR-B (pCMV5-PR-B), alone or in combination, in the presence or absence of R5020. CAT activity in transfected cells is presented as LSM ± S.E. from three independent experiments, each performed in triplicate. *, p ≤ 0.05, significant difference in CAT activity of transfected cells from those transfected with empty vector alone (Control) in the absence of R5020. Means without common superscripts represent significant differences at p < 0.05 among transfected cells treated with R5020. ^ indicates significant differences (p ≤ 0.001) in CAT activities between R5020-treated cells within the same transfected group. C, the experimental protocols followed that of B (above), except that Hec-1-A cells and UF-Luc reporter constructs were used in transfection assays. Luciferase activity was determined from cell lysates of transfected cells, which were cultured for 24 h in the presence or absence of R5020. The designations for levels of significance are as indicated in B.

FIG. 6. Gene expression of PR isoforms A and B and CBP in pregnancy endometrium. Total RNA was isolated from pregnant pig endometrium (n = 3 animals/pregnancy (Px) day) and analyzed for expression of the indicated mRNAs (A, PR A/B; B, PR-B; C, CBP) by RT-PCR, as described under "Experimental Procedures." Each point represents the mean (LSM ± S.E.) of the ethidium bromide or hybridization intensities of the generated PCR products after normalizing to that of β2-microglobulin, as a function of pregnancy day. * indicates significant difference (p < 0.05) from the values obtained at pregnancy day 12. D, nuclear extracts (100 µg protein per lane) from pig endometrial tissues isolated at the indicated pregnancy days were analyzed for the presence of immunoreactive PR-A and PR-B proteins by Western blot as described previously (24). Each lane represents a different animal as source of nuclear extract. Results are representative of two experiments, with each experiment using different pairs of animals at the indicated pregnancy days. The migration position of the 100 kDa molecular mass marker is shown.
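The semi-quantitative readout behind Fig. 6 (band intensity of each target normalized to β2-microglobulin from the same sample, then compared with day 12) can be mimicked with a short script. The intensities below are invented placeholders, not densitometry values from the paper.

```python
# Sketch: normalize each target band to the beta-2-microglobulin band from
# the same animal, average per pregnancy day, and express relative to day 12.
from collections import defaultdict
from statistics import mean

# (pregnancy_day, target_band_intensity, b2m_band_intensity) per animal
scans = [
    (12, 880.0, 400.0), (12, 910.0, 420.0), (12, 850.0, 390.0),
    (30, 310.0, 410.0), (30, 290.0, 400.0), (30, 330.0, 415.0),
]

by_day = defaultdict(list)
for day, target, b2m in scans:
    by_day[day].append(target / b2m)  # normalized expression

day12 = mean(by_day[12])
for day in sorted(by_day):
    ratio = mean(by_day[day])
    print(f"day {day}: normalized mean = {ratio:.2f} ({ratio / day12:.0%} of day 12)")
```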
Interaction of PR and BTEB1 in Other Tissue and Cell Contexts-PR is a key regulator of diverse events in reproduction and in breast cell proliferation and differentiation (1,40). The interactions of PR and BTEB1 within the context of these physiological events were evaluated using mouse uterus at peri-implantation and two human mammary carcinoma cell lines, MCF-7 and T47D. The endogenous BTEB1 expression in these tissues or cell lines was initially determined by Western blot analysis. BTEB1 protein was expressed in the mouse uterus on days 1-8 of pregnancy (Fig. 7A), although the levels did not vary considerably during this period, consistent with the previous report for the pig uterus (24,38). Similarly, T47D cells had relatively abundant expression of BTEB1 protein, which was higher than that for MCF-7 cells (Fig. 7A). Nuclear extracts were prepared from early pregnancy mouse uterus and from T47D cells treated with R5020 (100 nM) for 48 h, immunoprecipitated with anti-BTEB1 antibody (2 µg), and then immunoblotted with a specific anti-PR antiserum that recognized both PR-A and PR-B isoforms. Results indicate that although T47D cells (Fig. 7A) and peri-implantation mouse uterus (41,42) express both PR isoforms, only PR-B (~110 kDa) co-immunoprecipitated with endogenous BTEB1 in these systems, consistent with the results obtained for the pig pregnancy uterus (Fig. 7B) (24). Parallel immunoprecipitations carried out with normal rabbit serum IgG did not detect any immunoreactive band corresponding to PR-A or -B in Western blots with anti-PR (data not shown). These data indicate that in P-responsive tissues or cells that endogenously express BTEB1 and both PR isoforms, direct physical association of BTEB1 with ligand-bound PR-B occurs preferentially, over that with PR-A.
To demonstrate the functional relevance of BTEB1 and PR interactions within the context of an endogenous gene that responds to both PR- and BTEB1-mediated transactivation, the expression of cyclin D1 was examined in the MCF-7 Tet-On clonal derivative line B5, whose endogenous expression of BTEB1 was increased in response to the tetracycline homolog doxycycline (Dox). The parental MCF-7 cell line from which B5 was derived is P-responsive and expresses low levels of BTEB1 (Fig. 7A); hence, it is ideal for evaluating PR-mediated transcriptional activity as a function of cellular BTEB1 content. The B5 clonal line was grown in the absence or presence of Dox (2 µg/ml) for 24 or 48 h, and Dox-treated and control (vehicle alone) cells were then evaluated for cyclin D1 and β-actin gene expression levels by semi-quantitative RT-PCR. Induction by Dox of BTEB1 protein levels (approximately 2-fold, as confirmed by Western blots; data not shown) increased cyclin D1 gene expression within 24 h, and this increase was maintained in cells treated for 48 h (Fig. 8A). By contrast, a similar induction in β-actin gene expression was not observed in these cells (Fig. 8A), consistent with previous findings in a human endometrial carcinoma clonal line that overexpressed BTEB1 (43). In a control line C3, a clonal line derived from MCF-7 Tet-On cells that were stably transfected only with the empty expression vector (pTRE-6XHN), treatment with Dox under the same conditions did not result in induction of cyclin D1 gene expression (data not shown).
To examine P effects on cyclin D1 gene expression in Dox-treated cells, B5 cells treated with ethanol (vehicle), Dox (2 µg/ml), R5020 (100 nM), or a combination of Dox and R5020 for 24 h were evaluated by semi-quantitative RT-PCR. Cyclin D1 gene expression was induced and repressed, respectively, in Dox- and R5020-treated cells, relative to those of control cells (Fig. 8B). In cells treated with both R5020 and Dox, the expression of cyclin D1 was further diminished from that of cells treated only with R5020. Thus, BTEB1 enhanced the repressive effects of R5020 on cyclin D1 gene expression, suggesting functional relevance of PR/BTEB1 interaction in the regulation of breast cell proliferation.

DISCUSSION

Here we show that the KLF family member KLF9/BTEB1 acts as a positive or negative modulator of PR-B transactivity, depending on whether PR-B exists as a homodimer or a heterodimer with PR-A. In addition, we found that BTEB1, while having no effect on the transcriptional activity of the PR-A homodimer, functions as an activator of unliganded PR-A transactivity in the presence of the transcriptional integrator CBP. Moreover, BTEB3 and, to a much lesser extent, Sp1 mimicked the modulatory role of BTEB1 on ligand-dependent PR-B transcriptional activity, irrespective of gene promoter context. Further, we identified CBP as a transcriptional activator of ligand-bound PR-B, but only in the presence of BTEB1. Finally, we determined that the functional interactions of BTEB1 and PR-B noted here by transient transfection assays may be biologically relevant in the pregnancy endometrium of two species (mouse and pig) and to events that regulate breast cell proliferation, physiological conditions characterized by high P levels. Taken together, these results define BTEB1 as an important determinant of the cellular response to P.

FIG. 7. Co-immunoprecipitation of PR-B and BTEB1 in mouse uterus and in T47D mammary epithelial cells. A, whole uteri from mice at the indicated pregnancy days were isolated. T47D and MCF-7 cells were grown in RPMI 1640 containing bovine insulin (0.2 IU/ml) and 10% FBS and in DMEM containing 10% FBS, respectively, and were harvested upon reaching near confluence. Nuclear extracts were prepared, and samples (100 µg protein) were analyzed in Western blots using anti-rat BTEB1 antibody, as described under "Experimental Procedures." The migration positions of immunoreactive PR-B and PR-A were determined from immunoblot analysis of nuclear extracts prepared from T47D cells (100 µg total protein) using anti-PR antiserum. B, BTEB1 was immunoprecipitated from nuclear extracts (600 µg per sample) prepared from mouse uteri at the indicated pregnancy days, from T47D cells pretreated for 48 h with 100 nM R5020, and from pregnant pig (days 12 and 30) uterine endometrium, with anti-rat BTEB1 antibody, followed by protein A-Sepharose. The precipitates were subjected to SDS-PAGE and analyzed by immunoblotting with anti-PR antibody. Results are representative of two separate experiments.
The PR in humans and rodents, and as reported here also in pigs, has two well characterized isoforms, which are produced from a single gene either by transcription from two distinct promoters or by translation initiation from two alternative AUG sites (6, 7). Studies in mice null for both or either isoform demonstrated distinct as well as overlapping biological functions for each in a target tissue-specific manner (40,41,44), consistent with results obtained from analyses of promoter transactivation in transiently transfected cells (45,46) as well as of cell phenotypes upon over-expression of either isoform (47–49). In general, PR-A is a weaker transcriptional activator than is PR-B (8, 9) and indeed can suppress PR-B- as well as ER- and androgen receptor-mediated transactivation (10–12, 16); exhibits ligand-independent transcriptional activity in some contexts (50,51); and demonstrates altered expression levels relative to those of PR-B in normal versus transformed cells (14,52). In the present study, we provide evidence to indicate that the relative ability to interact with BTEB1 may serve to distinguish the transcriptional activities of PR-B homodimer from those of PR-A homodimer or PR-A/PR-B heterodimer. First, we demonstrated the lack of functional as well as physical interactions of PR-A homodimer with BTEB1, both of which were demonstrably present with PR-B. Second, we showed that the transactivation function of the PR-A/PR-B heterodimer, which is significantly reduced from that of PR-B homodimer, is further inhibited when BTEB1 is present. Third, we found that the functional interaction of unliganded PR-A or ligand-bound PR-B homodimer with CBP occurred only in the presence of BTEB1. Taken together, these results are consistent with the notion that BTEB1 interaction (or lack thereof) with each PR isoform defines in part the subsequent pathway by which each interacts with other transcriptional components, likely resulting in the modulation of a distinct repertoire of P-regulated genes (29).
One of the questions raised by our study is the exact mechanism by which BTEB1 mediates the selective transcriptional signaling of the two PR isoforms. Although this remains to be clarified, the differential recruitment of CBP/p300 or related transcriptional integrators, co-activators, or co-repressors to the agonist-bound PR may be involved. Based on data presented indicating the lack of a functional interaction between antagonist-bound PR-B and BTEB1 (Fig. 3), consistent with recent studies that focused on the proper conformation of the PR dimer as requisite for subsequent interactions with co-activators (27,35), the effect of BTEB1 in enhancing PR-A inhibition of PR-B transactivity may be a function of a conformation of the PR-A/PR-B heterodimer distinct from that of the PR-B homodimer, likely resulting in different contact points with BTEB1. Prior studies have reported that BTEB1 can contribute to the formation of a higher order "repressome" complex by its ability to bind the co-repressor mSin3b (53) and its ability to physically interact with the related Sin3a (54). Thus, the conformation of the PR-A/PR-B heterodimer might permit the subsequent recruitment of a co-repressor by BTEB1, in lieu of transcriptional co-activators or integrators such as CBP. However, we could not detect the PR-A isoform when endogenous protein complexes associated with BTEB1 in intact cells (P-treated T47D) and tissues (pregnancy uterus) were immunoprecipitated with anti-BTEB1 antibody and blotted with anti-PR antiserum that recognized both PR isoforms, suggesting that BTEB1 does not physically associate with PR-A in these contexts. Alternatively, sequestration of co-activators that normally bind to the heterodimer might be responsible for the negative effect of BTEB1, independent of its physical interaction with the complex.
It was recently reported (36) that the synergistic effects of SRC-1 and p300 in enhancing PR-dependent transactivation require the ordered recruitment by PR of SRC-1, followed by p300. Our present findings that document the selective physical (and functional) interactions between PR-B and BTEB1 (but not between PR-B and CBP) in a number of physiological contexts, coupled with the recent report (55) that KLF13/BTEB3, the KLF family member with the greatest homology to KLF9/BTEB1, interacts with p300/CBP, are consistent with the notion of a similar sequential order of recruitment of BTEB1 and then CBP by PR-B, which does not necessarily occur with the PR-A homodimer. We cannot extrapolate the present data to infer the hierarchy in the recruitment by PR-B of BTEB1 relative to SRC-1, or the extent of BTEB1 contribution to the formation of a functional transcriptional preinitiation complex in the presence of SRC-1. Moreover, it is possible that BTEB1 might function independent of SRC-1, in a manner similar to that recently delineated for a novel PR-interacting protein, jun dimerization protein 2, which induces P-dependent PR-mediated transactivation by interacting with the AF-1 rather than the SRC-1-interacting AF-2 region of the PR molecule (56).

FIG. 8. Effects of BTEB1 and P/PR on cyclin D1 gene expression in MCF-7 cells. A, the MCF-7 Tet-On derivative clonal line B5 was seeded in DMEM supplemented with 10% TFBS and G418 (0.1 µg/ml) and treated for either 24 or 48 h with Dox (DOX, 2 µg/ml) or ethanol (vehicle). Total RNAs were isolated and analyzed for cyclin D1 or β-actin gene expression by RT-PCR, as described under "Experimental Procedures." The data are representative of four independent experiments. B, the B5 clonal cells were seeded in DMEM supplemented with 10% TFBS and G418, and upon reaching ~80% confluence, were transferred to phenol red-free medium containing 0.1% charcoal-stripped TFBS. After 24 h, cells were replenished with fresh medium containing 10% charcoal-stripped TFBS and either vehicle (C, ethanol), Dox (D, 2 µg/ml), R5020 (P, 100 nM), or a combination of Dox (2 µg/ml) and R5020 (100 nM) (D/P). Cells were harvested 24 h later, and total RNAs were processed for analysis of cyclin D1 gene expression by RT-PCR. A representative ethidium bromide-stained gel is shown. The bar graphs (least-square means ± S.E.) represent the intensities of the ethidium bromide-stained PCR fragments as quantified by microdensitometry from two independent experiments, with each experiment carried out in duplicate. * indicates significant difference at p ≤ 0.05 from control (C) values; ^ indicates significant difference at p ≤ 0.05 between R5020-treated (P) and R5020 + Dox-treated (D/P) cells.
Recently, BTEB1 was identified as a P-induced gene in a screen of T47D mammary epithelial cells expressing both PR-A and PR-B isoforms (29). In uterine epithelial cells, we have shown that BTEB1 up-regulates the expression of cyclin-dependent kinase 2 (43), which others have demonstrated to phosphorylate seven sites within the PR-B receptor in vitro, contributing to its ligand-dependent activation (57). Taken together with our present findings that selective BTEB1/PR-B interactions result in enhanced PR-mediated transactivation whereas that of BTEB1 with the PR-A/PR-B heterodimer results in transrepression, the existence of such an autoregulatory loop may serve to amplify the effects of P to ensure optimal target gene expression in different contexts. In both mouse and pig uterus, BTEB1 appeared to be constitutively expressed at high levels across pregnancy (Refs. 24 and 38 and this study), in contrast to PR-B, suggesting the existence of both PR-B-dependent and -independent functions for this nuclear protein. Preliminary findings from our laboratory utilizing a recently generated BTEB1 null mouse (58) indicated that litter size is compromised in homozygous females relative to control and heterozygous counterparts. Further analysis of this mouse model should delineate important PR-related signaling pathways of BTEB1 in the uterus.
To begin to identify other possible partners of PR-B in P-regulated gene transcription, two other members of the KLF family were evaluated for their ability to modulate ligand-dependent PR-B-mediated transactivity. Sp1 exhibits significant homology with BTEB1 only in the DNA-binding region (21,23), whereas KLF13/BTEB3, also known as FKLF2, has extensive homologies with BTEB1 in primary sequence, including the amino-terminal region, as well as in size (28). Results demonstrating a modest effect of Sp1 on PR-B-mediated transactivation, in contrast to the very robust effect of BTEB3 analogous to that of BTEB1, suggest that although the activity is common to KLF proteins, the distinct contributions of the amino-terminal region are critical to these proteins' interactions. Surprisingly, however, BTEB3 did not transactivate the UF gene promoter that contains G/C-rich sequences, although these motifs were previously shown to bind both Sp1 and BTEB1 with comparable affinity, resulting in an induction of basal UF gene transcription (25,26). BTEB3 has been reported to function as a transactivator or transrepressor, depending on whether it interacts with members of the histone acetyltransferase co-activators such as p300/CBP (55) or the co-repressors histone deacetylase-1 and mSin3A (59). The absence of either type of activity for BTEB3 in the present study suggests that the Hec-1-A cells used for the transient transfection assays may be lacking these nuclear factors or that the promoters tested here do not utilize these complexes. Analysis of the functional domains in BTEB1 that are required for mediating induction of PR-B transactivity will provide the essential amino acid sequences shared by KLF members in this novel mechanism of gene activation by PR.
Pregnancy is a P-dominated physiological state, and the appropriate expression of P-regulated genes, resulting in structural modifications as well as changes in specialized cellular functions of the uterine endometrium, is requisite for both pregnancy initiation and maintenance. Similarly, P plays a pivotal role in mammary gland morphogenesis and development (60), and abnormal mammary gland proliferation leading to carcinogenesis is highly dependent on PR function (61). In the present study, we assessed the potential physiological relevance of the functional protein complex formed by BTEB1 and PR-B/P in the context of the peri-implantation uterus and mammary epithelial cells by: 1) evaluating the preferential ability of endogenous BTEB1 to physically interact with PR-B, relative to PR-A, under conditions of high P; 2) establishing a positive relationship between the expression profiles of PR-B, BTEB1, and CBP across pregnancy; and 3) demonstrating the functional consequence of PR and BTEB1 interaction on the endogenous expression of a P- and BTEB1-regulated gene, cyclin D1 (62,63). The collective results from these studies are congruent with the preferential interaction of BTEB1 with PR-B, support a model where CBP participates in the formation of a multimeric complex with PR-B and BTEB1, and define a role for BTEB1 as a PR-B interacting partner in the control of cell proliferation. In this regard, a previous report (48) demonstrated that in endometrial cells transfected with either PR isoform, PR-B caused a much more dramatic decrease in cell growth than did PR-A. In the context of early pregnancy, the consequence would likely be an endometrium with a differentiated secretory, rather than a proliferative, phenotype, consistent with the requirement for production and secretion of essential proteins or other biomolecules for embryo development and implantation during this time (64). We speculate that the stability of the higher order complex formed by PR-B·CBP·BTEB1, in cooperation with other well established PR co-activators (27), may ultimately guide the biological outcome of PR-mediated signaling. Clearly, direct physical evidence that this complex exists in vivo at defined physiological states will provide additional support for this mechanism. Chromatin immunoprecipitation using specific antibodies to each component will be useful in this regard (65).
The present study was aimed at dissecting the components of the PR signaling mechanism involving KLF family members. The differential effects of BTEB1 on the assembly of a functional complex involving PR isoforms provide a molecular basis in support of the hypothesis that selective utilization by PR of distinct partners may have evolved to provide a highly fine-tuned regulatory mechanism to ensure appropriate progestin action in the uterus, the biological consequence of which is requisite for normal growth and differentiation of this tissue. | 2018-04-03T03:59:02.165Z | 2003-06-13T00:00:00.000 | {
"year": 2003,
"sha1": "506b88c6b4eae6e4692580033237c286cfc603c0",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/278/24/21474.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7914242aca43c1c2663b4a66ffb5f23404720146",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
261629060 | pes2o/s2orc | v3-fos-license | Can We Recognize Skin Adnexal Tumours? Retrospective Evaluation of Clinical and Histopathologic Data in a Tertiary Dermatology Clinic
Background: Clinical diagnosis of skin adnexal tumours (SATs) is challenging. In this study, we aimed to determine the compatibility rate of clinical pre-diagnoses and histopathological diagnoses in histopathologically confirmed cases of SATs examined by dermatologists. Materials and Methods: Histopathologically confirmed cases of SATs in a single-center dermatology clinic during May 2019-May 2023 were retrospectively retrieved. We recorded demographic characteristics, clinical characteristics (elementary lesion type, tumour localization and, when available, dermoscopic features) and clinical pre-diagnoses from patient medical records. Results: A total of 39 SATs from 38 patients (18 female and 20 male) were included in the analysis. All SATs (38/39, 97.4%) were benign except one trichilemmal carcinoma. Lesions were most commonly located in the head and neck region, in 61.5% (n=24) of patients, presenting as nodules (n=21, 53.9%) and papules/plaques (n=18, 46.1%). Dermatoscopic features included linear vessels, structureless white areas, structureless pink/purple areas and blue-gray dots. Clinical pre-diagnoses were discordant in 53.8% (n=21) of cases, where a SAT was not mentioned among one or more pre-diagnoses. The most common erroneous pre-diagnoses were epidermal cyst, nevi and non-melanoma skin cancer. Conclusion: With the exclusion of pilomatricoma, more than half of SATs are difficult to recognize on clinical and dermoscopic examination. Further studies with focus on clinical and dermoscopic differentiation of SATs from the most common pitfall diagnoses are needed.
Introduction
Skin adnexal tumours (SATs) refer to a heterogeneous group of tumours arising from the hair follicles, sebaceous glands and sweat glands [1]. From dermatologists' perspective, SATs are important for several reasons. SATs mostly have a non-specific clinical appearance, presenting frequently as asymptomatic papules or nodules [2,3]. Some SATs may point to underlying genetic conditions [1]. Malignant SATs are rare; however, they might have an aggressive course with local invasion and distant metastasis [2].
In this study, we aimed to determine the compatibility rate of clinical pre-diagnoses and histopathological diagnoses in histopathologically confirmed cases of SATs. Our secondary aim was to document clinical and dermoscopic data of SAT cases.
Materials and Methods

Histopathologically confirmed cases of SATs diagnosed at University of Health Sciences Turkey, Istanbul Sancaktepe Sehit Prof. Dr. Ilhan Varank Training and Research Hospital, during May 2019-May 2023 were retrospectively retrieved. Hematoxylin and eosin-stained slides and immunohistochemical stains were used to establish diagnosis. Non-syndromic SAT cases examined by at least one dermatologist (with/without plastic surgeons, pediatric surgeons, general surgeons) were included in the study. We retrospectively noted demographic characteristics and clinical characteristics, including elementary lesion type, tumour localization and, when available, dermoscopic features. Clinical pre-diagnoses and biopsy technique (punch/excision) were also recorded.
Statistical Analysis
Descriptive statistics were presented with mean and standard deviation values. Categorical variables such as gender, tumour diagnosis, and location were expressed as percentages.
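As a small, hypothetical illustration of this analysis plan (not the authors' actual script, and with made-up records), the same summaries can be produced with pandas:

```python
# Descriptive statistics: mean +/- SD for age, percentages for categories.
import pandas as pd

df = pd.DataFrame({
    "age":      [7, 34, 48, 52, 81],
    "sex":      ["F", "M", "F", "M", "M"],
    "location": ["head/neck", "trunk", "head/neck", "extremity", "head/neck"],
})

print(f"age: {df['age'].mean():.1f} ± {df['age'].std():.1f} years")
for col in ("sex", "location"):
    pct = df[col].value_counts(normalize=True).mul(100).round(1)
    print(pct.astype(str) + "%")
```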
Results
A total of 39 SATs from 38 patients (18 female and 20 male) were included in the analysis. Ages ranged from 7 to 82 years (mean 41±21.5 years), with most patients in their fifth decade. Demographics of the patients can be seen in Table 1.
All but one of the 39 SATs (38/39, 97.4%) were benign; the exception was a trichilemmal carcinoma. Histopathological diagnoses of the SATs and their origin are given in Table 2. Lesion origin was hair follicle in 59% (n=23) and sweat gland in 41% (n=16) of cases. None of the tumours were of sebaceous origin. Histopathologic confirmation was performed with total excision in 84.6% (n=33) and with punch biopsy in 15.4% (n=6) of cases.
Discussion
Thirty-nine histopathologically confirmed SAT cases were evaluated in our dermatology department during a four-year period. These data point to a rare diagnosis; SATs are thus difficult to recognize for both dermatologists and pathologists [2,3]. The vast majority of SATs are benign tumours, with a proportion ranging from 69.41% to nearly 100% in different studies, as observed in our study [2,3,4,5]. Consistent with existing literature, in our cohort SATs were observed in a wide age range [1,2,3,4,6], with nearly 20% of patients being in their 5th decade [3]. Although some data report female preponderance in SAT cases [2,3], which may be attributed to the cosmetically unacceptable nature of the lesions [2], we did not observe a gender predilection in our series, as Pujani et al. [5] and Bartoš [6] also reported, while other authors have reported a male preponderance [4]. The majority of the lesions are typically located in the head and neck region, as this is an area rich in skin appendages [1,2]. In most studies, SATs arising from sweat glands outnumbered tumours with follicular or sebaceous differentiation [1,4,5,7]. However, similar to data from a Slovakian center, in our series follicular tumours were more common [6]. In the present study, the most common type of SAT was pilomatricoma, as in many studies [4,8], followed by proliferating trichilemmal tumour (PTT). PTT was also reported to be frequent among SATs with hair follicle differentiation in another study [2]. Syringomas in our series were equal in number to PTTs. Of note, three out of four syringomas were located on the trunk and extremities, where diagnostic difficulties are even greater. On the other hand, sebaceous SATs were not biopsied or excised in our series. SATs with sebaceous differentiation represented only 5% of a total of 1615 SATs biopsied or excised in the extensive study by Cook et al. [1], where sebaceous nevus was excluded from analysis. Similar to our results, very few cases of sebaceous SATs were reported in some series [2,6,7].
Clinically, SATs present as non-specific asymptomatic papulonodules [2,3]. In addition, reported dermoscopic features of SATs are also mostly non-specific, mimicking melanocytic lesions, non-melanoma skin cancer and other benign cutaneous disorders [9,10]. Consistent with the literature, the dermoscopic features noted in our cohort were non-specific and even misleading. A very recent multi-center study by Longo et al. [11] better characterized the dermoscopic features of trichoepitheliomas. Ivory-white background color, small unfocused vessels and grey-purple structureless areas were the most common dermoscopic features of trichoepitheliomas and trichoblastomas, while ulceration and erosion favored a diagnosis of basal cell carcinoma [11].
In studies from various institutions in different countries, correctness rates of pre-biopsy diagnoses in SATs ranged from 6.4% to 48% [1,2,3,5,12]. The study by Aslan Kayiran et al. [3], conducted in an experienced tertiary dermatology clinic in Turkey, reported a clinicopathologic compatibility rate of 45%, similar to our cohort. This relatively high concordance rate in our study and in the latter study might be attributed to study design, as these were studies conducted by dermatologists. Studies involving SATs pre-diagnosed by all clinicians report lower rates of concordance [1,2,5,12]. Aslan Kayiran et al. [3] noted a higher concordance rate for the two most common subtypes of SAT in their series, which were sebaceous hyperplasia and pilomatricoma (65.2% and 50%, respectively). Diagnostic accuracy for pilomatricoma was also higher (58.3%) in our series. Similar to our series, epidermal cysts and melanocytic and non-melanocytic tumors were among the most common erroneous clinical pre-diagnoses [2,3].
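Operationally, a concordance rate of this kind reduces to counting how often the histopathological diagnosis appears among the clinical pre-diagnoses. The sketch below shows that computation on three invented records; it is illustrative only and does not reproduce the study's data.

```python
# Clinicopathologic concordance: a case is concordant when the final
# histopathological diagnosis was listed among the clinical pre-diagnoses.
records = [
    {"prediagnoses": {"pilomatricoma", "epidermal cyst"},
     "histopathology": "pilomatricoma"},
    {"prediagnoses": {"basal cell carcinoma", "kaposi sarcoma"},
     "histopathology": "trichoblastoma"},
    {"prediagnoses": {"epidermal cyst"},
     "histopathology": "proliferating trichilemmal tumour"},
]

concordant = sum(r["histopathology"] in r["prediagnoses"] for r in records)
print(f"concordance: {concordant}/{len(records)} = {concordant / len(records):.1%}")
```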
Definitive diagnosis of SATs relies on histopathologic examination [3]. As local surgical excision is curative in most cases [2], total excision is preferred over punch biopsies in the diagnostic management of SATs. In our series, the majority of cases were simultaneously diagnosed and treated with total excision.
Study Limitation
Limitations of the present study include its retrospective nature, which restricts available clinical data to medical records written by one examining dermatologist.
Conclusion
With the exclusion of pilomatricoma, more than half of SATs are difficult to recognize on clinical and dermoscopic examination. Atypical localizations of commonly observed SATs also constitute a diagnostic concern. In our study, the most common erroneous clinical pre-diagnoses were epidermoid cysts, nevi and non-melanoma skin cancers. Further studies with focus on clinical and dermoscopic differentiation of SATs from the most common pitfall diagnoses are needed.

Informed Consent: Informed consent was waived due to the retrospective design.
Ethics
Peer-review: Externally and internally peer-reviewed.
Figure 1. Clinical, dermatoscopic and histopathologic photographs of an adnexal tumour (trichoblastoma) with clinical pre-diagnoses of basal cell carcinoma and Kaposi's sarcoma. (a): Clinically, a solitary nodular lesion on the helix of an 81-year-old man was observed. (b): Dermoscopy reveals purple structureless areas, shiny white lines, and blue-gray ovoid nests. (c): Histopathologically, a palisading uniform cellular proliferation with basaloid morphology, present as a dermal tumour nodule without epidermal connection, was diagnosed as trichoblastoma (hematoxylin and eosin, ×200).

Before starting the study, approval was obtained from the Institutional Review Board of University of Health Sciences Turkey, Istanbul Sancaktepe Sehit Prof. Dr. Ilhan Varank Training and Research Hospital (decision no: 113, date: 21.06.2023).
Table 2. Histopathological diagnoses according to tumour origin (columns: Histopathologic diagnosis, Origin, n (%)).
| 2023-09-10T15:33:14.355Z | 2023-09-07T00:00:00.000 | {
"year": 2023,
"sha1": "9f936011aff685f165fc6cee17c82295d3ce3570",
"oa_license": null,
"oa_url": "https://doi.org/10.4274/jtad.galenos.2023.21939",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1c48d6b94152a154ab3ab08f55cd31d0741abe5e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
22051681 | pes2o/s2orc | v3-fos-license | The Effect of Patient Portals on Quality Outcomes and Its Implications to Meaningful Use: A Systematic Review
Background The Health Information Technology for Economic and Clinical Health (HITECH) Act imposes pressure on health care organizations to qualify for "Meaningful Use". It is assumed that portals should increase patient participation in medical decisions, but whether or not the use of portals improves outcomes remains to be seen. Objective The purpose of this systematic review is to outline and summarize study results on the effect of patient portals on quality, or chronic-condition outcomes as defined by the Agency for Healthcare Research and Quality, and its implications for Meaningful Use since the beginning of 2011. This review updates and builds on the work by Ammenwerth, Schnell-Inderst, and Hoerbst. Methods We performed a systematic literature search in PubMed, CINAHL, and Google Scholar. We identified any data-driven study, quantitative or qualitative, that examined a relationship between patient portals, or patient portal features, and outcomes. We also wanted to relate the findings back to Meaningful Use criteria. Over 4000 articles were screened, and 27 were analyzed and summarized for this systematic review. Results We identified 26 studies and 1 review, and we summarized their findings and applicability to our research question. Very few studies associated use of the patient portal, or its features, with improved outcomes; 37% (10/27) of papers reported improvements in medication adherence, disease awareness, self-management of disease, a decrease of office visits, an increase in preventative medicine, and an increase in extended office visits, at the patient's request for additional information. The results also show an increase in quality in terms of patient satisfaction and customer retention, but there are weak results on medical outcomes. Conclusions The results of this review demonstrate that more health care organizations today offer features of a patient portal than in the review published in 2011. Articles reviewed rarely analyzed a full patient portal but instead analyzed features of a portal such as secure messaging, as well as disease management and monitoring. The ability of patients to view their health information electronically meets the intent of Meaningful Use, Stage 2 requirements, but the ability to transmit to a third party was not found in the review.
Monthly no-show rates across all medical clinics in the system were significantly reduced among patients registered for portal use.
Palen et al [11]. Retrospective study of patients >18 years old who had access to a patient portal. Online chronic disease management portals increase patient access to information and engagement in their health care, but improvements in the portal itself may improve usability and reduce attrition.
Delbanco et al [13]. Quasi-experimental design, no randomization (N=105 providers and N=13,564 patients); studied encounters with at least one doctor's note in a primary care setting at 3 large but independent locations. 11,757 of 13,564 patients (86.78%) with notes available opened at least 1 note, but only 5391 (39.74%) completed a post-intervention survey. Surveys reported up to 87% of these patients felt greater control of their care, up to 78% reported greater medication adherence, 36% had privacy concerns, and up to 42% reported sharing notes with others. Volume of electronic messaging between patients and providers increased 5%. Providers reported up to an 8% workload increase to answer patients' questions outside of visits. Of those providers, 21% reported taking additional time to write notes, and 36% reported changing documentation content.
A majority of patients who accessed their doctors' notes and filled out a survey reported positive, clinically relevant benefits and very few concerns about access.
Osborn et al [14]. Mixed methods design of patients with type 2 diabetes (N=75) to study quantitative data to identify differences between portal and non-portal users.
81% (61/75) attended a focus group and/or completed a survey. Portal users tended to be Caucasian/white, have higher income, be privately insured, have more education, and better A1C test results. All results were statistically significant (P<.05) except for education, which was mildly significant (P=.05).
Patients noted a preference to use the portal to manage prescriptions and medication adherence. More frequent users of the portal were more likely to have better glycemic control.
52% were registered users; among those, 36% used SM. Greater levels of trust were associated with white, Latino, and older patients.
Patient-provider relationships encourage portal engagement.
Wade-Vuturo et al [16]. Mixed-methods, survey tool, non-experimental, qualitative study (N=15). Participants' mean age was 57.1 years; 65% were female, 76% Caucasian/white, and 20% African American/black. SM was reported as enhancing patient satisfaction, efficiency and quality of face-to-face visits, and access to clinical care outside traditional face-to-face visits. Greater SM use was significantly associated with patients' glycemic control (P=.29). Portal use enabled collaborative decision making about diabetes management.

Nazi et al [17]. Web study (N=688) of Veterans. 84% reported positive satisfaction, agreeing that the information and services were helpful. 66% agreed that the portal increased quality of care, and 90% agreed that they would recommend the portal to another veteran.
Veterans demonstrated a high rate of motivation to access their own health information and viewed such action as an increase in quality of care.
Ketterer T et al [18]. Cross-sectional, retrospective analysis (N=84,105). 38% (31,360/84,105) enrolled, and 26% (21,867) activated the account. Portal enrollment was lower for adolescents, Medicaid recipients, low-income families, Asian or other race, and Hispanic ethnicity, and higher for patients with more office encounters and presence of autism on the problem list (95% CI).
Sociodemographic disparities exist in portal enrollment/activation in primary care pediatrics. Proximity had a negative effect, number of office encounters and comorbidity had a positive association on portal enrollment.
Zarcadoolas et al [19]. Qualitative study of four focus groups (N=28) with low-education-level, English-speaking consumers. Portal users felt a high level of patient engagement/empowerment. These users extended office visits to ask additional questions of their provider. There was also an increase in preventative and overall health maintenance.
Portal users demonstrated enthusiasm about the increased utility and value of their medical encounter when augmented with the portal.
Shimada et al [20]. Cross-sectional, retrospective cohort study of 32 Veterans Affairs (VA) facilities implementing SM in primary care. Technical assistance (coordinators), computer resources, and leadership support for coordinators were positively associated with increased SM adoption rates. Higher SM use was associated with lower urgent care (UC) rates, and early adopters of SM achieved a greater decrease in UC utilization over time than later adopters.
A path of associations linking SM and reductions in UC utilization exists.
Schprechman et al [21]. Cross-sectional study of older adults aged 50-85 years (N=119)
Internet and email use were reported in 78.2% and 71.4% of this sample of patients with heart failure (HF), respectively. Controlling for age and education, higher health literacy predicted email, but not Internet use. Global cognitive function predicted email (P<.001) but not Internet use. Only 45% used the Internet to obtain information on HF.
The majority of HF patients use the Internet and email, but poor health literacy and cognitive impairment may prevent some patients from accessing these resources. (1) Enrollees felt they did not need the patient portal and had sufficient access to information elsewhere,(2) they preferred other types of communication such as phone or face-to-face, (3) they were unable to log in from lack of technical support. Patients were satisfied with the opportunity to send messages to health care providers through the portal, even if they did not use the Portals should be offered to the patients at an appropriate time when the patient needs the service and when they are receptive to information about the service.
feature.
Neuner et al [24]. Mixed-methods study of 2 independent data sources to examine patient enrollment in local portal in both primary care and specialty care settings (N=124,379) for 2010-2012 13.2% (16,418/124,379) of patients enrolled in the portal in 2010, and by 2012 enrollment increased to 23.1% (28,731). Median patient access of portal per year was 14 times, with a range of 1-660. Over 93% accessed the system at least twice, 78% accessed the system 4 or more times, and 15.3% accessed the system 50 or more times per year.
Portal users were slightly older and more likely to be female (P<.001 The mean quarterly number of primary care contacts increased by 28% between the pre-PCMH baseline and the post implementation periods, largely due by increased SM, quarterly office visits declined by 8%, 10% increases in SM threads, and phone encounters were associated with increases of 1.25% (95% CI, P<.001) and 2.74% increase in office visits (95% CI) in office visits, respectively.
Before and after a medical home redesign, proportional increases in SM and phone encounters were associated with additional primary care office visits for individuals with diabetes.
"year": 2015,
"sha1": "9891c114857ab4f926eee069fe1732dd3a47a873",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.2196/jmir.3171",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9891c114857ab4f926eee069fe1732dd3a47a873",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17703242 | pes2o/s2orc | v3-fos-license | Pharmacogenomics in Pediatric Patients: Towards Personalized Medicine
It is well known that drug responses differ among patients with regard to dose requirements, efficacy, and adverse drug reactions (ADRs). The differences in drug responses are partially explained by genetic variation. This paper highlights some examples of areas in which the different responses (dose, efficacy, and ADRs) are studied in children, including cancer (cisplatin), thrombosis (vitamin K antagonists), and asthma (long-acting β2 agonists). For childhood cancer, the replication of data is challenging due to a high heterogeneity in study populations, which is mostly due to all the different treatment protocols. For example, the replication cohorts of the association of variants in TPMT and COMT with cisplatin-induced ototoxicity gave conflicting results, possibly as a result of this heterogeneity. For the vitamin K antagonists, the evidence of the association between variants in VKORC1 and CYP2C9 and the dose is clear. Genetic dosing models have been developed, but the implementation is held back by the impossibility of conducting a randomized controlled trial with such a small and diverse population. For the long-acting β2 agonists, there is enough evidence for the association between variant ADRB2 Arg16 and treatment response to start clinical trials to assess clinical value and cost effectiveness of genotyping. However, further research is still needed to define the different asthma phenotypes to study associations in comparable cohorts. These examples show the challenges which are encountered in pediatric pharmacogenomic studies. They also display the importance of collaborations to obtain good quality evidence for the implementation of genetic testing in clinical practice to optimize and personalize treatment.
Key Points
Implementation of pharmacogenomic testing in pediatric care is still scarce.
To enable implementation of pharmacogenomic testing in clinical practice, consensus should be reached on the criteria that should be met before implementation.
Heterogeneity of study populations is an important factor for impeding replication of pharmacogenomic associations.
Introduction
Individuals with the same disease will often respond differently to the same drug. Some individuals will have a good response to the drug, while others experience little or no effect. Some patients will experience severe adverse drug reactions (ADRs), whereas others will not. In addition, some patients require a higher or lower dose compared with the standard dose defined in clinical trials to benefit optimally from the drug. In other words, personalizing drug treatment is required. Pharmacogenomics studies the relationship between genetic variation and drug responses. Single nucleotide polymorphisms (SNPs) can lead to changes in the function or the amount of proteins (e.g., enzymes, receptors, ion channels) and therefore in the drug response [1]. Pharmacogenomics covers associations with both germline and somatic mutations. In this review only the influence of germline mutations will be discussed.
The first pharmacogenomic studies were designed as candidate gene studies. The candidate genes are selected based on potential involvement with drug response, such as genes coding for metabolic enzymes and drug target proteins. However, the cause of a different drug response is not always in the potentially involved genes, which makes it difficult to choose candidate genes.
The design of a genome-wide association study (GWAS) is more data driven than the hypothesis-driven candidate gene association studies. In a GWAS, the whole genome of participants is screened for all frequently occurring SNPs. With a GWAS, previously unknown associations between a specific SNP and a certain drug response can be found, in addition to associations with SNPs in candidate genes. The newest innovations in pharmacogenomics, enabled by the rapid improvement of genomics technology, are phenome-wide association studies (PheWAS), whole exome sequencing (WES), and whole genome sequencing (WGS), which bring new opportunities to study the association between response and genetic variants [2].
Most pharmacogenomic research has been performed in adults. However, it is important to realize that findings in the adult population cannot be applied directly to the pediatric population [3]. Processes and systems (such as the metabolic system, hemostasis, and drug biotransformation) are still under development in children [3,4]. Therefore, drugs may act differently in children compared with adults. Although genetic variations remain stable, their contribution to treatment heterogeneity may be different at a younger age. In this article, we highlight examples of pharmacogenomic studies in pediatric patients. Pharmacogenomic research in childhood cancer is, apart from the focus on tumor genetics, focused on predicting which patients will suffer from severe ADRs. In the treatment of thrombosis, studies have focused on predicting the right anticoagulant dose for each pediatric patient; and in asthma the main issue is predicting the efficacy of a bronchodilator drug. These are representative and extensively studied examples of the aforementioned types of differences in drug responses (ADRs, dose, and efficacy). These examples will give an insight into the challenges of pharmacogenomic research in children, but will also address the potential of pharmacogenomics to optimize and personalize treatment for children.
Childhood Cancer
In 2012, the worldwide estimated number of children under the age of 15 years diagnosed with cancer was 163,300 [5]. The mean 5-year survival rates in the US are just above 80 %, but this largely depends on the type of cancer [6]. With the increase in survival rates, the ADRs, which can cause lifelong damage, are becoming increasingly important during and after treatment. Anticancer drugs that are well known for their ADRs are cisplatin (ototoxicity, renal toxicity), anthracyclines (cardiotoxicity), and vincristine (neurotoxicity). These ADRs can have a large impact on quality of life. Many pharmacogenomic studies in the field of childhood cancer have focused on the toxicity of treatment. However, clinical implementation of pharmacogenomic testing is still pending in many centers because of inconclusive study results or uncertainty about whether and for which patients implementation is clinically relevant. We will discuss cisplatin as an example. This drug has been associated with a risk of ototoxicity, which can be very impairing, especially for children who are developing their speech skills [7]. Several candidate gene studies have been conducted to investigate specific SNPs which are associated with an increased or decreased risk of ototoxicity. Variations in the following genes were found to influence the risk of cisplatin-induced ototoxicity: TPMT, COMT, ABCC3, SOD2, GSTT1*1, GSTP1, XPC, LRP2, Otos, SLC22A2, CTR1 and GSTM3*B [8-18]. However, a major issue is the reproducibility of these initial findings. Several groups have conducted relatively small candidate gene studies on the association between ototoxicity and variations in COMT and TPMT in different cohorts [9, 19-21]. The cohorts are very heterogeneous and some lack statistical power (Table 1 gives an overview of the characteristics of the cisplatin-induced ototoxicity studies of Ross et al. [8], Pussegoda et al. [9], Yang et al. [19], Lanvers-Kaminsky et al. [21], Hagleitner et al. [20] and Xu et al. [22]). For TPMT, the association was replicated in two similar cohorts [8,9]. One small Spanish cohort (n = 38) also showed an association for TPMT; however, because of the lack of power it was not statistically significant (rs12201199, odds ratio (OR) 6.79, 95 % confidence interval (CI) 0.34-13.71) [20]. The association with COMT was replicated twice [8,20], but in one of the studies the association was in the opposite direction [20]. Another problem with COMT and TPMT is the lack of information on the mechanism by which these two enzymes are involved in cisplatin-induced ototoxicity.
Hagleitner et al. conducted a meta-analysis for COMT and TPMT in 2014 and found only a small association for COMT (rs4646316) (OR 1.52, 95 % CI 1.16-1.99, p = 0.003). For the analyzed TPMT mutations there was a trend towards increased risk (rs12201199; OR 2.15, 95 % CI 1.16-1.99, p = 0.003) [20]. However, it is debatable whether these results give an accurate effect estimate, because of the heterogeneity of the populations in the included studies.
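As a consistency check on such summary statistics, a two-sided p-value can be recovered from an odds ratio and its 95 % CI, assuming the interval was constructed on the log-odds scale. A minimal Python sketch (not part of the original analysis):

```python
from math import log
from scipy.stats import norm

def p_from_or_ci(odds_ratio, lo, hi, z_crit=1.96):
    """Recover a two-sided p-value from an OR and its 95 % CI.

    Assumes the CI was built on the log-odds scale, so the standard
    error is the CI width in log units divided by 2 * z_crit.
    """
    se = (log(hi) - log(lo)) / (2 * z_crit)
    z = log(odds_ratio) / se
    return 2 * norm.sf(abs(z))

# COMT rs4646316 from the meta-analysis: OR 1.52 (95 % CI 1.16-1.99)
print(round(p_from_or_ci(1.52, 1.16, 1.99), 4))  # ~0.002, consistent with p = 0.003
```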
Recently, a GWAS failed to find any association for TPMT, COMT or any of the other genes studied in the candidate gene studies [22]. The GWAS was conducted in 238 pediatric patients with newly diagnosed brain tumors. A strong association was found for a mutation in ACYP2 (rs1872328, hazard ratio 4.5, 95 % CI 2.63-7.69, p = 3.9 × 10⁻⁸), which was replicated in a new cohort of 68 patients that was treated almost identically to the discovery cohort. In the discovery and replication cohorts, 100 % of the patients who carried at least one mutated allele developed ototoxicity. Among patients with no mutated allele, still more than 70 % developed ototoxicity [22].
ACYP2 encodes for an acylphosphatase which is, among other places, expressed in the cochlea [23]. The exact mechanism by which this mutation in ACYP2 increases the risk of ototoxicity is still unclear.
The problems with replication of the results found in the different candidate gene studies have led to an extensive discussion about the underlying reasons [24-28]. The replication issues could be largely due to small sample sizes and differences in the study populations (age, ethnicity, and type of cancer), scoring of ototoxicity, length of follow-up, cumulative dose of cisplatin, and concurrent treatment (e.g., use of otoprotectants and craniospinal irradiation) (Table 1). Heterogeneity also existed within the studies, such as different treatment regimens and types of cancer. These heterogeneities complicate replication of the results, and it is uncertain whether the associations found are true or only a result of confounding or bias. At present, only TPMT is mentioned in the label information of cisplatin as a possible contributor to ototoxicity, but no clinical recommendations are provided [29].
From these studies we can conclude that the mutation in ACYP2 seems to be an important predictor of ototoxicity in children, but that it explains only a small part (12.4 %) of ototoxicity [22]. More research is needed to replicate these findings, and to find practical solutions for the implementation of ACYP2 testing in clinical practice. Studies of the mechanism of TPMT and COMT involvement in cisplatin-induced ototoxicity and independent replication in similar cohorts are required. For some patients the toxicity is unacceptable (e.g., ototoxicity for a patient who is blind). In such patients, decisions on therapy will be influenced by genetic polymorphisms that enhance the risk of developing toxicity. With the identification of significant risk variants, patients who are at an increased risk can be identified and might be given alternative treatments and/or undergo closer monitoring during treatment and the follow-up period. Adapting complex treatment regimens in an attempt to reduce side effects is complicated since efficacy must remain intact. Different approaches may be explored: identifying a protecting agent against ototoxicity is an attractive option. The knowledge gained from the identification of variants that influence the risk of cisplatin-induced ototoxicity can be used to identify new drug targets for protecting agents. This research is promising and will eventually lead to more personalized anticancer treatment.
Thrombosis
In recent years there has been an increasing incidence of thrombosis in children [30], mainly due to intensified medical treatments and increased awareness of the risk of thrombosis. Currently, low molecular weight heparins (LMWHs) and vitamin K antagonists (VKAs) are the only two drugs approved for the treatment or prevention of thrombosis in pediatric patients. The relatively new direct oral anticoagulants (DOACs) are currently being tested in pediatric patients. In 2018, the first phase III studies with DOACs in pediatric patients will be completed. At this time, the VKAs are the only oral drugs which are approved for the treatment of thrombosis in pediatric patients.
VKAs inhibit the action of vitamin K epoxide reductase (VKORC1), which leads to lower levels of active vitamin K-dependent clotting factors, and thus to inhibition of the coagulation cascade [31]. In clinical practice, a large variability in the dose requirement of VKAs is seen [32]. This is problematic because VKAs also have a narrow therapeutic window. Dosing all patients equally leads to an increased risk of bleeding and thrombotic events. In children, this problem is even more compelling because of the developing hemostatic system and the growing body. In the last decade, many studies have been carried out to explain the large interindividual dose variability in children and adults [33,34]. In addition to clinical factors such as age, weight, and gender, genetic factors play an important role [34]. Mutations in VKORC1 lead to less enzyme production and to a lower dose requirement. Loss-of-function mutations in CYP2C9 (*2 and *3) lead to a decrease in enzyme activity. The S-isomer of VKAs is almost completely metabolized by CYP2C9; therefore, these mutations lead to a decrease in the required dose [31,35]. To a lesser extent, mutations in CYP4F2 and CYP2C18 have also been found to (possibly) contribute to the dose variability [33, 35-37].
Seven regression dosing models have been constructed for pediatric patients, almost all for warfarin [38-44]. No pediatric dosing model is available for acenocoumarol. What these pediatric models have in common is that factors related to ontogeny (i.e., age, weight, and height) explain roughly one-third of the dosing variability. The variability explained by the CYP2C9 and VKORC1 genotypes fluctuates between the different models. The CYP2C9 genotype explained 0.4 % [38] to 12.8 % [39] of the variability in dose requirement, and the VKORC1 genotype 3.7 % [38] to 47 % [41]. One possible explanation is the small sample size of the cohorts, ranging from 37 to 120 children. Only two studies included at least 100 patients [39,44].
Also, two pharmacokinetic/pharmacodynamic (PK/PD) dosing models have been built for pediatric patients [45,46]. Hamberg and Wadelius evaluated the regression and PK/PD models in a retrospective pediatric cohort [34]. Of the evaluated models, the PK/PD model of Hamberg et al. [46] performed best with regard to the proportion of patients for whom the predicted maintenance dose was within ±20 % of the observed dose. Hamberg et al. developed a tool for their model which can run on every computer without licensing a program and is easy to use [47]. The best-performing regression model incorporates the CYP2C9 and VKORC1 genotypes, height, and indication and can be used with a simple pocket calculator [39].
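To make the structure of such regression dosing models concrete, the sketch below mirrors the predictor set named above (CYP2C9 genotype, VKORC1 genotype, height, and indication) in a log-linear form. All coefficients are hypothetical placeholders chosen for illustration, not values from any published model:

```python
import numpy as np

# Hypothetical coefficients -- illustrative only, NOT a published model.
INTERCEPT = 0.5          # baseline log(mg/day), assumed
B_HEIGHT = 0.010         # per cm of height, assumed
B_VKORC1 = -0.40         # per VKORC1 variant allele (lower dose), assumed
B_CYP2C9 = -0.30         # per CYP2C9 loss-of-function allele (*2/*3), assumed
B_INDICATION = 0.15      # indication requiring a higher INR target, assumed

def predicted_daily_dose(height_cm, vkorc1_alleles, cyp2c9_lof_alleles,
                         high_target_indication):
    """Return a predicted VKA maintenance dose in mg/day.

    A minimal sketch of a log-linear dosing model; real models are fit
    to cohort data and report the variance each predictor explains.
    """
    log_dose = (INTERCEPT
                + B_HEIGHT * height_cm
                + B_VKORC1 * vkorc1_alleles
                + B_CYP2C9 * cyp2c9_lof_alleles
                + B_INDICATION * int(high_target_indication))
    return float(np.exp(log_dose))

# 140 cm child, one VKORC1 variant allele, CYP2C9 *1/*1, standard target
print(round(predicted_daily_dose(140, 1, 0, False), 2))  # ~4.5 mg/day here
```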
Until now, no randomized controlled trial (RCT) has been conducted with a regression dosing model in children. One trial has just started, in which a PK/PD dosing model is tested against standard dosing [48]. In adults, 12 RCTs have been carried out to evaluate the dosing algorithms [49]. These trials gave conflicting results with regard to improving the time within therapeutic range (TTR) and outcomes such as bleeding and thromboembolic complications. In a recent meta-analysis, a statistically significant increase in TTR and decrease in minor bleeding was found when comparing fixed standard dosing with genotype-guided dosing [49].
Currently, the American College of Chest Physicians (ACCP) guideline for antithrombotic therapy and prevention of thrombosis does not recommend genotyping before starting VKAs in adults [50]. The FDA follows this recommendation, while still including information on the impact of pharmacogenomics in the drug label [51]. When genetic information is available, the physician can use this to adjust the dose. In pediatric patients, however, this adult-derived information should not be used. Studies showed that the adult models overestimate the VKA dose in children [39,42]. Therefore, pediatric models should be used when genetic information is available. An RCT examining a pediatric regression model does not seem to be a realistic option for determining the usefulness of genotyping before starting a VKA. The numbers of children using these drugs are very low, and therefore such a trial would be very costly and time consuming. We think the pediatric algorithms should be implemented and evaluated in a clinical setting. Using a dosing model can only increase the quality of treatment. There are no risks involved, because adjustments of the dose can still be made based on the International Normalized Ratio (INR). The costs of using a model consist only of the price of genetic testing; these costs are already quite reasonable compared with other medical tests, and will probably decrease further over time. Genotyping might even prove cost effective: when INR stability increases, fewer INR measurements are likely to be needed, and fewer bleeding and thrombotic events will occur. Evaluations should be carried out during implementation in order to determine whether genetic testing is increasing the quality of treatment and/or lowering the costs.
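For reference, TTR is commonly computed with the Rosendaal method, which assumes the INR changes linearly between consecutive measurements. A minimal sketch with invented INR data:

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Time in therapeutic range by Rosendaal linear interpolation.

    Interpolates the INR linearly between visits and returns the
    fraction of elapsed time spent inside [low, high].
    """
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        steps = 1000  # fine-grained interpolation grid
        for k in range(steps):
            inr = i0 + (i1 - i0) * (k + 0.5) / steps
            if low <= inr <= high:
                in_range += span / steps
    return in_range / total

# Hypothetical follow-up: visit days and measured INR values
print(rosendaal_ttr([0, 7, 14, 28], [1.6, 2.4, 3.4, 2.6]))  # ~0.5 here
```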
Asthma
Asthma is the most common chronic disease in children. Asthma is treated with a stepwise approach [52]. Short-acting β2 agonists (SABA) as needed are prescribed initially to relieve symptoms of bronchoconstriction. Inhaled corticosteroids (ICS) are added to the regimen if asthma symptoms persist, to reduce the airway inflammation, and are considered to be the cornerstone of asthma treatment [52]. Additionally, long-acting β2 agonists (LABA) or leukotriene receptor antagonists (LTRA) can be added if a child's asthma remains insufficiently controlled. Although asthma treatment is effective in many patients, there is a large variability in the level of symptom control or lung function improvement. Already more than 15 years ago, Drazen et al. suggested that up to 80 % of the interindividual variation in drug response in asthmatic patients could be due to genetic variations [53]. Since then, candidate gene approaches and a handful of GWAS have described several genetic variants associated with asthma treatment response, yet effect sizes are often small and successful replication remains rare [54-56].
Pharmacogenomics of LABA seems closest to clinical implementation. An SNP of interest (ADRB2 Arg16) has been replicated and prospectively tested, and the risk genotype is relatively frequent within the population. Variation in the gene that encodes the β2 receptor (ADRB2) is associated with LABA response in children [57-59], yet not all studies point in the same direction [60]. Nevertheless, a recent meta-analysis of 4226 children of white Northern European and Latino origin showed that this variant (ADRB2 Arg16) was associated with an increased risk of asthma exacerbation when treated with ICS + LABA (OR 1.52, 95 % CI 1.17-1.99; p = 0.0021) [61]. In addition, further evidence has been provided by a small prospective study of 62 children with the genetic variation randomized to ICS + LABA or ICS + LTRA. The trial showed that children treated in the ICS + LTRA arm had fewer exacerbations (exacerbation score of -0.39, 95 % CI -0.15 to -0.64; p = 0.049) and school absences (difference in scores of 0.40, 95 % CI -0.22 to -0.58; p = 0.005) compared with the group treated with ICS + LABA [62]. Approximately 16 % of children with asthma are homozygous for this variant [57], and may benefit from genotyping before initiation of LABA treatment. Larger trials are necessary to assess the clinical value and cost effectiveness of ADRB2 genotyping.
Defining treatment response in asthma is complicated. Symptoms vary over time, and different dimensions of response (lung function, exacerbations, and symptoms) can be associated with different genetic risk profiles [63]. Furthermore, asthma consists of a heterogeneous population of various distinct phenotypes (e.g., eosinophilic versus neutrophilic asthma), which seem to differ between children and adults. Performing studies in children is therefore of the utmost importance. Recently, the Pharmacogenomics in Childhood Asthma (PiCA) consortium has been formed to bring asthma researchers in this field together to perform meta-analyses in well-defined joint pediatric asthma cohorts [61,64].
Challenges and Future Directions
Although the research field of pediatric pharmacogenomics is rapidly growing, few applications have made it to clinical practice. We have provided examples of three pediatric diseases where pharmacogenomics holds promise to personalize treatment: childhood cancer, thrombosis, and asthma. These examples illustrate that gathering evidence for a pharmacogenomic association in children is challenging. Replication of genetic associations is complicated by heterogeneity in outcome measures and by small study populations that differ in ethnicity, disease phenotype, and age, which leads to underpowered, biased studies. To overcome this obstacle, collaborations should be undertaken to enlarge the number of patients studied.
More studies have been performed on pharmacogenomic associations in adults, including a couple of RCTs, but unfortunately these results in adults cannot be simply extrapolated to children. Pharmacogenomic studies in pediatric populations remain essential. The therapeutic goal of a certain treatment is often different for adults and children. In addition, differences in co-medication, diet, and duration of drug use can also lead to dissimilar results. Before data can be extrapolated to children it should be clear if the association is not influenced by ontogeny. Children not only differ from adults in body size, but also in the dynamic expression of metabolic enzymes, drug transporters, and drug targets [3,65]. Furthermore, the organs involved in drug metabolism and elimination (liver and kidney) are under the influence of developmental processes during childhood [3]. Besides these physical differences, the disease can also manifest itself differently in children, as seen, for example, in asthma [66]. These differences make it hard to predict the PK/PD of a drug in children. The drug response can differ between children, but also within one child over time. Therefore, the extrapolation of results between children of different ages should be done with the same caution as the extrapolation of adult data to children. Pediatric patients span a period from birth to adulthood by most definitions. An RCT is still considered the gold standard to collect evidence. However, performing RCTs in children is complicated by the large sample sizes which are required, especially in rare diseases such as cancer and thrombosis. For example, in the case of VKAs, obtaining the required sample size is a large problem. For the EU-PACT (European Pharmacogenetics of Anticoagulant Therapy) trial in adults, investigating the effectiveness of the pharmacogenomic dosing models for acenocoumarol, phenprocoumon, and warfarin, the calculated sample size was 400 per VKA [67,68]. To put this in perspective, in the Netherlands, currently only 226 children under the age of 15 years use VKAs [69]. To obtain the number of patients needed, international collaborations are essential. Besides the large sample size, the high costs of an RCT need to be considered. This type of research is usually not in the direct interest of pharmaceutical companies, especially if it concerns off-patent drugs. Therefore, it is difficult to find funding for these kinds of trials, and specific financial or other incentives might be required to bridge this obstacle [70].
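To illustrate the sample-size problem, a conventional power calculation for a two-arm comparison of mean TTR can be sketched as follows. The effect size and standard deviation are assumed for illustration and are not taken from the EU-PACT protocol:

```python
from statsmodels.stats.power import TTestIndPower

# Assumed: detect a 7-percentage-point TTR difference, SD 20 points,
# 80 % power, two-sided alpha = 0.05.
effect_size = 7 / 20  # standardized difference (Cohen's d)
n_per_arm = TTestIndPower().solve_power(effect_size=effect_size,
                                        power=0.80, alpha=0.05)
print(round(n_per_arm))  # ~130 per arm under these assumptions
```

With only 226 eligible children nationally, even this modest scenario would nearly exhaust the available population for a single two-arm trial.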
As stated in the introduction, the improvement of genomics technology creates opportunities to study pharmacogenomics in new ways. The newest is PheWAS, which is the opposite of GWAS. Instead of studying genetic associations with a predefined phenotype, patients with a certain mutation are the starting point to search for the matching phenotype. Other examples are WES/WGS in which all DNA mutations will be considered, in contrast to GWAS, which is directed to known (frequently occurring) SNPs.
No single method is better than the others. Which method or combination of methods is most appropriate depends largely on the research question and situation (e.g., knowledge about the drug mechanism, available budget). Findings of a GWAS, for instance, can subsequently be replicated in a candidate gene study, which requires far fewer patients and is less expensive than an additional GWAS.
The progression from gathering evidence to clinical relevance is not easy. Even when an association is strong, it does not mean that it is clinically relevant. For example, in the case of ACYP2 and ototoxicity, the association was quite strong, but it could still explain only 12.4 % of the ototoxicity cases. The clinical relevance largely depends on the relative frequency of the risk allele in the population of interest, the disease phenotype, the severity of the outcome, and the risk attribution of the risk allele to the outcome. Demonstrating the cost effectiveness of a pharmacogenomic test is ultimately necessary to reach clinical implementation. Even when the costs of genetic testing decline, other costs such as the costs of a possible alternative treatment, the use of protective agents, and/or extra monitoring should be considered.
To be able to proceed with implementation of pharmacogenomic testing in children, consensus should be reached about what evidence is needed to implement a pharmacogenetic test into clinical practice if RCTs are not feasible. Furthermore, in some cases performing an RCT could be considered unethical. An important example of this is the risk of codeine-induced infant mortality based on a CYP2D6 genotype of breastfeeding mothers [71]. This has led to a change in the registration of codeine. Codeine is no longer approved for pediatric use in the EU and is contraindicated in women during breastfeeding [72].
When an RCT is impossible, at least worldwide replication studies are needed to support the generalizability of the association. This is only possible with international collaboration. However, the healthcare systems and availability of treatment options (e.g., differences in authorized VKAs) differ largely between countries and treatment protocols vary between countries, study populations, and over time. This makes finding a comparable replication cohort challenging. Therefore, international treatment harmonization would ease the process of worldwide replication studies.
Strong evidence in adults might support the associations found in pediatric patients. However, because of differences related to ontogeny, adult-derived information should be considered with caution and is not essential. This caution should also be applied when using the dosing guidelines available for adults. As seen in the example for VKAs, using the adult models would lead to an overestimation of the required dose. Pharmacogenomics needs to be considered as valuable information in addition to clinical parameters to guide treatment decisions.
It is important that consensus is reached about the evidence needed for implementation and that healthcare professionals also support these criteria; published, peer-reviewed clinical practice guidelines could be of particular help here. Clinicians need to be appropriately educated on the value of pharmacogenomic testing. Only then will pharmacogenomics be implemented in pediatric clinical practice.
Conclusion
Pharmacogenomics is a promising research field, but it has not yet reached the pediatric clinic. International collaborations are needed to achieve a more structured approach for pharmacogenomic research in children. When heterogeneity is reduced and research groups work together to obtain larger numbers of patients, it is possible to obtain stronger evidence, both qualitatively and quantitatively. The criteria for implementing a pharmacogenomic test without a supporting pediatric RCT should be further elaborated by healthcare professionals and researchers. Reaching consensus could lead to easier acceptance by healthcare professionals of the use of these tests in daily clinical practice. | 2018-04-03T02:52:03.381Z | 2016-05-03T00:00:00.000 | {
"year": 2016,
"sha1": "4380215fade9e48afde820186e623a9621249cd6",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4920853?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4380215fade9e48afde820186e623a9621249cd6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259182494 | pes2o/s2orc | v3-fos-license | Neuroprotective effects of sesamol against LPS-induced spatial learning and memory deficits are mediated via anti-inflammatory and antioxidant activities in the rat brain
Objective: Sesamol is a phenolic lignan extracted from sesame seeds, and it possesses anti-inflammatory and antioxidant activities. Lipopolysaccharide (LPS) is known to produce neuroinflammatory responses and memory impairment. The current study aimed to investigate the protective influence of sesamol against LPS-mediated neuroinflammation and memory impairment. Materials and Methods: Sesamol (10 and 50 mg/kg) was administered to Wistar rats for two weeks. Then, animals received LPS injections (1 mg/kg) for five days, while treatment with sesamol was given 30 min before each LPS injection. Spatial learning and memory were assessed by the Morris water maze (MWM), two hours after LPS injection on days 15-19. Biochemical assessments were performed after the end of the behavioral experiments. Results: LPS-administered rats showed spatial learning and memory deficits, since they spent more time in the MWM finding the hidden platform and less time in the target quadrant. Besides these behavioral changes, tumor necrosis factor-α (TNF-α) and lipid peroxidation levels were increased, while the total thiol level was decreased in the hippocampus and/or cerebral cortex. In addition, sesamol treatment (50 mg/kg) for three weeks decreased the escape latency and increased the time on the probe trial. Sesamol also reduced lipid peroxidation and TNF-α levels, while it enhanced the total thiol level in the brain of LPS-exposed rats. Conclusion: Supplementation of sesamol attenuated learning and memory impairments in LPS-treated rats via antioxidative and anti-inflammatory activities in the rat brain.
Introduction
Alzheimer's disease (AD) is the most prevalent type of dementia, accounting for up to seventy percent of dementia cases (WHO, 2021). AD is characterized by cognitive and memory impairments and by amyloid plaques and tau tangles in the hippocampus, entorhinal cortex, amygdala and basal forebrain (Brown, 2019; Perl, 2010). Neuroinflammation is a crucial factor involved in the occurrence and progression of AD (Rather et al., 2021). Prior studies have reported the existence of astrocytes and microglia around the amyloid plaques in AD (Varnum and Ikezu, 2012), as well as increased expression of different proinflammatory cytokines, including tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β) and free radicals, in the blood samples and brain of AD patients (Akiyama et al., 2000; Ganguly et al., 2021).
Lipopolysaccharide (LPS) is a bacterial endotoxin which acts as an inducer of inflammation and participates in neuroinflammation and, eventually, neurodegeneration (Batista et al., 2019). Systemic LPS injection leads to inflammatory responses by binding to toll-like receptor-4 (TLR4) and subsequent NF-κB transcriptional activation of different proinflammatory genes such as TNF-α, IL-1β and IL-6 (Bryant et al., 2010; Morris et al., 2015). This event impairs neuronal function in the hippocampus and eventually leads to neuronal death and memory dysfunction (Zakaria et al., 2017). LPS also produces high amounts of reactive oxygen species (ROS), which ultimately cause neuronal death and memory dysfunction (Khan et al., 2016; Amooheydari et al., 2022). Elevated ROS formation leads to oxidative damage to proteins, lipids and nucleic acids, resulting in deregulation of cellular function and neurodegeneration (Ammari et al., 2018). LPS also enhances Aβ formation and aggregation (Zhu et al., 2021) and tau hyperphosphorylation (Gardner et al., 2016).
The present study investigated the impact of sesamol on memory dysfunction, TNF-α level and oxidative stress biomarkers in LPS-administered rats.
Animals
Male Wistar rats (200-250g) were housed in a colony room under controlled temperature, 12hr light:dark cycles, and they had free access to food and water. The Ethic Committee for Animal Experiments at Isfahan University of Medical Sciences approved the study (IR.MUI.MED.REC.1398.571).
Experimental design
Animals were assigned to four experimental groups (n=9), including control, LPS, LPS+Sesamol10 and LPS+Sesamol50 groups. LPS (Escherichia coli) was injected (1 mg/kg) on days 15-19, two hours before behavioral assessment in the Morris Water Maze (MWM) (Ammari et al., 2018). Sesamol was injected (10 and 50 mg/kg) two weeks before LPS injection, and 30 min prior to LPS injection on days 15-19. After the behavioural experiment, animals were euthanized by CO2 and then decapitated. The cerebral cortex and hippocampus were immediately dissected out and homogenized with 10% NaCl for cytokine and oxidative assessments.
MWM
The maze was a circular pool with a diameter of 150 cm, filled with water (23±1°C). A circular platform (diameter 10 cm) was placed 2 cm below the surface of the water at the midpoint of the southeast quadrant. During acquisition training, animals were trained to find the platform within 60 sec (4 trials/day × 4 days) with 60 sec intersession intervals. In each trial, a rat was released at one of four starting points to find the platform. The software NeuroVision (TajhizGostar Co.) calculated the escape latency for each animal. A probe trial was carried out on the 5th day to evaluate memory retention for the location of the platform. The platform was removed, and animals were allowed to swim in the maze for 60 sec. The time spent in the southeast quadrant was recorded (Azmand and Rajaei, 2021).
Cytokine level
The levels of TNF-α were determined by an ELISA kit (eBioscience Co., USA). Hippocampal and cortical homogenates were centrifuged at 3000 rpm for 5 min, and then the supernatant was collected to detect TNF-α. The level of TNF-α is presented as pg/ml.
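ELISA readouts are typically converted to concentrations through a standard curve, often a four-parameter logistic (4PL) fit. The sketch below uses hypothetical calibration data, not values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = lower asymptote, d = upper asymptote,
    c = inflection concentration, b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical TNF-alpha standards: concentration (pg/ml) vs. optical density
std_conc = np.array([15.6, 31.2, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.85, 1.40, 2.05])

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 200.0, 2.5],
                    maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to read a sample concentration."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(round(float(od_to_conc(0.60, *popt)), 1))  # sample OD 0.60 -> pg/ml
```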
Oxidative stress biomarkers
Thiobarbituric acid reactive substances (TBARS) and total thiol level were measured in the cortical and hippocampal homogenates as explained before (Rajaei et al., 2013).
Statistical analysis
Two-way repeated measures ANOVA and one-way ANOVA followed by Tukey's test were used to analyze the data. Data are expressed as mean±S.E.M. A value of p<0.05 was considered significant.
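For readers who wish to reproduce this kind of analysis, the sketch below runs a one-way ANOVA with Tukey's post hoc test on hypothetical probe-trial data (group sizes match the study's n = 9, but all values are invented):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented probe-trial times (s in target quadrant), n = 9 per group
control = np.array([22, 25, 20, 24, 23, 26, 21, 24, 22])
lps = np.array([14, 12, 16, 13, 15, 11, 14, 13, 15])
lps_ses10 = np.array([15, 17, 14, 16, 15, 18, 14, 16, 15])
lps_ses50 = np.array([21, 23, 19, 22, 20, 24, 21, 22, 20])

f_stat, p_val = stats.f_oneway(control, lps, lps_ses10, lps_ses50)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

values = np.concatenate([control, lps, lps_ses10, lps_ses50])
groups = (["control"] * 9 + ["LPS"] * 9
          + ["LPS+Ses10"] * 9 + ["LPS+Ses50"] * 9)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```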
Results
The impact of sesamol on spatial learning and memory
Data analysis showed that the time to reach the hidden platform decreased over the four training days in all experimental groups, demonstrating acquisition of spatial learning (F(3,84)=41.21, p<0.001, Figure 1A). Moreover, the rats in the LPS group spent more time reaching the hidden platform in comparison with the controls (F(3,32)=13.02, p<0.001, Figures 1A and 1B), demonstrating a deficit in the acquisition of spatial learning. Furthermore, treatment with sesamol (50 mg/kg) reduced this time on all training days in comparison with the LPS group (p<0.01, Figures 1A and 1B). These findings suggest that the LPS-mediated spatial learning and memory deficit was rescued by sesamol. Additionally, comparison of latencies on the first day did not show any significant difference between the experimental groups on the first trial (Figure 1C). In the probe test, the time spent in the southeast quadrant was reduced in the LPS group in comparison with the controls (F(3,32)=6.12, p<0.05, Figure 1D). Moreover, sesamol treatment (50 mg/kg) enhanced the time spent in the platform quadrant (p<0.01, Figure 1D) in comparison with the LPS group.
The impact of sesamol on TNF-α level
TNF-α level was enhanced in the hippocampus (p<0.05) and cerebral cortex (p<0.05) of the LPS group in comparison with the control group (Figure 2). Moreover, sesamol (50 mg/kg) decreased the TNF-α level in the hippocampus (p<0.01) and cerebral cortex (p<0.05) in comparison with the LPS group (Figure 2).
The impact of sesamol on TBARS level
The results demonstrated that the cortical (p<0.05) and hippocampal TBARS levels (p<0.05) were increased in the LPS group in comparison with the control group (Figure 3). Moreover, sesamol treatment (50 mg/kg) reduced the hippocampal TBARS levels in comparison with the LPS group (p<0.05, Figure 3).
The impact of sesamol on total thiol level
Total thiol levels were decreased in the hippocampus of the LPS rats as compared to the controls (p<0.05, Figure 4). In addition, sesamol supplementation (50 mg/kg) enhanced the hippocampal total thiol level as compared to the LPS group (p<0.05, Figure 4).
Discussion
Our findings demonstrated that LPS alone induced brain inflammation, oxidative stress and deterioration of spatial learning and memory abilities. Moreover, supplementation of sesamol (50 mg/kg) ameliorated memory impairments by inhibition of brain inflammation and oxidative damage.
Ample studies have indicated that brain inflammation is a critical factor for developing cognitive decline and neuronal damage in AD (Voet et al., 2019; Millington et al., 2014). Neuroinflammation can result in cognitive impairments due to nuclear retention of NF-κB and the release of proinflammatory mediators. Activation of glia and increased neuroinflammatory responses have been reported in patients with AD (Wyss-Coray, 2006). Experimental studies have also shown that neuroinflammatory responses induce cognitive impairments in rodents (Czerniawski and Guzowski, 2014). Systemic LPS injection causes neuronal damage in the hippocampus, and subsequently memory deficits (Valero et al., 2014; Batista et al., 2019). LPS induces strong microglia activation, up-regulates the expression of different proinflammatory cytokines such as TNF-α and IL-6, and eventually causes neuronal death (Monje et al., 2003). LPS is recognized by and binds to the CD14/TLR4 complex, subsequently activates NF-κB and induces proinflammatory cytokine release (Park and Lee, 2013; Parajuli et al., 2012). In this study, subacute treatment with intraperitoneal injection of LPS for five days was used to develop an inflammation model. Our findings revealed that learning and memory performance was impaired in LPS-administered rats, evidenced by prolongation of the time spent finding the platform along with a decrement in time on the probe trial. Comparison of latencies on the first day did not show any significant difference on the first trial between the experimental groups; however, latencies differed on trials 2 to 4. This result indicates that LPS administration did not affect motor behaviour.
Our results also demonstrated that sesamol could enhance learning and memory, as evidenced by a decrease in escape latency and an increase in time on the probe test. In other words, sesamol treatment improved learning abilities and restored memory in LPS-administered animals. These effects indicated the protective action of sesamol against LPS-induced abnormalities. The memory-enhancing effects of sesamol have been shown in diabetic animals (Kuhad and Chopra, 2008) and in streptozotocin-induced memory impairments (Sachdeva et al., 2015).
Systemic LPS injection also increased the brain TNF-α level, but the level of this inflammatory mediator was suppressed by sesamol. This result shows that sesamol possesses an anti-neuroinflammatory activity. This finding is in agreement with prior studies showing that sesamol inhibited the expression of inflammatory cytokines. For example, it was shown that sesamol prevented the production of TNF-α and nitrite in LPS-treated macrophages (Chu et al., 2010). Sesamol also reduced the mRNA expression of different proinflammatory factors such as TNF-α in cerebral ischemia (Gao et al., 2017). Therefore, sesamol exerts an anti-inflammatory action by inhibiting the release of TNF-α in the rat brain.
Considerable evidence indicates a strong association between oxidative stress, an imbalance between ROS production and elimination, and cognitive decline in AD (Campos et al., 2014; Barnham et al., 2004). The nervous system is susceptible to oxidative stress, since the high levels of polyunsaturated fatty acids present in the brain make it more susceptible to lipid peroxidation and oxidative modification (Uttara et al., 2009). LPS can produce oxidative stress by releasing free radicals, which is considered a critical factor for memory decline following LPS administration (Ammari et al., 2018; Amraie et al., 2020). In accordance with this, our results indicated that LPS-induced memory deficits were accompanied by brain oxidative stress, as evidenced by enhanced levels of TBARS and decreased total thiol levels in the brain. In addition, supplementation of sesamol (50 mg/kg) for three weeks reduced the level of TBARS and enhanced the total thiol level in the hippocampus, indicating the antioxidant activity of sesamol in the brain. Previous studies have also reported the neuroprotective action of sesamol by removing free radicals and decreasing lipid peroxidation in cerebral ischemia (Gao et al., 2017), diabetes (Kuhad and Chopra, 2008), and aluminium chloride and streptozotocin-induced cognitive impairment models (John et al., 2015; Sachdeva et al., 2015). It was also shown that treatment of aging mice with sesamol improves aging-related cognitive dysfunction by suppressing malondialdehyde production and enhancing antioxidant enzymes in the hippocampus (Ren et al., 2020). Conclusively, the beneficial impact of sesamol on memory loss in this study could be partially attributed to the antioxidant activity of sesamol.
Evidence indicates that the cholinergic system plays an essential role in memory, and its dysfunction contributes to the pathology of neuroinflammation (Nizri et al., 2006). Degeneration of cholinergic neurons in the basal nucleus of Meynert occurs in early forms of AD and is related to cognitive decline (Winkler et al., 1998). It has been shown that LPS induces cholinergic neuronal loss (Houdek et al., 2014) and enhances acetylcholinesterase activity (Tyagi et al., 2010). Sesamum indicum was shown to improve memory impairments induced by scopolamine in rats (Chidambaram et al., 2016). Additionally, the anti-cholinesterase activity of sesamol has been reported previously (Topal, 2019). Therefore, the advantageous effect of sesamol on memory function in LPS-injected rats may also be mediated via anti-cholinesterase activity and potentiation of the cholinergic system.
Conclusively, supplementation of sesamol alleviated spatial learning and memory impairments in LPS-exposed rats. The neuroprotective influence of sesamol on LPS-induced memory impairments could be attributed to the inhibition of neuroinflammation and oxidative damage. Thus, sesamol may be used as a potent adjuvant in the treatment of memory impairments in AD due to its neuroprotective effects. | 2023-06-18T05:14:17.085Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "de4a08a8feee178e9875cd5b29ce6479ae9a128e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "de4a08a8feee178e9875cd5b29ce6479ae9a128e",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14795804 | pes2o/s2orc | v3-fos-license | Processing of Intermetallic Titanium Aluminide Wires
This study shows the possibility of processing titanium aluminide wires by cold deformation and annealing. An accumulative swaging and bundling technique is used to co-deform Ti and Al. Subsequently, a two-step heat treatment is applied to form the desired intermetallics, which strongly depend on the ratio of Ti and Al in the final composite and therefore on the geometry of the starting composite. In a first step, the whole amount of Al is transformed to TiAl3 by Al diffusion into Ti. This involves the formation of 12% porosity. In a second step, the complete microstructure is transformed into the equilibrium state of γ-TiAl and Ti3Al. Using this approach, it is possible to obtain various kinds of gradient materials, since an intrinsic concentration gradient is installed by the swaging and bundling technique, but the processing of pure γ-TiAl wires is possible as well.
Introduction
Titanium aluminides have gained much attention in the literature owing to their excellent mechanical properties at high temperature, such as tensile strength, Young's modulus and creep behaviour. In addition, their low mass density makes them especially attractive for mobile applications, as is the case in the aeronautic and automotive industries [1]. The lack of room-temperature ductility is the main reason why these intermetallics have to be produced by non-conventional and costly production routes. These involve powder metallurgy [2,3], near-net-shape casting from very high temperatures [4], selective laser melting [5,6], electron beam melting [7] and hot forging methods [8].
All these preparation routes are restricted in terms of the final shape and size. In particular, it takes enormous effort to produce semi-finished products such as wires following the aforementioned approaches. Nevertheless, Acoff's group [9,10] showed the possibility of preparing γ-TiAl sheets using a rolling processing route combined with adequate heat treatments.
Nevertheless, possibilities to produce wires from this material are very limited, although potential applications such as cables or welding wires for repairing turbine blades via additive manufacturing routes are promising. In a recent work, a preparation route for multi-filamentary Ti/Al wires using an accumulative swaging and bundling technique was set up by the authors [11]. This technique, in combination with a subsequent heat treatment, opens up the possibility of obtaining wires made from titanium aluminides, which is one concern of the present work.
The second concern is the reaction kinetics and the microstructural evolution. In the literature, the production of titanium aluminides from elemental foils [12], powders [13] or by reaction synthesis [14] usually involves the formation of the tetragonal TiAl3 phase when annealing is conducted below the melting point of Al [9, 12, 14-17]. Above the melting point of Al, e.g., at 950 °C, all phases present in the equilibrium phase diagram of Ti and Al [18] are observed. After a subsequent adequate heat treatment, the whole amount of aluminium has reacted with Ti to form an intermediate titanium aluminide phase. Further phase-reaction annealing at higher temperatures allows the complete transformation of the remaining Ti and the intermediate TiAl3 into the equilibrium titanium aluminide composition.
Experimental Section
Wires were prepared using the experimental setup and materials previously described by our group [11,19]. The process starts with an AA5049 rod (simply called Al or Al alloy in the following) and a Ti tube, which are co-deformed to a wire. After reaching a certain diameter (2.8 mm), the wire is cut and stacked into an undeformed Ti tube of the same dimensions as the first. This procedure is referred to as one stacking cycle.
To end the preparation route with a composite composed of 50 at% each of Ti and Al, two stacking cycles are applied using the materials and dimensions shown in Table 1. In addition, the table shows calculated and measured mass density values of the composites during processing.
The measured mass densities are given in brackets and are slightly higher than the calculated values. They can be exactly reproduced by calculating the whole composite as if it had been prepared from a starting Al rod of 17 mm in diameter. This indicates that part of the Al, owing to the differences in deformation behaviour, was pressed out of the composite at the very beginning of the preparation route.
Table 1. Calculation of the adopted processing route from [11,19].
Deformation was performed at room temperature without any subsequent heat treatments. To evaluate the intermediate heat treatment parameters and phase reaction kinetics, the wires obtained were isothermally annealed in an electrical resistivity measurement set-up at temperatures of 540 °C, 560 °C, 580 °C and 600 °C in an argon atmosphere. Heating was conducted using a two-step ramp. The first step heated to 500 °C at 2 K min⁻¹, while in the second step the temperature was increased to the nominal temperature at 1 K min⁻¹. This was necessary in order to minimize temperature overshoot.
After completion of the phase reactions, the wires were annealed at 1300 °C for 12 h in a quartz glass tube under Ar atmosphere to transform the material over the complete cross section into the equilibrium titanium aluminide phases.
X-ray diffraction (using a Co Kα source (40 kV, 40 mA) in combination with a secondary graphite monochromator) was carried out on the cross section of the wires, which were prepared by grinding with conventional P4000 SiC paper. The measurement was made on the rotating sample, collecting X-rays for 30 s at each 0.02° step in the ranges of 37°-57° and 73°-114° in 2Θ.
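For orientation, the ends of the scanned 2Θ windows can be converted to lattice spacings via Bragg's law with the Co Kα wavelength (approximately 1.789 Å); a short sketch:

```python
from math import sin, radians

WAVELENGTH = 1.789  # Angstrom, Co K-alpha (approximate)

def d_spacing(two_theta_deg):
    """Bragg's law with n = 1: convert a 2-theta position to a lattice spacing."""
    return WAVELENGTH / (2.0 * sin(radians(two_theta_deg / 2.0)))

for tt in (37, 57, 73, 114):
    print(f"2theta = {tt:3d} deg -> d = {d_spacing(tt):.3f} A")
```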
The area fractions of single phases in the wire cross section were determined by optical microscopy (OM) with an Epiphot microscope using the analysis software A4i Docu.
Scanning electron microscopy (SEM) was conducted using a Zeiss Gemini 1530 LEO microscope equipped with a field emission gun operating at 20 kV acceleration voltage.
Mass density measurements were performed using the Archimedean principle, weighing inside and outside a C4H9I bath at 25 °C. The accuracy of the scale used was 0.1 mg.
Mesostructure
The cross section of the as-prepared wire is shown in Figure 1a. The bright and dark phases correspond to Al and Ti, respectively. The wire mass density in this condition is 3.71 g cm⁻³. That value implies a composition of 45.3 at% Al and 54.7 at% Ti, neglecting alloying elements in the Al alloy. Therefore, the intended atomic ratio was not exactly reproduced. This is due to the inhomogeneity of deformation at the beginning, as already stated. Al and Ti are irregularly dispersed in the center of the wire cross section; the original filaments are broken due to the strong deformation during processing. Nevertheless, diffusion paths should be short enough to allow for complete phase reaction within finite time. The wire is shielded by Ti, which means that the longest diffusion paths during the subsequent heat treatments occur at the very outside of the wires.
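The composition quoted above can be back-calculated from the measured density with a rule of mixtures over the elemental molar volumes. A short sketch (molar volumes as quoted later in the text; alloying elements neglected):

```python
M_TI, M_AL = 47.87, 26.98   # molar masses, g/mol
V_TI, V_AL = 10.64, 10.00   # molar volumes, cm^3/mol

def density(x_al):
    """Composite density (g/cm^3) for an atomic fraction x_al of Al."""
    mass = x_al * M_AL + (1 - x_al) * M_TI
    volume = x_al * V_AL + (1 - x_al) * V_TI
    return mass / volume

# Bisect for the Al fraction that reproduces the measured 3.71 g/cm^3;
# density falls monotonically with increasing Al content.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if density(mid) > 3.71 else (lo, mid)
print(f"x_Al = {0.5 * (lo + hi):.3f}")  # ~0.453, matching the text
```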
Figure 1b shows the cross section after the low-temperature heat treatment at 580 °C for 116 h. A large number of voids are visible, and a closer look reveals an indication of a two-phase composition of the remaining microstructure. The two phases correspond to Ti and TiAl3, as will be proven in Section 3.3.
The observed pores in this phase reaction are usually attributed to the differences in the molar volume of the starting materials and the end products of the reaction as well as to the Kirkendall effect [18]. The difference in molar volume of Ti (10.64 × 10⁻⁶ m³/mol) and Al (10.00 × 10⁻⁶ m³/mol) weighted with their atomic ratio is only 5.8% compared with TiAl3 (9.57 × 10⁻⁶ m³/mol). Since the overall composition of the wire is far from 25:75 (at%) Ti:Al, not all of the material participates in this phase reaction, as can be seen from Equation 1:

Ti + 3 Al → TiAl3 (1)

Following this argumentation, pores resulting from the difference in molar volume should amount to even less than 5.8% of the wire cross section. However, the area fraction of pores after the phase reaction in the wire cross section is estimated to be 11.8%. Table 2 shows all measured geometrical data and mass densities of the wires before any heat treatment, after 116 h at 580 °C, and after the reaction annealing at 1300 °C for 12 h. The high porosity value of roughly 12% indicates that aspects other than the differences in molar volume dominate the observed void formation. One possibility is the Kirkendall effect, which without doubt plays a role in the Ti-Al interdiffusion system. After annealing, an increase of the wire diameter from 2789 ± 5 µm to 2966 ± 10 µm is observed. This increase in cross-sectional area of 11.5% correlates with the observed porosity.
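The 5.8% figure can be verified directly from the quoted molar volumes; the short sketch below also contrasts it with the measured porosity:

```python
# Molar volumes per mole of atoms, m^3/mol, as quoted above
V_TI, V_AL, V_TIAL3 = 10.64e-6, 10.00e-6, 9.57e-6

# Volume shrinkage if Ti + 3 Al -> TiAl3 ran to completion at the
# reaction stoichiometry of 25:75 at% Ti:Al
reactants = 0.25 * V_TI + 0.75 * V_AL
shrinkage = (reactants - V_TIAL3) / reactants
print(f"molar-volume shrinkage: {shrinkage:.1%}")  # ~5.8%, as quoted

# Only ~45 at% of the wire is Al, so even less of the cross section can
# shrink this way -- well below the ~11.8% porosity actually observed,
# consistent with Kirkendall voiding dominating the pore formation.
```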
It is believed that this is mainly due to the strong diffusion of Al into Ti at 580 °C. However, there is also evidence that the opposite behaviour is the case for this diffusion couple [17,20], namely that Ti predominantly diffuses into the Al-rich side. In contrast, earlier works show that the only diffusing element in this system below 660 °C is Al [21]. The increase in wire diameter in the present work indicates an effective material transport from the inside (Al-rich) to the outside (Ti-rich) of the wires. This diffusion then causes the Ti shell to expand and therefore increases the wire diameter. In the early stages of the phase formation, or at different temperatures, this behaviour may differ, but under the present circumstances, Al is believed to diffuse faster within the TiAl3 phase than vice versa. The material transport can even be noticed in the OM images, since the thickness of the former Ti shell expanded, as seen when comparing Figure 1c with Figure 1a or 1b.
Figure 1c shows the wire cross section after the final heat treatment at 1300 °C. The wire diameter has not changed significantly compared with the wire after the low-temperature heat treatment. The number of voids is reduced while their size has increased, as is typical for Ostwald ripening. The overall porosity has increased to almost 17%-18%, caused by the effective mass transport from the inside of the wires to the outside in order to reduce the concentration gradient.
Microstructure
The focus of the present article is on phase reactions. Other microstructural features and the texture of the deformed state are already discussed in more detail elsewhere [11]. While the initial microstructure of the as-deformed state is composed only of Al solid solution and commercially pure Ti, it changes completely after heat treatment. Figure 2 shows the wire microstructure after all low-temperature heat treatments applied. Al is completely consumed by the reaction of Al and Ti to TiAl3 (dark grey), while a significant amount of Ti (bright phase) still remains as bridges enclosed by TiAl3. Many voids have formed, but these have already been discussed. By eye, there is no obvious difference in the morphology of the phases within the temperature range shown. The microstructure looks similar in all cases, except that the Ti bridges are thinner for the highest temperature when comparing 600 °C and 540 °C. The interface lines also look more serrated at the lower temperatures. This is better seen at higher magnification in Figure 3 and indicates a change in kinetics that is also observed and discussed in Section 3.4. A change from interface- to volume-dominated diffusion may be one possible explanation for this observation. Additionally, the phase visible between Ti and TiAl3 after all heat treatments is found to be more pronounced at higher temperatures. Although this interface layer is too narrow to be reliably analyzed by EDX in the SEM, Figure 4 indicates that it might be composed of Ti3Al and TiAl2. Anticipating the further discussion, the rising resistivity in the case of the 580 °C and 600 °C heat treatments (black arrow) after completion of the phase reaction shown in Figure 5 also indicates the growth of at least one additional phase after the formation of TiAl3. The microstructure after the reaction annealing at 1300 °C is shown in Figure 6. Compared with Figure 1c, Figure 6a shows a strong phase contrast indicating at least two separate phases. Figure 6b shows the outer (Ti-rich) region of the wire close to the surface, which is composed of lamellae. These lamellae become finer the closer they are to the surface of the wire (right side in the image). Section 3.3 will prove that the dark inner part of the wire is composed of γ-TiAl, while the outer region is two-phased with lamellae of γ-TiAl and Ti3Al.
Phase Identification
Figure 4 shows two sections of the X-ray diffractograms of the initial wire, the initial wire after the intermediate temperature heat treatment, and the one additionally heat-treated at 1300 °C. Some high-indexed reflections were removed for clarity. Directly after processing, in Figure 4a, only Al and Ti reflections are visible. Their intensity ratios indicate a strong deformation texture, which is expected when considering the wire processing route and is proven elsewhere [11].
After a heat treatment at 580 °C (Figure 4b), four different phases are observed: Ti, TiAl₃, TiAl₂ and Ti₃Al. TiAl₃ is the dominant phase formed during the prior phase reaction. TiAl₂ and Ti₃Al exist only in small quantities, indicated by their low overall intensities. These phases developed after completion of the TiAl₃ formation. Figure 5b also supports this assumption by a slight increase in resistivity for the highest three temperatures after reaching the saturation level of about 0.9 • 10⁻⁶ Ωm (black arrow). This increase signals further (but slower) phase reactions, which will then result in the formation of all other equilibrium phases according to the phase diagram. γ-TiAl is not observed in this temperature range.
After the high temperature reaction annealing, only two phases remain in the microstructure. Figure 4c shows reflections of γ-TiAl and Ti₃Al, which are the expected equilibrium phases for the overall composition.
Kinetics of TiAl₃ Phase Formation
Figure 5 presents electric resistivity measurements of the wires at different temperatures. Starting at room temperature, Figure 5a shows a linear increase in resistivity with temperature for all wires investigated in the range between room temperature and 400 °C. Above 400 °C, an additional increase compared with the previous linear slope is observed. This indicates the onset of a phase reaction according to Equation 1. For this reason, isothermal resistivity measurements were performed well above this temperature in order to investigate the effect of phase transformations on the electrical behaviour. This is depicted in Figure 5b for temperatures in the range of 540 °C to 600 °C. A decrease of the cross-sectional area due to void formation can be excluded as a main reason for the increase in resistivity. The cross section measured before and after the heat treatment at 580 °C is 6.109 mm² and 6.094 mm², respectively. This yields a reduction of about 0.2%, which is not enough to explain an increase in resistivity by roughly 300%. Repeating the same calculation taking into account the porosity value obtained by density measurements, the effective cross-sectional area even slightly increases after the heat treatment at 580 °C. Therefore, the increase in resistivity is discussed with regard to the phase transformation according to Equation 1.
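As a quick sanity check on this argument, the relative change in the measured cross-sectional area can be computed directly; the following minimal Python sketch uses only the values quoted above and illustrates the order-of-magnitude comparison.

```python
# Quick numerical check: resistivity is evaluated as rho = R*A/L, so a change
# in the cross-sectional area A enters the apparent resistivity only linearly.
a_before, a_after = 6.109, 6.094  # mm^2, measured before/after the 580 C anneal
area_change = (a_before - a_after) / a_before
print(f"area reduction: {area_change:.2%}")  # ~0.2%, far below the ~300% resistivity rise
```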
The data presented in Figure 5a allow a detailed study of the phase arrangement of Ti and Al in the as-prepared state. By using specific room temperature resistivities ρ₀ of 4 • 10⁻⁸ Ωm and 43 • 10⁻⁸ Ωm and resistivity temperature coefficients ρ₀α of 1.825 • 10⁻¹⁰ Ωm/K and 1.2 • 10⁻¹⁰ Ωm/K for the Al alloy [22] and Ti [23], respectively, general models describing the temperature dependence of the resistivity of the compound can be applied. As expected from the deformation procedure, the assumption of a parallel connection of Al and Ti is a good approximation of the temperature dependence of the electrical resistivity of the compound. Another description of the compound can be given by adopting the Hashin-Shtrikman bounds. Assuming Al as the matrix material in which isotropic Ti is homogeneously distributed, the observed temperature dependence of the compound material can be well described, too.
As can be seen in Figure 1a, the initial filament structure is destroyed during accumulative swaging and bundling due to necking of the Ti, which is commonly observed in Ti-Al composites [9,10,17,24]. Thus, the composite can be treated as an Al matrix with Ti particles embedded within the large scale filaments of the second stacking. The larger filaments themselves do not break up and behave more like a parallel arrangement. This is probably the reason why the measured resistivity curves lie between the parallel and the lower Hashin-Shtrikman bounds (Figure 5a).
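To make this comparison concrete, the following minimal Python sketch evaluates both descriptions using the material constants quoted above; the equal volume fractions and the specific Hashin-Shtrikman expression (Al matrix with spherical Ti inclusions) are simplifying assumptions made here for illustration, not values taken from the article.

```python
# A minimal sketch comparing the parallel-circuit estimate with a
# Hashin-Shtrikman-type estimate for the composite resistivity.
RHO0 = {"Al": 4e-8, "Ti": 43e-8}          # Ohm*m at room temperature
SLOPE = {"Al": 1.825e-10, "Ti": 1.2e-10}  # Ohm*m/K (rho0 * alpha)

def rho(mat, dT):
    """Linear temperature dependence above room temperature (dT in K)."""
    return RHO0[mat] + SLOPE[mat] * dT

def rho_parallel(dT, f_al=0.5):
    """Filaments along the wire axis carry current in parallel."""
    sigma = f_al / rho("Al", dT) + (1.0 - f_al) / rho("Ti", dT)
    return 1.0 / sigma

def rho_hashin_shtrikman(dT, f_al=0.5):
    """HS estimate for an Al matrix with dispersed, isotropic Ti inclusions."""
    s_al, s_ti = 1.0 / rho("Al", dT), 1.0 / rho("Ti", dT)
    f_ti = 1.0 - f_al
    sigma = s_al + f_ti / (1.0 / (s_ti - s_al) + f_al / (3.0 * s_al))
    return 1.0 / sigma

for dT in (0.0, 200.0, 375.0):  # up to ~400 C, the linear regime in Figure 5a
    print(f"dT={dT:5.0f} K  parallel={rho_parallel(dT):.3e}  HS={rho_hashin_shtrikman(dT):.3e}")
```

As expected, the parallel arrangement gives the lowest composite resistivity, with the Hashin-Shtrikman-type estimate lying above it, bracketing the measured curves.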
After reaching the nominal temperature, the reaction starts after a temperature-dependent incubation time, which indicates that the reaction is dominated by nucleation and growth. After the incubation time, the resistivity increases at a very fast rate until saturation is reached. Above the inflexion point, the reaction continues noticeably slower than expected from the beginning of the reaction. This is probably caused by a lowered diffusion rate resulting from void formation.
According to the work of van Loo and Rieck [21], TiAl₃ is the only phase that is formed during the reaction, in spite of possible interface layers, as long as non-reacted Al is still present. This is due to the high diffusivity of Al in TiAl₃. Other phases do not start to develop until the entire Al is consumed by the reaction. The slight increase in resistivity for the samples annealed at the highest temperatures after reaching the saturation level indicates the start of a second phase reaction, e.g., the formation of TiAl₂ or Ti₃Al, as already discussed. The formation of all possible aluminides once all Al has reacted to TiAl₃ is also reported by van Loo and Rieck [21].
During cooling, the linear slope of the resistivity of the wires, ρ₀α, has significantly increased from 2.38 • 10⁻¹⁰ Ωm/K to 1.08 • 10⁻⁹ Ωm/K (Figure 5a). This value corresponds to the new composite, consisting of Ti and the titanium aluminide mixture.
To characterize the phase reaction, the curves in Figure 5b were fitted using the JMAK equation [25] for the normalized reacted volume fraction v(t) given in Equation 2 to obtain the time dependent resistivity value ρ(t) (Equation 3) in the range between the starting value ρ₀ and the final value ρ₁ of the measured resistivity curves:

v(t) = 1 − exp[−(t/t_R)^q]    (2)

1/ρ(t) = (1 − v(t))/ρ₀ + v(t)/ρ₁    (3)

Note that ρ(t) is assumed to follow a parallel circuitry, as this was already proven for the linear thermal resistivity behaviour during heating, as shown in Figure 5a. Despite the fact that certainly not all assumptions made for the validity of Equation 2 in [25] are fulfilled, the fitting parameters t_R and q allow an objective characterization of the measured time dependency for all temperatures. Original curves and those fitted with Equation 3 are shown in Figure 7a, while the Arrhenius plot of the calculated t_R values is shown in Figure 7b. The JMAK equation fits the measured data very well, which is expected for a phase transition based on nucleation and growth. The calculated t_R value characterizes the time after which the volume fraction v(t_R) = 1 − 1/e has been transformed. The activation energy of the phase formation is calculated to be 1.92 eV, which is in good agreement with the activation energy for Al diffusion in TiAl₃ (1.86 eV [21]). Therefore, it is concluded that the phase reaction is governed by nucleation mechanisms, while the growth is dominated by Al diffusion in TiAl₃.
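The fitting procedure can be illustrated with a minimal Python/SciPy sketch, assuming the standard Avrami form for Equation 2 and the parallel-circuit mixing of Equation 3; the data below are synthetic and serve only to show how t_R and q are extracted.

```python
# A minimal sketch of the JMAK-based fit described above.
import numpy as np
from scipy.optimize import curve_fit

def jmak_rho(t, t_r, q, rho0, rho1):
    v = 1.0 - np.exp(-(t / t_r) ** q)           # transformed volume fraction (Eq. 2)
    return 1.0 / ((1.0 - v) / rho0 + v / rho1)  # parallel circuitry (Eq. 3)

# Synthetic isothermal data (time in s, resistivity in 1e-6 Ohm*m):
np.random.seed(0)
t_data = np.linspace(1e3, 4e5, 50)
rho_data = jmak_rho(t_data, 8e4, 1.2, 0.3, 0.9) * (1 + 0.01 * np.random.randn(50))

popt, _ = curve_fit(jmak_rho, t_data, rho_data, p0=(5e4, 1.0, 0.3, 0.9))
print("t_R = %.3g s, q = %.2f" % (popt[0], popt[1]))
```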
Figure 7b also shows the JMAK exponent q as a function of temperature, which is close to 1 for temperatures up to 580 °C and almost 2 for the highest temperature investigated. The JMAK exponent is an indication of the growth dimensionality, which seems to change with increasing temperature. However, there is no visible proof of this assumption in the final microstructure shown in Figure 2 (except for the slight change in phase boundary shape discussed in Section 3.2). Different dominating diffusion mechanisms might be the reason for the change in dimensionality.
Conclusions
In the present study, the possibility of processing titanium aluminide wires using the accumulative swaging and bundling technique is shown. This technique allows room temperature co-deformation of Ti and Al. The atomic ratio of Ti and Al at the end of the deformation process is nearly one. The phase reaction to titanium aluminides is accomplished by a two-step heat treatment.
The first step, up to 600 °C, mostly forms the intermetallic phase TiAl₃ until the entire Al in the composite is consumed by the reaction. About 12% porosity is introduced by this treatment, mainly due to the strong Al diffusion from the center to the wire rim region. Al diffusion is the dominating process during this phase formation, since the activation energy for this reaction of 1.92 eV is close to that of Al diffusion in TiAl₃. With increasing temperature, the growth mechanism of TiAl₃ seems to change from one- to two-dimensional growth, since the JMAK exponent increases from about 1.2 to 2.
A second heat treatment step at high temperature allows homogenization of the microstructure. Due to the initial Ti and Al concentration gradient from the inside to the outside, the final composition of the wires varies from γ-TiAl in the wire center to a lamellar microstructure composed of γ-TiAl and Ti₃Al near the wire surface. The final mesostructure and phase composition are governed by the geometry of the Ti tubes and Al rods used.
Figure 3. BSE images showing the interfaces between Ti and TiAl₃. Bright: Ti; dark: TiAl₃; in between: thin interface phase.
Figure 4. X-ray diffractograms of (a) the initial wire; (b) after 116 h at 580 °C; and (c) additionally annealed for 12 h at 1300 °C.
Figure 5. Resistivity measurements of the wires during isothermal heat treatments at different temperatures. (a) Electrical resistivity vs. temperature; (b) same data as shown in (a) but plotted vs. time.
Figure 6. BSE images: (a) overview of the wire (same sample as in Figure 1c). The outer region is Ti-rich, while the dark core is mainly composed of γ-TiAl; (b) the transition region from the core (left side) to the shell (right side). Note the fine lamellar microstructure.
Figure 7. (a) JMAK fits (dashed red lines) of the data from Figure 5b, indicated here as filled squares without showing all data for clarity; (b) Arrhenius plot of the parameter t_R from Equation 2 and the time exponent q from the same equation, plotted to the same abscissa.
leading to a composite wire composed of almost 50 at.% each of Ti and Al after stacking two times (right column); the values in brackets give measured mass densities. Dₒ and Dᵢ refer to the outer and inner diameter.
Table 2. Comparison of porosity values measured by OM area analysis in cross section and by density measurements.
a calculation considering the lower Al content. | 2014-10-01T00:00:00.000Z | 2013-05-10T00:00:00.000 | {
"year": 2013,
"sha1": "509e731d247e3522ff5b9a127ff50801375d4f30",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4701/3/2/188/pdf?version=1368205884",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "509e731d247e3522ff5b9a127ff50801375d4f30",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
256290576 | pes2o/s2orc | v3-fos-license | Role of Metal Cations of Copper, Iron, and Aluminum and Multifunctional Ligands in Alzheimer’s Disease: Experimental and Computational Insights
Alzheimer’s disease (AD) is the most common form of dementia, affecting millions of people around the world. Even though the causes of AD are not completely understood due to its multifactorial nature, some neuropathological hallmarks of its development have been related to high concentrations of certain metal cations. Among the roles of these cations is their participation in the production of reactive oxygen species, which have been implicated in neuronal damage. To avoid the resulting increase in oxidative stress, multifunctional ligands designed to coordinate these metal cations have been proposed as a possible treatment for AD. In this review, we present recent advances in experimental and computational works aiming to understand the role of two redox-active and essential transition-metal cations (Cu and Fe) and one nonbiological metal (Al), as well as recent proposals on the development of multifunctional ligands to stop or revert the damaging effects promoted by these metal cations.
INTRODUCTION
Nearly a century has passed since Alzheimer's disease (AD) was first recognized as a type of dementia. Since then, the incidence of the disease has been increasing, and it has become the most common form of dementia nowadays, covering from 60% to 70% of all global cases. 1 In its 2022 report, the Alzheimer's Association presented an evaluation of the impact of this disease in the United States, showing that, while cardiovascular disease- or stroke-associated deaths decreased from 2000 to 2019, the reported deaths due to AD increased by 145%. In 2019, 121,499 deaths related to AD were registered, which places it as the fifth leading cause of death for adults aged 65 years and older. On the other hand, the estimated economic impact in 2022 was about $321 billion USD for the treatment and care of this dementia. This amount is expected to increase to about $1.0 trillion USD by 2050. 2 Only 5% of reported AD cases are caused by inherited genetic mutations, generally found in genes codifying for the amyloid precursor protein (APP), presenilin-1 (PS1), presenilin-2 (PS2), and apolipoprotein E (APOE); this form is identified as familial AD. 3 The remaining cases, which account for about 95% of the total, occur sporadically, with older adults being the highest-risk population. It is worth mentioning that the massive neuronal damage related to AD, as well as its other pathological characteristics, is not part of natural aging.
Even though different drugs have been tested to stop the progression of the disease, there is currently no effective treatment available. A review of 413 clinical trials conducted between 2002 and 2012 found that only 0.4% of them were successful, 4 showing that the probability of failure of potential pharmacological agents in the testing phase is very high. Thus, an even greater economic and research effort is necessary.
AD is a multifactorial disease; the risk factors are both genetic and environmental, and it is still not possible to point out which of its characteristics is a cause and which a consequence, giving rise to many hypotheses seeking to clarify the root cause of the disease, but without certainty about which of them is the most determinant. A variety of diagnostic imaging methods and therapeutic strategies for many AD targets were reviewed by Ramesh et al. 5 The pathological hallmarks involved in AD are extensive, but among the most recognized and studied are those related to the aberrant processing of brain proteins and its consequences. Throughout the years, however, the research field has been evolving toward the design of pharmacological agents able to tackle multiple targets of the disease. In this sense, many hypotheses complementary to protein misfolding have arisen as alternatives to give a better understanding of AD and as valuable options to elucidate new biological targets. 6,7 Herein, we discuss two of them, metallic ion dysregulation and oxidative stress, from a beta amyloid peptide (Aβ) point of view. This review will focus on two essential metal cations (copper and iron) and one exogenous metal (aluminum), their role in the increase of oxidative stress related to AD from experimental and computational means, and the multifunctional ligands that have been proposed in order to stop some causes of the disease. Although zinc has also been found in senile plaques, its activity is more related to the promotion of peptide aggregation than to the increase of oxidative stress. 8
AMYLOID CASCADE HYPOTHESIS
The amyloidoses are disorders characterized by an extracellular accumulation of proteins in various tissues, developing insoluble fibrillar structures that adopt mostly parallel β-sheet conformations. Examples of proteins involved in amyloidosis are the immunoglobulins in primary systemic amyloidosis, protein fragments of amyloid A in secondary systemic amyloidosis, fragments of apolipoprotein A-1 in familial amyloid polyneuropathy, and amylin in the pancreas of diabetes patients. 9 One of the most notable features found in the brains of AD patients is the accumulation of deposits of senile plaques. These are composed of the Aβ peptide, whose most common sequences consist of 40 to 42 amino acids and contain a hydrophobic fragment (G29-V40 or A42) that reduces its solubility, inducing its aggregation, as well as a fragment with high affinity for transition-metal cations (D1-K16). 10 Aβ derives from the amyloid precursor protein (APP), an integral membrane protein that may be degraded by two protease-mediated mechanisms (Figure 1).
The first step in the process is an ectodomain cleavage of the protein. The pathway known as nonamyloidogenic occurs through α-secretase and generates an APPsα fragment, while the amyloidogenic pathway begins with β-secretase and generates an APPsβ fragment. The second step is executed by γ-secretase, which proceeds to cut through the transmembrane domain of the APP. This releases the peptide p3 (nonamyloidogenic pathway) or Aβ (amyloidogenic pathway) into the cerebral medium. 11 The p3 peptide is harmless, but Aβ monomers aggregate to form oligomers, fibrils, and finally senile plaques. Among these structures, the oligomeric forms have been pointed out to be the most toxic, even more than the senile plaques featured in advanced stages of the disease, since this conformation may be responsible for synaptic impairment. 12,13 Interestingly, this is contrary to the beneficial effects found for Aβ monomers, including neuronal growth, synaptic function modulation, and protection against oxidative stress, toxins, and pathogens. 14 However, when the monomeric form interacts with transition-metal cations, the ions are able to promote the catalytic production of hydrogen peroxide, increasing oxidative stress. 15 Although the specific role of the APP is not clear, a study with APP knockout mice has shown that there is an increase of copper concentration in the cerebral cortex relative to normal mice. 16 This suggests that there could be a copper-regulating function performed by the APP in the brain. A feature article by Rajasekhar and co-workers covers various aspects of the Aβ peptide in depth; these include the aggregation processes, beneficial behaviors, toxicity, and a variety of oriented therapies. 17 The main idea of the amyloid cascade hypothesis holds that the accumulation of senile plaques is responsible for triggering the disease. This hypothesis has been the most recognized and studied since it was first suggested in 1992, due to the clear example seen in familial AD, where genetic mutations are associated in one way or another with the APP. In accordance with this, drug development has been directed toward the inhibition of Aβ production through inhibition of β-secretase, its decrease in the brain medium, and its deposition mechanisms. Unfortunately, the list of drugs focusing on this hypothesis that have failed over more than 25 years of research is extensive, and many of these have resulted in cognitive deterioration and/or null efficacy. 18 Immunological monoclonal antibody-type drugs, such as solanezumab, crenezumab, and aducanumab, focused on disaggregation or inhibition of the production of Aβ, did not produce substantial improvement in any of the clinical stages of AD, i.e., early, prodromal, or established. 19,20 On the other hand, an autopsy study performed on 59 individuals of advanced age and normal cognitive condition revealed that the accumulation of Aβ deposits can also be found in healthy individuals. In fact, several examinees would qualify as AD patients under criteria based on the deposition of Aβ plaques and neurofibrillary tangles. 21 Furthermore, only 10 subjects presented no significant degenerative brain damage. In addition, a low correlation has been found between the amount of senile plaques formed, the degree of cognitive impairment, and both neuronal and synaptic loss. 22
This could suggest that Aβ deposition may not be responsible for triggering AD but is rather a consequence of an underlying cause, or even a compensatory response that arises to lessen the consequences of other homeostatic dysregulations, such as those presented in the following sections.
OXIDATIVE STRESS IN AD
Oxidative stress occurs when a high concentration of reactive oxygen species (ROS) exceeds the biological load that antioxidant species can handle, triggering not only neurodegenerative diseases such as Parkinson's disease, Huntington's disease, and AD but also psychiatric disorders such as schizophrenia, anxiety, depression, and bipolar disorder. 23 This is, in part, because the brain is especially vulnerable to oxidative stress due to its high content of oxidizable unsaturated fats and its high oxygen consumption rate. ROS can arise naturally in the mitochondrial metabolism, through an electron leak to oxygen in the electron transport chain and subsequent superoxide anion (O₂•⁻) generation, which is known to be the precursor of many other ROS. 24,25 The control of oxidative stress in biological systems is of high importance. Even though at low concentrations ROS participate in signaling processes, 26,27 it has been shown that plants increase their ROS under stressful conditions and heavy metal-associated toxicity, inducing cell damage that must be controlled by antioxidant mechanisms. 28,29 In the same way, the human body possesses a variety of defense mechanisms against ROS, 30 but when these species get out of control they can cause severe damage to important biomolecules like lipids, proteins, and DNA by several mechanisms. 31 There is sufficient evidence of the role of transition-metal cations in ROS production and the increment of oxidative stress. 32,33 For example, free iron and copper cations can produce ROS through Fenton and Haber-Weiss reactions 34−37 and also decrease the available biological amounts of ascorbate. 38,39 Additionally, intracellular iron can stimulate calcium signaling pathways, resulting in the activation of kinases involved in synaptic plasticity through ROS mechanisms. In healthy brains this is a beneficial interaction, since iron-mediated ROS are generated at low levels and improve the already present calcium signaling at the synapse; nevertheless, when pathological amounts of iron are present, it can cause an accumulation of calcium signals within the cell, leading to mitochondrial dysfunction. Likewise, high levels of calcium in the cell increase the redox-active iron in the medium, which creates a toxic cycle for the cell and could be an important neurodegeneration pathway related to AD. 40 Calcium itself is also capable of generating oxidative stress within the mitochondria, increasing its membrane permeability and releasing pro-apoptotic factors to the cytoplasm, such as cytochrome C and the apoptosis-inducing factor, causing neuronal death. 41 Regarding copper, its complexes with Aβ can catalyze the activation of O₂ into the superoxide anion, 42,43 and computational studies for different Cu-Aβ complexes have shown that their respective reduction potentials are below the value for the O₂/H₂O₂ pair under physiological conditions (0.295 V 44 ), favoring its catalytic reduction. According to this mechanism, the first step is the reduction of the copper complex by a natural reducing agent. Then the Cu¹⁺ complex reduces O₂ to O₂•⁻, and finally a second Cu¹⁺ complex reduces O₂•⁻ to H₂O₂ (Figure 2). 15 This shows the potential oxidative stress that it can provoke; in fact, it has been established that the Cu-Aβ complex has a higher efficiency in catalyzing the formation of H₂O₂ than the respective complex with iron, 45 which emphasizes the high redox activity of copper.
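Written out schematically, the cycle described above consists of three redox steps; the protons in the last step are added here for mass balance and are an assumption, not taken from the text:

Step 1: Cu²⁺-Aβ + reducing agent → Cu¹⁺-Aβ
Step 2: Cu¹⁺-Aβ + O₂ → Cu²⁺-Aβ + O₂•⁻
Step 3: Cu¹⁺-Aβ + O₂•⁻ + 2H⁺ → Cu²⁺-Aβ + H₂O₂

The net result is the catalytic conversion of O₂ and cellular reducing equivalents into H₂O₂, with the copper complex regenerated after each turnover.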
Between the two physiological copper species, Cu¹⁺ and Cu²⁺, the reduced species is considered the most toxic state of copper, and thus the inhibition of its redox activity in the free state is also of research interest. 46 All this activity suggests considerable oxidative brain damage due to copper, which is why there is a special need to develop chelating agents for copper as potential pharmacological agents in the early stages of AD.
Other in vitro studies indicate that mutations associated with familial AD in genes encoding the APP, PS1, and PS2 proteins involve a considerable increase of ROS that stimulates different apoptotic pathways with a wide variety of caspases and JNK kinases; 47,48 this oxidative stress can be controlled by using antioxidants such as α-tocopherol or vitamin E, which results in an inhibition of the apoptosis. 49 In vivo studies with mice also highlight the presence of ROS in the development of AD associated with these genes, which, in addition to inducing apoptosis, causes synaptosomal protein oxidation and lipid peroxidation. 50−52 Brain samples from familial AD patients also show high oxidative stress, in conjunction with higher oxidative deterioration of the temporal cortex in contrast to sporadic AD patients. 53 However, oxidative stress is not only present in familial AD, as it is well known that the risk factors most associated with sporadic AD are correlated with oxidative stress, either by ROS production or by a decrease of endogenous antioxidants: aging, diabetes, traumatic brain injury, high cholesterol levels, hypertension, and stroke, 54−57 whereas antioxidant intake is a protective factor. 58−61 The oxidative stress hypothesis states that a dysregulation of ROS is responsible for triggering the disease. Evidence supporting this hypothesis is the fact that the effects of oxidative stress precede the formation of Aβ deposits. Postmortem studies revealed that oxidative damage of nucleic acids and proteins in the brain is worse in early stages of AD and decreases as Aβ plaque deposition progresses. 62 The same trend is observed in oxidative damage of nucleic acids in cerebrospinal fluid. 63 It was also found that RNA oxidation is prominent in the brain regions affected by AD, while other regions remain with no substantial RNA oxidation increase. 64 An examination of familial AD patients' brains revealed that the oxidative stress is greater in the frontal cortices of brains exhibiting a lower amount of Aβ deposition. 65 The idea is supported by studies in mice, where lipid and protein oxidation and the decrease of antioxidant activity are more pronounced in cases with less fibrillar deposition of the peptide. 66 In fact, it has been proven that the Aβ peptide at low physiological concentrations has an antioxidant effect, which protects lipoproteins from oxidation in cerebrospinal fluid, 67 and that Aβ monomers can inhibit neuronal death caused by oxidative damage of Fe²⁺, Fe³⁺, and Cu²⁺ species by means of coordination and subsequent disruption of their redox activity with O₂. 68 This strengthens the idea that plaque formation is a biological response rather than a cause. According to this evidence, it is suggested that early oxidative stress in AD has effects such as mitochondrial dysfunction, metallic dysregulation, and metabolic imbalances, typical characteristics of early stages of AD. 69 Along with the formation of Aβ deposits, the high levels of metallic ion concentration, and the incidence of oxidative stress, an additional neuropathological hallmark is the alteration of neurotransmission caused by the depletion of acetylcholine due to hydrolysis reactions produced by the acetylcholinesterase enzyme (AChE).
The Food and Drug Administration of the United States (FDA) approved four drugs for the treatment of AD, all of them inhibitors of AChE (i.e., tacrine, donepezil, rivastigmine, and galanthamine), capable of regulating the neurotransmitters for a period of one to two years in moderate cases of the disease. However, after this period it is not possible to change the course of the disease, and there is still no definitive cure. 70 All these drugs present serious side effects such as vomiting and diarrhea; for this reason, the use of tacrine was discontinued in the United States in 2013. The scarce therapeutic benefit observed with these treatments raises serious doubts about their cost-benefit relation. The only drug approved by the FDA that is not an AChE inhibitor is memantine, also available in a fixed combination with donepezil (namzaric), which is capable of improving the intracellular regulation of calcium ions. 71 Unfortunately, the therapeutic effects of these drugs last between 6 and 12 months; after this period, AD continues to progress.
METAL IONS IN AD
Metal ions have significant functions in biological systems, such as catalysis, electron-transfer reactions, and the activation and transport of small molecules. Numerous body proteins and enzymes have metals in their structure as prosthetic groups, which operate as active sites and help them accomplish their biological functions. The properties of metal ions in biological environments are mainly determined by the geometry of the metal complex, which depends on the nature of the ligands coordinated to the metal center and on the coordination environment. Evidence supports that the presence of unregulated metals can cause severe health conditions. Abnormal distributions of certain metals in the brain have been implicated in the diagnosis of diverse diseases of the central nervous system, such as AD, Parkinson's disease, dementia with Lewy bodies, bipolar disorders, and depression. 39 Several investigations have focused on the biological regulation of copper, iron, and zinc, and there is sufficient in vitro and in vivo evidence linking them to the development of various diseases, including AD. 72−74 In addition, there is strong evidence suggesting that neurochemical reactions other than Aβ production must contribute to the development of AD. 75 For instance, through a post-mortem study, the concentrations of these metal ions in the amygdalas of AD patients were found to be higher compared to those of control subjects (Cu²⁺: 5.7, Fe³⁺: 2.8, and Zn²⁺: 3.1 times higher). 76 The coordination spheres of these metal ions in their respective Aβ complexes and their affinities for the peptides and aggregates have also been reviewed by del Barrio et al. 77 Here we present some important characteristics of these metals whose action in the brain has been found to be related to AD development, together with the computational works aimed at determining their structures and molecular properties.
Copper.
This metal can be found in metalloenzymes such as superoxide dismutase (SOD), cytochrome c oxidase, ascorbate oxidase, and ceruloplasmin. Its controlled release from synaptic vesicles is an integral part of neurotransmission, 78,79 and it accomplishes important functions related to neuropeptide synthesis and the proper functioning of the immune system, 80 so it is essential to maintain its intake at trace levels. Among its main transporters are ATP7A, ATP7B, Atox1, and the copper transporter protein (CTR1) for transport through the cell membrane, whereas ceruloplasmin helps transport it throughout the body. A direct correlation has been found between the location of the CTR1 and Atox transporters and the copper levels in different regions of the brain. 81 Copper is found in higher concentrations in regions like the substantia nigra, locus coeruleus, and hippocampus, areas that are intimately related to memory. The locus coeruleus is a brainstem structure that performs functions related to sleep, attention, learning, stress, and memory; it is recognized as the main source of norepinephrine (a neurotransmitter produced from dopamine by the copper-dependent enzyme dopamine-β-hydroxylase); therefore, it contains a higher concentration of copper than other brain regions, demonstrating the importance of this metal in the circadian cycle. 82 Although the biological amounts of copper are lower than those of iron and zinc, it has been found that very low levels of intracellular free copper, on the order of 1 × 10⁻¹⁸ M, can cause oxidative damage. 83 The toxicity of Cu²⁺ in the brain is attributed to an inhibition of the interaction between nerve growth factor and ubiquitin, 84 to a decrease in the available levels of glutathione, 85,86 and to the induced formation of oligomers from already formed fibrils, whose interactions with the cell membrane increase its permeability. 87 Aβ could form a metallopeptide in combination with Cu²⁺ and Zn²⁺ (and possibly Fe³⁺) ions, which are intrinsically present in the brain, mediating peptide toxicity (through the production of radicals and hydrogen peroxide) and peptide aggregation. 75 Studies in mice show that consumption of trace amounts of copper from water generates senile plaques and considerable cognitive impairment. 88 In AD patients, it has been shown that there is a 52.8−70.2% decrease in total brain copper levels in regions such as the hippocampus and the entorhinal, motor, and sensory cortexes, which present severe damage. 89 This decrease reaches values similar to those found in Menkes' disease, where copper deficiency is recognized as the main cause. This evidence is in agreement with the regulatory effect of copper on the APP, since a high intracellular concentration of this metal promotes the nonamyloidogenic degradation, while a low concentration promotes the amyloidogenic pathway, generating senile plaques, 90,91 suggesting that intraneuronal copper deficiency is key in the pathogenesis of AD. 92 Computational Modeling of Cu-Aβ Systems. The computational treatment of copper in the context of AD has been mainly focused on the structural determination of its complexes with the Aβ peptide and its participation in the production of reactive oxygen species. However, the study of Cu-Aβ complexes has been hampered by the fact that, to the best of our knowledge, there are no reports of Cu-Aβ structures obtained by crystallographic or NMR studies. In this context, computational methods are a valuable tool to elucidate the structure and reactivity of these complexes.
Nevertheless, the modeling of these complexes is challenging due to the open-shell nature of the Cu²⁺ systems and the high number of coordinating atoms from the Aβ peptide. In addition, relevant properties for understanding the participation of these complexes in ROS production (e.g., the standard reduction potential) depend on subtle electronic and solvent effects that could be incorrectly described by some computational methodologies.
In this sense, the initial Cu-Aβ models were built considering copper cations and their coordinating ligands, since a complete description of Cu-Aβ systems is computationally demanding. 93,94 Besides, to understand the geometrical changes that occur during the reduction or oxidation of the metal center, molecular dynamics (MD) simulations are appropriate given the considerable size of these systems. Nevertheless, due to the electronic effects that occur in this process and in the reaction with oxygen, classical molecular dynamics simulations are not adequate, since bond-breaking and bond-formation processes cannot be correctly modeled by these means. Instead, ab initio molecular dynamics (AIMD) simulations should be considered, even if they are computationally demanding. A recent review by Strodel and Coskuner-Weber summarizes the computational studies on transition metals and Aβ. 95 In this work, we are going to highlight the computational studies intended to elucidate the structure and redox properties of copper, iron, and aluminum complexes with the Aβ peptide.
Model Systems. Experimental studies on the determination of the Cu-Aβ structure have been performed using electron paramagnetic resonance (EPR), NMR, and other techniques. The main conclusion of these experiments is that copper coordination to Aβ is highly dependent on the surrounding pH. Thus, two main components have been considered as plausible for the coordination of copper to the Aβ peptide (Figure 3). 96,97 Cu-Aβ model systems that consider the copper cation and its coordination sphere have been used to study copper preferences in the Aβ sequence and to propose plausible mechanisms for the reactions that increase oxidative stress. 98,99 Recently, Bertini et al. used density functional theory (DFT) methods to study the mechanism of the O₂ reduction to OH⁻ + •OH radical mediated by Cu-Aβ model systems and ascorbate. 100 As has been pointed out, the use of small model systems has allowed the understanding of copper coordination and the reactivity of Cu-Aβ complexes. However, the use of small models does not allow studying the effect of the peptide on copper coordination or the role of the peptide chain in the stability of the Cu-Aβ complexes. To consider these effects it is necessary to model the Cu-Aβ complexes by considering a larger fragment of the Aβ peptide.
Cu-Aβ1−16 Models. Modeling the 1−16 region is important to understand the effect of the peptide chain on the stability of Cu-Aβ complexes. This region is considered since, as stated before, it corresponds to the metal affinity region. Initially, larger fragments of the peptide were considered to build the models, and finally the whole region was considered. Rauk et al. proposed a model using classical MD to provide insight into the flexibility and most frequent conformations of the peptide chain. However, due to its classical treatment of the copper center, it did not consider copper's coordination preferences. 101 La Penna et al. proposed models considering key fragments of the copper affinity regions of Aβ using first-principles molecular dynamics simulations in a Car−Parrinello scheme for Cu¹⁺ and Cu²⁺ Aβ complexes. 102 In this work they were able to explain the coordination preferences of copper to Aβ. Alí-Torres et al. also proposed a computational protocol for building Cu-Aβ1−16 models considering the full copper affinity region by combining homology modeling and DFT calculations. 103 A schematic representation of this protocol is presented in Scheme 1. The model building starts from the possible metal coordination spheres, and the peptide moiety is included by simulated annealing. Finally, full DFT calculations on the most plausible models are carried out to properly describe the stabilizing interactions.
The use of these larger models has made it possible to understand the impact of the peptide organization on copper coordination preferences and to study the reactivity of these Cu-Aβ complexes. In this sense, Mirats et al. reported superoxide as an intermediate in the production of hydrogen peroxide, confirming the results that Hewitt et al. obtained using model systems. 42,98 The presence of superoxide was later confirmed by experimental means using superoxide dismutase in the reaction media. 105 To explore the conformational and coordination changes in the Cu-Aβ1−16 complexes it was necessary to employ ab initio MD simulations. The results show that the Cu¹⁺-Aβ1−16 systems change from a square planar coordination sphere to a T-shaped structure, which is suitable for facilitating the production of hydrogen peroxide from molecular oxygen. 43 The modeling of these Cu-Aβ1−16 complexes has provided molecular insights to understand the electronic and geometrical phenomena that drive the coordination of copper to Aβ and to propose plausible mechanisms to explain the oxidative damage observed in the development of AD.
Iron.
Iron is the most abundant transition metal in the human body. This ion is present in hemoglobin, myoglobin, cytochromes, ferritin, hemosiderin, and other enzymes. 106 Iron is known to be of great importance in fetal and postfetal neurodevelopment, 107 and, as it is part of the heme prosthetic group, it is closely related to diseases such as anemia and diabetes. 108 Its transport into neurons is carried out by the transferrin, DMT1, and lactoferrin transporters; its transport to the exterior is facilitated by ferroportin; and its transport throughout the organism requires ferritin. Thus, any imbalance of these transporters leads to an erroneous distribution of this metal in the brain. Multiple neuronal disorders can be associated with iron dysregulation, and many of them originate in mutations in different genes, e.g., the C19orf12 gene, whose mutations are related to hereditary spastic paraplegia and pallidopyramidal syndrome. 109 Oral administration of iron in mice showed long-term alterations in biomolecules related to the synapse and mitochondrial function, causing memory loss. 110 An increasing iron concentration in regions such as the neocortex is attributed to normal aging, but this is not the case in areas such as the hippocampus and amygdala. 111,112 APP influences the stability of ferroportin in the cell membrane, and therefore it has a regulatory effect that aids iron transport out of the neuron 113 and maintains control of transferrin and ferritin levels, which prevents iron accumulation in the brain. 114 It is also known that iron overload can cause low levels of ferroportin and hepcidin, a fact that could be related to the iron accumulation found in patients. 115,116 Additionally, considerably low levels of ferroportin and hepcidin have been found in the hippocampus of AD patients' brains, a region highly associated with memory and consciousness. 117 The presence of Fe³⁺ ions in the brain environment induces APP production through multiple mechanisms, higher β-secretase activity (through a possible mechanism involving reactive oxygen species 118 ), and lower α-secretase activity (through furin modulation 119 ), with subsequent release of Aβ to the medium. In addition, iron slows down the aggregation process by preventing the orderly arrangement of the peptide, which is thought to increase its toxicity, 120 and, by its deposition along with the plaques, it creates new redox-active sites that increase oxidative stress. 121 This increase is supported by computational studies, which show that the complexes formed with Aβ have a reduction potential about 0.5 V lower than that of free iron under physiological conditions, increasing its reducing activity. 122,123 On the other hand, other studies suggest that this mechanism could be beneficial. An in vitro study showed that Pb²⁺ and Pb⁴⁺ intoxication inhibits APP translation in neurons, which produced a toxic accumulation of iron in the cytosol, but when iron was added to the medium, APP production was stimulated and toxicity then decreased. This could mean that Aβ deposition can be interpreted as a compensatory mechanism, as the concentrations of copper, iron, and zinc metal ions have been found to increase within senile plaques compared to the surrounding tissue. 124 The Fe²⁺ cation is related to the production of neurofibrillary tangles through activation of CDK5 and GSK-3β kinases. 125
This cation's neurotoxicity led researchers to think for a while that magnetite (Fe²⁺O·Fe³⁺₂O₃) played an important role in AD. However, this hypothesis was discarded when it was shown that magnetite is, in fact, very stable under biological conditions and therefore does not exhibit oxidative catalytic activity with peroxidase substrates. 126 Furthermore, no significant interaction with Aβ peptides was found in in vitro assays, and it was ruled out along with iron oxides and hydroxides.
Computational Modeling of Fe-Aβ Systems. Fe-Aβ systems have been less studied by computational means; in part, this is due to the fact that, even though iron is more abundant than copper in the human body, most of the biologically available iron forms the heme group and is not likely to facilitate the redox reactions involved in the development of AD. 127 Another reason is the poor solubility of the Fe-Aβ complexes, which has made it difficult to conduct experiments in vitro. This lack of experimental information makes modeling Fe-Aβ systems a challenging task. Nevertheless, model systems have been explored computationally considering the information on the coordination sphere obtained by Raman experiments and the plausible coordinating ligands in the Aβ sequence. 128 Alí-Torres et al. used a combination of DFT and MP2 calculations to explore the different coordination spheres of Fe-Aβ complexes and to calculate their standard reduction potentials, which could explain their participation in oxidative stress. 122 The computational modeling of Fe-Aβ complexes is more difficult than that of copper because both the reduced and oxidized forms of the complexes are open-shell systems. In addition, three different spin states must be considered for all models, which increases the computational cost significantly. The description of the electronic structure has to be accurate, since small changes in the electronic properties could change the calculated properties. In this sense, Orjuela et al. proposed a computational protocol to calculate the standard reduction potential (SRP) of iron complexes, and this protocol was applied to the calculation of the SRP of Fe-Aβ model systems (Figure 4). This work showed that a careful evaluation of density functionals and basis sets, as well as solvent models and thermodynamic cycles, is needed for a proper description of these systems. The inclusion of explicit solvent models has been shown to improve the calculation of the solvation free energy; however, this increases the computational cost considerably. 123 According to the calculated SRP values, Fe-Aβ complexes could promote the catalytic production of hydrogen peroxide, leading to an increase in the oxidative damage observed in AD brains.
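As an illustration of how such SRP values are typically obtained from computed free energies, the following minimal Python sketch evaluates a thermodynamic cycle; the conversion constant is standard, the absolute SHE potential of 4.28 V is one of several values reported in the literature, and all input free energies are hypothetical.

```python
# Minimal sketch: standard reduction potential from a thermodynamic cycle,
# assuming gas-phase free energies and solvation free energies (in hartree)
# are available from prior DFT calculations on both redox states.
HARTREE_TO_EV = 27.2114
SHE_ABS = 4.28  # absolute potential of the SHE in V; reported values vary (~4.24-4.44)

def reduction_potential(g_gas_ox, g_solv_ox, g_gas_red, g_solv_red, n_electrons=1):
    """E0 (V vs SHE) for M(ox) + n e- -> M(red) via a thermodynamic cycle."""
    g_ox = g_gas_ox + g_solv_ox      # free energy of the oxidized complex in solution
    g_red = g_gas_red + g_solv_red   # free energy of the reduced complex in solution
    dg_red_ev = (g_red - g_ox) * HARTREE_TO_EV  # reaction free energy, eV
    e_abs = -dg_red_ev / n_electrons            # absolute potential, V
    return e_abs - SHE_ABS                      # shift to the SHE scale

# Hypothetical free energies, for illustration only:
print(reduction_potential(-1500.123, -0.250, -1500.321, -0.180))
```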
Aluminum.
Although it is the most abundant metal in Earth's crust, there is no known biological function related to aluminum, as there is no enzyme or protein that makes use of this metal. 129 For this reason, it has no associated transporters or chaperones, and negative effects related to its consumption have been reported, such as kidney dysfunction, anemia, and neurological disorders. 130 The role of aluminum in dementia began to be suspected in 1965, when, after an injection of aluminum salts in mice, neuronal degeneration similar to the neurofibrillary tangles of AD occurred. 131 In the 1970s, it was found that patients with chronic renal failure could develop an aluminum-associated encephalopathy, whose defining symptoms were speech impairment, epileptic episodes, and movement disorders. 132 Subsequent studies focused on relating this metal to AD. McDermott analyzed the aluminum content in 19 brain samples by atomic spectroscopy and concluded that, even though the aluminum concentration increases with age, no significant differences are found between AD and healthy brains. 133 In a clinical study of patients with this dialysis-associated encephalopathy it was found that there is no correlation between the amount of aluminum ingested and the morphology of AD in the brain. In contrast, the associated dementia disappeared when dialysis fluids were deionized, and aluminum's role in AD was finally discarded. 134,135 However, the pro-oxidant ability of aluminum has recently been related to the formation of a stable aluminum-superoxide complex. 136 Computational Modeling of Al-Aβ Systems. Characterizing the coordination preferences of aluminum with ligands of biological interest is difficult due to the large number of coordinating atoms available in the cellular environment. In this sense, computational chemistry can help by decreasing the number of possibilities and identifying the most stable ligands and coordination modes of this metal. Aluminum has been reported to form stable complexes with the Aβ peptide, but there are no experimental reports on its coordination spheres or their role in the production of radical species. Nevertheless, there are computational reports dealing with these two aspects: the modeling of Al-Aβ complexes and mechanistic studies on the possible role of aluminum complexes in Fenton reactions related to the development of AD and other metal-promoted neurodegenerative disorders.
As aluminum is used on a regular basis, its interactions with the Aβ peptide have been studied by computational means. The study of this complex aims to provide more insight into the role of nonbiological metals in the development of AD. Mujika et al. have proposed a computational protocol to build Al-Aβ complexes. This methodology considered the possible coordination spheres of aluminum and explored all the possible binding sites using a preorganized form of the peptide, which allowed them to build plausible Al-Aβ models (Scheme 2). 137 The main difference between this protocol and the one used for the Cu-Aβ model construction is that, in this protocol, there is no need for a metal-Aβ complex as a template.
The reactivity of Al-Aβ model systems has also been studied by computational methods. Mujika et al. found that the complex of aluminum with citrate, the main low-molecular-mass chelator biologically available, favors the Fenton reaction by reducing Fe³⁺ to Fe²⁺. This is due in part to the formation of a stable aluminum-citrate-superoxide complex. 138 In summary, the metal ion hypothesis establishes that maintaining the homeostasis of metal ions at the brain level is key to healthy functioning and that, rather than an accumulation or deficiency of metals, there is an imbalance between the different brain regions 139 as well as changes in their bioavailability, causing alterations at the synapse. The understanding of the role and coordination modes of metal cations relevant to Alzheimer's disease has inspired the design of molecules able to coordinate these metal cations and avoid the reactions promoting ROS production. Besides chelating properties, ligands including multiple other properties to target other AD pathological factors have been designed in recent years. Some examples of these multifunctional ligands will be presented in the next section.
MULTIFUNCTIONAL LIGANDS
The multifactorial nature of AD and the scarcity of effective treatments have promoted the development of multifunctional agents capable of preventing or reverting some of the most important neuropathological hallmarks of AD. In the previous sections the roles of three important metals involved in AD were reviewed. However, in addition to metallic ion dysregulation, other key factors with strong relationships among them could potentially trigger the disease. This is the case for Aβ deposits, high ROS levels, AChE activity, and monoamine oxidase (MAO) dysregulation. MAO is an enzyme located in the mitochondrial membrane that exhibits two isomeric structures, A and B, upon which some neurotransmitter regulation depends. The reaction catalyzed by the MAO isomers involves the release of hydrogen peroxide as a product, and, hence, an alteration of its enzymatic activity could lead to oxidative stress. 140 This multifactorial nature has inspired the synthesis and testing of molecular scaffolds to obtain derived chelators with additional features.
One of last century's paradigms in drug design consisted of the development of single-purpose drugs, i.e., one molecule as a drug active principle for each therapeutic target (metal, protein, enzyme, etc.). Over the years, results have shown that these kinds of compounds present undesirable side effects in the treatment of diseases characterized by multiple pathological factors, such as neurodegenerative disorders. 141 Nowadays it is more desirable to develop molecules with multiple therapeutic functions included within the same chemical structure. Sampietro et al. reviewed, through a bibliographic search, how in vitro and in vivo studies of multitarget compounds have shown them to be more effective than single-target molecules in AD. In the same work they show that combinations of AChE/BChE-related oxidative stress, Aβ aggregation, and metal chelation-induced oxidative stress are among the top ten combinations used for the design of multitarget compounds. 142 The design of molecules that can carry out multiple functions has led to the proposal of candidates with chelating, intercalating, and antioxidant features. To ensure their safe application in AD treatment, these ligands need to exhibit low toxicity, good metabolic stability, and moderate affinity toward metals that are essential to metalloproteins, and they must also possess the ability to cross the blood-brain barrier (BBB). 143,144 The design of multifunctional chelators is a complex task, and the development of new drug candidates is associated with high costs and long time periods. Currently, rational drug design is supported by computational tools that help to identify chemical structures with adequate drug-like properties and desirable activity toward therapeutic targets. 145−147 Diverse examples of multifunctional compounds have been reported in the literature throughout the years. 148−152 The implementation of different functions into a single structure for drug design follows three principal procedures. 153 The functionalization approach relies on organic chemistry reactions to generate chemical modifications in a compound with well-known drug-like properties, thus modulating its reactivity and mode of action. In contrast, the main goal of the attachment strategy is to obtain hybrid molecules by merging two or more known chemical structures using a linking bridge. This method has the advantage of leaving the active groups from the starting structures intact, building up a compound with mixed biological functions. 154 The attachment procedure is used to prevent undesired interactions for the metal chelator when crossing the BBB in order to reach the brain; this can be seen when glycoside fragments or nanoparticles are attached to the starting structure. 155,156 Lastly, the incorporation technique employs a structure whose function has to be optimized by the insertion of diverse substituents. 157,158 However, factors such as molecular weight, charge, and lipophilicity are important to achieve a proper absorption of the ligand into the organism and should be checked during the incorporation method. 159
One of the most common drug design filters is the well-known rule of five established by Lipinski, which consists of a set of structural constraints for an appropriate absorption into the organism: the ligand should have a molecular weight below 500 amu, fewer than five hydrogen-bond donor groups (O−H or N−H), fewer than ten hydrogen-bond acceptor atoms (O or N), and an octanol/water partition coefficient lower than five. 145 These criteria potentially apply to water-soluble structures that display good permeability through biological barriers. According to these rules, highly polar compounds would be too hydrophilic to cross the BBB, which would likely translate into poor brain diffusion, making them innocuous or even toxic candidates, whereas more hydrophobic structures are desired for this purpose. In this sense, Quantitative Structure-Activity Relationship (QSAR) models are very useful tools to filter initial candidates, as they make it possible to perform fast calculations of drug-like properties for large sets of compounds, such as BBB permeability, a key factor for drug candidates in the context of AD. However, it must be noted that the accuracy of these methods is usually moderate for these kinds of approaches. 147 The huge number of possible structure modifications and the functional diversity are among the main challenges during the drug design process. Computational methodologies are essential for the design and evaluation of potential multifunctional chelators in AD and other metal-promoted neurodegenerative disorders. These tools allow us to identify and design compounds with potential therapeutic effects using diverse techniques, such as database searches, algorithms for the comparison of substructures, fast calculation of molecular descriptors through QSAR schemes, and virtual screening, among others. In addition, emerging tools such as artificial intelligence and machine learning models have been used to speed up and improve the screening process in computer-aided drug design. 160 A proposed computational protocol is presented in Scheme 3. 146,161

Scheme 3. Example of a Computational Protocol with the Essential Steps for the Design and Evaluation of Multifunctional Chelators

The strategy starts with the rational design of a structure that should present desirable properties, in addition to metal chelation, which will be used as a reference scaffold. Then, a virtual screening analysis is performed using various databases to find structures similar to this scaffold by checking chemical likeness, so the obtained list is expected to have a similar chemical structure and reactivity. At this point, a series of filters must be imposed to further detect the molecules with the best chances of becoming drug candidates; this can be done by executing a fast evaluation of the pharmacological properties over the entire molecular set. The last part of the protocol is to study the chelating and other desired properties, such as antioxidant activity, for the reduced set of compounds by means of quantum-chemical calculations.
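A filter of this kind is straightforward to implement; the following minimal sketch assumes the open-source RDKit toolkit is available and applies the four criteria quoted above (the clioquinol SMILES is written here from its structure and is illustrative only).

```python
# A minimal sketch of a Lipinski rule-of-five filter using RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                               # unparsable structure
        return False
    return (Descriptors.MolWt(mol) < 500          # molecular weight < 500 amu
            and Lipinski.NumHDonors(mol) < 5      # O-H / N-H donors < 5
            and Lipinski.NumHAcceptors(mol) < 10  # O / N acceptors < 10
            and Descriptors.MolLogP(mol) < 5)     # calculated logP < 5

# Example: clioquinol, one of the chelating scaffolds mentioned in the text
print(passes_rule_of_five("Oc1c(I)cc(Cl)c2cccnc12"))
```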
Evidence supporting the role of metal ions in Aβ aggregation and the increase of oxidative stress has rendered metal ion chelation a promising treatment in anti-AD drug design. Nevertheless, after more than 20 years of research, chelation therapy remains the subject of continuous debate. Recently, Drew suggested the possibility of abandoning therapeutic chelation, especially for copper ions, due to the lack of evidence of clinical benefits. 162 However, Siotto et al. reviewed a meta-analysis that showed the relation between copper levels and AD development. 163 The initial proposals regarding metal chelators were focused mainly on the coordination of metal cations, and high metal affinities were desirable. 121 However, chelators should not sequester metal ions that are part of essential proteins, to avoid side effects and toxicity issues, but at the same time they must hold a competitive affinity compared to Aβ-metal complexes to induce therapeutic effects. 164 These agents are also called metal-protein attenuating compounds (MPACs), and, in addition to having drug-like qualities, they possess regulated affinities toward metal ions. Many studies have been performed in in vitro and in vivo models and also in small groups of AD patients to assess Aβ-metal-related toxicity in the brain using MPACs. 165,166 Compounds with demonstrated metal chelation activity have served as inspiration in the design of new drug candidates with improved qualities by means of the functionalization, attachment, and incorporation techniques mentioned previously. Among these scaffolds, clioquinol is a remarkable example. 167 After this, a whole generation of chelating agents was developed by including fluorescence activity through marker fragment incorporation, which makes them able to track the temporal evolution of Aβ senile plaques in the brain upon drug treatment. 161 Some examples consist of thioflavin- and p-I-stilbene-like structures, where chelating properties are induced during drug design. 161,168 Most recently, chelating agents with appropriate antioxidative activity have been proposed to diminish the oxidative stress and neurotoxicity in some critical stages of the disease. Natural products and their derivatives are representative of this group of compounds with antioxidant properties; some examples are neoflavonoids, resveratrol derivatives, and curcuminoids. 169−171 The design of these multifunctional agents is typically guided by empirical knowledge of the antioxidant activity conferred by functional groups or molecular structures (e.g., natural products). Nevertheless, the hydrogen peroxide cycle shown in Figure 2 involves multiple steps that are SRP-dependent, highlighting the SRP as a potential key aspect to be considered throughout the drug design process. Chaparro et al. proposed a computational protocol for the design of promising candidates with appropriate affinity toward the metal ions of interest, drug-like characteristics, and controlled antioxidant properties, by calculating the SRP values of the involved metal complexes. 172 This methodology is presented in Figure 5, where it was applied to the search for copper multifunctional agents. It started with the selection of a set of 64 copper complexes with reported experimental SRP values in aqueous media, which were then modeled by DFT calculations in their oxidized and reduced states. The latter allowed the estimation of the metal affinity toward both Cu2+ and Cu1+ ions, selecting a first set of compounds by their proper chelating ability.
Then, further filtering was accomplished by applying Lipinski's rule of five and calculating the BBB permeation. The remaining set of compounds was classified into two groups according to their SRP: antioxidants, whose SRP values are higher than the value corresponding to the production of hydrogen peroxide from oxygen under biological conditions (0.295 V), 44 and redistributors, whose SRP values lie in the range from −0.32 to 0.30 V (the lower limit being defined by the most abundant natural reducing agents present in neurons).
In these terms, antioxidant molecules would be able to disrupt ROS generation thanks to their larger associated SRP value, whereas redistributor-like compounds would exhibit the characteristics needed to accomplish a mechanism for metal redistribution into neuronal cells. These SRP calculations were performed by using an adapted methodology for isodesmic reactions that allows for an accurate prediction, as it was evaluated with a wide variety of copper complexes with different types of coordination spheres. 173 Finally, molecular scaffolds with the desired properties were obtained and used to build up a set of derivatives that fulfill the previous requirements.
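The SRP windows above translate directly into a simple decision rule. The sketch below is our own illustrative encoding of those thresholds, not code from the cited protocol; note that the two windows as quoted overlap slightly between 0.295 and 0.30 V, and here the antioxidant label is given precedence in that overlap.

```python
def classify_by_srp(srp_volts: float) -> str:
    """Classify a copper complex by its standard redox potential (SRP).

    Thresholds as quoted in the text: antioxidants lie above 0.295 V
    (the potential for hydrogen peroxide production from oxygen under
    biological conditions); redistributors lie between -0.32 V (set by
    the most abundant natural reducing agents in neurons) and 0.30 V.
    """
    if srp_volts > 0.295:
        return "antioxidant"
    if -0.32 <= srp_volts <= 0.30:
        return "redistributor"
    return "outside both windows"

for srp in (0.45, 0.10, -0.50):
    print(srp, "->", classify_by_srp(srp))
```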
Extensive research has been carried out in recent times regarding promising multifunctional chelating agents; some examples of these compounds and their most important characteristics are presented in Table 1. Here, the most relevant metal chelating affinities, experimental tests, computational studies, and additional features are listed for some representative derivative compounds obtained from commonly known scaffolds in AD drug design. Most of the derivatives contain N and O donor atoms, which can coordinate the metal ions and form 1:1 or 1:2 metal−ligand complexes depending on the geometrical constraints of the ligand. Frequently, the computational studies of candidates include a QSAR analysis to assess their drug-like properties and filter a large set of structures, DFT calculations of the chelators and their corresponding metal complexes, and molecular docking simulations to explore the interactions of the promising candidates with Aβ monomers and fibrils. It is important to emphasize that, in many cases, the chelators described in Table 1 need further investigation in animal models before they can be tested in clinical trials.
The increasing number of multifunctional agents has turned this area into an intense field of research. The number of molecules tested at several levels has allowed us to understand the role of metal cations and to propose new and more efficient candidates to prevent or revert the neuronal damage observed in AD patients. This review attempts to provide a state of the art on the role of these metal cations in the increase of oxidative stress and on the multifunctional ligands proposed to fight the effects of the disease.
CONCLUSIONS
Alzheimer's disease is currently one of the major challenges in human health. Projections of the number of people who will become AD patients are concerning for the upcoming years. This highlights the urgency of further research toward a full understanding of the disease and the development of alternative and effective treatments. Based on experimental observations, some pathological hallmarks in AD have been well-established, and several hypotheses have been formulated in an effort to explain them. The present review focused on the roles of the main metals involved in AD, such as the formation of highly stable Aβ-metal complexes and the production of ROS that cause oxidative damage in neurons. This is also related to the multifactorial nature of the disease, which increases the difficulty of finding a cure due to the existence of different therapeutic targets.
Accordingly, rational drug design in AD aims to develop multifunctional chelating compounds capable of combining multiple desirable features. Here, we reviewed the current state of multifunctional chelators, and a classification based on their therapeutic functions is provided. A list of remarkable chelating compounds along with their principal characteristics and related works is given. Furthermore, we also present the impact of computational chemistry on AD drug design over the last years. | 2023-01-27T16:05:47.638Z | 2023-01-25T00:00:00.000 | {
"year": 2023,
"sha1": "34aa56f8db8fe1d02114ab0d52a2558b26b06cc4",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2574bac2fa4e790a5f3d306c2b2122d174a6631c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
20827514 | pes2o/s2orc | v3-fos-license | Opposite cortical fracture in high tibial osteotomy: lateral closing compared to the medial opening-wedge technique.
BACKGROUND AND PURPOSE
The aim with high tibial valgus osteotomy (HTO) is to correct the mechanical axis in medial compartmental osteoarthritis of the knee. Loss of operative correction may threaten the long-term outcome. In both a lateral closing-wedge procedure and a medial opening-wedge procedure, the opposite cortex of the tibia is usually not osteotomized, leaving 1 cm of bone intact as fulcrum. A fracture of this cortex may, however, lead to loss of correction; this was examined in the present study.
PATIENTS AND METHODS
We used a prospective cohort of 92 consecutive patients previously reported by Brouwer et al. (2006). The goal in that randomized controlled trial was to achieve a correction of 4 degrees in excess of physiological valgus. In retrospect, we evaluated the 1-year radiographic effect of opposite cortical fracture. Opposite cortical fracture was identified on the posteroanterior radiographs in supine position on the first day after surgery.
RESULTS
44 patients with a closing-wedge HTO (staples and cast fixation) and 43 patients with an opening-wedge HTO (non-angular-stable plate fixation) were used for analysis. 36 patients (four-fifths) in the closing-wedge group and 15 patients (one-third) in the opening-wedge group had an opposite cortical fracture (p < 0.001). At 1 year, the closing-wedge group with opposite cortical fracture had a valgus position with a mean HKA angle of 3.2 (SD 3.5) degrees of valgus. However, the opening-wedge group with disruption of the opposite cortex achieved varus malalignment with a mean HKA angle of 0.9 (SD 6.7) degrees of varus.
INTERPRETATION
Fracture of the opposite cortex is more common for the lateral closing-wedge technique. Medial cortex disruption has no major consequences, however, and does not generally lead to malalignment. Lateral cortex fracture in the medial opening-wedge technique, with the use of a non-angular-stable plate, leads more often to varus malalignment.
High tibial valgus osteotomy (HTO) is a generally accepted treatment for medial unicompartmental osteoarthritis of the knee with varus alignment, especially in younger, active patients (Virolainen et al. 2004). A successful outcome for the osteotomy relies on proper patient selection, and achievement and maintenance of adequate operative correction (Hernigou et al. 1987, Berman et al. 1991, Spahn et al. 2006a). The two most commonly used surgical techniques are the closing-wedge HTO with fibular osteotomy and the opening-wedge HTO with plate fixation, the second of which has become popular more recently. Avoidance of fracture of the opposite cortex in HTO can be difficult, and is generally limited by the angular size of the wedge. A fractured medial cortex in a lateral closing-wedge osteotomy may lead to progressive movement of the distal tibia into a varus position (Kessler et al. 2002). Instability at the opening-wedge osteotomy site due to disruption of the lateral cortical hinge may possibly result in displacement of the osteotomy, and in this way it may contribute to recurrent varus deformity (Miller et al. 2005).
Opposite cortical fracture in HTO is an adverse event for which one cannot randomize the patient. The highest practicable level of evidence for evaluation of the consequences of intraoperative opposite-cortex fracture in HTO would thus be a retrospective analysis of a well-designed randomized controlled trial. We therefore used a cohort of 92 patients who had been included in a prospective level-I study on two different techniques of HTO, which has been described extensively by Brouwer et al. (2006). The objective of the present analysis was to determine the effects of opposite cortical fracture on whole-leg alignment, for both closingwedge and opening-wedge osteotomy.
Patients and methods
The patients included in this study were part of a randomized, controlled, and consecutive trial published by Brouwer et al. (2006), comparing lateral closing-wedge and medial opening-wedge osteotomy in 92 patients. The goal in that trial was to achieve a correction of 4 degrees in excess of physiological valgus. For the closing-wedge group, a slotted wedge resection guide of Allopro (Zimmer, Winterthur, Switzerland) was used under fluoroscopic guidance. The anterior part of the proximal fibular head was resected and the osteotomy was fixated with 2 staples. After surgery, a standard cylinder plaster cast was applied for 6 weeks. The opening-wedge HTO was created with the Puddu HTO instrumentation (Arthrex, Naples, FL), and performed under fluoroscopic guidance to control the correction during the surgical procedure; the osteotomy was fixated with the non-angular-stable Puddu plate. If the wedge was more than 7.5 mm, the open wedge was filled with bone from the ipsilateral iliac crest. All patients were mobilized on the first postoperative day, and partial weight bearing was allowed for 6 weeks.
This analysis focuses on the radiographic effect of opposite cortical fracture on the whole-leg alignment in both techniques at the 1-year follow-up. Standardized radiography was performed preoperatively, on the first day postoperatively, and 12 months after surgery.
1 patient was lost to follow-up (closing-wedge), and in 3 cases (2 closing-wedge and 1 opening-wedge) we were unable to retrieve the radiographs taken 1 day after surgery to determine opposite cortical fracture. In another patient (opening-wedge), the 1-year postoperative whole-leg radiograph had not been done because of emergency treatment of an unrelated condition. Thus, the study population consisted of 87 of the 92 patients originally included (Table 1).
Measurements
The grade of radiographic osteoarthritis was scored according to Ahlbäck (1968) and measured on standard short posteroanterior radiographs in standing position. The mechanical axis (hip-knee-ankle (HKA) angle) was measured on a whole-leg radiograph in standing position before surgery and 1 year after surgery (Brouwer et al. 2003).
Opposite cortical fracture of the osteotomy site was scored from the posteroanterior radiographs in supine position on the first day after surgery by one assessor (TMR), who was blinded as to the 1-year postoperative radiographic outcome (Figures 1 and 2). When cortical disruption was noted in the lateral closing-wedge group, we also looked for a gap. A gap was defined as more than 2 mm width between the disrupted opposite cortex fragments (Figure 3).
Statistics
Distribution of HKA angle was examined by the Shapiro-Wilk test. The HKA data were not normally distributed; thus, the Mann-Whitney-Wilcoxon U-test was used to analyze between-group differences. The chi-squared test was used to compare the percentages of opposite cortical fractures between the groups. We used SPSS version 10.1 and p-values of ≤ 0.05 were considered to be statistically significant.
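The statistical workflow above is straightforward to reproduce with modern open-source tools. The following is a minimal sketch using scipy.stats rather than SPSS 10.1; the group data are simulated from the reported means and SDs purely for illustration and are not the study's actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 1-year HKA angles (degrees of valgus), NOT the study data:
# closing-wedge group with vs. without opposite cortical fracture
hka_fracture = rng.normal(3.2, 3.5, size=36)
hka_no_fracture = rng.normal(2.5, 3.7, size=8)

# Normality check (Shapiro-Wilk), as described in the Statistics section
print(stats.shapiro(hka_fracture))

# Non-parametric between-group comparison (Mann-Whitney U test)
print(stats.mannwhitneyu(hka_fracture, hka_no_fracture))

# Chi-squared test on the 2x2 fracture-by-technique table
table = np.array([[36, 8],     # closing-wedge: fracture yes / no
                  [15, 28]])   # opening-wedge: fracture yes / no
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```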
Results

44 patients had a closing-wedge HTO and 43 patients had an opening-wedge HTO. 36 patients (82%) in the closing-wedge group and 15 patients (35%) in the opening-wedge group were found to have an opposite cortical fracture; this difference between the two osteotomy techniques was highly significant (p < 0.001). The odds ratio for an opposite cortical fracture in closing-wedge HTO compared to opening-wedge HTO was 8 (95% CI: 3-23).
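The reported effect size of 8 (95% CI: 3-23) can be reproduced from the 2x2 table as an odds ratio with the standard Woolf log-scale confidence interval; a relative risk computed from the same counts would be only about 2.3, which is why the text above speaks of an odds ratio. The sketch below is our own check, assuming the Woolf method rather than whatever the original authors used.

```python
import math

# 2x2 table from the study: opposite cortical fracture yes / no
a, b = 36, 8    # closing-wedge HTO
c, d = 15, 28   # opening-wedge HTO

odds_ratio = (a / b) / (c / d)                  # (36/8) / (15/28) ~= 8.4
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf standard error
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)   # ~= 3.1
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)  # ~= 22.6
print(f"OR = {odds_ratio:.1f} (95% CI: {ci_low:.0f}-{ci_high:.0f})")
```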
Closing-wedge HTO
For the closing-wedge group, the mean preoperative HKA angle was 6.7 (SD 2.9) degrees of varus in patients with a cortex fracture and 6.0 (SD 4.1) degrees of varus in those without opposite cortical disruption. At the 1-year follow-up, the mean postoperative HKA angle was 3.2 (SD 3.5) degrees of valgus in the closing-wedge group with cortex fracture and 2.5 (SD 3.7) degrees of valgus without cortex fracture; these differences were not statistically significant (Table 2). Of the 36 patients with a medial cortex fracture, 18 showed no gap.
In 18 patients, we observed a gap of more than 2 mm between the opposite cortex fragments. When a gap was seen on the radiograph from the first postoperative day, the mean 1-year postoperative HKA angle was more in valgus compared to the group with opposite cortex fracture without a gap: 4.3 (SD 3.4) and 2.1 (SD 3.4) degrees of valgus, respectively (p = 0.05) (Table 3). In the group with a gap, 7 patients had a valgus of more than 4 degrees as compared to 6 patients without a gap; this difference was not significant. No significant deviation from the planned 4 degrees in excess of physiological valgus was seen in any of the subgroups at the 1-year follow-up.
Opening-wedge HTO
Patients in the opening-wedge group with and without opposite cortical fracture showed no significant difference in the mean preoperative HKA angle of varus (4.9 degrees (SD 3.2) and 5.5 degrees (SD 2.5), respectively). 1 year postoperatively, the mean HKA angle was 0.9 (SD 6.7) degrees of varus in patients with cortex fracture, and 2.3 (SD 4.0) degrees of valgus in patients without cortex disruption (Table 2). This difference in postoperative alignment between the two groups almost reached statistical significance (p = 0.057) 1 year after surgery. The group of patients with opposite cortical fracture had less accurate correction, with significant deviation from the planned 4 degrees in excess of physiological valgus at the 1-year follow-up (p = 0.04). In patients without cortex fracture, there was no statistically significant deviation from the planned correction 1 year after surgery.
Discussion
Opposite cortical fracture in closing- and opening-wedge HTO appears to be an operative complication that is not entirely preventable when correcting a large varus deformity. Pape et al. (2004) reported that the capacity for plastic deformation of the medial cortex of the proximal tibia might have been exceeded in closing osteotomies with a larger wedge size (> 8 degrees), leading to a non-displaced fracture during the operation. Böhler et al. (1999) argued that valgus correction of the tibia plateau by removal of a wedge of as much as 10 degrees is possible without fracturing the medial cortex. However, another cadaver study by Kessler et al. (2002) found a far smaller maximal angular correction (7 degrees for both the closing- and the opening-wedge technique) that can be applied to human tibias without fracturing the cortex at the apex of the wedge. This was confirmed by Spahn et al. (2003), who reported results from a series of 55 patients treated with opening-wedge HTO using the Puddu spacer plate (mean wedge correction angle of 9.7 (SD 1.9) degrees), with 8 lateral cortical fractures. In our patients, the main aim was to achieve a correction of 4 degrees in excess of physiological valgus at the 1-year follow-up, which led to a mean correction angle of 11 (SD 3.1) degrees in the closing-wedge and 9 (SD 2.8) degrees in the opening-wedge group. The magnitude of the wedge sizes may be the reason for the high rate of unintentional opposite cortical fracture seen with both the closing-wedge (four-fifths) and opening-wedge technique (one-third). Also, with the closing-wedge technique it can be difficult to remove the wedge completely, especially at its apex at the medial side. Consequently, closing the wedge may cause fracturing at the medial osteotomy site. This probably explains the significantly higher rate of opposite cortical fracture compared to the opening-wedge technique.

Table 2. HKA angle a (mean (SD) degrees) by osteotomy technique and opposite cortical fracture status

                              Closing-wedge HTO       Opening-wedge HTO
Opposite cortical fracture    yes         no          yes           no
n                             36          8           15            28
HKA angle, preoperatively     6.7 (2.9)   6.0 (4.1)   4.9 (3.2)     5.5 (2.5)
HKA angle, postoperatively   -3.2 (3.5)  -2.5 (3.7)   0.9 (6.7) b  -2.3 (4.0) b

a A positive angle represents varus alignment; a negative angle represents valgus alignment.
b p = 0.06 for the postoperative difference in HKA angle with and without cortex disruption in the opening-wedge group.
Limitations of the study
The study patients were part of a randomized, controlled and consecutive trial on two osteotomy techniques (Brouwer et al. 2006). In clinical studies, there have been few reports on the effect of opposite cortical fracture in HTO that may lead to loss of correction. We realize that subgroup analysis in such a trial has its statistical limitations. However, it is impossible to randomize for adverse events such as opposite cortical fracture in HTO. With this analysis, we focused on the 1-year radiological effect of intraoperative disruption of the opposite cortex in both techniques. One limitation may be that we did not take a whole-leg radiograph on the first day postoperatively because the patient could not stand on the operated leg; thus, we cannot discriminate between insufficient correction and loss of correction during the 1-year follow-up.
Closing-wedge HTO
It has been recommended that one should maintain the medial cortex of the proximal tibia in lateral closing-wedge HTO, to provide sufficient stability and avoid a reduction in cortical contact area of tibial segments (Miniaci et al. 1989, Coventry et al. 1993). Progressive movement into a varus position may otherwise occur (Myrnerts 1980, Insall et al. 1984). However, our data do not support this statement. The desired correction was achieved even more often in the closing-wedge group with medial cortex fracture and a gap between the medial cortex fragments (Table 3). With the closing-wedge technique, posteromedial bony remnants may act like a more lateral hinge when closing the wedge, and probably cause fracture and gaping at the medial osteotomy site with pronounced valgisation. Pape et al. (2004) detected no loss of valgus correction on full weight-bearing standing radiographs in patients with a fractured medial cortex of the proximal tibia after a closing-wedge procedure either; they used an L-shaped rigid plate, which is supposed to offer high primary stability. Radiostereometric findings indicate a less stable situation for closing-wedge osteotomy when bone staples with a plaster cast are used (Magyar et al. 1999). However, Harrison and Waddell (2005) did not note any change in femoral-tibial alignment with the use of staples and a long leg cast.
Opening-wedge HTO
Whether or not the lateral cortex has been harmed largely dictates the stability after opening-wedge HTO, regardless of implant design (Stoffel et al. 2004). Agneskircher et al. (2006) also stated that in composite tibias, independently of the implant, axial loading is well tolerated with an intact lateral cortical bridge. Our results seem to confirm these findings in a clinical setting. With the use of the non-angular-stable Puddu plate, 1 year after surgery the mean HKA angle was 2.3 degrees of valgus in the opening-wedge group without cortex fracture (Table 2). In our study, however, the opposite tibial cortex was fractured in 15 of 43 patients, and they had less accurate correction with a mean HKA angle of 0.9 degrees of varus at the 1-year follow-up. Hernigou et al. (1987) reported lateral cortex fracture as a complication of medial opening-wedge HTO, resulting in displacement of the osteotomy and recurrent varus malalignment before osteotomy union in 12% of all their patients. In our series, 4 patients with lateral cortex disruption were reoperated within 1 year: 2 patients because of nonunion and 2 patients requiring revalgisation osteotomy due to the recurrent varus deformity (Brouwer et al. 2006). Disruption of the lateral cortex causes increased micromotion at the osteotomy site, and this instability most likely contributes to the high incidence of delayed union and nonunion after medial HTO (Miller et al. 2005). The Puddu plate proved unable to oppose the instability, which was also reported by others (Spahn et al. 2003). Mechanical studies have shown that when the lateral cortex is injured, angle-stable implants provide superior primary stability compared to the Puddu plate (Stoffel et al. 2004, Agneskircher et al. 2006). The angle-stable design protects the lateral cortex and prevents lateral displacement. Furthermore, the use of a spacer with an angle-stable plate seems to increase primary stiffness even more (Spahn et al. 2006b). When unintentional opposite cortical fracture occurs, we suggest the use of an angle-stable implant, which has the best biomechanical properties in internal fixation after medial opening-wedge HTO.
Conclusion
We conclude that fracture of the opposite cortex in HTO is much more common in the lateral closing-wedge technique than in the medial opening-wedge technique. Nevertheless, medial cortex fracture in closing-wedge osteotomy with the use of staples and plaster has no major consequences, and can be managed successfully in the vast majority of patients. It does not generally lead to recurrence of varus malalignment 1 year after surgery. However, lateral cortex fracture in the medial opening-wedge technique is an unstable situation. The use of a non-angular-stable Puddu plate seems to provide insufficient primary stability and leads more often to varus malalignment. | 2018-04-03T05:20:51.595Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "4c9e74a0b35d8f3180db10b5beadb334a9ea0cd2",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17453670710015508?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "7626c4cedb225bbeaf66134b2d0cde68a4ffa064",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232479998 | pes2o/s2orc | v3-fos-license | A theoretical framework to improve the construct for chronic pain disorders using fibromyalgia as an example
Fibromyalgia (FM) is a frequent, complex condition of chronic musculoskeletal pain with no evidence for biological correlates. For this reason, despite many efforts from the medical community, its construct still appears ill defined. Promising candidate biomarkers are critically reviewed. A research agenda is proposed for developing a clearer construct of FM. The ideal theoretical framework is one of overcoming the illness–disease dichotomy and considering reciprocal interactions between biology and behaviour. This approach may foster research in other fields of pain medicine and of medicine in general.
Introduction
Fibromyalgia (FM) is a clinical condition characterised by widespread and persistent pain with insomnia, fatigue, morning stiffness, cognitive symptoms (memory, concentration, attentional problems, mental slowness) and emotional problems (depression and anxiety). 1,2 Its prevalence is estimated at 2-4% of the general population. 2 FM is a complex syndrome that most likely originates from a multi-axial interaction between psychological, neurological, endocrine and immune systems. A detailed review of these is beyond the scope of this article, but a brief summary can be found in the literature. [3][4][5][6] Of interest here, FM shares with many forms of chronic pain an ill-defined construct. It has been described as an 'enigma' that has been under-, over- and mis-diagnosed. 7 Hauser et al. suggest the utilisation of evidence-based interdisciplinary guidelines that include a comprehensive clinical assessment to avoid the problem of inappropriate diagnosis. 7 Despite significant research over the past 30+ years, there exist issues of legitimacy of the condition, the diagnostic usefulness of the label, classification nosology, etiology and pathophysiology. 8 New diagnostic criteria have been proposed, as well as recommendations for improved diagnosis by German, 9 European, 10 Canadian and International teams. 11,12 In the context of symptoms without any universally accepted biomarker, diagnostic criteria were proposed in combination with the exclusion of diseases already known to cause chronic widespread pain. 1 We suggest that proper identification and management require refocusing on the construct as a unitary disease-illness condition and developing appropriate content for the diagnostic criteria. These should merge into a unique diagnostic algorithm with both clinical features and the biomarkers of pathophysiology. 13

The construct of FM: a historical perspective

The criteria for FM were originally developed in 1990 by the American College of Rheumatology (ACR) to reflect the prevalence of patients who presented to physicians complaining of chronic widespread pain and tender points. 1 These criteria reflected the fact that, at that time, the consensus opinion about the pathophysiology of fibromyalgia (i.e. the underlying 'disease') was that it was primarily a musculoskeletal disorder. These criteria focussed primarily on the pain. In 2010, the criteria were modified to reflect comorbid symptoms that contribute to the global suffering and had been neglected in the past, including secondary symptoms such as depression, poor sleep, cognitive symptoms, and a minimum of three tender regions. 2 In 2016 new criteria were introduced, which no longer required the clinician to examine the patient for tender points (the only clinically detectable feature suggesting possibly some underlying muscle abnormality or a feature of sensitisation). This was in response to the fact that most physicians in the United States (US) were making the diagnosis of FM without examining for tender points and thus inappropriately applying the 2010/2011 criteria to their patients. 14 Also, the tender point examination was not viewed as a consistent, objective test but as having a subjectivity dependent upon each individual examiner's opinions and ability to detect the points. The emphasis was on whether the patient had chronic widespread pain (using a scale) and on secondary symptom severity: a purely subjective approach.
The results took into account aspects of the pain, the impact of the condition on the person, and severity. However, the evolution of the criteria has not been 'anchored' to any pathophysiology or any biomarkers of disease mechanism. An evaluation of the validity of these criteria, and of violations of the validity analysis, can be found in Figure 1 of Appendix A (taken from Kumbhare et al. 3).
Why the present construct of FM is unsatisfactory
One of the major threats to the present approach is that, when the criteria are applied, a highly heterogeneous sample can be obtained. 3 This arises from using criteria that were based upon freely self-reported symptoms and are applied by healthcare practitioners in different countries and socioeconomic groups and, most importantly, from the lack of a reference standard against which the criteria were developed. Kumbhare et al. 3 opined that the criteria rely too heavily upon expert opinion and, furthermore, that there was no specific technique to investigate any underlying pathophysiology. The results of Table 1 suggest that there is acceptable inter-criteria agreement between the 1990 and 2010 criteria but not with the newest 2016 criteria.
The construct of FM: cutting edge of research

To our knowledge, current research suggests that central sensitisation may be an important part of the syndrome's pathophysiology. [15][16][17] The pattern of expanding pain characterised by hyperalgesia (increased pain in response to normally painful stimuli) and allodynia (pain in response to normally non-painful stimuli) strongly suggested supraspinal rather than purely spinal dysfunction. 18 Moreover, in addition to widespread pain and tenderness, patients experience other symptoms suggestive of central nervous system (CNS) involvement, including fatigue, sleep, mood and memory difficulties, and hyper-sensitivity to sensory stimuli. Specific dynamic quantitative sensory testing (QST) has demonstrated other CNS pain processing abnormalities, including an increase in facilitatory activity (increased wind-up or temporal summation) and decreased descending analgesic activity [conditioned pain modulation (CPM)] as contributory mechanisms of CNS-mediated pain amplification. 19 Ideally, the criteria should include features of this construct.
Possible biomarkers for FM
In the ideal situation, a valid biomarker correlates, both in its level and in its changes, with clinical outcome. Validation of an outcome measure is far from a simple issue, and proper statistical techniques should be adopted. 20 The same holds for biomarkers. Their validation can be accomplished only by performing a number of therapies on a number of independent cohorts: an enormous amount of work. 21 To make things even more difficult, there are a number of candidate biomarkers for FM. These include blood-borne biomarkers, imaging, neurophysiology measures and polygenomics assessments. A detailed systematic review is beyond the scope of this article. Kumbhare et al. have published a scoping review of relevant biomarkers, which is summarized herein. 3 With regard to blood-borne biomarkers, the literature provides evidence for hypothalamic-pituitary axis perturbations including corticotropin-releasing hormone and cortisol [21][22][23][24][25][26][27][28]; interleukin-6, -8 and -10 29-31; tumor necrosis factor 30,32-34; brain-derived neurotrophic factor 35; and S100β. 35 Radiological assessments also provide biomarkers, such as advanced brain imaging that has shown changes in functional connectivity and blood flow. 36 Furthermore, recent advances in quantitative ultrasound of skeletal muscle have shown discriminative ability between healthy controls and persons with myofascial pain and FM. 37-40 Kumbhare et al. used texture feature analysis to examine the B-mode ultrasound images of the trapezius muscle of subjects with myofascial pain, 41 with latent and active myofascial trigger points, and compared them with asymptomatic healthy controls, finding significant differences. A similar analysis was performed for FM. 42 Neurophysiological assessments have also been developed in chronic pain conditions like FM, headache and osteoarthritis. Current research suggests that central sensitisation may be an important part of the syndrome's pathophysiology, to be included among the features of this construct. 16 The Nociceptive Flexion Reflex (NFR) threshold has been proposed as a potential biomarker candidate that may assist in uncovering the neurophysiological pain mechanism; this could ultimately play an important role in a more homogeneous diagnostic categorization and accurate treatment, 43-47 despite its well-known between-subject variability. 48 The NFR threshold has been described as a marker not only of pain, 49 but also of the neuroanatomical reorganization at the spinal cord segments, specifically laminae II, III and IV of the dorsal horns. 49 The NFR threshold may vary between genders (the threshold is lower in women). 50 The neurophysiologic mechanisms of this difference have been investigated. 51 Notably, inclusion of unbalanced numbers of males and females may introduce a biased effect size in the NFR threshold difference between fibromyalgic and healthy individuals. 51,52

Potential methodology against current shortcomings

For a better understanding of the FM construct, the key to success is perhaps overcoming the old-established dichotomy between disease and illness. A disease refers to an abnormal biological condition that negatively impacts an organism's structure or function, 53 whereas illness usually refers to a patient's personal experience of symptoms or disability. 54,55 In chronic pain disorders, the clinical presentation can be an individual combination of manifestations attributable to the 'disease' as well as the 'illness'.
However, the prevalent causal flow is not always straightforward. Often, any evidence for a 'disease' is missing, so that pain is wrongly considered as 'all in the mind' and therefore as non-existing. Recently, however, illness and disease have been claimed to represent the two sides of the same coin: a disease is defined as such (rather than an anomaly) because sooner or later, in at least some of the patients, it will lead to an unwanted status of illness; on the other side of the same coin, any psychological state is associated with a specific biological reality, at least at the level of neural circuitries (the substrate of the recent concept of 'nociplastic' pain). 56 A spiraliform, rather than unidirectional, causal flow has been advocated in all health conditions, providing the rationale for treatments acting on the biological as well as on the psycho-behavioural sides of the coin. It is left to empirical research to discover which approaches are most effective in the various conditions. 57 In agreement with this perspective, and in order to obtain diagnostic criteria based both on reliable biomarkers and on a reliable subjective-clinical representation, the following methodology is suggested:

a) Define the construct carefully for each syndrome. This is done by first establishing the content validity required within the construct. Measures of symptoms should comply with the highest metric standards for questionnaires. 58 In any case, for FM the ACR-established clinically based subjective criteria do not adequately consider the current understanding of the pathophysiology of FM. In order to do this with methodological rigor, the development of objective, reliable and clinically feasible biomarkers is necessary. The biomarkers should be a combination of blood-borne, neurophysiologic and imaging markers to appropriately reflect the complexities of the disorder. We would suggest as a start including measures of central sensitisation.

b) Employ the Delphi technique, using the existing literature as well as the input from recognised experts, to establish the content validity (a quantitative summary of such expert ratings is sketched after this list).

c) Perform analyses on independent cohorts to measure the convergent and discriminant characteristics of the factors associated with the content as identified by the Delphi process described above.

d) Use (a) and (b) to develop the construct. This should be performed by a group of recognised experts and should include basic science researchers, clinician scientists and clinical epidemiologists.

e) Using the newly agreed-upon construct for the disorder, a new diagnostic definition and criteria should be created.

f) Diagnostic criteria should then be applied clinically on independent cohorts to assess their impact upon important clinical care outcomes. Once this has been achieved, researchers and academic clinicians may accept it.

g) A decision-tree diagnostic algorithm should be developed, based on the established criteria, and tested with respect to its predictive capacity.
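The article does not specify how the expert ratings from step b) should be summarized; one common convention in the content-validity literature is the item-level content validity index (I-CVI), with an acceptance cut-off of about 0.78 often cited. The sketch below is our own illustration of that convention, not a procedure taken from this article.

```python
def item_cvi(ratings: list[int], relevant_min: int = 3) -> float:
    """Item-level content validity index: the fraction of experts who
    rate an item as relevant (e.g. 3 or 4 on a 4-point relevance scale)."""
    return sum(r >= relevant_min for r in ratings) / len(ratings)

# Five experts rate one candidate diagnostic criterion on a 1-4 scale
ratings = [4, 3, 4, 2, 4]
cvi = item_cvi(ratings)
print(f"I-CVI = {cvi:.2f}; retain item: {cvi >= 0.78}")
```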
Why solving the FM puzzle might be useful for pain medicine, and medicine in general
Many of the prevalent chronic pain syndromes do not have a reference or 'gold' diagnostic standard (either clinical, biological or both) and are classified according to mostly self-standing, subjective 'criteria'. For example, the International Association for the Study of Pain (IASP)/World Health Organisation (WHO) definition of chronic pain is that of a symptom complex that has been present for at least 3 months. The pain can be characterised as nociceptive, neuropathic or nociplastic (see above). These mechanisms can combine within a chronic pain patient, resulting in poor functioning and disability. 59 Rigorous definition of the 'illness' syndromes and valid measures of the constituent variables would steer research efforts towards specific biomarkers, thus fostering the construction of effective diagnostic algorithms. Syndromes of 'illnesses without disease' extend well beyond the domain of pain, encompassing most psychiatric conditions, 'neurofunctional' motor disorders, 60 dizziness and visceral 'psychosomatic' disorders, among others. A better understanding of FM might thus help treating all of these conditions.
Conclusion
Chronic pain conditions are multifactorial, can involve many body systems, and their manifestations blur the disease-illness distinction. Furthermore, the construct for most has yet to be appropriately defined and universally accepted. We use FM, a prevalent nociplastic pain syndrome, to demonstrate some of the shortcomings of the past and current methods of diagnosis. We have set out a potential pathway for future research that defines the construct, establishes its content and, through the use of the Delphi technique, develops new criteria that add physiological markers to the current diagnostic methodology. The critical question remains the choice of the correct biomarker. Ideally, this should reflect an underlying disease mechanism or critical pathophysiology, and be objective, reliable and feasible.
Appendix B: validation of a construct
Construct validation has a number of steps: development of the content of the instrument, composition of the instrument, response characteristics, the relationship of the scores to independent measurements of the same construct, and the consequences of using the instrument. Therefore, validity measures the instrument's performance and interpretations. Validity is divided into the following: content, criterion and construct validity. Cronbach defined content validity as 'the extent to which the items of an instrument are sampled adequately from a specified domain of content'. 1 Usually, demonstration of adequate content validity is the first step and is deemed necessary prior to the study of other types of validity. 2 This is because, if there is poor content validity, then, according to Norbeck et al., there is 'no sense' in testing the reliability of the instrument. In developing content validity there are three steps. 3 The first is labelled the 'development stage', which encompasses domain identification, item generation and instrument (or tool, questionnaire, risk score) formation. Implicit in this process is that the precise definition of the construct has been established. The second, or judgement, stage entails asking a group of experts to determine whether the relevant content has been included and the adequacy of these domains, as well as the extent to which the instrument measures them. 2 Once content validity has been established, construct validity can be assessed. Construct validity has two parts, convergent and discriminant validity. A common methodological approach to establishing convergent and discriminant validity is to demonstrate that multiple measures of a construct are related, or more related to one another than to measures that represent another construct. 4 When interval data are available there are three approaches that can be utilized: the multi-trait-multimethod matrix, factor analysis or LISREL (linear structural relations). 5 The first approach uses an analysis of variance to decompose the observed data based upon person, trait and method variables. 6 A key assumption is that the traits are uncorrelated, which is not necessarily correct. Furthermore, the method has been criticised as being qualitative. 7 For these reasons, most researchers opt not to use this technique. Another approach is to use factor analysis to assess validity. In this approach principal component analysis is performed. This methodology requires 'big data' and provides components which represent the various constructs that are embedded within the dataset. 8
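The factor-analytic route just described is easy to demonstrate on synthetic data: items driven by two latent constructs should load onto two dominant principal components. The following is a minimal, self-contained sketch using scikit-learn on simulated ratings, purely illustrative of the idea rather than any analysis from the cited works.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Simulated questionnaire: 200 respondents x 10 items, where items 0-4
# are driven by one latent construct and items 5-9 by another
latent = rng.normal(size=(200, 2))
loadings = np.zeros((2, 10))
loadings[0, :5] = 0.9
loadings[1, 5:] = 0.9
items = latent @ loadings + 0.3 * rng.normal(size=(200, 10))

pca = PCA(n_components=3)
pca.fit(items)
print(pca.explained_variance_ratio_)  # two dominant components expected
print(np.round(pca.components_, 2))   # items 0-4 vs 5-9 separate cleanly
```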
Table 2. Biomarker type and purpose described by FDA/NIH BEST.

Diagnostic
To detect (or confirm the presence of) a disease, or to identify an individual with a subtype of disease. Thus, when evaluating a diagnostic biomarker the key issues are whether there is proof that it adds to the diagnosis, and whether the information provided by the biomarker results in a change in clinical decision making.
Monitoring
To assess the status of a disease; it is designed to be measured serially. This type of biomarker is used to detect and characterise the effect of an intervention.
Pharmacodynamic/response
Changes that occur in response to exposure to a medical product or environmental agent.
Predictive
Predicts the response that an individual or group of individuals may have when exposed to a medical product or environmental agent. The proof that a biomarker can achieve this is provided by an experimental protocol that randomises patients with or without the biomarker to one or more treatments; the differences in outcome as a function of treatment are then compared with the presence, absence or level of the biomarker.
Prognostic
To identify the likelihood of a clinical event, disease recurrence or disease progression in patients who have the disease or medical condition of interest. Thus, a prognostic biomarker is associated with different disease outcomes, whereas a predictive biomarker provides information about who will or will not respond to a therapy.
Safety
Provides a measure of the likelihood, presence or extent of toxicity when exposure to a medical intervention or environmental agent occurs. It is important to note that this biomarker type does not provide information about the overall safety and potential benefit of the therapy but only a measure of the likelihood of toxicity occurring. | 2021-04-02T05:32:23.074Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "5152044009cacea487d5a1e22d21bcc6f26ce31b",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1759720X20966490",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5152044009cacea487d5a1e22d21bcc6f26ce31b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220500588 | pes2o/s2orc | v3-fos-license | First Complete Genome Sequences of Janthinobacterium lividum EIF1 and EIF2 and Their Comparative Genome Analysis
Abstract

We present the first two complete genomes of the Janthinobacterium lividum species, namely strains EIF1 and EIF2, which both possess the ability to synthesize violacein. The violet pigment violacein is a secondary metabolite with antibacterial, antifungal, antiviral, and antitumoral properties. Both strains were isolated from environmental oligotrophic water ponds in Göttingen. The strains were phylogenetically classified by average nucleotide identity (ANI) analysis and showed a species assignment to J. lividum with 97.72% (EIF1) and 97.66% (EIF2) identity. These are the first complete genome sequences of strains belonging to the species J. lividum. The genome of strain EIF1 consists of one circular chromosome (6,373,589 bp) with a GC-content of 61.98%. The genome contains 5,551 coding sequences, 122 rRNAs, 93 tRNAs, and 1 tm-RNA. The genome of EIF2 comprises one circular chromosome (6,399,352 bp) with a GC-content of 61.63% and a circular plasmid p356839 (356,839 bp) with a GC-content of 57.21%. The chromosome encodes 5,691 coding sequences, 122 rRNAs, 93 tRNAs, and 1 tm-RNA and the plasmid harbors 245 coding sequences. In addition to the highly conserved chromosomally encoded violacein operon, the plasmid comprises a nonribosomal peptide synthetase cluster with similarity to xenoamicin, which is a bioactive compound effective against protozoan parasites.
Introduction
Janthinobacterium lividum is a betaproteobacterium and belongs to the family Oxalobacteraceae. This family comprises 13 genera including the genus Janthinobacterium (Baldani et al. 2014), which in turn contains several species, including Janthinobacterium rivuli sp. nov. (Huibin et al. 2020). Members of Janthinobacterium are Gram-negative, motile, and rod-shaped (Baldani et al. 2014). They are strictly aerobic, chemoorganotrophic, and are proposed to grow at a temperature optimum of 25-30 °C (Baldani et al. 2014). In addition, psychrophilic isolates are known that are able to grow at 4 °C (Suman et al. 2015). Janthinobacterium strains inhabit different environments including soil (Asencio et al. 2014; Shoemaker et al. 2015; Wu et al. 2017), various aquatic habitats such as lakes (Suman et al. 2015), water sediments (McTaggart et al. 2015), and rainwater cisterns (Haack et al. 2016). Some Janthinobacterium isolates are also known as beneficial skin microsymbionts of amphibians (Brucker et al. 2008; Harris et al. 2009) and as pathogens of rainbow trout (Oh et al. 2019). Janthinobacterium colonies have a purple-violet color produced by the pigment violacein. This colorful secondary metabolite (SM) is known to exhibit antimicrobial, antiviral, and antitumor properties (Andrighetti-Fröhner et al. 2003; Bromberg et al. 2010; Asencio et al. 2014) and thus bears great potential for biotechnological applications. In the current study, we 1) assessed the first complete genomes of two novel J. lividum isolates, EIF1 and EIF2, 2) compared the genetic localization of the violacein cluster within the genus Janthinobacterium, and 3) examined the potential for synthesis of bioactive SMs.
Isolation, Growth Conditions, and Genomic DNA Extraction
Janthinobacterium lividum EIF1 and EIF2 were obtained from an environmental oligotrophic water surface and from plant material, including leaves and stem of the opposite-leaved pondweed, Groenlandia densa. The samples were collected in Göttingen (Germany) on 11.09.2018 (51°33′58″N 9°56′22″E). Enrichment cultures were prepared by using the environmental water samples to inoculate peptone medium containing 0.001% (w/v) peptone (Carl Roth GmbH + Co. KG, Karlsruhe, Germany). Cultures were allowed to stand undisturbed for 3 weeks at 25 °C (Poindexter 2006). Both biofilm and water surface material were sampled and streaked on 0.05% peptone-containing agar medium supplemented with 1% vitamin solution No. 6 (Staley 1968). After colony formation, colonies were transferred to a new agar plate containing diluted peptone medium supplemented with CaCl2 (PCa) (Poindexter 2006) and incubated for 4 days at 25 °C. For singularization of isolates, restreaking was performed at least four times. Individual single colonies were cultured in liquid PCa medium and genomic DNA was extracted with the MasterPure complete DNA and RNA purification kit as recommended by the manufacturer (Epicentre, Madison, WI). After the addition of 500 µl Tissue and Cell Lysis Solution, the resuspended cells were transferred into Lysing Matrix B tubes (MP Biomedicals, Eschwege, Germany) and mechanically disrupted for 10 s at 6.5 m/s using a FastPrep-24 (MP Biomedicals). The supernatant was cleared by centrifugation for 10 min at 11,000 × g, transferred into a 2.0-ml tube, and 1 µl Proteinase K (20 mg/ml) (Epicentre) was added. The procedure was performed as recommended, but the volume of MPC Protein Precipitation Reagent was modified to 300 µl. The 16S rRNA genes of purified isolates were amplified with the primer pair 27F and 1492R (Fredriksson et al. 2013). Sanger sequencing of the polymerase chain reaction products was done by Seqlab (Göttingen, Germany).
Genome Sequencing, Assembly, and Annotation
Illumina paired-end sequencing libraries were prepared using the Nextera XT DNA Sample Preparation kit and sequenced by employing the MiSeq system and reagent kit version 3 (2× 300 bp) as recommended by the manufacturer (Illumina, San Diego, CA). For Nanopore sequencing, 1.5 µg DNA was used for library preparation employing the Ligation Sequencing kit 1D (SQK-LSK109) and the Native Barcode Expansion kit EXP-NBD103 (Barcode 3) for strain EIF1 and the Native Barcode Expansion kit EXP-NBD104 (Barcodes 7 and 12) for strain EIF2, as recommended by the manufacturer (Oxford Nanopore Technologies, Oxford, UK). Sequencing was performed for 72 h using the MinION device Mk1B and a SpotON Flow Cell R9.4.1 as recommended by the manufacturer (Oxford Nanopore Technologies), using MinKNOW software v19.05.0 for sequencing (strain EIF1 and first run of EIF2) and v19.06.8 for the second run of strain EIF2. For demultiplexing, Guppy versions v3.0.3 (strain EIF1), v3.1.5 (strain EIF2, first run), and v3.2.1 (EIF2, second run) were used. Illumina raw reads were quality filtered with fastp v0.20.0 (Chen et al. 2018) using the following parameters: base correction by overlap, base Phred score ≥Q20, read clipping by quality score in front and tail with a sliding window size of 4, a mean quality of ≥20, and a required minimum length of 50 bp. Reads were additionally adapter-trimmed by using cutadapt v2.5 (Martin 2011). For adapter trimming of Oxford Nanopore reads, Porechop (https://github.com/rrwick/Porechop.git; last accessed April 29, 2019) was used with default parameters. Quality filtering with fastp v0.20.0 (Chen et al. 2018) was performed by using the following parameters: base Phred score ≥Q10, read clipping by quality score in front and tail with a sliding window size of 10, a mean quality of ≥10, and a required minimum length of 1,000 bp.
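Filters like the Nanopore thresholds above (mean quality ≥Q10, length ≥1,000 bp) are simple to express in plain Python. The sketch below is our own minimal re-implementation for illustration, not the fastp code used by the authors; it omits fastp's sliding-window clipping and naively averages Phred scores arithmetically, which differs slightly from error-probability-based averaging.

```python
def filter_fastq(path_in: str, path_out: str,
                 min_len: int = 1000, min_mean_q: float = 10.0) -> None:
    """Keep reads with length >= min_len and mean Phred quality >= min_mean_q."""
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            header = fin.readline()
            if not header:
                break  # end of file
            seq = fin.readline().rstrip()
            plus = fin.readline()
            qual = fin.readline().rstrip()
            # Phred+33 encoding: quality score = ASCII code - 33
            mean_q = sum(ord(c) - 33 for c in qual) / max(len(qual), 1)
            if len(seq) >= min_len and mean_q >= min_mean_q:
                fout.write(f"{header}{seq}\n{plus}{qual}\n")

# Hypothetical file names for illustration
filter_fastq("nanopore_raw.fastq", "nanopore_filtered.fastq")
```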
For J. lividum EIF2, a de novo long-read-only assembly with Nanopore reads was performed using Unicycler v0.4.8, due to repetitive transposases in the genome and to avoid misassemblies of overrepresented repetitive regions by short reads. To increase the quality of the Nanopore assembly, additional polishing was performed with unicycler-polish (https://github.com/rrwick/Unicycler/blob/master/docs/unicycler-polish.md; last accessed February 13, 2020) by mapping Illumina short reads with bowtie2 v2.3.5.1 (Langmead and Salzberg 2012) against the Nanopore-based assembly and base correction by Pilon 1.23 (Walker et al. 2014). This routine corrects substitutions and indels as well as larger variants such as repetitive homostretches, deletions, and large deletions. The contiguity of the assembly was manually inspected and evaluated with Tablet v1.19.09.03 (Milne et al. 2013). Quality of the assembled genomes was assessed with CheckM v1.1.2 (Parks et al. 2015), and genome annotation was performed by using the Prokaryotic Genome Annotation Pipeline v4.11 (Tatusova et al. 2016) with subsequent manual curation of the genes encoding the violacein operon.
Results and Discussion
Genomic Features of J. lividum EIF1 and EIF2 We present the first complete genomes of two J. lividum strains EIF1 and EIF2, which originate from a surface water sample and a pondweed plant in Gö ttingen, respectively. The sequencing statistics are summarized in supplementary table S1, Supplementary Material online. The complete genomes were assembled from quality-filtered Oxford Nanopore reads (EIF1: 41,395 and EIF2: 237,547) with a mean length of 8,045 bp (EIF1) and 5,954 bp (EIF2) and Illumina reads with 3,271,600 (EIF1) and 2,562,634 (EIF2) reads in total. The de novo hybrid genome assembly of J. lividum EIF1 yielded a 6,373,589-bp circular chromosome, with a GC-content of 61.98% and a coverage of 181.3-fold. Short-read polished long-read Nanopore assembly of J. lividum EIF2 resulted in a circular chromosome (6,399,352 bp) and a circular plasmid (356,839 bp) with a coverage of 298.9-fold and 343.6-fold and a GC-content of 61.63% and 57.21%, respectively. In total, short-read polishing corrected 321 variants including substitutions, insertions, homo-stretches, deletions, and large deletions. Both assemblies were evaluated manually with Bandage v0.8.1 (Wick et al. 2015) and Tablet 1.19.09.03 (Milne et al. 2013). No CRISPR regions were detected in both genomes.
Phylogeny of J. lividum EIF1 and EIF2
The quality of the assemblies was evaluated with CheckM v1.1.2 (Parks et al. 2015) and revealed high purity, with a completeness of 99.6% and contamination rates of 2.38% (EIF1) and 1.58% (EIF2). The first taxonomic assignment by GTDB-Tk v1.0.1 (Chaumeil et al. 2019), based on fastANI values (97.66% for EIF1 and 97.5% for EIF2), demonstrated that both strains belong taxonomically to the species J. lividum. The available type and representative strains were used in the pyani analysis, which revealed that J. lividum EIF1 and EIF2 form a cluster with the type strains J. lividum H-24 T and NCTC 9796 T (fig. 1A). In detail, J. lividum EIF1 and EIF2 cluster with 97.72% and 97.66% sequence identity, respectively, to the type strains J. lividum H-24 T and NCTC 9796 T. This is above the species boundary of ~94-95% and allows a reliable classification of both isolates EIF1 and EIF2 to the species J. lividum. The genomes of strains EIF1 and EIF2 share a sequence identity of 98.48%.
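The classification logic used here, assigning an isolate to a species when its nucleotide identity to the type strain exceeds the ~94-95% boundary, reduces to a simple threshold test; the helper below is illustrative, with the boundary and identity values taken from the text.

```python
SPECIES_BOUNDARY = 95.0  # upper end of the ~94-95% species boundary cited above

def same_species(identity_percent: float, boundary: float = SPECIES_BOUNDARY) -> bool:
    """True if a sequence identity value supports assignment to the reference species."""
    return identity_percent >= boundary

# Sequence identities of the isolates to the J. lividum type strains, as reported above.
for strain, identity in [("EIF1", 97.72), ("EIF2", 97.66)]:
    verdict = "J. lividum" if same_species(identity) else "outside species boundary"
    print(f"{strain}: {identity}% -> {verdict}")
```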
In total, 186 genes in EIF1 and 187 genes in EIF2 were associated with cell signaling including quorum sensing (52 genes), biofilm formation (EIF1, 77 genes; EIF2, 78 genes), and cell motility (57 genes). This indicates flexible genomes that enable the microorganism to sense and process diverse stimuli and substrates from the environment.
Members of the genus Janthinobacterium are a promising source of novel pharmaceutical compounds, as they bear the potential to synthesize important SMs with exceptional antibacterial, antifungal, antiviral, and antiprotozoal properties (Brucker et al. 2008; Wang et al. 2012; Asencio et al. 2014; Suman et al. 2015; Durán et al. 2016). Both isolated strains showed a purple color during growth in liquid and solid media (supplementary fig. S1, Supplementary Material online), indicating the production of bioactive pigments. Genome analysis with AntiSMASH v5.1.2 (Blin et al. 2019) revealed that EIF1 comprises six and EIF2 seven putative SM gene clusters. In both genomes, genes typical for the synthesis of terpene, bacteriocins, and violacein were detected.
The genomic comparison of the violacein operon (vioABCDE; EIF1: 3,947,675-3,970,695 bp) indicates that the genomic surrounding is conserved at the intraspecies level only (fig. 1B). In addition, the J. lividum EIF2 plasmid p356839 encodes a putative nonribosomal peptide synthetase cluster. This cluster showed an overall similarity to other known clusters synthesizing xenoamicin A/xenoamicin B by comprising eight core biosynthetic genes (G8765_29435-G8765_29465, G8765_29475), five additional biosynthetic genes (G8765_29470, G8765_29490, G8765_29500, G8765_29505, and G8765_29540), and three transport-related genes (G8765_29520, G8765_29525, and G8765_29535). Xenoamicin A/B is known for its activity against Plasmodium falciparum, a unicellular protozoan parasite (Zhou et al. 2013). A comparison of the GC-content shows a difference of 4.42% between the chromosome and plasmid p356839 of strain EIF2, suggesting that the plasmid was acquired recently.
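The GC-content argument for a recent acquisition of the plasmid is simple arithmetic; the sketch below pairs the reported values with a generic GC calculator for arbitrary sequences.

```python
def gc_content(seq: str) -> float:
    """Percent G+C of a nucleotide sequence."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

# Values reported above for J. lividum EIF2.
chromosome_gc, plasmid_gc = 61.63, 57.21
print(f"GC difference: {chromosome_gc - plasmid_gc:.2f}%")  # 4.42%
```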
Comparative Genomics of Janthinobacterium
The two complete genomes of J. lividum (EIF1 and EIF2) presented here allow, for the first time, a reliable simulation of the genomic structure of the incomplete draft genomes available for this species. The overall genome comparison among Janthinobacterium species revealed not only broad genome similarities but also species- and strain-specific differences (fig. 2) (Alikhan et al. 2011).

FIG. 2.—Comparison of the two new complete genomes of Janthinobacterium lividum strains EIF1 and EIF2 and three draft genomes of J. lividum NCTC 9796 T, Janthinobacterium agaricidamnosum DSM 9628 T, and Janthinobacterium svalbardensis strain PAMC 27463 R. The figure was generated using the BLAST Ring Image Generator (Alikhan et al. 2011). As central reference, the J. lividum EIF1 chromosome is depicted (black ring; size, GC-content, and GC skew are indicated). BLAST matches between J. lividum EIF1 and the other strains are shown as concentric colored rings on a sliding scale according to percentage identity (100%, 90%, or 70%). Regions of difference are labeled: bacteriocins (red), terpene (green), violacein (pink), prophages EIF1 (black), prophages EIF2 (gray), and regions of difference with hypothetical function (light gray).
The chromosome organization of the species J. lividum follows a conserved structural genus blueprint, which is also detected in the species J. agaricidamnosum and J. svalbardensis. This highlights the high genomic conservation of the genus Janthinobacterium. To investigate regions of difference, the genomes were searched for putative prophage regions, which are known drivers of genomic evolution (Casjens 2003; Brüssow et al. 2004; Canchaya et al. 2004). PHASTER analysis revealed two putative prophage regions (region 1: 2,543,591-2,585,344 bp; region 2: 2,565,749-2,585,385 bp) in J. lividum EIF1 (fig. 2). The regions comprise 41.7 and 19.6 kb and were classified as questionable and incomplete, respectively. However, both comprised phage attachment sites. In the genome of J. lividum EIF2, eight putative prophage regions were identified, of which seven reside within the chromosome (fig. 2) and one within the plasmid (315,441-324,647 bp, questionable). Two regions were classified as intact (region 1: 1,746,259-1,768,581 bp and region 3: 2,093,778-2,133,389 bp) and comprised the phage-typical attachment sites AttL and AttR. These results support the hypothesis that bacterial strain diversification is mainly driven by phages interacting with the host chromosome and by extrachromosomal elements obtained through horizontal gene transfer (Giménez et al. 2019). Several SM clusters, such as those for the biosynthesis of bacteriocin, terpene, and violacein, were conserved among all investigated J. lividum isolates, indicating SM production as a common feature of this species.
In conclusion, we assembled two complete genomes derived from new isolates of the species J. lividum (EIF1 and EIF2) using Illumina and Nanopore technology. These are the first complete genomes described for this species and allowed in-depth genome analysis and comparisons. We have shown that both strains encode SM clusters, including those for the bioactive compounds violacein and xenoamicin A/B.
Supplementary Material
Supplementary data are available at Genome Biology and Evolution online. | 2020-07-09T09:10:59.261Z | 2020-07-13T00:00:00.000 | {
"year": 2020,
"sha1": "6330eeabe4ae9a5d8aa8ea78ae8973c5b1772a66",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/gbe/article-pdf/12/10/1782/33862737/evaa148.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1fa96f71fceed45fdef534e8eae18409836e922",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
229336 | pes2o/s2orc | v3-fos-license | Effects of alkaline or liquid-ammonia treatment on crystalline cellulose: changes in crystalline structure and effects on enzymatic digestibility.
Background: In converting biomass to bioethanol, pretreatment is a key step intended to render cellulose more amenable and accessible to cellulase enzymes and thus increase glucose yields. In this study, four cellulose samples with different degrees of polymerization and crystallinity indexes were subjected to aqueous sodium hydroxide and anhydrous liquid ammonia treatments. The effects of the treatments on cellulose crystalline structure were studied, in addition to the effects on the digestibility of the celluloses by a cellulase complex.

Results: From X-ray diffractograms and nuclear magnetic resonance spectra, it was revealed that treatment with liquid ammonia produced the cellulose III I allomorph; however, crystallinity depended on treatment conditions. Treatment at a low temperature (25°C) resulted in a less crystalline product, whereas treatment at elevated temperatures (130°C or 140°C) gave a more crystalline product. Treatment of cellulose I with aqueous sodium hydroxide (16.5 percent by weight) resulted in formation of cellulose II, but also produced a much less crystalline cellulose. The relative digestibilities of the different cellulose allomorphs were tested by exposing the treated and untreated cellulose samples to a commercial enzyme mixture (Genencor-Danisco; GC 220). The digestibility results showed that the starting cellulose I samples were the least digestible (except for corn stover cellulose, which had a high amorphous content). Treatment with sodium hydroxide produced the most digestible cellulose, followed by treatment with liquid ammonia at a low temperature. Factor analysis indicated that initial rates of digestion (up to 24 hours) were most strongly correlated with amorphous content. Correlation of allomorph type with digestibility was weak, but was strongest with cellulose conversion at later times. The cellulose III I samples produced at higher temperatures had comparable crystallinities to the initial cellulose I samples, but achieved higher levels of cellulose conversion at longer digestion times.

Conclusions: Earlier studies have focused on determining which cellulose allomorph is the most digestible. In this study we have found that the chemical treatments to produce different allomorphs also changed the crystallinity of the cellulose, and this had a significant effect on the digestibility of the substrate. When determining the relative digestibilities of different cellulose allomorphs it is essential to also consider the relative crystallinities of the celluloses being tested.
Background
Cellulose is the main constituent of biomass, forming approximately 40% to 45% of the dry substance in most lignocellulosic materials and, with an estimated annual production of 10^11 to 10^12 tons, it is the world's most abundant biological material [1][2][3]. Cellulose is a linear, unbranched homopolysaccharide composed of β-D-glucopyranose units which are linked together by β-1,4-glycosidic bonds to form a crystalline material. The cellulose degree of polymerization (DP) ranges from 500 to 15,000. The basic building unit of the cellulose skeleton is an elementary fibril, which is formed from insoluble microfibrils. The microfibrils are oriented along the longitudinal axis of the fibrils and are considered to be a bundle of 36 parallel cellodextrin chains which are held together by intermolecular (interchain and intrachain) hydrogen bonds [4,5]. Cellulose contains crystalline and amorphous regions, and crystallinity, a measure of the weight fraction of the crystalline regions [6], is one of the most important measurable properties of cellulose that influences its enzymatic digestibility [7][8][9]. Many studies have shown that completely disordered or amorphous cellulose is hydrolyzed at a much faster rate than partially crystalline cellulose [7][8][9], which supports the idea that the initial degree of crystallinity is important in determining the enzymatic digestibility of a cellulose sample.
Four different crystalline allomorphs of cellulose (cellulose I, II, III and IV) have been identified by their characteristic X-ray diffraction patterns and solid-state 13 C nuclear magnetic resonance (NMR) spectra. Cellulose I is the most abundant form found in nature. It is known that the crystalline structure of cellulose I is a mixture of two distinct crystalline forms, cellulose I α (triclinic) and cellulose I β (monoclinic) [10]. Cellulose I α is the predominant form found in bacteria and algae, whereas the cellulose in higher plants is mostly I β . Cellulose II can be prepared by two distinct routes, mercerization (alkali treatment) and regeneration (solubilization and subsequent recrystallization). Cellulose III I and III II can be formed from cellulose I and II, respectively, by treatment with liquid ammonia; the reaction is, however, reversible [11]. Cellulose IV I and IV II can be obtained by heating cellulose III I and III II , respectively [12]. A thorough review of cellulose crystalline allomorphs can be found elsewhere [13][14][15].
In the conversion of biomass to bioethanol, pretreatment of biomass is a key step intended to render cellulose more amenable and accessible to cellulase enzymes, thereby increasing glucose yields. Due to its rigid structure and crystallinity, cellulose provides the basic framework for plant fibers and is resistant to enzymatic hydrolysis. The enzymatic hydrolysis of cellulose is a slow process and the extent of hydrolysis is influenced by the structural properties of the biomass substrate, such as cellulose crystallinity, surface area, degree of polymerization and porosity [8,16]. In order to increase the efficiency and efficacy of the enzymatic hydrolysis process, it is necessary to perform a chemical pretreatment of the biomass to modify its composite structure, thereby allowing better access to cellulose by cellulase enzymes. Some chemical treatments, such as with liquid ammonia and aqueous sodium hydroxide (NaOH), are known to alter the crystalline structure of cellulose, resulting in the formation of different allomorphs that have different unit cell dimensions, chain packing schemes and hydrogen bonding relationships [17][18][19][20].
Liquid ammonia treatment of cellulose has been studied for many years [21][22][23]. Ammonia is believed to swell cellulose and penetrate its crystal lattice to yield a cellulose-ammonia complex [21,24]. According to Lewin and Roldan [25], during liquid ammonia treatment the ammonia molecules penetrate cellulose and react with the hydroxyl groups after breaking the hydrogen bonds. The interaction between cellulose and liquid ammonia [21] or primary amines [26] causes changes in interplanar distances of the 101 planes, resulting in a unit cell with increased volume from 671 cubic Å for native cellulose I to 801 cubic Å for the cellulose-ammonia complex. As the ammonia is allowed to evaporate, the volume of a unit cell of dried ammonia-treated cellulose decreases to 702 cubic Å [25]. The volume of the unit cell of cellulose III I thus formed is less than that of the cellulose-ammonia complex, but is slightly larger than that of native cellulose I. It has been postulated, therefore, that cellulose III I should have enhanced cellulase accessibility and chemical reactivity [18]. The degree of conversion of cellulose I to cellulose III I depends on various operational parameters [27], such as the reaction period and the temperature used in the final stage of the treatment [25].
Recently, Igarashi et al. [28] reported that supercritical ammonia treatment of crystalline cellulose I produced an activated form of cellulose that exhibited enhanced hydrolysis by cellobiohydrolase I (Cel7A), producing cellobiose at rates more than five times higher than from cellulose I. These authors concluded that supercritical ammonia treatment activated crystalline cellulose for hydrolysis by Cel7A; however, this study was only performed on the algal cellulose from Cladophora, which consists predominantly of the I α allomorph.
Another chemical treatment that has been used extensively to alter the molecular and supramolecular cellulose structure is treatment with 8% to 25% aqueous NaOH solution [1,29]. The treatment of cotton linters (cellulose I) with 12% to 18% NaOH solution has been reported to convert cellulose I to cellulose II with decreased crystallinity [29,30].
The aqueous NaOH and anhydrous liquid ammonia treatments alter the crystalline structure of cellulose, resulting in the formation of different allomorphs, in other words, cellulose II and cellulose III I , respectively. However, besides producing the different allomorphs of cellulose, these chemical treatments can also alter other physical properties of cellulose, such as the degree of crystallinity, a property considered to play a major role in the enzymatic digestibility of cellulose [8].
The purpose of this study was to investigate the changes in cellulose crystal structure caused by treatment of different cellulose samples with aqueous NaOH or anhydrous liquid ammonia. Cellulose allomorphs were prepared from four cellulose samples with differing physical properties. The treated celluloses were characterized in detail and the impact of changes in their properties was correlated with the enzymatic digestibilities of the samples.
Results
Liquid ammonia treatments

Figure 1 shows the X-ray diffractograms for Avicel PH 101 treated with liquid ammonia at different temperatures and hold times. The change from cellulose I to cellulose III I can be observed by following the position of the 002 peak, which shifts from a 2θ value of 23° to 21° as cellulose I is converted to cellulose III I. The diffractograms in Figure 1 show that under all conditions cellulose I was completely converted to cellulose III I; however, the treatments also affected the crystallinity of the cellulose III I product. It is well known that, in X-ray diffractograms, amorphous solids with small particle size and lattice distortion show broad maxima, whereas crystalline solids have sharper peaks indicating larger crystallites [31][32][33]. From (a) to (d), the effect of removing ammonia at higher temperatures can be clearly observed, as the increased peak height and decreased peak width of the 002 peak indicate a decrease in the disordered (amorphous) fraction [34]. The sample heated to 140°C (Figure 1a) contained cellulose III I with high crystallinity, characterized by a narrow 002 peak with high intensity, whereas the sample treated at -33°C (Figure 1f) contained cellulose III I that was more amorphous and so had a much broader 002 peak with lower intensity. Figures 1(d) and 1(e) show the effect of hold time at a final temperature of 25°C on the crystallinity of the cellulose III I product. Comparing (d) to (e), it would appear that increasing the hold time to 90 min resulted in a more crystalline cellulose III I product.
Data on the degree of crystallinity, amorphous content and crystallite size for Avicel PH 101 treated with liquid ammonia under different conditions are given in Table 1. It can be noted that increasing the final temperature to which the cellulose was heated during ammonia treatment resulted in cellulose III I samples with a higher crystallinity index (CI). For example, liquid ammonia treatment at a final temperature of 140°C for 1 h produced cellulose III I sample (a) with a CI of 78%, which was actually higher than the CI of the starting cellulose I (74%). When the reactor was depressurized at -33°C, allowing the ammonia to boil off before the cellulose was exposed to ambient temperature, cellulose III I sample (f) had a very low degree of crystallinity (43%) and an amorphous fraction almost twice that of the starting cellulose I.
Based on the results of preparing cellulose III I samples from Avicel, two liquid ammonia treatment conditions were selected for the treatment of three other celluloses to produce cellulose III I samples. These samples were used to study the effect of crystalline structure on enzymatic digestibility. The conditions selected were treatment at a final temperature of 130°C for 1 h and at 25°C for 5 min. In the literature [27], production of highly crystalline cellulose III I was obtained by supercritical ammonia treatment at 140°C; however, in our hands, a darkly colored product was obtained at this temperature that had a lower glucan content, indicating that degradation of the cellulose had occurred. In order to minimize cellulose degradation during liquid ammonia treatment at higher temperatures, a final temperature of 130°C was selected as this yielded a cellulose III I sample with off-white color, little apparent glucan degradation (Table 2), and crystallinity comparable to that of the starting cellulose I. X-ray diffraction (XRD) of the cellulose III I samples made from different celluloses are shown in Figure 2C.
The other liquid ammonia treatment condition selected was 25°C for 5 min, as this was relatively easily obtained and gave cellulose III I with a low crystallinity. Ammonia treatment at two different conditions resulted in formation of cellulose III I samples with contrasting crystallinities and crystallite sizes. As can be seen in Table 3, for all four cellulose samples ammonia treatment at a final temperature of 25°C for 5 min resulted in an increase in the amorphous fraction; however, there was no significant reduction in the crystallite size of these cellulose samples. Interestingly, liquid ammonia treatment at a final temperature at 130°C for 60 min resulted in the formation of cellulose III I with a higher CI and much larger crystallite sizes than even the starting cellulose I for all cellulose samples.
Aqueous NaOH treatments
Alkaline treatments were performed on cellulose I samples using 16.5 weight percent NaOH for 2 h at 25°C to convert them into the cellulose II allomorph. The change from cellulose I to cellulose II is denoted by the appearance of a doublet in the XRD for the 10Ī and 002 peaks (at 2θ values of about 20° and 22°) as shown in Figure 2B. Generally, the aqueous NaOH treatment of all four cellulose samples caused a substantial reduction in both cellulose crystallinity and crystallite size. Treatment of the two most crystalline cellulose samples, Avicel and cotton linters, with NaOH resulted in two-fold increases in the amorphous fraction. This finding is consistent with results reported in the literature [29,30]. Figure 2 shows the XRD of the three allomorphs for the four different cellulose samples: (a) cotton linters, (b) Avicel PH 101, (c) α-cellulose and (d) corn stover. The XRD of the three allomorphs are essentially the same as the XRD reported in the literature [25,29,32,34]. Data on the degree of crystallinity, amorphous content and crystallite size (as measured by XRD) for all untreated and treated samples are reported in Table 3.

13 C cross polarization and magic angle spinning solid-state NMR analyses

The 13 C cross polarization and magic angle spinning solid-state (CPMAS) NMR spectra of cellulose I and II from Avicel PH 101, α-cellulose, IFC cotton linters and corn stover cellulose are shown in Figure 3A and 3B, respectively. All peaks were assigned by comparison with literature spectra [35] and CI values were calculated for each cellulose allomorph (Table 4). Each spectrum of the cellulose I samples had distinct C4 crystalline and amorphous peaks. Avicel PH 101, α-cellulose and corn stover celluloses have similar spectra; however, the peaks in the spectrum from cotton linter cellulose were sharper and narrower. This is due to the high crystallinity and high DP of the cellulose in cotton linters. In the NMR spectra of the aqueous NaOH treated celluloses (Figure 3B), the shape of each peak was changed, especially for the signals assigned to the C4 carbon; the crystalline and amorphous peaks in the C4 region (80 ppm to 90 ppm) were shifted. Due to the decreased separation between the peaks, the method for calculating the CI of cellulose II samples by spectral subtraction is probably less accurate. However, the NMR spectra also indicate that the CI values for the cellulose II samples were lower than for the original cellulose I samples. The NMR spectra of cellulose III I obtained after liquid ammonia treatment at 25°C and 130°C are shown in Figure 4A and 4B, respectively. All peaks were assigned by comparison with literature spectra [36] and CI values were calculated (Table 4). The spectra of cellulose III I prepared at 25°C (Figure 4A) from α-cellulose and corn stover cellulose appear similar. The cotton linters spectrum has sharper peaks in all regions compared to the spectra of the other celluloses, probably because cotton linters cellulose has a higher molecular weight and a more crystalline structure (Table 5). The spectra of cellulose III I prepared at 130°C show higher C4 crystalline peaks and much lower C4 amorphous peaks (Figure 4B), indicating that the CI of each cellulose III I sample increased at 130°C (Tables 3 and 4). The NMR results confirm that the CI of the cellulose III I varied with the treatment conditions.
Comparing the CI values obtained by XRD to those obtained from the NMR spectra (Table 4), it appears that the NMR gives higher CI values for cellulose I and II samples, but very similar CI values for the cellulose III I samples.
Cellulose DP analyses
Some studies have shown that cellulose digestibility increases as DP decreases [37], whereas Sinitsyn and colleagues [38] reported no correlation between DP and digestibility. In this work, the liquid ammonia treatments caused only small changes in cellulose DP (Figure 5). NaOH treatment resulted in solubilization of the lowest molecular weight fraction (< 40 DP). This had a greater effect on the lowest DP cellulose, Avicel, resulting in removal of about 20% of the cellulose, compared with almost no effect on the highest DP cellulose, cotton linters, which had almost no material in this low molecular weight range. It is unlikely that these small changes in cellulose DP were the cause of the large differences in digestion rates for the cellulose samples treated with NaOH or anhydrous ammonia.
Enzymatic digestions
All treated and untreated cellulose samples were enzymatically digested in shake flasks in duplicate (Figure 6). The variability in the digestions was about 3%, calculated from triplicate digestions performed on Avicel. From Figure 6, it can be seen that the maximum cellulose hydrolysis rates were obtained for the NaOH treated cellulose samples. Generally, the NaOH and ammonia treated celluloses had higher enzymatic digestion rates than the starting cellulose I samples. The initial hydrolysis rates (digestion period from 4 to 24 h) for the treated celluloses were about two times faster than for the starting cellulose I for all three of the commercial celluloses. The initial hydrolysis rates and maximum cellulose conversions to total glucose plus cellobiose for the three cellulose treatments were found to be greatest with NaOH, followed by ammonia at 25°C, then ammonia at 130°C, with the lowest rates for the starting cellulose I. This was true for all but the corn stover cellulose. This cellulose was distinguished from the other celluloses by its low CI and the consequently very high digestibility of the original cellulose I, which was almost as digestible as the NaOH and low temperature ammonia treated samples. For the corn stover cellulose, the high temperature ammonia treated sample was the most crystalline and the least digestible.
Discussion
Effect of ammonia treatment on cellulose crystalline structure

It has been reported in the literature that the temperature and duration of treatment with liquid ammonia are critical in determining the crystal lattice and degree of crystallinity [25,27,39]. In this study, complete conversion of cellulose I to cellulose III I was observed after ammonia treatment, with no remnant of cellulose I in the treated cellulose samples. However, the CI of the cellulose III I varied with the treatment conditions. According to the mechanism of liquid ammonia treatment proposed by Schuerch [24], anhydrous ammonia penetrates both the amorphous and crystalline regions of cellulose by breaking hydrogen bonds, resulting in the formation of an ammonia-cellulose complex. Various researchers [21,40] have reported that the formation of the ammonia-cellulose complex results in a larger unit cell than that of native cellulose. As the ammonia is allowed to evaporate, new hydrogen bonds are formed, which results in a completely new pattern of hydrogen bond cross-linking [24]. This generates the unit cell of cellulose III I [40]. Lewin and Roldan [25] produced a phase diagram for the treatment of cellulose with ammonia that involved cellulose I, the ammonia-cellulose complex, disordered cellulose and cellulose III I. The transition from cellulose I to the ammonia-cellulose complex was obtained by application of liquid ammonia. The transition from the ammonia-cellulose complex to disordered cellulose or cellulose III I was possible depending on whether the treatment and ammonia removal were performed in the absence or presence of heat, and cellulose I could be reformed by the application of heat and water. The crystallinity of the cellulose III I has been reported to depend on the temperature of treatment and removal of ammonia. Yatsu and coworkers [27] obtained highly crystalline cellulose III I by supercritical ammonia treatment of cotton at 140°C, whereas Saafan and coworkers [39] reported that a highly disordered material was obtained by treating cotton with liquid ammonia at -33°C for 1 min. In our experiments with Avicel cellulose, we examined conditions that transformed the ammonia-cellulose complex over a wide range of ratios of disordered cellulose to cellulose III I. It is clear that treatment time, temperature and sample moisture content can be manipulated to obtain ammonia treated samples with almost any level of amorphous content, from highly disordered to highly ordered cellulose III I samples; even to the extent that the crystallinity of the product is greater than that of the starting cellulose. A few preliminary experiments have also been conducted that converted completely amorphous cellulose into cellulose exhibiting a significant degree of order.
Effect of crystalline structure on enzymatic digestibility
The interactions between properties such as crystalline structure, degree of polymerization, accessible surface area and particle size are complex. Their effects on cellulose digestibility are also difficult to assess. During pretreatment, a change in one property is often accompanied by changes in other properties [8,20,41,42]. This work has tried to distinguish the effect of changes in cellulose crystalline structure from changes in the other properties that occur during treatment of biomass with aqueous NaOH or anhydrous liquid ammonia. These treatments produced samples whose XRD indicated there had been changes in their crystalline structures, changes that have been assigned by other researchers to cellulose allomorphs II [29] and III I [18]. However, it is evident from the amorphous content data (Table 3) that these treatments also caused significant changes in crystallinity, with as much as a two-fold increase in the amorphous content of the celluloses, thus making it difficult to separate the effect of changing the allomorph from the effect of changing the crystallinity on the digestibility of the samples.
The relative digestibilities of cellulose allomorphs have been investigated in previous studies on the two designated crystalline forms of cellulose I [10], cellulose I α [28] and I β [20]. Weimer and coworkers [20] compared the digestibility of amorphous cellulose and all four crystalline allomorphs (I, II, III, and IV) of cellulose prepared from cotton fiber (predominantly cellulose I β) digested by ruminal cellulolytic bacteria. In their work, Weimer and coworkers reported the initial hydrolysis rates for the different allomorphs digested with R. flavefaciens to be in the following order: amorphous > III I > IV I > III II > I > II. They attributed the highest digestibility rates (approximately 10-fold compared to the starting cellulose I) obtained for amorphous cellulose to its fourfold higher gross specific surface area compared to the starting cellulose I. Additionally, the enhanced digestibility rate of III I was attributed to its reduced crystallinity and the difference in the lattice structures of cellulose I and III I. In recent work, Igarashi and coworkers [28] reported the enzymatic digestibility of different crystalline allomorphs of Cladophora cellulose (algal cellulose, predominantly cellulose I α) by T. reesei Cel7A to be in the following order: cellulose III I > cellulose I α > cellulose I β. They also stated that the enhanced digestibility rate of III I was due to its reduced cellulose crystallinity and the difference in crystal structure between cellulose I and III I, the latter of which is believed to have a lower packing density and greater distances between hydrophobic surfaces than cellulose I. In this work we have tried to assess the relative importance of changing crystallinity versus changing allomorph type in determining the digestibility of cellulose samples. In Figure 6, the very different digestibilities of the starting cellulose I samples compared to the treated celluloses can be seen. The levels of conversion of untreated Avicel, cotton linters and, to a lesser extent, α-cellulose were much lower than those of the treated samples. This was not the case for the corn stover cellulose I sample. As can be seen in Table 3, among the starting cellulose samples, Avicel and cotton linters had the lowest amorphous contents (26% and 27%, respectively), whereas α-cellulose and corn stover cellulose had high amorphous contents (42% and 50%, respectively). Although the higher amorphous contents of the α-cellulose and corn stover cellulose may have been at least partially due to their lower cellulose contents, owing to the presence of xylan and lignin in the samples (Table 2), these data nevertheless indicate the importance of crystallinity in determining cellulose digestibility. It is apparent that the NaOH treated celluloses were all very digestible; although NaOH treatment produced the cellulose II allomorph, it also resulted in a much higher level of amorphous cellulose than the other celluloses tested (Table 3). The enhanced enzymatic digestibility of the NaOH treated celluloses over that of the untreated celluloses and the ammonia treated cellulose samples is likely due to the significant increase in the disordered or amorphous fraction in the NaOH treated cellulose samples. It is also important to note from Table 3 that there was a significant reduction in crystallite size for the NaOH treated cellulose samples.
The samples obtained by treatment with ammonia at a final temperature of 25°C were also relatively digestible, although not quite as digestible as the NaOH treated celluloses, and their amorphous contents were mostly lower than those of the NaOH treated celluloses (Table 3). The low temperature ammonia treatment resulted in samples containing the cellulose III I allomorph, but also a higher level of amorphous cellulose compared to the starting celluloses. The digestibility of the low temperature ammonia treated celluloses was less than that of the NaOH treated celluloses and higher than that of the starting celluloses.
The samples obtained by treatment with ammonia at a final temperature of 130°C were less digestible than the samples treated at 25°C (except for the cotton linter samples) and had lower levels of amorphous cellulose compared to the starting celluloses. In the case of the samples made from cotton linters, the digestibility of the 130°C treated cellulose was the same as that of the lower temperature treatment, despite its lower amorphous content (24% versus 48%). The corn stover cellulose treated at 130°C was much less digestible than all of the other samples obtained from corn stover, which all had higher amorphous contents (approximately 50% versus 32%). In assessing the high temperature ammonia treated samples, it is clear that they are generally less digestible than the low temperature ammonia treated and NaOH treated samples. In three of the four cases, however, the high temperature ammonia treated samples were more digestible than the starting celluloses.
In Figure 7, the effect of amorphous content on the digestibility of the cellulose samples is presented. It appears that the samples with higher amorphous content were more rapidly digested than the more crystalline samples, especially early in the digestions. NaOH treated samples had the highest amorphous contents and the highest cellulose conversions, whereas the untreated samples with lower amorphous contents had the lowest conversions. Both the corn stover cellulose treated with ammonia at 25°C and the untreated corn stover cellulose had high amorphous contents and appeared as digestible as the NaOH treated samples, adding further support for the idea that amorphous content has a strong influence on digestibility.
The samples treated with ammonia at 130°C behaved somewhat differently to the other samples in that they had low levels of conversion early in the digestions, similar to the untreated celluloses, which had similarly high crystallinities. At later times in the digestions, however, they achieved much higher cellulose conversions than the more crystalline untreated celluloses (Figure 7). This appears to be evidence that the cellulose III I allomorph is more digestible than the cellulose I allomorph when the CIs of the celluloses are similar. The higher digestibility of the cellulose III I allomorph is in agreement with earlier findings that this allomorph is more digestible than cellulose I.
To further elucidate the effects of crystalline structure on cellulose digestibility, factor analysis was applied to the whole sample set to discover the underlying trends in the data and to determine whether amorphous content or allomorph type had the greater influence on digestibility. The factor analysis (Figure 8) indicated that digestibility was strongly correlated with amorphous content, particularly in the early stages of digestion. A strong correlation of cellulose conversion with allomorph type was, however, not observed, although the correlation between allomorph type and cellulose conversion improved slightly as the digestion progressed.
This work has shown that, except for corn stover, all treated samples were more digestible than the original cellulose I samples. The results of this study indicate that there is a strong correlation between amorphous content and digestibility throughout the samples tested. Under the appropriate treatment conditions, cellulose was converted into a more disordered, accessible and reactive form that was amenable to enzymatic hydrolysis. Sinitsyn and coworkers [38] have reported a linear relationship between the CI of cellulose samples and the specific surface area accessible to proteins. The enhanced enzymatic digestibility of the cellulose samples may therefore be attributed to a greater accessibility of the celluloses to cellulase proteins. Further studies using purified single enzymes (such as Cel7A) and cellulose I, II and III I allomorphs prepared with comparable degrees of crystallinity will be required to elucidate the true difference in digestibility caused by the structural differences of these allomorphs.
Conclusions
Treatment of pure celluloses with alkali and ammonia produced changes in their crystalline structures, which impacted their digestibilities when tested with a commercial enzyme complex (GC 220). Both XRD and NMR spectra show that while different allomorphs were successfully prepared, the treatments resulted in the formation of materials with differing levels of crystallinity. Treatment with NaOH converted cellulose I to cellulose II, but also produced much less crystalline cellulose. Treatment with liquid ammonia produced the cellulose III I allomorph; however, crystallinity depended on treatment conditions. Treatment at a final temperature of 25°C resulted in a less crystalline product, whereas treatment at 130°C or 140°C gave a more crystalline product. Treatment with NaOH produced the most digestible cellulose, followed by treatment with liquid ammonia at low temperature. The starting cellulose I samples were the least digestible (except for corn stover cellulose, which had a high amorphous content). The cellulose III I samples produced at higher temperatures had comparable crystallinities to the initial cellulose I samples, but achieved higher levels of cellulose conversion, particularly at longer digestion times. This is in agreement with earlier findings that the cellulose III I allomorph is more digestible than cellulose I. Factor analysis indicated that initial rates of digestion (up to 24 h) were most strongly correlated with amorphous content. Correlation of allomorph type with digestibility was weak but was strongest with cellulose conversion at later times.
Earlier studies have focused on determining which cellulose allomorph is the most digestible. In this study we have found that the chemical treatments to produce different allomorphs can also change the crystallinity of the cellulose, which has a significant effect on the digestibility of the substrate. When determining the relative digestibilities of different cellulose allomorphs it is essential to also consider the relative crystallinities of the celluloses being tested.
Materials
In this study, aqueous NaOH and liquid ammonia treatments were conducted on four celluloses: Avicel PH-101 (Sigma-Aldrich, St. Louis, MO, cat. no. 11365), α-cellulose (Sigma-Aldrich, cat. no. C8002), cotton linters (International Fiber Corporation, North Tonawanda, NY; cat. no. C10CL) and cellulose extracted from corn stover (obtained by the method described in the section on the preparation of cellulose from corn stover). These cellulose samples were selected because of their different structural properties such as DP and CI. The DP and CI of the celluloses used in this study are given in Table 5. The lower CI of the α-cellulose and corn stover cellulose is at least partially due to the lower cellulose contents of these samples, because of their contamination with xylan and lignin (Table 2). Liquid ammonia was purchased from General Air, Denver, CO. The amorphous cellulose used in this work for CI measurement using XRD was prepared from Wiley milled filter paper (Whatman No. 1) using a slight modification of the procedure developed by Schroeder and coworkers [43] described elsewhere [44]. Briefly, cellulose was heated (125°C) with paraformaldehyde in dimethyl sulfoxide (DMSO) so that the hydroxyl groups in cellulose reacted with formaldehyde and became methylolated, causing the cellulose to dissolve in DMSO. The methylol groups were removed and the cellulose regenerated by slowly adding the DMSO solution to a 0.2 M solution of sodium methoxide in methanol and 2-propanol (1:1 by volume). The amorphous cellulose precipitated and was then thoroughly washed, first with an alkoxide solution in methanol, followed by 0.1 M hydrogen chloride, and then deionized water.
Preparation of cellulose from corn stover
Cellulose was extracted from corn stover harvested in Kearney, NE. The feedstock was made from corn stover internodes that were cut into approximately 2 cm long pieces and then cut in half lengthwise. The corn stover cellulose was prepared by organosolv pretreatment using conditions similar to those previously described [45]. The corn stover internodes were pretreated at 130°C, for 56 min, using 0.57 weight percent sulfuric acid. The pretreatment resulted in hydrolysis of a majority of the xylan and solubilization of a majority of the lignin. The yield of solid after pretreatment was 53 weight percent. After pretreatment, the solid residue was washed exhaustively with water and then pressed to approximately 25% solids. Further delignification was achieved by treatment of the solid residue with acid chlorite according to the method of Saarnio et al. [46]. Sodium chlorite (0.67 g per gram of biomass) was mixed with the pretreated biomass in a large zip-top plastic bag at a 10% consistency in water. After thorough wetting of the biomass, acetic acid was added at a concentration of 1 mL glacial acetic acid per gram of biomass. The bag containing the bleaching mixture was sealed and placed in a 60°C water bath for 2 h with regular mixing. As chlorine dioxide gas was generated, it was periodically vented from the bag.
After acid chlorite delignification, the mixture was filtered on a Buchner funnel and washed with deionized water until all traces of the bleaching mixture had been removed. Water was pressed from the filter cake to an approximately 25% solids content.
Liquid ammonia treatment
The conventional method of preparing cellulose III from cellulose I was used based upon the methods described by Barry et al. [21] and Isogai et al. [47]. About 3 g to 5 g of cellulose (oven dry equivalent weight) was placed in a stainless steel reaction vessel (Model 4714, Parr Instrument Co., Moline, IL). The reaction vessel was clamped shut, weighed, and then chilled in a dry ice/acetone bath (-75°C) to facilitate transfer of liquid ammonia into the reactor at atmospheric pressure. Anhydrous liquid ammonia was added slowly to the reaction vessel in the ratio of 2 g to 3 g per gram of cellulose using a stainless steel transfer tube from a liquid ammonia cylinder equipped with an eductor tube. After adding ammonia, the vessel was immediately weighed and then cooled in the dry ice/acetone bath for 15 min. For a -33°C treatment, the vessel was opened after 90 min treatment at -75°C and allowed to vent into a hood overnight until all the ammonia had evaporated. For a final treatment temperature of 25°C, the cellulose was initially treated at -75°C for 15 min, and then the vessel was immersed in a water bath maintained at 25°C for a reaction period of either 5 min or 90 min. After completion of the desired reaction period and prior to its removal from the water bath, the treatment was terminated by immediately depressurizing the vessel in a ventilated hood. The ammonia treated cellulose was removed from the vessel and left in the hood overnight until all ammonia had evaporated. A temperature profile for the liquid ammonia treatment conducted at 25°C is shown in Figure 9. For treatment at elevated temperatures, in other words 75°C to 140°C, the cellulose was initially treated at -75°C for 15 min then the vessel was maintained at 25°C for 30 min prior to subjecting the vessel to a final heat treatment of 1 h in a preheated fluidized sand bath (Techne Inc., Burlington, NJ) maintained at the desired reaction temperature (75°C, 105°C, 130°C or 140°C). After concluding the final heat treatment and prior to its removal from the sand bath, the reaction vessel was depressurized by allowing the ammonia to leak out in a ventilated hood. After releasing the ammonia the vessel was cooled in a water bath maintained at 25°C. The ammonia treated cellulose was removed from the vessel and left in the hood overnight until all the ammonia had evaporated.
Aqueous NaOH treatment
The aqueous NaOH treatment of cellulose samples was carried out according to the method described by Fengel et al. [29]. Powdered cellulose samples were soaked and stirred in 16.5 weight percent NaOH solution (2.142 g cellulose plus 250 mL NaOH) at 25°C for 2 h with air displaced by nitrogen. The NaOH solution was filtered from the sample and then the filter cake was extensively washed with distilled water until the wash was neutral. The washed solid was then freeze-dried. The aqueous NaOH treatment of all four cellulose samples was conducted with 16.5 weight percent aqueous NaOH because treatment at this concentration has been reported to cause complete conversion of cellulose I to II [29].
Degree of polymerization measurements
The distribution of molecular weight (or DP) of the cellulose samples was characterized by size exclusion chromatography (SEC). The procedure was modified from prior published methods [48][49][50]. To measure the molecular weight distribution of cellulose, it was necessary to dissolve the cellulose in the SEC eluent, tetrahydrofuran (THF), which was achieved by forming the tricarbanilate derivative of the celluloses. Cellulose carbanilation was achieved by the following procedure. Cellulose samples (10 mg) that had been dried overnight at 40°C in a vacuum oven were placed in 5 mL Reactivials (Pierce, Rockford, IL), and dry pyridine (2 mL) plus phenyl isocyanate (0.4 mL) were added. Phenyl isocyanate was present in a 20-fold molar excess relative to the available hydroxyl groups in the celluloses. The reaction vials were heated to 70°C and stirred vigorously in a Pierce Reacti-Therm Heating/Stirring Module for 24 h. Reactions were quenched by adding 0.4 mL of methanol to the mixture to react with the excess phenyl isocyanate. The derivatives were precipitated in 26 mL of methanol and water (7:3, v/v) and then washed two times in the methanol and water mixture. The cellulose carbanilates were then dissolved in THF (20 mL). The solutions were filtered through 0.45 μm porosity Nylaflo (PALL, Port Washington, NY) filters prior to analysis. SEC was performed at room temperature using five columns (PLgel 10 μm 10^3, 10^4, 10^5, and 10^6 Å, plus a larger 10^7 Å Shodex column) to cover the broad molecular weight range of the carbanilated celluloses. Polystyrene standards were used to calibrate retention time against molecular weight. HPLC grade THF was used as the eluent at a flow rate of 1.0 mL/min. Chromatograms were obtained by monitoring the signal from a diode-array detector at a wavelength of 235 nm with a bandwidth of 10 nm. To calculate cellulose DP, molecular weights were divided by 519, the molecular weight of a repeating unit of carbanilated cellulose with a degree of substitution of 3.0.
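The final conversion from the SEC-calibrated molecular weight of the tricarbanilate derivative to cellulose DP is a single division; a brief sketch follows, where the example molecular weight is purely illustrative.

```python
CARBANILATE_UNIT_MW = 519.0  # repeating unit of cellulose tricarbanilate, DS = 3.0

def degree_of_polymerization(molecular_weight: float) -> float:
    """Convert a carbanilated-cellulose molecular weight into cellulose DP."""
    return molecular_weight / CARBANILATE_UNIT_MW

# Hypothetical example: a 1,000,000 Da carbanilate corresponds to a DP of ~1,927.
print(round(degree_of_polymerization(1_000_000)))
```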
Enzymatic digestion
Enzymatic digestions were performed on the untreated, aqueous NaOH treated and liquid ammonia treated cellulose samples in duplicate. Digestions were performed in 125 mL Erlenmeyer shake flasks at 1% solids loading, 40°C, and 130 rpm for 120 h according to National Renewable Energy Laboratory (NREL) procedure NREL/TP-510-42629 [51]. Genencor GC 220 (Lot 4900759448; 121 FPU/mL, 202 mg protein/mL) cellulase enzyme preparation was added at a level of 20 mg protein per gram of cellulose (corresponding to about 12 filter paper units per gram of cellulose). No additional β-glucosidase or Multifect Xylanase was added. The total volume of the saccharification slurries after adding the enzyme and 50 mM citrate buffer (pH 4.8) was 50 mL.
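To make the enzyme loading concrete: at 1% solids in a 50 mL slurry there is 0.5 g of cellulose, so 20 mg protein/g corresponds to 10 mg of GC 220 protein per flask; the dose volume computed below is our arithmetic, not a value stated in the text.

```python
slurry_volume_ml = 50.0
solids_loading = 0.01            # 1% w/v solids
protein_per_g_cellulose = 20.0   # mg protein per g cellulose
gc220_protein_conc = 202.0       # mg protein per mL of GC 220

cellulose_g = slurry_volume_ml * solids_loading      # 0.5 g cellulose per flask
protein_mg = cellulose_g * protein_per_g_cellulose   # 10 mg protein per flask
dose_ul = 1000.0 * protein_mg / gc220_protein_conc   # ~49.5 uL of enzyme preparation
print(f"{dose_ul:.1f} uL GC 220 per flask")
```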
To determine the progress of cellulose conversion, a 2 mL aliquot of the well-mixed slurries was taken at predetermined time points, starting with 4 h, 16 h, and 24 h. Thereafter, samples were removed every 24 h until 120 h. The samples taken were immediately filtered through Acrodisc nylon 0.2 μm syringe filters (PALL) and then refrigerated until subjected to sugar analysis. The glucose and cellobiose yields were measured by HPLC (Agilent 1100 series) using a Shodex sugar SP0810 column maintained at 85°C according to NREL procedure NREL/TP-510-42623 [52]. The mobile phase was 0.2 μm filtered nanopure water at a flow rate of 0.6 mL/min. The sample injection volume was 10 μL and the run time 23 min. Cellulose conversion was calculated by adding the total glucose and cellobiose yields (both glucose and cellobiose were converted to glucan equivalents) for each hydrolysis time point.
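Cellulose conversion at each time point follows directly from the HPLC sugar yields expressed as glucan equivalents; the sketch below assumes the standard anhydro-correction factors (162/180 for glucose and 324/342 for cellobiose), which the text does not state explicitly.

```python
GLUCOSE_TO_GLUCAN = 162.0 / 180.0     # 0.90; assumed standard anhydro correction
CELLOBIOSE_TO_GLUCAN = 324.0 / 342.0  # ~0.947; assumed standard anhydro correction

def cellulose_conversion(glucose_g_l, cellobiose_g_l, initial_glucan_g_l):
    """Fraction of the initial glucan converted, from HPLC sugar concentrations (g/L)."""
    glucan_released = (glucose_g_l * GLUCOSE_TO_GLUCAN
                       + cellobiose_g_l * CELLOBIOSE_TO_GLUCAN)
    return glucan_released / initial_glucan_g_l

# Hypothetical example at 1% solids (10 g/L glucan): 8 g/L glucose, 1 g/L cellobiose.
print(f"{cellulose_conversion(8.0, 1.0, 10.0):.1%}")  # ~81.5% conversion
```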
X-ray diffraction analysis

The source slits were 2.0 mm and 4.0 mm at a 290 mm goniometer radius and the detector slits were 1.0 mm and 0.5 mm at the same radius. Dried cellulose samples were mounted onto a quartz substrate using a few drops of diluted glue. This diluted glue is amorphous when dry and adds almost no background signal. Scans were obtained from 8 to 42 degrees two-theta in 0.05 degree steps. The CI of cellulose was calculated from the XRD spectra according to the amorphous subtraction method described elsewhere [44]. Briefly, a diffractogram of amorphous cellulose was subtracted from the diffractograms of the test samples so as to remove the influence of the amorphous component in the diffractograms. Then the ratio of the integrated area of the subtracted diffractogram to that of the original diffractogram was calculated to give the CI of the sample. Scherrer's equation [31,32] was used for estimating crystallite size:

τ = kλ / (β cos θ)

where λ is the wavelength of the incident X-ray (1.5418 Å), θ the Bragg angle corresponding to the (002) plane, β the full-width at half maximum (FWHM) of the X-ray peak corresponding to the (002) plane, τ is the X-ray crystallite size, and k is a constant with a value of 0.89 [25,53].
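Both calculations in this section reduce to a few lines of numerical code; the sketch below implements the amorphous-subtraction CI (the same area-ratio logic is applied to the NMR spectra later) and Scherrer's equation, assuming the sample and amorphous diffractograms are already on a common 2θ grid and intensity scale.

```python
import numpy as np

def crystallinity_index(two_theta, sample_i, amorphous_i):
    """CI as the area ratio of the amorphous-subtracted to the original diffractogram."""
    crystalline = np.clip(sample_i - amorphous_i, 0.0, None)
    return np.trapz(crystalline, two_theta) / np.trapz(sample_i, two_theta)

def scherrer_size(fwhm_deg, two_theta_deg, k=0.89, wavelength=1.5418):
    """Crystallite size (Angstrom) from the FWHM of the (002) peak."""
    beta = np.radians(fwhm_deg)            # FWHM in radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle of the (002) reflection
    return k * wavelength / (beta * np.cos(theta))

# Hypothetical (002) peak: 2-theta = 22.5 deg, FWHM = 1.5 deg -> crystallite size ~53 A.
print(f"{scherrer_size(1.5, 22.5):.0f} A")
```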
13 C CPMAS solid-state NMR analysis
The high-resolution 13 C CPMAS NMR spectra were recorded on a Bruker Avance 200 MHz spectrometer (Bruker BioSpin Corporation, Billerica, MA) with a 7 mm probe, operating at 50.13 MHz for 13 C, at room temperature. The spinning speed was 7,000 Hz, the contact pulse 2 ms, the acquisition time 51.3 ms and the delay between pulses 4 s, with 20,000 scans. The adamantane peak was used as an external reference (δ C 38.3 ppm). The samples were hydrated with deionized water (around 43%) before recording the spectra. The 13 C chemical shifts are given in δ values (ppm) and each peak in the cellulose I, II and III I spectra was assigned according to literature values [35,36,54,55] (e.g., C5 at 72.9 ppm, C2 at 73.5 ppm, C3 at 75.9 ppm, C4 at 87.9 ppm and C1 at 104.9 ppm). The CI was determined by applying the amorphous subtraction method to the NMR spectra [44].
Factor Analysis
Principal factor analysis was conducted on the cellulose conversion data obtained after 16 h, 24 h, 48 h and 72 h of digestion, combined with the amorphous content data and the categorical variables cellulose type and allomorph type. The categorical variable cellulose type was coded (-2 = Avicel; -1 = α-cellulose; 1 = cotton linters; 2 = corn stover), as was the allomorph variable (-1 = cellulose I; 0 = cellulose II; 1 = cellulose III). The factor analysis was performed with Pearson correlation and varimax rotation using XLSTAT (Addinsoft USA, New York, NY), an add-in program to Microsoft Excel 2007 (Microsoft Corporation, Redmond, WA).
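As a sketch of the analysis described here, the snippet below reproduces the categorical coding and runs a varimax-rotated factor analysis in Python rather than XLSTAT; the observation matrix is a placeholder with illustrative values only, and scikit-learn's estimator (fitted on standardized data, approximating the Pearson-correlation basis) stands in for XLSTAT's principal factor method.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Categorical codings from the text.
cellulose_code = {"Avicel": -2, "alpha-cellulose": -1, "cotton linters": 1, "corn stover": 2}
allomorph_code = {"I": -1, "II": 0, "III": 1}

# Placeholder observation matrix: conversion at 16, 24, 48 and 72 h,
# amorphous content (%), cellulose type code, allomorph code (illustrative values).
X = np.array([
    [0.20, 0.30, 0.45, 0.55, 26, -2, -1],
    [0.45, 0.60, 0.80, 0.90, 55, -2,  0],
    [0.35, 0.50, 0.70, 0.85, 48, -2,  1],
    [0.15, 0.25, 0.40, 0.60, 27,  1, -1],
    [0.50, 0.65, 0.82, 0.92, 53,  1,  0],
    [0.30, 0.45, 0.68, 0.84, 40,  1,  1],
])

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(StandardScaler().fit_transform(X))
print(fa.components_)  # loadings: which variables vary together across samples
```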
"year": 2011,
"sha1": "f558b97b83c6d2feac91140f925d6cffcf43e302",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/1754-6834-4-41",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6553aacfdfd921b2bae596dab74da06dc07a656",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |